The capability of a reinforcement learning (RL) agent heavily depends on the diversity of the learning scenarios generated by the environment. Generation of diverse realistic scenarios is challenging for real-time strategy (RTS) environments. RTS environments are characterized by intelligent entities/non-RL agents cooperating and competing with the RL agents with large state and action spaces over a long period of time, resulting in an infinite space of feasible, but not necessarily realistic, scenarios involving complex interaction among different RL and non-RL agents. Yet, most existing simulators rely on randomly generating the environments based on predefined settings/layouts and offer limited flexibility and control over the environment dynamics for researchers to generate diverse, realistic scenarios on demand. To address this issue, for the first time, we formally introduce the benefits of adopting an existing formal scenario specification language, SCENIC, to assist researchers in modeling and generating diverse scenarios in an RTS environment in a flexible, systematic, and programmatic manner. To showcase the benefits, we interfaced SCENIC to an existing RTS environment, the Google Research Football (GRF) simulator, and introduced a benchmark consisting of 32 realistic scenarios, encoded in SCENIC, to train RL agents and test their generalization capabilities. We also show how researchers/RL practitioners can incorporate their domain knowledge to expedite the training process by intuitively modeling stochastic programmatic policies with SCENIC.
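The core idea described in the abstract — declaring distributions over initial conditions once and then sampling unboundedly many concrete scenarios — can be sketched in plain Python. This is a minimal, hypothetical illustration, not the paper's Scenic4RL API: the `ScenarioSpec` class, the field names, and the pitch coordinate ranges are illustrative assumptions only.

```python
import random

class ScenarioSpec:
    """Declarative scenario spec: each field maps a name to a sampler,
    a callable taking an RNG and returning one concrete value.
    (Hypothetical sketch; Scenic itself is a richer standalone language.)"""

    def __init__(self, **fields):
        self.fields = fields

    def sample(self, seed=None):
        # Each call instantiates one concrete scenario from the declared
        # distributions; a fixed seed makes the draw reproducible.
        rng = random.Random(seed)
        return {name: sampler(rng) for name, sampler in self.fields.items()}

# A toy "counterattack" scenario: the ball starts somewhere in the attacking
# half, against a randomly sized defense. Ranges are assumed for illustration.
counterattack = ScenarioSpec(
    ball_x=lambda rng: rng.uniform(0.5, 1.0),     # attacking half only
    ball_y=lambda rng: rng.uniform(-0.42, 0.42),  # assumed pitch half-width
    n_defenders=lambda rng: rng.randint(1, 3),
)

scenario = counterattack.sample(seed=0)
```

Repeated calls to `sample()` with different seeds yield distinct but structurally similar training scenarios, which is the kind of controlled diversity the abstract contrasts with fixed predefined layouts.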
Abdus Salam Azad, Edward Kim, Qiancheng Wu, Kimin Lee, Ion Stoica, Pieter Abbeel, Sanjit A. Seshia (2021). Scenic4RL: Programmatic Modeling and Generation of Reinforcement Learning Environments. arXiv preprint. DOI: https://doi.org/10.48550/arxiv.2106.10365.
Type: Preprint
Year: 2021
Authors: 7
Datasets: 0
Total Files: 0
DOI: https://doi.org/10.48550/arxiv.2106.10365