
oxwhirl/smac: SMAC: The StarCraft Multi-Agent Challenge - GitHub
We make use of special RL units which never automatically start attacking the enemy. A step-by-step guide explains how to create new RL units based on existing SC2 units: add the map information in smac_maps.py, and handle the new ids of the newly designed RL units in starcraft2.py.
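As a hedged sketch of the smac_maps.py step mentioned above, a registry entry for a new map might look like the following. The map name `"3m_rl"` and all field values here are illustrative assumptions; consult SMAC's actual `map_param_registry` for the exact schema.

```python
# Illustrative sketch of a map registry entry in the style of SMAC's
# smac_maps.py. The map name "3m_rl" and every value are hypothetical.
map_param_registry = {
    "3m_rl": {
        "n_agents": 3,        # number of allied (RL-controlled) units
        "n_enemies": 3,       # number of enemy units
        "limit": 60,          # episode step limit
        "a_race": "T",        # allied race (Terran)
        "b_race": "T",        # enemy race
        "unit_type_bits": 0,  # bits encoding unit type in observations
        "map_type": "marines",
    }
}

# Per the guide, the new RL units' ids would then need matching
# handling in starcraft2.py (e.g. when building unit observations).
print(sorted(map_param_registry))
```

The second step (handling new unit ids in starcraft2.py) has no self-contained illustration here, since it depends on SMAC's internal unit-type logic.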
[1902.04043] The StarCraft Multi-Agent Challenge - arXiv.org
Feb 11, 2019 · In this paper, we propose the StarCraft Multi-Agent Challenge (SMAC) as a benchmark problem to fill this gap. SMAC is based on the popular real-time strategy game StarCraft II and focuses on micromanagement challenges where each unit is controlled by an independent agent that must act based on local observations.
oxwhirl/smacv2 - GitHub
SMACv2 is an update to Whirl's StarCraft Multi-Agent Challenge, which is a benchmark for research in the field of cooperative multi-agent reinforcement learning. SMAC and SMACv2 both focus on decentralised micromanagement scenarios in StarCraft II, rather than the full game.
SMAC — DI-engine 0.1.0 documentation - Read the Docs
SMAC is an environment for multi-agent collaborative reinforcement learning (MARL) on Blizzard StarCraft II. SMAC uses Blizzard's StarCraft II machine learning API and DeepMind's PySC2 to provide a friendly interface for interaction between agents and StarCraft II, making it convenient for developers to observe and execute actions.
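As a minimal sketch of that agent–environment interaction pattern, the loop below follows the random-agent example from SMAC's README (`get_env_info`, `get_obs`, `get_state`, `get_avail_agent_actions`, `step`). The `StubSMACEnv` class is a hypothetical stand-in so the snippet runs on its own; the real `smac.env.StarCraft2Env` requires a StarCraft II installation.

```python
import random


class StubSMACEnv:
    """Hypothetical stand-in for smac.env.StarCraft2Env, mimicking only
    the interface shape used in the loop below."""

    def __init__(self, n_agents=3, n_actions=9, horizon=5):
        self.n_agents, self.n_actions, self.horizon = n_agents, n_actions, horizon
        self.t = 0

    def get_env_info(self):
        return {"n_agents": self.n_agents, "n_actions": self.n_actions}

    def reset(self):
        self.t = 0

    def get_obs(self):
        # one local observation per agent
        return [[0.0] * 4 for _ in range(self.n_agents)]

    def get_state(self):
        # global state, used for centralised training
        return [0.0] * 8

    def get_avail_agent_actions(self, agent_id):
        return [1] * self.n_actions  # all actions available in the stub

    def step(self, actions):
        self.t += 1
        # returns (team reward, terminated flag, info dict)
        return 1.0, self.t >= self.horizon, {}


def run_episode(env):
    """Random-agent interaction loop in the style of SMAC's README."""
    info = env.get_env_info()
    env.reset()
    terminated, episode_reward = False, 0.0
    while not terminated:
        env.get_obs()    # local observations, one per agent
        env.get_state()  # global state for the centralised critic
        actions = []
        for agent_id in range(info["n_agents"]):
            avail = env.get_avail_agent_actions(agent_id)
            choices = [a for a, ok in enumerate(avail) if ok]
            actions.append(random.choice(choices))
        reward, terminated, _ = env.step(actions)
        episode_reward += reward
    return episode_reward


print(run_episode(StubSMACEnv()))  # 5.0: stub pays 1.0 per step for 5 steps
```

Note that all agents act simultaneously through a single `step(actions)` call, and that per-agent action availability must be queried each step, since dead units and out-of-range targets restrict the action space.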
smac/docs/smac.md at master · oxwhirl/smac - GitHub
SMAC makes use of the StarCraft II Learning Environment (SC2LE) to communicate with the StarCraft II engine. SC2LE provides full control of the game, allowing commands to be sent to and observations received from the game. However, SMAC is conceptually different from the RL environment of SC2LE.
Deep reinforcement learning (RL) promises a scalable approach to solving arbitrary sequential decision-making problems, requiring only that a user specify a reward function expressing the desired behaviour.
SMAC - Papers With Code
The StarCraft Multi-Agent Challenge (SMAC) is a benchmark built using the StarCraft II game engine, creating a testbed for research in cooperative MARL where each game unit is an independent RL agent. It provides elements of partial observability, challenging dynamics, and high-dimensional observation spaces.
[2212.07489] SMACv2: An Improved Benchmark for Cooperative …
Dec 14, 2022 · In cooperative multi-agent reinforcement learning, the StarCraft Multi-Agent Challenge (SMAC) has become a popular testbed for centralised training with decentralised execution. However, after years of sustained improvement on SMAC, algorithms now achieve near-perfect performance.
The StarCraft Multi-Agent Challenge - Papers With Code
Feb 11, 2019 · SMAC is based on the popular real-time strategy game StarCraft II and focuses on micromanagement challenges where each unit is controlled by an independent agent that must act based on local observations. We offer a diverse set of challenge maps and recommendations for best practices in benchmarking and evaluations.
The StarCraft Multi-Agent Challenge | Proceedings of the 18th ...
May 8, 2019 · In this paper, we propose the StarCraft Multi-Agent Challenge (SMAC) as a benchmark problem to fill this gap. SMAC is based on the popular real-time strategy game StarCraft II and focuses on micromanagement challenges where each unit is controlled by an independent agent that must act based on local observations.