Creation of a Reinforcement Learning agent that plays StarCraft II. This project emerges from an interest in reinforcement learning and the complexity of a strategic game like StarCraft.
This section will help you understand how the ecosystem works.
It displays the components and the interactions between them.
Since the ecosystem runs on two different timelines, one half asynchronous and the other synchronous, the diagram tries to reflect the workflow.
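As a rough illustration of that split (this is not the project's actual code; the queue name `sc2:observations`, the host/port, the payload shape, and the function names are assumptions), the asynchronous bot side could push observations into Redis while the synchronous training side blocks until data arrives:

```python
import json

import redis                       # synchronous client, used by the training loop
import redis.asyncio as aioredis   # asynchronous client, used inside the bot

# Asynchronous side: could be awaited from the python-sc2 bot's on_step coroutine.
async def publish_observation(observation: dict) -> None:
    client = aioredis.Redis(host="localhost", port=6379)
    # "sc2:observations" is a hypothetical queue name used only in this sketch.
    await client.rpush("sc2:observations", json.dumps(observation))
    await client.close()

# Synchronous side: could be called from the tf_agents environment/training loop.
def wait_for_observation() -> dict:
    client = redis.Redis(host="localhost", port=6379)
    _, raw = client.blpop("sc2:observations")  # blocks until the bot publishes
    return json.loads(raw)
```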
- If you are using WSL, follow these steps: WSL Installation
- StarCraft II installed on your system
- Python >= 3.7
- Install dependencies: `poetry install`
- Add dependencies inside the environment (`tensorflow` && `tf_agents`)
  - Windows: `.venv/Scripts/Activate.ps1`
  - Linux/MacOS: `source .venv/scripts/activate`
  - `pip install tensorflow tf_agents`
- Install dependency `python-sc2`
- Create a folder named `logs` in the parent folder
- Create a folder named `Maps` under your StarCraft II installation folder
- Add maps (`.SC2MAP` files) under the previous folder; they are available for download on sites such as sc2mapster
  - For this use case, the Scorpion map is used (a loading sketch follows this list)
If you are having trouble with the environment, repeat step 3 once step 4.3 is done.
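The snippet below is only a sketch of how a map placed in that folder could be loaded with `python-sc2`; `DummyBot` is a hypothetical placeholder and the race choices are arbitrary, only the map name (Scorpion) comes from this setup:

```python
from sc2 import maps
from sc2.bot_ai import BotAI
from sc2.data import Race
from sc2.main import run_game
from sc2.player import Bot, Computer

class DummyBot(BotAI):
    """Placeholder bot, only here to verify that the map loads."""
    async def on_step(self, iteration: int):
        pass

# maps.get() resolves "Scorpion" against the Maps folder of the
# StarCraft II installation created in the step above.
run_game(
    maps.get("Scorpion"),
    [Bot(Race.Terran, DummyBot()), Computer(Race.Zerg)],
    realtime=False,
)
```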
- Run the Redis service using Docker: `docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest` (a quick connectivity check is sketched below)
- Activate your environment and execute `python main.py`
  - Linux: `source .venv/scripts/activate`
  - Windows: `.venv/Scripts/Activate.ps1`
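If you want to confirm the Redis container is reachable before starting the run, a minimal check (assuming the default port mapping from the `docker run` command above) looks like this:

```python
import redis

# Connects to the redis-stack-server container started in the previous step.
client = redis.Redis(host="localhost", port=6379)
print(client.ping())  # prints True when the Redis service is reachable
```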
Setup created following the setup tutorial by sentdex
[MODULE][FIX|ADD|DELETE] Summary of modifications
* [MODULE2][ADD] List of modifications from a general perspective
[SC2-RL][FIX] Diagrams && subprocess
* [README][ADD] Execution && Component diagram
* [BOT][ADD] Change from `aioredis` to `redis.asyncio`
* [RUN_GAME][DELETED]
* [SC2ENV][FIX] Subprocess platform