Description
What happened + What you expected to happen
Hello,
I followed the Ray tutorial from this post: https://medium.com/distributed-computing-with-ray/reinforcement-learning-with-rllib-in-the-unity-game-engine-1a98080a7c0d.
When running `unity3d_env_local.py` with Unity, I get a `TypeError: 'NoneType' object is not iterable` for the action spaces. I am not sure what is causing the error.
```
ray.exceptions.ActorDiedError: The actor died because of an error raised in its creation task, ray::PPO.__init__() (pid=8072, ip=192.168.0.18, actor_id=c33fa92387cf9551571700f201000000, repr=PPO(env=unity3d; env-runners=0; learners=0; multi-agent=True))
  File "/home/brandon-ho/miniconda3/envs/mujoco_env/lib/python3.10/site-packages/ray/rllib/algorithms/algorithm.py", line 537, in __init__
    super().__init__(
  File "/home/brandon-ho/miniconda3/envs/mujoco_env/lib/python3.10/site-packages/ray/tune/trainable/trainable.py", line 157, in __init__
    self.setup(copy.deepcopy(self.config))
  File "/home/brandon-ho/miniconda3/envs/mujoco_env/lib/python3.10/site-packages/ray/rllib/algorithms/algorithm.py", line 645, in setup
    self.env_runner_group = EnvRunnerGroup(
  File "/home/brandon-ho/miniconda3/envs/mujoco_env/lib/python3.10/site-packages/ray/rllib/env/env_runner_group.py", line 198, in __init__
    self._setup(
  File "/home/brandon-ho/miniconda3/envs/mujoco_env/lib/python3.10/site-packages/ray/rllib/env/env_runner_group.py", line 292, in _setup
    self._local_env_runner = self._make_worker(
  File "/home/brandon-ho/miniconda3/envs/mujoco_env/lib/python3.10/site-packages/ray/rllib/env/env_runner_group.py", line 1292, in _make_worker
    return self.env_runner_cls(**kwargs)
  File "/home/brandon-ho/miniconda3/envs/mujoco_env/lib/python3.10/site-packages/ray/rllib/env/multi_agent_env_runner.py", line 115, in __init__
    self.make_env()
  File "/home/brandon-ho/miniconda3/envs/mujoco_env/lib/python3.10/site-packages/ray/rllib/env/multi_agent_env_runner.py", line 815, in make_env
    self.env = make_vec(
  File "/home/brandon-ho/miniconda3/envs/mujoco_env/lib/python3.10/site-packages/ray/rllib/env/vector/registration.py", line 69, in make_vec
    env = SyncVectorMultiAgentEnv(
  File "/home/brandon-ho/miniconda3/envs/mujoco_env/lib/python3.10/site-packages/ray/rllib/env/vector/sync_vector_multi_agent_env.py", line 37, in __init__
    self.single_action_spaces = self.envs[0].unwrapped.action_spaces or dict(
TypeError: 'NoneType' object is not iterable
```
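For context, the failing pattern at `sync_vector_multi_agent_env.py` line 37 seems to be a fallback of the form `action_spaces or dict(...)`: if the env's per-agent `action_spaces` attribute is `None`, the code falls through and tries to build a dict from something that is also `None`. A minimal sketch of that mechanism (variable names here are illustrative, not RLlib's exact internals):

```python
# Sketch of the suspected failure: the Unity env wrapper apparently leaves
# both the per-agent `action_spaces` dict and the fallback value unset.
action_spaces = None  # per-agent spaces not populated by the env
fallback = None       # the fallback source is also None in this case

try:
    # `None or dict(None)` evaluates the right-hand side, and dict() cannot
    # iterate over None -- this reproduces the exact TypeError in the trace.
    single_action_spaces = action_spaces or dict(fallback)
except TypeError as e:
    print(e)  # 'NoneType' object is not iterable
```

So the underlying question is why the wrapped Unity3D env reaches `SyncVectorMultiAgentEnv.__init__` with `action_spaces` unset.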
Versions / Dependencies
ray 2.4.6
mlagents 1.1.0
torch 2.4.1+cu121
torchaudio 2.4.1+cu121
torchvision 0.19.1+cu121
Reproduction script
```
python rllib/examples/envs/unity3d_env_local.py --env 3DBall
```
Issue Severity
None