Tags · rlberry-py/rlberry · GitHub


Tags

v0.7.3

update version number

v0.7.2

Writing version with github action [skip ci]

v0.7.1

Update README.md

v0.6.0

Writing version with github action [skip ci]

v0.5.0

Release 0.5.0

v0.4.1

Release 0.4.1

v0.4.0

change version

v0.3.0

Version 0.3.0

v0.2.1

Version 0.2.1 (#65)

* feat(project): setup dev for v0.2.1

* feat(agents): DefaultWriter is now initialized in base class. New method for generating unique IDs for objects (metadata_utils.get_unique_id()). Agent (base class) now has a property unique_id.

* feat(network): message about how many bytes are being sent and received

* feat(agent,manager,writers): DefaultWriter now optionally wraps a tensorboard SummaryWriter. AgentManager has a new option to enable tensorboard logging (enable_tensorboard) and automatically handles the log_dirs of each SummaryWriter (inside DefaultWriter).

* fix(dqn): tests

* fix(agent): set_writer() fixed

* minor improvements

* feat(network): Tensorboard files now can be sent from server to client (using zip files)

* fix(network): bug

* fix(manager): Fix bug in AgentHandler that was not sending eval_env in the kwargs to initialize/load the agent instance.

* feat(writer, manager): Option to limit data stored in memory by DefaultWriter, AgentManager can now receive extra params for DefaultWriter.

* fix(torch dqn): set_writer bug

* feat(wrapper, gym_make): Add option to wrap observation_space and action_space using rlberry.spaces. Seeding from gym (in Dict spaces) can reportedly be slow on some machines; rlberry.spaces should solve these seeding issues.

* feat(process_env, Agent): Replaced assertion with a warning if Agent is not reseeded.

* feat(agent, manager): Agent now accepts an output dir as argument. This directory is set automatically by AgentManager.

* fix(manager): minor bug

* fix(agent): out dir bug

* feat(manager): Now it is possible to pass init_kwargs to each instance separately.

* feat(gridworld): remove default terminal_states

* fix(gridworld): fix get_layout_img method

* feat(manager): method get_agent_instances() that returns trained instances

* feat(manager/evaluation): In plot_writer_data, better handling of the kwargs for sns.lineplot

* feat(plot_writer_data): Possibility to set xtag (tag used for x-axis)

* update version
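One of the v0.2.1 entries above adds an option to limit the data DefaultWriter keeps in memory. As a rough, standalone sketch of that idea — not rlberry's actual DefaultWriter; the class and method names here are hypothetical, with `add_scalar` loosely modeled on tensorboard's SummaryWriter — a per-tag bounded buffer might look like this:

```python
from collections import deque


class BoundedWriter:
    """Illustrative sketch only: keeps at most `maxlen` entries per tag,
    silently dropping the oldest, mirroring the "limit data stored in
    memory" option described in the changelog above."""

    def __init__(self, maxlen=3):
        self._maxlen = maxlen
        self._data = {}  # tag -> deque of (global_step, value) pairs

    def add_scalar(self, tag, value, global_step):
        # Create the bounded buffer lazily on first write to this tag.
        if tag not in self._data:
            self._data[tag] = deque(maxlen=self._maxlen)
        self._data[tag].append((global_step, value))

    def read_tag(self, tag):
        # Return a plain list copy; empty list for unknown tags.
        return list(self._data.get(tag, []))


writer = BoundedWriter(maxlen=3)
for step in range(5):
    writer.add_scalar("episode_rewards", float(step), step)
print(writer.read_tag("episode_rewards"))  # → [(2, 2.0), (3, 3.0), (4, 4.0)]
```

Using `deque(maxlen=...)` makes eviction automatic and O(1) per write, which is the usual trade-off for capping memory in a training-metrics logger: old points are lost, but the process cannot grow without bound during long runs.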