CCN patch (#97)
* updating readme with short snippets

* default variables in place for agents and 2d

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* small fixes

* checking all notebooks

* stachenfeld works

* fixed bug in Weber plus a test in the backend runner

* update path

* updating installation to colab

* updating colab link

* alter the way Whittington2020 is imported in tests

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* update path for TEM

* Update README.md

* updated readme TOC

* saved model

* small adjustments to plotting functions and documentation

* pre-commit

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: clementine <clementine.domine98@gmail.com>
Co-authored-by: niksirbi <niko.sirbiladze@gmail.com>
4 people authored Aug 24, 2023
1 parent 8685528 commit e6d5819
Showing 10 changed files with 206 additions and 82 deletions.
69 changes: 69 additions & 0 deletions README.md
@@ -9,10 +9,15 @@
<!-- ALL-CONTRIBUTORS-BADGE:END -->

# NeuralPlayground


## *A standardised environment for the hippocampus and entorhinal cortex models.* <a href="https://githubtocolab.com/SainsburyWellcomeCentre/NeuralPlayground/blob/main/examples/colab_example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>



<img src="images/NPG_GH-social-preview_white-bg.jpg" alt="NeuralPlayground Logo" width="500"/>


<!-- TOC -->
- [Introduction](#introduction)
- [Installation](#installation)
@@ -122,6 +127,7 @@ All relevant documents can be found in [Documents](https://github.com/Clementine
### Agent Arena interaction
You can pick an Agent and an Arena of your choice to run a simulation.
Agents and arenas have a simple interface to interact with each other, as in [OpenAI gymnasium](https://gymnasium.farama.org/).
@@ -181,6 +187,69 @@ folder with the name you provide, keeping track of any errors and logs. You can
```python
# import an agent based on a plasticity model of grid cells
from neuralplayground.agents import Weber2018
# import a square 2D arena
from neuralplayground.arenas import Simple2D
# Initialise the agent
agent = Weber2018()
# Initialise the arena
arena = Simple2D()
```
To make the agent interact with the arena, a simple loop can look like the following:
```python
iterations = 1000
obs, state = arena.reset()
for j in range(iterations):
    # Observe to choose an action
    action = agent.act(obs)
    # Run environment for given action
    obs, state, reward = arena.step(action)
    # Update agent parameters
    update_output = agent.update()
```
This process is the base of our package. We provide a more detailed example in <a href="https://githubtocolab.com/SainsburyWellcomeCentre/NeuralPlayground/blob/main/examples/colab_example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>.
Also, specific examples of how to use each module can be found in [agent](https://github.com/ClementineDomine/NeuralPlayground/tree/main/examples/agent_examples),
[arena](https://github.com/SainsburyWellcomeCentre/NeuralPlayground/blob/main/examples/arena_examples/arena_examples.ipynb)
and [experiment](https://github.com/SainsburyWellcomeCentre/NeuralPlayground/blob/main/examples/experimental_examples/experimental_data_examples.ipynb) jupyter notebooks.
> **Note**
>
> Check our Tolman-Eichenbaum Machine Implementation in
[this branch](https://github.com/ClementineDomine/NeuralPlayground/tree/whittington_2020) (work in progress); you will also need to install [pytorch](https://pytorch.org/) to run it.
### Simulation Manager
We provide some backend tools to run simulations and compare the results with experimental data in the background,
including some methods to keep track of your runs, and a comparison board to visualise the results. You can check
the details in [Simulation Manager](https://github.com/SainsburyWellcomeCentre/NeuralPlayground/blob/main/examples/comparisons_examples/simulation_manager.ipynb)
and [Comparison Board](https://github.com/SainsburyWellcomeCentre/NeuralPlayground/blob/main/examples/comparisons_examples/comparison_from_manager.ipynb) jupyter notebooks. In addition,
we have some [default simulations](https://github.com/SainsburyWellcomeCentre/NeuralPlayground/blob/main/neuralplayground/backend/default_simulation.py)
you can try out, for which you don't need to write much code, since they are
implemented using a [SingleSim](https://github.com/SainsburyWellcomeCentre/NeuralPlayground/blob/55085e792f5dc446e0c3a808cd0d9901a37484a8/neuralplayground/backend/simulation_manager.py#L211)
class. For example:
```python
# Import default simulation, which is a SingleSim
from neuralplayground.backend.default_simulation import stachenfeld_in_2d
from neuralplayground.backend.default_simulation import weber_in_2d
stachenfeld_in_2d.run_sim(save_path="my_results")
```
This class allows you to run a simulation with a single line of code, and it will automatically save the results in a
folder with the name you provide, keeping track of any errors and logs. You can also use our
[Simulation Manager](https://github.com/SainsburyWellcomeCentre/NeuralPlayground/blob/main/examples/comparisons_examples/simulation_manager.ipynb)
to run multiple simulations at once, save the results, and keep track of each run and possible errors for easy debugging, among other functions.
```python
# Import Simulation Manager
from neuralplayground.backend import SimulationManager
```
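The Simulation Manager snippet above is cut off by the diff view. For orientation only, here is a hedged sketch of how a manager like this might be driven; every keyword argument and method name below is an assumption for illustration, not something confirmed by this commit.

```python
# Hypothetical sketch only: constructor arguments and method names are assumptions.
from neuralplayground.backend import SimulationManager
from neuralplayground.backend.default_simulation import stachenfeld_in_2d, weber_in_2d

# Queue two of the default SingleSim objects and run them in one go.
my_sims = [stachenfeld_in_2d, weber_in_2d]
my_manager = SimulationManager(
    simulation_list=my_sims,      # assumed keyword for the list of SingleSim objects
    runs_per_sim=2,               # assumed: repeat each simulation twice
    manager_id="weber_vs_stachenfeld",
)
my_manager.run_all()              # assumed entry point that executes every queued run
my_manager.check_run_status()     # assumed helper that reports finished/failed runs
```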
10 changes: 5 additions & 5 deletions examples/comparisons_examples/comparison_board.ipynb

Large diffs are not rendered by default.

77 changes: 32 additions & 45 deletions examples/comparisons_examples/comparison_from_manager.ipynb

Large diffs are not rendered by default.

8 changes: 6 additions & 2 deletions neuralplayground/arenas/arena_core.py
@@ -9,6 +9,7 @@

class Environment(Env):
"""Abstract parent environment class
Attributes
----------
environment_name : str
@@ -49,6 +50,9 @@ class Environment(Env):
Restore environment saved using save_environment method
get_trajectory_data(self):
Returns interaction history
reward_function(self, action, state):
Reward curriculum as a function of action, state
and attributes of the environment
"""

def __init__(self, environment_name: str = "Environment", time_step_size: float = 1.0, **env_kwargs):
@@ -192,8 +196,8 @@ def get_trajectory_data(self):
return self.history

def reward_function(self, action, state):
"""Code reward curriculum here as a function of action and state
and attributes of the environment if you want
"""Reward curriculum as a function of action, state
and attributes of the environment
Parameters
----------
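The docstring added here says the reward curriculum is a function of the action, the state, and the environment's attributes. As a hedged illustration (not part of this commit), a subclass might override `reward_function` like the sketch below; the `GoalReward2D` class and its `goal_position`/`goal_radius` attributes are invented for the example.

```python
import numpy as np

from neuralplayground.arenas import Simple2D


class GoalReward2D(Simple2D):
    """Illustrative arena: reward 1.0 when the agent is within a radius of a goal."""

    def __init__(self, goal_position=(0.0, 0.0), goal_radius=0.5, **env_kwargs):
        super().__init__(**env_kwargs)
        self.goal_position = np.asarray(goal_position)
        self.goal_radius = goal_radius

    def reward_function(self, action, state):
        # Reward depends on the state and on attributes of the environment,
        # as described by the new docstring; the action is unused in this toy case.
        distance_to_goal = np.linalg.norm(np.asarray(state) - self.goal_position)
        return 1.0 if distance_to_goal < self.goal_radius else 0.0
```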
51 changes: 34 additions & 17 deletions neuralplayground/arenas/simple2d.py
@@ -13,6 +13,27 @@

class Simple2D(Environment):
"""
Methods (Some in addition to Environment class)
----------
__init__(self, environment_name="2DEnv", **env_kwargs):
Initialise the class
reset(self):
Reset the environment variables
step(self, action):
Increment the global step count of the agent in the environment and move
the agent in a random direction with a fixed step size
plot_trajectory(self, history_data=None, ax=None):
Plot the trajectory of the agent in the environment (in addition to the Environment class methods).
validate_action(self, pre_state, action, new_state):
Check if the new state is crossing any walls in the arena.
render(self, history_length=30):
Render the environment live through iterations as in OpenAI gym.
_create_default_walls(self):
Generates outer border of the 2D environment based on the arena limits
_create_custom_walls(self):
Custom walls method. In this case it is empty since the environment is a simple square room.
Override this method to generate more walls; see the jupyter notebook with examples.
Attributes (Some in addition to the Environment class)
----------
state: ndarray
@@ -25,32 +46,30 @@ class Simple2D(Environment):
Saved history over simulation steps (action, state, new_state, reward, global_steps)
global_steps: int
Counter of the number of steps in the environment
arena_x_limits: float
Size of the environment in the x direction (width)
arena_y_limits: float
Size of the environment in the y direction (depth)
room_width: int
Size of the environment in the x direction
room_depth: int
Size of the environment in the y direction
metadata: dict
Dictionary containing the metadata
state_dims_labels: list
List of the labels of the dimensions of the state
observation_space: gym.spaces
specify the range of observations as in openai gym
action_space: gym.spaces
specify the range of actions as in openai gym
wall_list: list
List of the walls in the environment
observation: ndarray
Fully observable environment, make_observation returns the state
Array of the observation of the agent in the environment (could be modified as the environment evolves)
agent_step_size: float
Size of the step when executing movement, agent_step_size*global_steps will give
a measure of the total distance traversed by the agent
Methods (Some in addition to Environment class)
----------
__init__(self, environment_name="2DEnv", **env_kwargs):
Initialise the class
reset(self):
Reset the environment variables
step(self, action):
Increment the global step count of the agent in the environment and moves
the agent in a random direction with a fixed step size
plot_trajectory(self, history_data=None, ax=None):
Plot the Trajectory of the agent in the environment. In addition to environment class.
_create_default_walls(self):
Generates outer border of the 2D environment based on the arena limits
"""

def __init__(
@@ -103,11 +122,9 @@ def __init__(
dtype=np.float64,
)
self.state_dims_labels = ["x_pos", "y_pos"]

self._create_default_walls()
self._create_custom_walls()
self.wall_list = self.default_walls + self.custom_walls
self.state_dims_labels = ["x_pos", "y_pos"]
self.reset()

def _create_default_walls(self):
@@ -331,7 +348,7 @@ def plot_trajectory(
return ax

def render(self, history_length=30):
"""Render the environment live through iterations"""
"""Render the environment live through iterations as in OpenAI gym"""
f, ax = plt.subplots(1, 1, figsize=(8, 6))
canvas = FigureCanvas(f)
history = self.history[-history_length:]
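The new docstring points at `_create_custom_walls` as the hook for adding interior walls on top of the default outer border. A minimal hedged sketch of an override follows; the `RoomWithDivider` class, the wall coordinates, and the two-end-point wall format are assumptions inferred from `self.default_walls + self.custom_walls` above rather than verified against the arena examples notebook.

```python
import numpy as np

from neuralplayground.arenas import Simple2D


class RoomWithDivider(Simple2D):
    """Illustrative arena: the default square room plus one dividing wall."""

    def _create_custom_walls(self):
        # The base class leaves self.custom_walls empty; here we add a single
        # vertical wall. The wall format (two [x, y] end points) is an assumption.
        self.custom_walls = [
            np.array([[0.0, -2.0], [0.0, 2.0]]),
        ]
```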
8 changes: 5 additions & 3 deletions neuralplayground/config/default_config.yaml
@@ -25,11 +25,13 @@ plot_config:
grid: False

agent_comparison_plot:
figsize: [12, 15]
fontsize: 10
figsize: [7, 12]
fontsize: 12
plot_sac_exp: True
plot_sac_agt: True

text_fontsize: 12
horizontal_axis_spacing: 0.0
vertical_axis_spacing: 0.4

table_plot:
table_fontsize: 7
3 changes: 3 additions & 0 deletions neuralplayground/config/plot_config.py
@@ -101,6 +101,9 @@ def __init__(self, **kwargs):
self.FONTSIZE = kwargs["fontsize"]
self.PLOT_SAC_EXP = kwargs["plot_sac_exp"]
self.PLOT_SAC_AGT = kwargs["plot_sac_agt"]
self.TEXT_FONTSIZE = kwargs["text_fontsize"]
self.HORIZONTAL_AXIS_SPACING = kwargs["horizontal_axis_spacing"]
self.VERTICAL_AXIS_SPACING = kwargs["vertical_axis_spacing"]


class TableConfig(NPGConfig):
17 changes: 16 additions & 1 deletion neuralplayground/experiments/experiment_core.py
@@ -2,7 +2,22 @@


class Experiment(object):
"""Main abstract experiment class, created just for consistency purposes"""
"""Main abstract experiment class, created just for consistency purposes
Attributes
----------
experiment_name: str
Name of the experiment
data_url: str
URL to the data used in the experiment; make sure it is publicly available for usage and download
paper_url: str
URL to the paper describing the experiment
Methods
-------
_find_data_path(data_path: str = None)
Fetch data from NeuralPlayground data repository if no data path is supplied by the user
"""

def __init__(self, experiment_name: str = "abstract_experiment", data_url: str = None, paper_url: str = None):
"""Constructor for the abstract Experiment class
2 changes: 2 additions & 0 deletions neuralplayground/experiments/hafting_2008_data.py
@@ -59,6 +59,8 @@ class Hafting2008Data(Experiment):
Get identifiers to sort the experimental data
get_recording_data(recording_index: int = None)
Get experimental data for a given recording index
get_tetrode_data(session_data: str = None, tetrode_id: str = None)
Return time stamp, position and spikes for a given session and tetrode
plot_recording_tetr(recording_index: Union[int, tuple, list] = None,
save_path: Union[str, tuple, list] = None,
ax: Union[mpl.axes.Axes, tuple, list] = None,
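The docstring now lists `get_tetrode_data(session_data, tetrode_id)`, which returns time stamps, positions and spikes for a given session and tetrode. A hedged sketch of a call is shown below; the import path, the fall-back behaviour with default arguments, and the exact return types are assumptions based only on the docstring wording.

```python
from neuralplayground.experiments import Hafting2008Data

# Data is fetched from the NeuralPlayground data repository if no path is given.
hafting_data = Hafting2008Data()

# With no arguments, the method presumably falls back to a default session/tetrode.
time_stamps, positions, spikes = hafting_data.get_tetrode_data()
print(time_stamps.shape, positions.shape, spikes.shape)
```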
43 changes: 34 additions & 9 deletions neuralplayground/plotting/plot_utils.py
@@ -188,7 +188,18 @@ def render_mpl_table(data, ax=None, **kwargs):
return ax.get_figure(), ax


def make_agent_comparison(envs, parameters, agents, exps=None, recording_index=None, tetrode_id=None, GridScorer=None):
def make_agent_comparison(
envs,
parameters,
agents,
exps=None,
recording_index=None,
tetrode_id=None,
GridScorer=None,
figsize=None,
horizontal_axis_spacing=None,
vertical_axis_spacing=None,
):
"""Plot function to compare agents in a given environment
Parameters
@@ -214,7 +225,16 @@ def make_agent_comparison(envs, parameters, agents, exps=None, recording_index=N
ax: mpl.axes._subplots.AxesSubplot (matplotlib axis from subplots)
Modified axis where the comparison is plotted
"""
from neuralplayground.config import PLOT_CONFIG

config_vars = PLOT_CONFIG.AGENT_COMPARISON
if figsize is not None:
config_vars.FIGSIZE = figsize
if horizontal_axis_spacing is not None:
config_vars.HORIZONTAL_AXIS_SPACING = horizontal_axis_spacing
if vertical_axis_spacing is not None:
config_vars.VERTICAL_AXIS_SPACING = vertical_axis_spacing

exp_data = False
for j, env in enumerate(envs):
if hasattr(env, "show_data"):
@@ -243,20 +263,23 @@ def make_agent_comparison(envs, parameters, agents, exps=None, recording_index=N
3, len(agents) + len(envs), figsize=(config_vars.FIGSIZE[0] * (len(agents) + 1), config_vars.FIGSIZE[1])
)

plt.subplots_adjust(wspace=config_vars.HORIZONTAL_AXIS_SPACING)
plt.subplots_adjust(hspace=config_vars.VERTICAL_AXIS_SPACING)

for k, env in enumerate(envs):
# render_mpl_table( pd.DataFrame([parameters[0]["env_params"]]),ax=ax[0, k],)
ax[0, k].text(0, 1.1, env.environment_name, fontsize=config_vars.FONTSIZE)
ax[0, k].text(0, 1.1, env.environment_name, fontsize=config_vars.TEXT_FONTSIZE)
ax[0, k].set_axis_off()
for p, text in enumerate(parameters[k]["env_params"]):
ax[0, k].text(0, 1, "Event param", fontsize=10)
ax[0, k].text(0, 1, "Env param", fontsize=config_vars.TEXT_FONTSIZE)
variable = parameters[k]["env_params"][text]
ax[0, k].text(0, 0.9 - ((p) * 0.1), text + ": " + str(variable), fontsize=10)
ax[0, k].text(0, 0.9 - ((p) * 0.1), text + ": " + str(variable), fontsize=config_vars.TEXT_FONTSIZE)
ax[0, k].set_axis_off()

if hasattr(env, "plot_trajectory"):
env.plot_trajectory(ax=ax[1, k])
else:
ax[1, k].text(0, 1, "No Trajectory map", fontsize=10)
ax[1, k].text(0, 1, "No Trajectory map", fontsize=config_vars.TEXT_FONTSIZE)
ax[1, k].set_axis_off()

if hasattr(env, "show_data"):
@@ -282,12 +305,14 @@ def make_agent_comparison(envs, parameters, agents, exps=None, recording_index=N
for j, text in enumerate(parameters[i]["agent_params"]):
if j > 9:
variable = parameters[i]["agent_params"][text]
ax[0, i + k + 1].text(0.7, 1 - ((j - 9) * 0.1), text + ": " + str(variable), fontsize=10)
ax[0, i + k + 1].text(
0.6, 1 - ((j - 9) * 0.1), text + ": " + str(variable), fontsize=config_vars.TEXT_FONTSIZE
)
ax[0, i + k + 1].set_axis_off()
else:
ax[0, i + k + 1].text(0, 1, "Agent param", fontsize=10)
ax[0, i + k + 1].text(0, 1, "Agent param", fontsize=config_vars.TEXT_FONTSIZE)
variable = parameters[i]["agent_params"][text]
ax[0, i + k + 1].text(0, 0.9 - ((j) * 0.1), text + ": " + str(variable), fontsize=10)
ax[0, i + k + 1].text(0, 0.9 - ((j) * 0.1), text + ": " + str(variable), fontsize=config_vars.TEXT_FONTSIZE)
ax[0, i + k + 1].set_axis_off()
if hasattr(agent, "plot_rate_map"):
agent.plot_rate_map(ax=ax[1][1 + i + k])
@@ -296,7 +321,7 @@ def make_agent_comparison(envs, parameters, agents, exps=None, recording_index=N
r_out_im=agent.get_rate_map_matrix(), plot=config_vars.PLOT_SAC_AGT, ax=ax[2, 1 + i + k]
)
else:
ax[1, i + k + 1].text(0, 1, "No Rate map", fontsize=10)
ax[1, i + k + 1].text(0, 1, "No Rate map", fontsize=config_vars.TEXT_FONTSIZE)
ax[1][i + k + 1].set_axis_off()
ax[2][i + k + 1].set_axis_off()
# render_mpl_table(data=pd.DataFrame([parameters[i]["agent_params"]]), ax=ax[0, 1 + i + k])
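With the new keyword arguments, the configured figure size and subplot spacing can be overridden per call instead of editing `default_config.yaml`. A hedged usage sketch follows; the import path, the single-entry `parameters` list, and the choice of agent and arena are illustrative, not taken from this commit.

```python
from neuralplayground.agents import Weber2018
from neuralplayground.arenas import Simple2D
from neuralplayground.plotting.plot_utils import make_agent_comparison

envs = [Simple2D()]
agents = [Weber2018()]
# One dict per column, holding the parameters printed in the top row of the figure.
parameters = [{"env_params": {"time_step_size": 0.1}, "agent_params": {"agent_step_size": 0.5}}]

# figsize and the two spacing arguments fall back to PLOT_CONFIG.AGENT_COMPARISON
# (i.e. default_config.yaml) when left as None.
ax = make_agent_comparison(
    envs=envs,
    parameters=parameters,
    agents=agents,
    figsize=[7, 12],
    horizontal_axis_spacing=0.1,
    vertical_axis_spacing=0.4,
)
```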
