
Commit

Merge pull request #80 from hadar-simulator/release/v0.3.1
Release/v0.3.1
FrancoisJ authored Jul 10, 2020
2 parents 6d8ce72 + d45d68d commit 56c7581
Showing 39 changed files with 973 additions and 706 deletions.
73 changes: 37 additions & 36 deletions README.md
@@ -1,9 +1,11 @@
# Hadar
![PyPI](https://img.shields.io/pypi/v/hadar)
![GitHub Workflow Status (branch)](https://img.shields.io/github/workflow/status/hadar-simulator/hadar/main/master)
![https://sonarcloud.io/dashboard?id=hadar-solver_hadar](https://sonarcloud.io/api/project_badges/measure?project=hadar-solver_hadar&metric=alert_status)
![https://sonarcloud.io/dashboard?id=hadar-solver_hadar](https://sonarcloud.io/api/project_badges/measure?project=hadar-solver_hadar&metric=coverage)
![GitHub](https://img.shields.io/github/license/hadar-simulator/hadar)
[![PyPI](https://img.shields.io/pypi/v/hadar)](https://pypi.org/project/hadar/)
[![GitHub Workflow Status (branch)](https://img.shields.io/github/workflow/status/hadar-simulator/hadar/main/master)](https://github.com/hadar-simulator/hadar/action)
[![https://sonarcloud.io/dashboard?id=hadar-solver_hadar](https://sonarcloud.io/api/project_badges/measure?project=hadar-solver_hadar&metric=alert_status)](https://sonarcloud.io/dashboard?id=hadar-solver_hadar)
[![https://sonarcloud.io/dashboard?id=hadar-solver_hadar](https://sonarcloud.io/api/project_badges/measure?project=hadar-solver_hadar&metric=coverage)](https://sonarcloud.io/dashboard?id=hadar-solver_hadar)
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/hadar-simulator/hadar/master?filepath=examples)
[![website](https://img.shields.io/badge/website-hadar--simulator.org-blue)](https://www.hadar-simulator.org/)
[![GitHub](https://img.shields.io/github/license/hadar-simulator/hadar)](https://github.com/hadar-simulator/hadar/blob/master/LICENSE)


Hadar is an adequacy Python library for deterministic and stochastic computation
@@ -15,48 +17,47 @@ Each kind of network has adequacy needs. On one side, some network nodes need
items such as watts, litres, or packages; on the other side, some network nodes produce items.
Applying adequacy to a network means trying to find the best available exchanges to avoid any shortage at the lowest cost.

For example, a electric grid can have some nodes wich produce too more power and some nodes wich produce not enough power.
```
+---------+ +---------+
| Node A | | Node B |
| | | |
| load=20 +-------------+ load=20 |
| prod=30 | | prod=10 |
| | | |
+---------+ +---------+
```
For example, an electric grid can have some nodes which produce too much power and some nodes which produce not enough power.

![adequacy](examples/Get%20Started/figure.png)

In this case, A produces 10 more and B needs 10 more. Performing adequacy is quite easy: A will share 10 with B.
```
+---------+ +---------+
| Node A | | Node B |
| | share 10 | |
| load=20 +------------>+ load=20 |
| prod=30 | | prod=10 |
| | | |
+---------+ +---------+
```

### Complexity comes soon
The above example is simple, but the problem becomes very tricky with 10, 20 or 500 nodes!

Moreovore all have a price ! Node can have many type of production, and each kind of production has its unit cost. Node can have also many consumptions with specific unavailability cost. Links between node have also max capacity and cost.
Moreover, everything has a price! A node can have many types of production, and each kind of production has its unit cost. A node can also have many consumptions, each with a specific unavailability cost. Links between nodes also have a maximum capacity and a cost.

Network adequacy is not simple.

## Hadar
Hadar compute adequacy from simple to complex network. For example, to compute above network, just few line need:
Hadar computes adequacy for simple to complex networks. For example, computing the above network needs just a few lines:

``` python
from hadar.solver.input import *
from hadar.solver.study import solve
import hadar as hd

study = hd.Study(horizon=3)\
.network()\
.node('a')\
.consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')\
.production(cost=10, quantity=[30, 20, 10], name='prod')\
.node('b')\
.consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')\
.production(cost=10, quantity=[10, 20, 30], name='prod')\
.link(src='a', dest='b', quantity=[10, 10, 10], cost=2)\
.link(src='b', dest='a', quantity=[10, 10, 10], cost=2)\
.build()

optimizer = hd.LPOptimizer()
res = optimizer.solve(study)
```

study = Study(['a', 'b']) \
.add_on_node('a', data=Consumption(cost=10 ** 6, quantity=[20], type='load')) \
.add_on_node('a', data=Production(cost=10, quantity=[30], type='prod')) \
.add_on_node('b', data=Consumption(cost=10 ** 6, quantity=[20], type='load')) \
.add_on_node('b', data=Production(cost=20, quantity=[10], type='prod')) \
.add_border(src='a', dest='b', quantity=[10], cost=2) \
And a few more lines to display graphical results.

res = solve(study)
```python
plot = hd.HTMLPlotting(agg=hd.ResultAnalyzer(study, res),
node_coord={'a': [2.33, 48.86], 'b': [4.38, 50.83]})
plot.network().node('a').stack()
plot.network().map(t=0, zoom=2.5)
```

Get more information and examples at [https://www.hadar-simulator.org/](https://www.hadar-simulator.org/)
1 change: 1 addition & 0 deletions docs/requirements.txt
@@ -5,6 +5,7 @@ plotly
jupyter
matplotlib
requests
progress
sphinx
sphinx-rtd-theme
sphinx-autobuild
40 changes: 24 additions & 16 deletions docs/source/architecture/analyzer.rst
@@ -12,7 +12,7 @@ Today, there is only :code:`ResultAnalyzer`, with two feature levels:
Before discussing these features, let's see how the data are transformed.

Flatten Data
---------
------------

As said above, objects are nice for encapsulating data and representing it in an agnostic form. Objects can be serialized into JSON or something else to be used by other software, perhaps written in another language. But keeping objects to analyze data is awkward.

@@ -77,14 +77,14 @@ Links follow the same pattern; only the hierarchical structure naming changes. There are no
+------+------+------+------+------+------+------+
| 10 | 100 | 81 | fr | uk | 1 | 1 |
+------+------+------+------+------+------+------+
| ... | ... | ... | ... | ... | .. | ... |
| ... | ... | ... | ... | ... | .. | .. |
+------+------+------+------+------+------+------+

It is done by the :code:`_build_link(study: Study, result: Result) -> pd.DataFrame` method.
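
For intuition only, here is a hedged sketch of what such a flat link table could look like when built with pandas (the column names below are illustrative, inferred from the table above, and not necessarily Hadar's exact ones) ::

    import pandas as pd

    # Illustrative flat structure: one row per (link, scenario, time step)
    flat = pd.DataFrame({
        'cost': [2, 2],
        'avail': [10, 100],
        'used': [5, 81],
        'src': ['fr', 'fr'],
        'dest': ['be', 'uk'],
        'scn': [0, 1],
        't': [0, 1],
    })

    # Indexing by these columns makes later slicing by node, scenario or time straightforward
    flat = flat.set_index(['src', 'dest', 'scn', 't']).sort_index()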


Low level analysis
------------------
Low level analysis power with a *FluentAPISelector*
---------------------------------------------------

When you observe flat data, there are two kinds of data: *content*, like cost, given, asked; and *indexes*, described by node, name, scn, t.

@@ -114,23 +114,29 @@ If a first index such as node or scenario has only one element, it is removed.
This result can be obtained with this line of code. ::

agg = hd.ResultAnalyzer(study, result)
df = agg.agg_prod(agg.inode['fr'], agg.scn[0], agg.itime[50:60], agg.iname)
df = agg.network().node('fr').scn(0).time(slice(50, 60)).production()

As you can see, the user selects the index hierarchy by ordering :code:`agg.ixxx`. Then the user specifies a filter with :code:`agg.ixxx[yy]`.
For the analyzer, the Fluent API respects these rules:

Behind this mechanism, there are :code:`Index` objects. As you can see directly in the code ::
* The API flow begins with :code:`network()`

* The API flow must contain exactly one each of the :code:`node()`, :code:`time()` and :code:`scn()` elements

* The API flow must contain exactly one of :code:`link()`, :code:`production()` or :code:`consumption()`

@property
def inode(self) -> NodeIndex:
"""
Get a node index to specify node slice to aggregate consumption or production.
* Except for :code:`network()`, the API imposes no order. The user's order defines the data hierarchy.

:return: new instance of NodeIndex()
"""
return NodeIndex()
* Given these rules, an API flow is always five elements long, as sketched below.
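
As an illustration, here is a hedged sketch of queries respecting these rules (node name and ranges reuse the example above; each call returns a pandas DataFrame) ::

    agg = hd.ResultAnalyzer(study, result)

    # network / node / scn / time / production: always five elements
    df_prod = agg.network().node('fr').scn(0).time(slice(50, 60)).production()

    # the order of node / scn / time is free; it only changes the resulting index hierarchy
    df_cons = agg.network().time(slice(50, 60)).scn(0).node('fr').consumption()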

Behind this mechanism, there are :code:`Index` objects. As you can see directly in the code ::

...
self.consumption = lambda x=None: self._append(ConsIndex(x))
...
self.time = lambda x=None: self._append(TimeIndex(x))
...

Each kind of index has to inherent from this class. :code:`Index` object encapsulate column metadata to use and range of filtered elements to keep (accessible by overriding :code:`__getitem__` method). Then, Hadar has child classes with good parameters : :code:`NameIndex` , :code:`NodeIndex` , :code:`ScnIndex` , :code:`TimeIndex` , :code:`SrcIndex` , :code:`DestIndex` . For example you can find below :code:`NodeIndex` implementation ::
Each kind of index has to inherit from this class. An :code:`Index` object encapsulates the column metadata to use and the range of filtered elements to keep (accessible by overriding the :code:`__getitem__` method). Then, Hadar has child classes with the right parameters: :code:`ConsIndex` , :code:`ProdIndex` , :code:`NodeIndex` , :code:`ScnIndex` , :code:`TimeIndex` , :code:`LinkIndex` , :code:`DestIndex` . For example, you can find below the :code:`NodeIndex` implementation ::

class NodeIndex(Index[str]):
"""Index implementation to filter nodes"""
@@ -139,7 +145,9 @@ Each kind of index has to inherit from this class. :code:`Index` object encapsu


.. image:: /_static/architecture/analyzer/ulm-index.png
Index instantiation are completely hidden for user. It created implicitly when user types :code:`agg.ixxx[yy]`. Then, hadar will


Index instantiation is completely hidden from the user. Then, Hadar will

#. check that mandatory indexes are given with :code:`_assert_index` method.

33 changes: 24 additions & 9 deletions docs/source/architecture/optimizer.rst
@@ -146,22 +146,37 @@ Study

The most important attribute could be :code:`quantity`, which represents the quantity of power used in the network. For a link, it is a transfer capacity. For a production, it is a generation capacity. For a consumption, it is a forced load to sustain.

User can construct Study step by step thanks to a *fluent API* ::
Fluent API Selector
*******************

import hadar as hd
The user can construct a Study step by step thanks to a *Fluent API* Selector ::

study = hd.Study(['a', 'b'], horizon=3) \
.add_on_node('a', data=hd.Consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')) \
.add_on_node('a', data=hd.Production(cost=10, quantity=[30, 20, 10], name='prod')) \
.add_on_node('b', data=hd.Consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')) \
.add_on_node('b', data=hd.Production(cost=20, quantity=[10, 20, 30], name='prod')) \
.add_link(src='a', dest='b', quantity=[10, 10, 10], cost=2) \
.add_link(src='b', dest='a', quantity=[10, 10, 10], cost=2) \
import hadar as hd

study = hd.Study(horizon=3)\
.network()\
.node('a')\
.consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')\
.production(cost=10, quantity=[30, 20, 10], name='prod')\
.node('b')\
.consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')\
.production(cost=10, quantity=[10, 20, 30], name='prod')\
.link(src='a', dest='b', quantity=[10, 10, 10], cost=2)\
.link(src='b', dest='a', quantity=[10, 10, 10], cost=2)\
.build()

optim = hd.LPOptimizer()
res = optim.solve(study)

In the case of the optimizer, the *Fluent API Selector* is implemented by the :code:`NetworkFluentAPISelector` and
:code:`NodeFluentAPISelector` classes. As you can guess from the above example, the optimizer rules for the API Selector are:

* The API flow begins with :code:`network()` and ends with :code:`build()`

* You can only go downstream deeper step by step (e.g. :code:`network()`, then :code:`node()`, then :code:`consumption()`)

* But you can go back upstream as you want (e.g. directly from :code:`consumption()` to :code:`network()`), as sketched below
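
A minimal sketch of this navigation (values are illustrative and mirror the example above); note how the flow climbs back from :code:`consumption()` to :code:`node()` and from :code:`production()` to :code:`link()` ::

    import hadar as hd

    study = hd.Study(horizon=1)\
        .network()\
            .node('a')\
                .consumption(cost=10 ** 6, quantity=[20], name='load')\
            .node('b')\
                .production(cost=10, quantity=[20], name='prod')\
            .link(src='b', dest='a', quantity=[20], cost=2)\
        .build()

    optim = hd.LPOptimizer()
    res = optim.solve(study)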

To help the user, the quantity field is flexible:

* lists are converted to numpy arrays
25 changes: 14 additions & 11 deletions docs/source/architecture/overview.rst
@@ -61,25 +61,28 @@ Scikit-learn is the best example of a high-abstraction-level API. For example, if
How many people using this feature know that scikit-learn tries to project data into a higher-dimensional space to find a linear regression inside it? And that, to accelerate computation, it uses a mathematical feature called *the kernel trick*, because the problem respects strict requirements? Perhaps just a few people, and that is all the beauty of a high-level API: it hides the background gears.
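
For instance, a hedged sketch of the kind of three-line usage alluded to here (the toy dataset and parameters are purely illustrative) ::

    from sklearn.svm import SVC

    X = [[0, 0], [1, 1], [0, 1], [1, 0]]   # tiny XOR-like dataset
    y = [0, 0, 1, 1]

    clf = SVC(kernel='rbf').fit(X, y)      # the kernel trick hides behind kernel='rbf'
    print(clf.predict([[0.9, 0.1]]))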


Hadar tries to keep this high abstraction features. Look at the *Get Started* example ::
Hadar tries to keep this high level of abstraction. Look at the `Get Started <https://www.hadar-simulator.org/tutorial/?name=Get%20Started>`_ example ::

import hadar as hd
study = hd.Study(['a', 'b'], horizon=3) \
.add_on_node('a', data=hd.Consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')) \
.add_on_node('a', data=hd.Production(cost=10, quantity=[30, 20, 10], name='prod')) \
.add_on_node('b', data=hd.Consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')) \
.add_on_node('b', data=hd.Production(cost=20, quantity=[10, 20, 30], name='prod')) \
.add_link(src='a', dest='b', quantity=[10, 10, 10], cost=2) \
.add_link(src='b', dest='a', quantity=[10, 10, 10], cost=2) \
study = hd.Study(horizon=3)\
.network()\
.node('a')\
.consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')\
.production(cost=10, quantity=[30, 20, 10], name='prod')\
.node('b')\
.consumption(cost=10 ** 6, quantity=[20, 20, 20], name='load')\
.production(cost=10, quantity=[10, 20, 30], name='prod')\
.link(src='a', dest='b', quantity=[10, 10, 10], cost=2)\
.link(src='b', dest='a', quantity=[10, 10, 10], cost=2)\
.build()

optim = hd.LPOptimizer()
res = optim.solve(study)


Create a study as you would draw it on paper: place your nodes, attach some productions, consumptions and links, and run the optimizer.

The Optimizer, Analyzer and Viewer parts are built around the same API, called the *Fluent API Selector* inside the code. Each part has its own flavour.

Go Next
-------
22 changes: 21 additions & 1 deletion docs/source/architecture/viewer.rst
@@ -5,4 +5,24 @@ Even with the highest level analyzer features, data remains a simple matrix or tab

The Viewer uses the Analyzer API to build plots. It is like an extra layer that converts numeric results into visual results.

There are many viewers, all inheriting from the :code:`ABCPlotting` abstract class. Available plots are identical between viewers; only the technologies used to build these plots change. Today, we have one type of plotting, :code:`HTMLPlotting`, which is built upon the plotly library to produce interactive HTML plots.
The Viewer is split into two domains. The first part implements the *FluentAPISelector*; it uses the ResultAnalyzer to compute results and performs the last computations before displaying graphics. This behaviour is coded inside all the :code:`*FluentAPISelector` classes.

These classes are used directly by the user when asking for a graphic ::

plot = ...
plot.network().node('fr').consumption('load').gaussian(t=4)
plot.network().map(t=0, scn=0)
plot.network().node('de').stack(scn=7)

For the Viewer, the Fluent API has these rules:

* The API begins with :code:`network`.

* The user can only go downstream step by step into the data, specifying an element choice at each step.

* Once the wanted scope is reached (network, node, production, etc.), the graphics available for that scope can be called.


The second part of the Viewer handles only plotting. Hadar can support many different libraries and technologies for plotting. A new plotting backend just has to implement :code:`ABCPlotting` and :code:`ABCElementPlotting` . Today, one HTML implementation exists with the plotly library, inside :code:`HTMLPlotting` and :code:`HTMLElementPlotting`.

Data sent to the plotting classes are complete, pre-computed and ready to display.
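
To make the split concrete, here is a minimal conceptual sketch (class and method names are illustrative stand-ins, not Hadar's real :code:`ABCPlotting` / :code:`ABCElementPlotting` interfaces) ::

    class ElementPlotting:
        """Stand-in for the pure plotting layer (plotly, matplotlib, ...)."""
        def timeline(self, df, title: str):
            print(f'render "{title}" with {len(df)} rows')  # a real backend would draw here

    class NodeSelector:
        """Stand-in for a *FluentAPISelector: prepares data, then delegates rendering."""
        def __init__(self, agg, plotting, node: str):
            self.agg, self.plotting, self.node = agg, plotting, node

        def stack(self, scn: int = 0):
            # 1) use the analyzer to get complete, pre-computed data
            df = self.agg.network().node(self.node).scn(scn).time().production()
            # 2) hand ready-to-display data to the plotting layer
            return self.plotting.timeline(df, title=f'{self.node} stack')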
4 changes: 2 additions & 2 deletions docs/source/conf.py
@@ -23,7 +23,7 @@
author = 'RTE'

# The full version, including alpha/beta/rc tags
release = '0.1.0'
release = hadar.__version__


# -- General configuration ---------------------------------------------------
@@ -59,4 +59,4 @@

nbsphinx_execute = 'never'

autodoc_mock_imports = ['pandas', 'numpy', 'ortools', 'plotly', 'jupyter', 'matplotlib', 'requests']
autodoc_mock_imports = ['pandas', 'numpy', 'ortools', 'plotly', 'jupyter', 'matplotlib', 'requests', 'progress']
4 changes: 2 additions & 2 deletions docs/source/dev-guide/contributing.rst
@@ -1,5 +1,5 @@
How to Contribute
================
=================


First off, thank you for considering contributing to Hadar. We believe technology can change the world. But only a great community and open source can improve the world.
@@ -24,7 +24,7 @@ You can participate in Hadar in many ways:
**The issue tracker is only for features, bugs or improvements; not for support. If you have a question, please go to TODO. Any support issue will be closed.**

Feature / Improvement
--------------------
---------------------

Little changes can be sent directly as a pull request, like:

2 changes: 1 addition & 1 deletion docs/source/dev-guide/repository.rst
@@ -14,7 +14,7 @@ Hadar `repository <https://hadar-simulator/hadar>`_ is split in many parts.
* :code:`.github/` GitHub configuration to use GitHub Actions for CI.

Ticketing
------
---------

We use all GitHub features to organize development. We implement an Agile methodology and try to recreate Jira behaviour in GitHub. Therefore we map Jira features to GitHub, such as:

4 changes: 2 additions & 2 deletions docs/source/mathematics/linear-model.rst
@@ -91,7 +91,7 @@ Then productions and edges need to be bounded
Lack of adequacy
--------------
----------------

Variables
*********
@@ -116,7 +116,7 @@ Objective has a new term
\end{array}
Constraints
**********
***********

Kirchhoff's law needs an update too. Loss of load is represented as a *phantom* import of energy to reach adequacy.
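
A hedged sketch of the idea (the symbols below are illustrative, not necessarily the document's exact notation): for every node and time step, the balance becomes

.. math::

    \sum_{p} P_{p} + \sum_{l \in \text{in}} F_{l} + \Lambda = C + \sum_{l \in \text{out}} F_{l}

where :math:`P_{p}` are productions, :math:`F_{l}` the flows on incoming and outgoing links, :math:`C` the consumption, and :math:`\Lambda` the loss of load acting as a phantom import.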

8 changes: 0 additions & 8 deletions docs/source/reference/hadar.viewer.rst
@@ -20,14 +20,6 @@ hadar.viewer.html module
:undoc-members:
:show-inheritance:

hadar.viewer.jupyter module
---------------------------

.. automodule:: hadar.viewer.jupyter
:members:
:undoc-members:
:show-inheritance:


Module contents
---------------
4 changes: 2 additions & 2 deletions examples/Analyze Result/Analyze Result.ipynb
Git LFS file not shown
4 changes: 2 additions & 2 deletions examples/Begin Stochastic/Begin Stochastic.ipynb
Git LFS file not shown
Git LFS file not shown
4 changes: 2 additions & 2 deletions examples/FR-DE Adequacy/FR-DE Adequacy.ipynb
Git LFS file not shown

