Merge pull request #99 from christophbrgr/master
Adds dict passthrough for Python API and updates docs
christophbrgr authored Sep 23, 2019
2 parents 31fd85a + de016e4 commit 7e58920
Showing 11 changed files with 292 additions and 65 deletions.
6 changes: 3 additions & 3 deletions docs/source/modelio.md
@@ -10,7 +10,7 @@ http://localhost:80/api/predict?fileurl=http://example.org/cutedogsandcats.jpg
The API then returns the prediction in the specified format. For a thorough description of the API, have a look at its [documentation](https://modelhub.readthedocs.io/en/latest/modelhubapi.html). <br/><br/>
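For illustration, the same call can be made from Python. The sketch below assumes the `requests` package is installed, a model is already running on `localhost:80`, and the endpoint returns JSON; the file URL is the example from above:

```python
# Minimal sketch of the predict call shown above (assumptions: `requests`
# is installed, a model is running locally, and the response is JSON).
import requests

response = requests.get(
    "http://localhost:80/api/predict",
    params={"fileurl": "http://example.org/cutedogsandcats.jpg"},
)
print(response.json())  # prediction in the model's configured output format
```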

#### As a Collaborator submitting a new Model
For single inputs, please create a configuration for your model according to the [example configuration](https://github.com/modelhub-ai/modelhub/blob/master/example_config_single_input.json). It is important that you keep the key `"single"` in the config, as the API uses this for accessing the dimension constraints when loading an image. Populate the rest of the configuration file as stated in the contribution guide and the [schema](https://github.com/modelhub-ai/modelhub/blob/master/config_schema.json). Validate your config file against our config schema with a JSON validator, e.g. [this one](https://www.jsonschemavalidator.net).<br/>
For single inputs, please create a configuration for your model according to the [example configuration](https://github.com/modelhub-ai/modelhub/blob/master/examples/example_config_single_input.json). It is important that you keep the key `"single"` in the config, as the API uses this for accessing the dimension constraints when loading an image. Populate the rest of the configuration file as stated in the contribution guide and the [schema](https://github.com/modelhub-ai/modelhub/blob/master/config_schema.json). Validate your config file against our config schema with a JSON validator, e.g. [this one](https://www.jsonschemavalidator.net).<br/>
Take care to choose the right MIME type for your input; this format will be checked by the API when users call the predict function and load a file. We support a few extra MIME types in addition to the standard MIME types:
<table>
<thead>
@@ -45,7 +45,7 @@ If you need other types not supported in the standard MIME types and by our exte
### Input Configuration for Multiple Inputs

#### As a User
When you use a model that needs more than a single input file for a prediction, you have to pass a JSON file with all the inputs needed for that model. You can have a look at an example [here](https://github.com/modelhub-ai/modelhub/blob/master/example_input_file_multiple_inputs.json). <br/>
When you use a model that needs more than a single input file for a prediction, you have to pass a JSON file with all the inputs needed for that model. You can have a look at an example [here](https://github.com/modelhub-ai/modelhub/blob/master/examples/example_input_file_multiple_inputs.json). <br/>
The important points to keep in mind are:
- There has to be a `format` key with `"application/json"` so that the API can handle the file
- Each of the other keys describes one input and has to have a `format` (see the MIME types above) and a `fileurl`
@@ -61,7 +61,7 @@ For a thorough description of the API, have a look at its [documentation](https:


#### As a Collaborator submitting a new Model
For multiple inputs, please create a configuration for your model according to the [example configuration](https://github.com/modelhub-ai/modelhub/blob/master/example_config_multiple_inputs.json). The `format` key has to be present at the `input` level and must be equal to `application/json` as all input files will be passed in a json to the API.
For multiple inputs, please create a configuration for your model according to the [example configuration](https://github.com/modelhub-ai/modelhub/blob/master/examples/example_config_multiple_inputs.json). The `format` key has to be present at the `input` level and must be equal to `application/json`, as all input files will be passed to the API in a JSON file.
<br/>
The other keys stand for one input file each and must contain a valid format (e.g. `application/dicom`) and dimensions. You can additionally add a description for the input.
<br/>
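To make the multi-input flow concrete, here is a rough sketch of the input file users would pass for such a configuration, written as a Python dict with the same shape as the JSON. The input names, MIME types, and URLs are placeholders, and the exact value format (plain string vs. single-element list) should follow the linked example files:

```python
# Sketch of the JSON passed to a multi-input model, written as a Python dict.
# "format" at the top level tells the API to treat the input as JSON; every
# other key describes one input with its MIME type and a fileurl.
# Input names and URLs are placeholders only.
multi_input = {
    "format": "application/json",
    "flair": {
        "format": "application/nii-gzip",
        "fileurl": "http://example.org/patient1_flair.nii.gz",
    },
    "t2": {
        "format": "application/nii-gzip",
        "fileurl": "http://example.org/patient1_t2.nii.gz",
    },
}
```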
15 changes: 7 additions & 8 deletions docs/source/quickstart.md
@@ -17,26 +17,25 @@ But since you are here, follow these steps to get modelhub running on your local
Python 2.7 or Python 3.6 (or higher).
<br/><br/>

3. **Download modelhub start script**
3. **Install the modelhub-ai package**

Download [_start.py_](https://raw.githubusercontent.com/modelhub-ai/modelhub/master/start.py)
(right click -> "save link as") from the [modelhub repository](https://github.com/modelhub-ai/modelhub) and place it into an empty folder.
Install the `modelhub-ai` package from PyPI using pip: `pip install modelhub-ai`.
<br/><br/>

4. **Run a model using start.py**

Open a terminal and navigate to the folder that contains _start.py_. For running models, write access
Open a terminal and navigate to a folder you want to work in. For running models, write access
is required in the current folder.

Execute `python start.py squeezenet` in the terminal to run the squeezenet model from the modelhub collection.
Execute `modelhub-run squeezenet` in the terminal to run the squeezenet model from the modelhub collection.
This will download all required model files (only if they do not exist yet) and start the model. Follow the
instructions given on the terminal to access the web interface to explore the model.

Replace `squeezenet` with any other model name in the collection to start a different model. To see a list of
all available models execute `python start.py -l`.
all available models execute `modelhub-list` or `modelhub -l`.

You can also access a jupyter notebook that allows you to experiment with a model by starting a model with
the "-e" option, e.g. `python start.py squeezenet -e`. Follow the instructions on the terminal to open the notebook.
the "-e" option, e.g. `modelhub-run squeezenet -e`. Follow the instructions on the terminal to open the notebook.

See additional starting options by executing `python start.py -h`.
See additional starting options by executing `modelhub-run -h`.
<br/><br/>
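If you prefer to script these steps, a minimal sketch using only the Python standard library is shown below; it assumes `pip install modelhub-ai` has been run and that `modelhub-run` is available on your PATH:

```python
# Minimal sketch: start a model through the documented CLI from Python.
# Assumes the modelhub-ai package is installed and `modelhub-run` is on PATH.
import subprocess

# Downloads the model files if needed and starts squeezenet, as in step 4 above.
subprocess.run(["modelhub-run", "squeezenet"], check=True)
```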
7 changes: 5 additions & 2 deletions framework/modelhubapi/pythonapi.py
@@ -90,10 +90,11 @@ def predict(self, input_file_path, numpyToFile=True, url_root=""):
        Performs the model's inference on the given input.
        Args:
            input_file_path (str): Path to input file to run inference on.
            input_file_path (str or dict): Path to input file to run inference on.
                Either a direct input file or a json containing paths to all
                input files needed for the model to predict. The appropriate
                structure for the json can be found in the documentation.
                When using the Python API directly, you can also pass a dict
                with the same structure instead of a file path.
            numpyToFile (bool): Only effective if prediction is a numpy array.
                Indicates if numpy outputs should be saved and a path to it is
                returned. If false, a json-serializable list representation of
@@ -153,7 +154,9 @@ def _unpack_inputs(self, file_path):
        returns the file_path unchanged for single inputs
        It also converts the fileurl to a valid string (avoids html escaping)
        """
        if file_path.lower().endswith('.json'):
        if isinstance(file_path, dict):
            return self._check_input_compliance(file_path)
        elif file_path.lower().endswith('.json'):
            input_dict = self._load_json(file_path)
            for key, value in input_dict.items():
                if key == "format":
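To summarize the input forms `predict` accepts after this change, here is a rough usage sketch (not verbatim project code): the `ModelHubAPI` constructor call mirrors the test setup further below, and `model`, `contrib_src_dir`, the file paths, and the URL are placeholders.

```python
# Rough sketch of the three ways predict() can be called after this change.
# `model` is a contributed model instance and `contrib_src_dir` its source
# folder, as in the test setup below; paths and the URL are placeholders.
from modelhubapi import ModelHubAPI

api = ModelHubAPI(model, contrib_src_dir)

# 1) a single input file
result = api.predict("sample_data/testimage_ramp_4x2.png")

# 2) a json file listing every input a multi-input model needs
result = api.predict("sample_data/valid_input_list.json")

# 3) new: the same structure passed directly as a dict
result = api.predict({
    "format": ["application/json"],
    "t1c": {
        "type": ["application/nii-gzip"],
        "fileurl": "https://example.org/t1c.nii.gz",
    },
})
```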
10 changes: 5 additions & 5 deletions framework/modelhubapi/restapi.py
@@ -223,13 +223,13 @@ def _delete_temp_files(self, folder):
"""
Removes all files in the given folder
"""
for file in os.listdir(folder):
file_path = os.path.join(folder, file)
try:
try:
for file in os.listdir(folder):
file_path = os.path.join(folder, file)
if os.path.isfile(file_path):
os.unlink(file_path)
except Exception as e:
print(e)
except Exception as e:
print(e)

def _jsonify(self, content):
"""
@@ -0,0 +1,15 @@
{
    "format": ["application/json"],
    "t1c": {
        "type": ["application/nii-gzip"],
        "fileurl": "https://raw.githubusercontent.com/christophbrgr/modelhub-tests/master/testimage_nifti_91x109x91.nii.gz"
    },
    "t2": {
        "type": ["application/nii-gzip"],
        "fileurl": "https://raw.githubusercontent.com/christophbrgr/modelhub-tests/master/testimage_nifti_91x109x91.nii.gz"
    },
    "flair": {
        "type": ["application/nii-gzip"],
        "fileurl": "https://raw.githubusercontent.com/christophbrgr/modelhub-tests/master/testimage_nifti_91x109x91.nii.gz"
    }
}
22 changes: 22 additions & 0 deletions framework/modelhubapi_tests/pythonapi_test.py
@@ -1,6 +1,7 @@
import unittest
import os
import numpy
import json
import shutil
from modelhubapi import ModelHubAPI
from .apitestbase import TestAPIBase
@@ -102,6 +103,12 @@ def tearDown(self):
    def test_predict_accepts_and_processes_valid_json(self):
        result = self.api.predict(self.this_dir + "/mockmodels/contrib_src_mi/sample_data/valid_input_list.json")
        self.assertEqual(result["output"][0]["prediction"][0], True)

    def test_predict_accepts_and_processes_valid_dict(self):
        with open(self.this_dir + "/mockmodels/contrib_src_mi/sample_data/valid_input_list.json", "r") as f:
            input_dict = json.load(f)
        result = self.api.predict(input_dict)
        self.assertEqual(result["output"][0]["prediction"][0], True)

    def test_predict_rejects_invalid_file(self):
        result = self.api.predict(self.this_dir + "/mockmodels/contrib_src_si/sample_data/testimage_ramp_4x2.png")
@@ -141,7 +148,22 @@ def setUp(self):
        contrib_src_dir = os.path.join(self.this_dir, "mockmodels", "contrib_src_si")
        self.api = ModelHubAPI(model, contrib_src_dir)

class TestModelHubAPIUtitilyFunctions(unittest.TestCase):

    def setUp(self):
        model = ModelReturnsOneLabelList()
        self.this_dir = os.path.dirname(os.path.realpath(__file__))
        # load config version 2 with only one output specified
        contrib_src_dir = os.path.join(self.this_dir, "mockmodels", "contrib_src_mi")
        self.api = ModelHubAPI(model, contrib_src_dir)


    def tearDown(self):
        pass

    def test_json_write_fails_for_invalid_path(self):
        result = self.api._write_json({"some":"key"}, "this/does/not/exist/johndoe.json")
        self.assertIn("error", result)

class TestModelHubAPIModelReturnsOneLabelList(unittest.TestCase):
