Port samples/dynamic_shapes/ to PyTorch using SHARK-Turbine. (iree-org#15255)

Progress on iree-org#15117 and towards
documenting the "advanced AOT toolkit" on
https://www.iree.dev/guides/ml-frameworks/pytorch/ (trying out the
various features so I can write about them).

This notebook produces a program with the same interface as the existing
TensorFlow notebook used by this sample.
* `export_name="module"` could be removed, but then I'd want to somehow
change the TF program and update the C code from
`iree_make_cstring_view("module.reduce_sum_2d")` to
`iree_make_cstring_view("dynamic_shapes.reduce_sum_2d")`. Keeping that
messiness at least as long as the TF notebook remains.
* Yes, I want to delete the TF notebook and rebase on PyTorch + JAX...

New notebook preview for review:
https://colab.research.google.com/github/scotttodd/iree/blob/samples-pytorch/samples/dynamic_shapes/pytorch_dynamic_shapes.ipynb
ScottTodd authored Oct 20, 2023
1 parent 20e2112 commit 3323519
Showing 4 changed files with 605 additions and 24 deletions.
60 changes: 40 additions & 20 deletions samples/dynamic_shapes/README.md

This sample shows how to

1. Create a program that includes dynamic shapes in program inputs and outputs
2. Import that program into IREE's compiler
3. Compile that program to an IREE VM bytecode module
4. Load the compiled program using IREE's high level runtime C API
5. Call exported functions on the loaded program

Steps 1-2 are performed in Python via the
[`pytorch_dynamic_shapes.ipynb`](./pytorch_dynamic_shapes.ipynb) or
[`tensorflow_dynamic_shapes.ipynb`](./tensorflow_dynamic_shapes.ipynb)
[Colab](https://colab.google/) notebooks:

| Framework | Notebook |
| --------- | -------- |
| PyTorch | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openxla/iree/blob/main/samples/dynamic_shapes/pytorch_dynamic_shapes.ipynb) |
| TensorFlow | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openxla/iree/blob/main/samples/dynamic_shapes/tensorflow_dynamic_shapes.ipynb) |

Step 3 should be performed on your development host machine

The program used in this demonstration includes functions with varying uses of
dynamic shapes:

```python
import torch
import shark_turbine.aot as aot

class DynamicShapesModule(aot.CompiledModule, export_name="module"):
    # reduce_sum_1d (dynamic input size, static output size)
    #   tensor<?xi32> -> tensor<i32>
    #   e.g. [1, 2, 3] -> 6
    def reduce_sum_1d(self, values=aot.AbstractTensor(None, dtype=torch.int32)):
        return self.compute_reduce_sum_1d(values)

    @aot.jittable
    def compute_reduce_sum_1d(values):
        return torch.sum(values, dtype=torch.int32)

    # reduce_sum_2d (partially dynamic input size, static output size)
    #   tensor<?x3xi32> -> tensor<3xi32>
    #   e.g. [[1, 2, 3], [10, 20, 30]] -> [11, 22, 33]
    def reduce_sum_2d(self, values=aot.AbstractTensor(None, 3, dtype=torch.int32)):
        return self.compute_reduce_sum_2d(values)

    @aot.jittable
    def compute_reduce_sum_2d(values):
        return torch.sum(values, 0, dtype=torch.int32)

    # add_one (dynamic input size, dynamic output size)
    #   tensor<?xi32> -> tensor<?xi32>
    #   e.g. [1, 2, 3] -> [2, 3, 4]
    def add_one(self, values=aot.AbstractTensor(None, dtype=torch.int32)):
        return self.compute_add_one(values)

    @aot.jittable
    def compute_add_one(values):
        return values + 1
```
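For reference, the semantics of these three functions can be sketched in plain Python, with no PyTorch or SHARK-Turbine required. The list-based helpers below are illustrative stand-ins written for this explanation, not part of the sample itself:

```python
def reduce_sum_1d(values):
    # tensor<?xi32> -> tensor<i32>: sum all elements of a 1-D input.
    return sum(values)

def reduce_sum_2d(values):
    # tensor<?x3xi32> -> tensor<3xi32>: sum over the dynamic leading axis,
    # producing one element per (static) column.
    return [sum(col) for col in zip(*values)]

def add_one(values):
    # tensor<?xi32> -> tensor<?xi32>: elementwise add; output size tracks input size.
    return [v + 1 for v in values]

print(reduce_sum_1d([1, 2, 3]))                   # -> 6
print(reduce_sum_2d([[1, 2, 3], [10, 20, 30]]))   # -> [11, 22, 33]
print(add_one([1, 2, 3]))                         # -> [2, 3, 4]
```

Note how only `add_one` has a dynamic output shape; the two reductions collapse the dynamic dimension into a statically shaped result.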

## Background

Tensors are multi-dimensional arrays with a uniform type (e.g. int32, float32)
and a shape. Shapes consist of a rank and a list of dimensions and may be
static (i.e. fully known and fixed) or varying degrees of dynamic. For more
information, see these references:

* PyTorch:
  [Compiler dynamic shapes](https://pytorch.org/docs/stable/torch.compiler_dynamic_shapes.html),
  [`torch.Tensor`](https://pytorch.org/docs/stable/tensors.html)
* TensorFlow: [Introduction to Tensors](https://www.tensorflow.org/guide/tensor)

Dynamic shapes are useful for passing variable sized batches as input,
receiving variable length sentences of text as output, etc.
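As an illustration, a "partially dynamic" shape like `tensor<?x3xi32>` can be modeled as a list of dimensions where `None` marks a dynamic dimension; a concrete runtime shape matches when the rank agrees and every static dimension agrees. The `shape_matches` helper below is hypothetical, written only for this explanation:

```python
def shape_matches(spec, shape):
    """Check a concrete shape against a shape spec; None means dynamic."""
    if len(spec) != len(shape):
        return False  # rank must agree even when dimensions are dynamic
    return all(s is None or s == d for s, d in zip(spec, shape))

# tensor<?x3xi32>: dynamic leading dimension, static trailing dimension of 3.
spec = [None, 3]
print(shape_matches(spec, [2, 3]))   # True: e.g. [[1, 2, 3], [10, 20, 30]]
print(shape_matches(spec, [5, 3]))   # True: any leading size is accepted
print(shape_matches(spec, [2, 4]))   # False: trailing dimension must be 3
```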
them.

## Instructions

1. Run either Colab notebook and download the `dynamic_shapes.mlir` file it
generates

2. Build the `iree-compile` tool (see
them.
```
../iree-build/tools/iree-compile \
    --iree-hal-target-backends=llvm-cpu \
    dynamic_shapes.mlir -o dynamic_shapes_cpu.vmfb
```
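As a quick sanity check before wiring up the C API, the compiled module can also be exercised with IREE's `iree-run-module` tool. The device and input flags below are typical values for a CPU build of this sample, so adjust the paths and flags for your setup:

```shell
../iree-build/tools/iree-run-module \
    --module=dynamic_shapes_cpu.vmfb \
    --device=local-task \
    --function=reduce_sum_1d \
    --input="3xi32=1 2 3"
```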