Commit

Add the description for the under-the-hood device change
EikanWang committed Dec 1, 2021
1 parent 9318fae commit 4cf1d3e
Showing 1 changed file, README.md, with 2 additions and 0 deletions.
@@ -4,6 +4,8 @@ Intel® Extension for PyTorch\* extends PyTorch with optimizations for extra per

Intel® Extension for PyTorch\* is loaded as a Python module for Python programs or linked as a C++ library for C++ programs. Users can enable it dynamically in a script by importing `intel_extension_for_pytorch`. It covers optimizations for both imperative mode and graph mode. Optimized operators and kernels are registered through the PyTorch dispatching mechanism and are accelerated by the native vectorization and matrix-calculation features of Intel hardware. During execution, Intel® Extension for PyTorch\* intercepts invocations of ATen operators and replaces the original ones with the optimized ones. In graph mode, further operator fusions are applied manually by Intel engineers or through a tool named *oneDNN Graph* to reduce operator/kernel invocation overhead and thus increase performance.

Starting from the 1.10 release of Intel® Extension for PyTorch\*, the optimizations are registered directly to the CPU device, so users no longer need to convert the model and tensors to the xpu device in application code. For details, refer to the 1.10 [release notes](https://intel.github.io/intel-extension-for-pytorch/tutorials/releases.html#highlights). The old xpu code is archived in the xpu-cpu branch but is not compatible with PyTorch 1.10 or future PyTorch releases.

More detailed tutorials are available at [**Intel® Extension for PyTorch\* online document website**](https://intel.github.io/intel-extension-for-pytorch/).

## Installation
