Commit 2c1889f

Add PyTorch 2.0 note (#100)
1 parent 588c6ac commit 2c1889f

1 file changed: +13, -1 lines changed

README.md

Lines changed: 13 additions & 1 deletion
@@ -206,7 +206,7 @@ complex execution modes and dynamic shapes. If not specified, all are enabled by

 `ENABLE_TENSOR_FUSER`

-### Important Note
+### Important Notes

 * The execution of PyTorch model on GPU is asynchronous in nature. See
 [here](https://pytorch.org/docs/stable/notes/cuda.html#asynchronous-execution)
@@ -223,3 +223,15 @@ a List of Strings as input(s) / produces a List of String as output(s). For these
 Triton allows users to pass String input(s)/receive String output(s) using the String
 datatype. As a limitation of using List instead of Tensor for String I/O, only
 1-dimensional input(s)/output(s) are supported for I/O of String type.
+
+#### PyTorch 2.0
+
+Currently, the
+[PyTorch Backend](https://github.com/triton-inference-server/pytorch_backend)
+relies on LibTorch/TorchScript (C++), which has been deprecated as of
+[PyTorch 2.0](https://pytorch.org/get-started/pytorch-2.0/).
+
+Users interested in the new features introduced in PyTorch 2.0 should try the
+[Python Backend](https://github.com/triton-inference-server/python_backend)
+route instead.
+
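As a rough illustration of the Python Backend route mentioned in the added note, below is a minimal sketch of what a `model.py` could look like when serving an eager-mode PyTorch model with `torch.compile`, the headline PyTorch 2.0 feature. The `triton_python_backend_utils` module and the `TritonPythonModel` interface are the Python Backend's documented entry points; the toy `torch.nn.Linear` model and the `INPUT0`/`OUTPUT0` tensor names are hypothetical placeholders that would have to match the model's `config.pbtxt`.

```python
# model.py -- minimal Python Backend sketch (illustrative, not the official example).
# Assumes float32 tensors named "INPUT0"/"OUTPUT0" declared in config.pbtxt.
import torch
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # Any eager-mode PyTorch model works here; torch.compile (PyTorch 2.0+)
        # optimizes it without requiring a TorchScript/LibTorch export.
        self.model = torch.nn.Linear(4, 2).eval()
        self.model = torch.compile(self.model)

    def execute(self, requests):
        responses = []
        for request in requests:
            # Read the request input as a NumPy array and run the model.
            input0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            x = torch.from_numpy(input0.as_numpy())
            with torch.no_grad():
                y = self.model(x)
            # Wrap the result in a Triton output tensor and response.
            output0 = pb_utils.Tensor("OUTPUT0", y.numpy())
            responses.append(pb_utils.InferenceResponse(output_tensors=[output0]))
        return responses
```

Because the Python Backend executes regular Python, no TorchScript export step is involved, which is what makes it a workable path for features that only exist in PyTorch 2.0.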
