
Commit 12625c1 (parent c1dc2ae)

[docs] pipeline docs for latte (#8844)

* add pipeline docs for latte
* add inference time to latte docs
* apply review suggestions

File tree

2 files changed: +77 −0 lines changed


docs/source/en/_toctree.yml

Lines changed: 2 additions & 0 deletions

```diff
@@ -332,6 +332,8 @@
       title: Latent Consistency Models
     - local: api/pipelines/latent_diffusion
       title: Latent Diffusion
+    - local: api/pipelines/latte
+      title: Latte
     - local: api/pipelines/ledits_pp
       title: LEDITS++
     - local: api/pipelines/lumina
```
Lines changed: 75 additions & 0 deletions
<!-- # Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License. -->

# Latte

![latte text-to-video](https://github.com/Vchitect/Latte/blob/52bc0029899babbd6e9250384c83d8ed2670ff7a/visuals/latte.gif?raw=true)
[Latte: Latent Diffusion Transformer for Video Generation](https://arxiv.org/abs/2401.03048) is from Monash University, Shanghai AI Lab, Nanjing University, and Nanyang Technological University.

The abstract from the paper is:

*We propose a novel Latent Diffusion Transformer, namely Latte, for video generation. Latte first extracts spatio-temporal tokens from input videos and then adopts a series of Transformer blocks to model video distribution in the latent space. In order to model a substantial number of tokens extracted from videos, four efficient variants are introduced from the perspective of decomposing the spatial and temporal dimensions of input videos. To improve the quality of generated videos, we determine the best practices of Latte through rigorous experimental analysis, including video clip patch embedding, model variants, timestep-class information injection, temporal positional embedding, and learning strategies. Our comprehensive evaluation demonstrates that Latte achieves state-of-the-art performance across four standard video generation datasets, i.e., FaceForensics, SkyTimelapse, UCF101, and Taichi-HD. In addition, we extend Latte to text-to-video generation (T2V) task, where Latte achieves comparable results compared to recent T2V models. We strongly believe that Latte provides valuable insights for future research on incorporating Transformers into diffusion models for video generation.*

**Highlights**: Latte is a latent diffusion transformer proposed as a backbone for modeling different modalities (trained for text-to-video generation here). It achieves state-of-the-art performance across four standard video benchmarks: [FaceForensics](https://arxiv.org/abs/1803.09179), [SkyTimelapse](https://arxiv.org/abs/1709.07592), [UCF101](https://arxiv.org/abs/1212.0402), and [Taichi-HD](https://arxiv.org/abs/2003.00196). To prepare and download the datasets for evaluation, refer to the [dataset preparation guide](https://github.com/Vchitect/Latte/blob/main/docs/datasets_evaluation.md).

<Tip>

Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers.md) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading.md#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

</Tip>

### Inference

Use [`torch.compile`](https://huggingface.co/docs/diffusers/main/en/tutorials/fast_diffusion#torchcompile) to reduce inference latency.
First, load the pipeline:

```python
import torch
from diffusers import LattePipeline

pipeline = LattePipeline.from_pretrained(
    "maxin-cn/Latte-1", torch_dtype=torch.float16
).to("cuda")
```
Then change the memory layout of the pipeline's `transformer` and `vae` components to `torch.channels_last`:

```python
pipeline.transformer.to(memory_format=torch.channels_last)
pipeline.vae.to(memory_format=torch.channels_last)
```
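For context, `channels_last` changes neither the values nor the shape of a tensor, only its memory layout (NHWC in memory instead of NCHW), which lets convolution kernels read all channels of a pixel contiguously. A small illustration on a dummy tensor:

```python
import torch

x = torch.randn(1, 3, 8, 8)  # NCHW tensor in the default contiguous layout
print(x.stride())            # (192, 64, 8, 1): channel stride is largest after batch

y = x.to(memory_format=torch.channels_last)
print(y.stride())            # (192, 1, 24, 3): channel dimension is now innermost
assert torch.equal(x, y)     # same values and shape, only the layout differs
```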
Finally, compile the components and run inference:

```python
pipeline.transformer = torch.compile(pipeline.transformer)
pipeline.vae.decode = torch.compile(pipeline.vae.decode)

video = pipeline(prompt="A dog wearing sunglasses floating in space, surreal, nebulae in background").frames[0]
```
The [benchmark](https://gist.github.com/a-r-r-o-w/4e1694ca46374793c0361d740a99ff19) results on an 80GB A100 machine are:

```
Without torch.compile(): Average inference time: 16.246 seconds.
With torch.compile(): Average inference time: 14.573 seconds.
```
## LattePipeline

[[autodoc]] LattePipeline
  - all
  - __call__
