## Introduction

Facebook AI's [wav2vec 2.0](https://github.com/pytorch/fairseq/tree/master/examples/wav2vec) is one of the leading models in speech recognition. It is also available in the [Hugging Face Transformers](https://github.com/huggingface/transformers) library, which is used in another PyTorch iOS demo app, [Question Answering](https://github.com/pytorch/ios-demo-app/tree/master/QuestionAnswering).

In this demo app, we'll show how to quantize, trace, and optimize the wav2vec2 model, powered by the newly released torchaudio 0.9.0, and how to use the converted model in an iOS demo app to perform speech recognition.

## Prerequisites

* PyTorch 1.9.0 and torchaudio 0.9.0 (Optional)
* Python 3.8 or above (Optional)
* iOS PyTorch CocoaPods library LibTorch 1.9.0
* Xcode 12.4 or later

## Quick Start

### 1. Get the Repo

Simply run the commands below:

```
git clone https://github.com/pytorch/ios-demo-app
cd ios-demo-app/SpeechRecognition
```

If you don't have PyTorch 1.9.0 and torchaudio 0.9.0 installed or want to have a quick try of the demo app, you can download the quantized scripted wav2vec2 model file [here](https://drive.google.com/file/d/1RcCy3K3gDVN2Nun5IIdDbpIDbrKD-XVw/view?usp=sharing), then drag and drop it into the project, and continue to Step 3.

Be aware that the downloadable model file was created with PyTorch 1.9.0 and torchaudio 0.9.0, matching the iOS LibTorch library 1.9.0 specified in the `Podfile`. If you use a different version of PyTorch to create your model by following the instructions below, make sure you specify the same iOS LibTorch version in the `Podfile` to avoid possible errors caused by the version mismatch. Furthermore, if you want to use the latest prototype features in the PyTorch master branch to create the model, follow the steps at [Building PyTorch iOS Libraries from Source](https://pytorch.org/mobile/ios/#build-pytorch-ios-libraries-from-source) on how to use the model in iOS.

### 2. Prepare the Model

To install PyTorch 1.9.0 and torchaudio 0.9.0, you can do something like this:

```
conda create -n wav2vec2 python=3.8.5
conda activate wav2vec2
pip install torch==1.9.0 torchaudio==0.9.0
```

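To double-check the environment, a quick version check along these lines (a hypothetical snippet, not part of the repo) should print 1.9.0 and 0.9.0:

```
# Sanity check: the Python-side versions should match the iOS LibTorch
# 1.9.0 pod specified in the Podfile.
import torch
import torchaudio

print(torch.__version__)       # expected: 1.9.0
print(torchaudio.__version__)  # expected: 0.9.0
```
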
Now with PyTorch 1.9.0 and torchaudio 0.9.0 installed, run the following command on a Terminal:

```
python create_wav2vec2.py
```

This will create the model file `wav2vec2.pt` and save it in the `SpeechRecognition` folder.

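The heart of the conversion is quantize, script, then optimize for mobile. The sketch below is a rough approximation of what `create_wav2vec2.py` does, not its exact contents; it assumes torchaudio 0.9.0's `import_huggingface_model` helper and the `facebook/wav2vec2-base-960h` checkpoint from the Transformers library:

```
# A rough sketch of the conversion pipeline (see create_wav2vec2.py for
# the real script); assumes torchaudio 0.9.0 and transformers installed.
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
from torchaudio.models.wav2vec2.utils import import_huggingface_model
from transformers import Wav2Vec2ForCTC

# Load the pretrained Hugging Face model and convert it to torchaudio's
# TorchScript-friendly wav2vec2 implementation.
original = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
model = import_huggingface_model(original).eval()

# Dynamic quantization: store the Linear-layer weights as int8, which is
# where most of wav2vec2's model size lives.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)

# Script the model, apply mobile-specific optimizations, and save the
# file the iOS app loads.
scripted = torch.jit.script(quantized)
optimized = optimize_for_mobile(scripted)
optimized.save("wav2vec2.pt")
```

Dynamic quantization is a natural fit here because wav2vec2's size is dominated by the Linear layers in its transformer blocks, and it needs no calibration data.
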
### 3. Use LibTorch

Run the commands below:

```
pod install
open SpeechRecognition.xcworkspace/
```

### 4. Build and run with Xcode

After the app runs, tap the Start button and start saying something; after 12 seconds (you can change `private let AUDIO_LEN_IN_SECOND = 12` in `ViewController.swift` to adjust the recording length), the model will run inference to recognize your speech. Some example results are as follows:

*(screenshots of example recognition results)*
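
The app performs only basic decoding of the model output, an array of logits, into a list of tokens. The sketch below is a rough Python equivalent of that greedy post-processing (the app itself does this in Swift; the mini vocabulary here is made up for illustration, since the real token list ships with the model):

```
# Illustrative greedy (argmax) CTC decoding of wav2vec2 logits into text;
# the demo app performs the equivalent post-processing in Swift.
import torch

# Made-up mini vocabulary for illustration; in the real model, index 0 is
# the CTC blank token and "|" is the word delimiter.
TOKENS = ["<blank>", "<pad>", "</s>", "<unk>", "|", "E", "T", "A", "O", "N"]

def greedy_decode(logits):
    # logits: tensor of shape (num_frames, vocab_size)
    ids = torch.argmax(logits, dim=-1).tolist()
    chars = []
    prev = -1
    for i in ids:
        # CTC rule: collapse consecutive repeats, then drop blanks.
        if i != prev and i != 0:
            chars.append(TOKENS[i])
        prev = i
    return "".join(chars).replace("|", " ").strip()
```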