
Releases: mozilla/DeepSpeech

v0.10.0-alpha.3

19 Dec 10:13
Pre-release
Bump version to v0.10.0-alpha.3

DeepSpeech 0.9.3

10 Dec 15:58

General

This is the 0.9.3 release of Deep Speech, an open speech-to-text engine. In accord with semantic versioning, this version is not completely backwards compatible with earlier versions. However, models exported for 0.7.X and 0.8.X should work with this release. This is a bugfix release and retains compatibility with the 0.9.0, 0.9.1 and 0.9.2 models. All model files included here are identical to the ones in the 0.9.0 release. As with previous releases, this release includes the source code:

v0.9.3.tar.gz

under the MPL-2.0 license, and the acoustic models:

deepspeech-0.9.3-models.pbmm
deepspeech-0.9.3-models.tflite

In addition, we're releasing experimental Mandarin Chinese acoustic models trained on an internal corpus composed of 2,000 hours of read speech:

deepspeech-0.9.3-models-zh-CN.pbmm
deepspeech-0.9.3-models-zh-CN.tflite

all under the MPL-2.0 license.

The model files with the ".pbmm" extension are memory mapped and thus memory efficient and fast to load. The model files with the ".tflite" extension are converted to use TensorFlow Lite, have post-training quantization enabled, and are more suitable for resource-constrained environments.

The acoustic models were trained on American English with synthetic noise augmentation, and the .pbmm model achieves a 7.06% word error rate on the LibriSpeech clean test corpus.

Note that the model currently performs best in low-noise environments with clear recordings and has a bias towards US male accents. This does not mean the model cannot be used outside of these conditions, but that accuracy may be lower. Some users may need to train the model further to meet their intended use-case.

In addition we release the scorer:

deepspeech-0.9.3-models.scorer

which takes the place of the language model and trie in older releases and which is also under the MPL-2.0 license.

There is also a corresponding scorer for the Mandarin Chinese model:

deepspeech-0.9.3-models-zh-CN.scorer

We also include example audio files:

audio-0.9.3.tar.gz

which can be used to test the engine, and checkpoint files for both the English and Mandarin models:

deepspeech-0.9.3-checkpoint.tar.gz
deepspeech-0.9.3-checkpoint-zh-CN.tar.gz

which are under the MPL-2.0 license and can be used as the basis for further fine-tuning.

Notable changes from the previous release

  • Add CI testing for hot word boosting on .NET bindings (#3416)
  • Improve the error message in the generate_scorer_package tooling (#3435); a usage sketch for the tool follows this list
  • Enable support for building static iOS framework (#3436)
  • Change Java binding package name from org.mozilla.deepspeech to org.deepspeech (#3454)
  • Expose Stream type on TypeScript binding (#3456)
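
For context on the generate_scorer_package item above: the tool packages a KenLM language model and a vocabulary into a .scorer file. A minimal invocation sketch, assuming the flag names documented for the 0.9.x tooling and hypothetical input files:

generate_scorer_package --alphabet alphabet.txt --lm lm.binary --vocab vocab-500000.txt \
    --package kenlm.scorer --default_alpha 0.931289039105002 --default_beta 1.1834137581510284

The default_alpha and default_beta values baked into the package become the decoder defaults; those shown are the English values reported below.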

Training Regimen + Hyperparameters for fine-tuning

The hyperparameters used to train the model are useful for fine-tuning, so we document them here along with the training regimen, the hardware used (a server with 8 Quadro RTX 6000 GPUs, each with 24GB of VRAM), and our use of the cuDNN RNN backend.

In contrast to some previous releases, training for this release was a fine-tuning of the previous 0.8.2 checkpoint, with data augmentation options enabled. The following hyperparameters were used for the fine-tuning; they are assembled into a sketch command below. See the 0.8.2 release notes for the hyperparameters used for the base model.

  • train_files Fisher, LibriSpeech, Switchboard, Common Voice English, and approximately 1700 hours of transcribed WAMU (NPR) radio shows explicitly licensed for use as training corpora.
  • dev_files LibriSpeech clean dev corpus.
  • test_files LibriSpeech clean test corpus.
  • train_batch_size 128
  • dev_batch_size 128
  • test_batch_size 128
  • n_hidden 2048
  • learning_rate 0.0001
  • dropout_rate 0.40
  • epochs 200
  • augment pitch[pitch=1~0.1]
  • augment tempo[factor=1~0.1]
  • augment overlay[p=0.9,source=${noise},layers=1,snr=12~4] (where ${noise} is a dataset of Freesound.org background noise recordings)
  • augment overlay[p=0.1,source=${voices},layers=10~2,snr=12~4] (where ${voices} is a dataset of audiobook snippets extracted from Librivox)
  • augment resample[p=0.2,rate=12000~4000]
  • augment codec[p=0.2,bitrate=32000~16000]
  • augment reverb[p=0.2,decay=0.7~0.15,delay=10~8]
  • augment volume[p=0.2,dbfs=-10~10]
  • cache_for_epochs 10

The weights with the best validation loss were selected at the end of 200 epochs using --noearly_stop.
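
Assembled into one invocation, the regimen above corresponds roughly to the following training command (a sketch, not the verbatim command used: the CSV paths stand in for the private corpora and the noise/voice collections listed above, and --train_cudnn reflects the cuDNN RNN use mentioned earlier):

python3 DeepSpeech.py \
    --checkpoint_dir deepspeech-0.8.2-checkpoint \
    --train_files fisher.csv,librispeech-train.csv,switchboard.csv,cv-en.csv,wamu.csv \
    --dev_files librispeech-dev-clean.csv \
    --test_files librispeech-test-clean.csv \
    --train_batch_size 128 --dev_batch_size 128 --test_batch_size 128 \
    --n_hidden 2048 --learning_rate 0.0001 --dropout_rate 0.40 \
    --epochs 200 --noearly_stop \
    --train_cudnn \
    --augment "pitch[pitch=1~0.1]" \
    --augment "tempo[factor=1~0.1]" \
    --augment "overlay[p=0.9,source=noise.csv,layers=1,snr=12~4]" \
    --augment "overlay[p=0.1,source=voices.csv,layers=10~2,snr=12~4]" \
    --augment "resample[p=0.2,rate=12000~4000]" \
    --augment "codec[p=0.2,bitrate=32000~16000]" \
    --augment "reverb[p=0.2,decay=0.7~0.15,delay=10~8]" \
    --augment "volume[p=0.2,dbfs=-10~10]" \
    --cache_for_epochs 10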

The optimal lm_alpha and lm_beta values with respect to the LibriSpeech clean dev corpus remain unchanged from the previous release:

  • lm_alpha 0.931289039105002
  • lm_beta 1.1834137581510284

For the Mandarin Chinese model, the following values are recommended (a run-time override sketch follows this list):

  • lm_alpha 0.6940122363709647
  • lm_beta 4.777924224113021
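
These alpha and beta defaults ship inside the respective scorer packages, so no extra flags are needed for normal use. To experiment with other values, they can be overridden at run time; a sketch with the deepspeech client, assuming its 0.9.x flag names and a hypothetical WAV file:

deepspeech --model deepspeech-0.9.3-models.pbmm \
    --scorer deepspeech-0.9.3-models.scorer \
    --lm_alpha 0.931289039105002 --lm_beta 1.1834137581510284 \
    --audio my_audio.wav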

Bindings

This release also includes a Python-based command-line tool, deepspeech, installed through:

pip install deepspeech

Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux. (See below to find which GPUs are supported.) This is done by instead installing the GPU-specific package:

pip install deepspeech-gpu

On Linux, macOS and Windows, the DeepSpeech package does not use TFLite by default. A TFLite version of the package on those platforms is available as:

pip install deepspeech-tflite
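
Once one of these packages is installed, transcription runs directly from the command line. For example, using this release's model files and a clip from the audio tarball (the clip name is illustrative):

deepspeech --model deepspeech-0.9.3-models.pbmm \
    --scorer deepspeech-0.9.3-models.scorer \
    --audio audio/2830-3980-0043.wav

With the deepspeech-tflite package, pass the .tflite model file instead of the .pbmm one.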

The release also exposes bindings for the following languages:

  • Python (Versions 3.5, 3.6, 3.7, 3.8 and 3.9) installed via

    pip install deepspeech

    Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux. (See below to find which GPUs are supported.) This is done by instead installing the GPU-specific package:

    pip install deepspeech-gpu

    On Linux (AMD64), macOS and Windows, the DeepSpeech package does not use TFLite by default. A TFLite version of the package on those platforms is available as:

    pip install deepspeech-tflite
  • NodeJS (Versions 10.x, 11.x, 12.x, 13.x, 14.x and 15.x) installed via

    npm install deepspeech
    

    Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux. (See below to find which GPUs are supported.) This is done by instead installing the GPU-specific package:

    npm install deepspeech-gpu
    

    On Linux (AMD64), macOS and Windows, the DeepSpeech package does not use TFLite by default. A TFLite version of the package on those platforms is available as:

    npm install deepspeech-tflite
  • ElectronJS versions 5.0, 6.0, 6.1, 7.0, 7.1, 8.0, 9.0, 9.1, 9.2, 10.0, 10.1, and 11.0 are also supported

  • C, which requires the appropriate shared objects to be installed from native_client.tar.xz (see the section in the main README that describes native_client.tar.xz installation).

  • .NET, which is installed by following the instructions on the NuGet package page.

In addition, there are third-party bindings supported by external developers, for example:

  • Rust which is installed by following the instructions on the external Rust repo.
  • Go which is installed by following the instructions on the external Go repo.
  • V which is installed by following the instructions on the external Vlang repo.

Supported Platforms

  • Windows 8.1, 10, and Server 2012 R2 64-bit (at least AVX support; requires the Redistributable Visual C++ 2015 Update 3 (64-bit) runtime).
  • OS X 10.10, 10.11, 10.12, 10.13, 10.14, and 10.15
  • Linux x86 64 bit with a modern CPU (at least AVX/FMA)
  • Linux x86 64 bit with a modern CPU (at least AVX/FMA) + NVIDIA GPU (Compute Capability at least 3.0, see NVIDIA docs)

DeepSpeech 0.9.2

03 Dec 16:40

General

This is the 0.9.2 release of Deep Speech, an open speech-to-text engine. In accord with semantic versioning, this version is not completely backwards compatible with earlier versions. However, models exported for 0.7.X and 0.8.X should work with this release. This is a bugfix release and retains compatibility with the 0.9.0 and 0.9.1 models. All model files included here are identical to the ones in the 0.9.0 release. As with previous releases, this release includes the source code:

v0.9.2.tar.gz

under the MPL-2.0 license, and the acoustic models:

deepspeech-0.9.2-models.pbmm
deepspeech-0.9.2-models.tflite

In addition, we're releasing experimental Mandarin Chinese acoustic models trained on an internal corpus composed of 2,000 hours of read speech:

deepspeech-0.9.2-models-zh-CN.pbmm
deepspeech-0.9.2-models-zh-CN.tflite

all under the MPL-2.0 license.

The model files with the ".pbmm" extension are memory mapped and thus memory efficient and fast to load. The model files with the ".tflite" extension are converted to use TensorFlow Lite, have post-training quantization enabled, and are more suitable for resource-constrained environments.

The acoustic models were trained on American English with synthetic noise augmentation, and the .pbmm model achieves a 7.06% word error rate on the LibriSpeech clean test corpus.

Note that the model currently performs best in low-noise environments with clear recordings and has a bias towards US male accents. This does not mean the model cannot be used outside of these conditions, but that accuracy may be lower. Some users may need to train the model further to meet their intended use-case.

In addition we release the scorer:

deepspeech-0.9.2-models.scorer

which takes the place of the language model and trie in older releases and which is also under the MPL-2.0 license.

There is also a corresponding scorer for the Mandarin Chinese model:

deepspeech-0.9.2-models-zh-CN.scorer

We also include example audio files:

audio-0.9.2.tar.gz

which can be used to test the engine, and checkpoint files for both the English and Mandarin models:

deepspeech-0.9.2-checkpoint.tar.gz
deepspeech-0.9.2-checkpoint-zh-CN.tar.gz

which are under the MPL-2.0 license and can be used as the basis for further fine-tuning.

Notable changes from the previous release

  • Add support for Python 3.9 for native client packages (#3409)
  • Add CI testing for hot word boosting on Java package (#3410)
  • Add importer for French dataset from Centre de Conférences Pierre Mendès-France (#3438)
  • Add support for ElectronJS v11.0 (#3441)
  • Correct documentation for needed versions of CUDA for training DeepSpeech (#3443); a version sketch follows this list
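
Regarding the CUDA documentation fix: 0.9.x training runs on TensorFlow 1.15, which targets CUDA 10.0 and cuDNN 7.6, so a GPU training environment is typically set up along these lines (a sketch; the corrected documentation from #3443 is authoritative):

pip3 install 'tensorflow-gpu==1.15.4'

with CUDA 10.0 and cuDNN 7.6 installed system-wide.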

Training Regimen + Hyperparameters for fine-tuning

The hyperparameters used to train the model are useful for fine-tuning, so we document them here along with the training regimen, the hardware used (a server with 8 Quadro RTX 6000 GPUs, each with 24GB of VRAM), and our use of the cuDNN RNN backend.

In contrast to some previous releases, training for this release was a fine-tuning of the previous 0.8.2 checkpoint, with data augmentation options enabled. The following hyperparameters were used for the fine-tuning. See the 0.8.2 release notes for the hyperparameters used for the base model.

  • train_files Fisher, LibriSpeech, Switchboard, Common Voice English, and approximately 1700 hours of transcribed WAMU (NPR) radio shows explicitly licensed for use as training corpora.
  • dev_files LibriSpeech clean dev corpus.
  • test_files LibriSpeech clean test corpus.
  • train_batch_size 128
  • dev_batch_size 128
  • test_batch_size 128
  • n_hidden 2048
  • learning_rate 0.0001
  • dropout_rate 0.40
  • epochs 200
  • augment pitch[pitch=1~0.1]
  • augment tempo[factor=1~0.1]
  • augment overlay[p=0.9,source=${noise},layers=1,snr=12~4] (where ${noise} is a dataset of Freesound.org background noise recordings)
  • augment overlay[p=0.1,source=${voices},layers=10~2,snr=12~4] (where ${voices} is a dataset of audiobook snippets extracted from Librivox)
  • augment resample[p=0.2,rate=12000~4000]
  • augment codec[p=0.2,bitrate=32000~16000]
  • augment reverb[p=0.2,decay=0.7~0.15,delay=10~8]
  • augment volume[p=0.2,dbfs=-10~10]
  • cache_for_epochs 10

The weights with the best validation loss were selected at the end of 200 epochs using --noearly_stop.

The optimal lm_alpha and lm_beta values with respect to the LibriSpeech clean dev corpus remain unchanged from the previous release:

  • lm_alpha 0.931289039105002
  • lm_beta 1.1834137581510284

For the Mandarin Chinese model, the following values are recommended:

  • lm_alpha 0.6940122363709647
  • lm_beta 4.777924224113021

Bindings

This release also includes a Python-based command-line tool, deepspeech, installed through:

pip install deepspeech

Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux. (See below to find which GPUs are supported.) This is done by instead installing the GPU-specific package:

pip install deepspeech-gpu

On Linux, macOS and Windows, the DeepSpeech package does not use TFLite by default. A TFLite version of the package on those platforms is available as:

pip install deepspeech-tflite

The release also exposes bindings for the following languages:

  • Python (Versions 3.5, 3.6, 3.7, 3.8 and 3.9) installed via

    pip install deepspeech

    Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux. (See below to find which GPUs are supported.) This is done by instead installing the GPU-specific package:

    pip install deepspeech-gpu

    On Linux (AMD64), macOS and Windows, the DeepSpeech package does not use TFLite by default. A TFLite version of the package on those platforms is available as:

    pip install deepspeech-tflite
  • NodeJS (Versions 10.x, 11.x, 12.x, 13.x, 14.x and 15.x) installed via

    npm install deepspeech
    

    Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux. (See below to find which GPUs are supported.) This is done by instead installing the GPU-specific package:

    npm install deepspeech-gpu
    

    On Linux (AMD64), macOS and Windows, the DeepSpeech package does not use TFLite by default. A TFLite version of the package on those platforms is available as:

    npm install deepspeech-tflite
  • ElectronJS versions 5.0, 6.0, 6.1, 7.0, 7.1, 8.0, 9.0, 9.1, 9.2, 10.0, 10.1, and 11.0 are also supported

  • C, which requires the appropriate shared objects to be installed from native_client.tar.xz (see the section in the main README that describes native_client.tar.xz installation).

  • .NET, which is installed by following the instructions on the NuGet package page.

In addition, there are third-party bindings supported by external developers, for example:

  • Rust which is installed by following the instructions on the external Rust repo.
  • Go which is installed by following the instructions on the external Go repo.
  • V which is installed by following the instructions on the external Vlang repo.

Supported Platforms

  • Windows 8.1, 10, and Server 2012 R2 64-bit (at least AVX support; requires the Redistributable Visual C++ 2015 Update 3 (64-bit) runtime).
  • OS X 10.10, 10.11, 10.12, 10.13, 10.14, and 10.15
  • Linux x86 64 bit with a modern CPU (at least AVX/FMA)
  • Linux x86 64 bit with a modern CPU (at least AVX/FMA) + NVIDIA GPU (Compute Capability at least 3.0, see NVIDIA docs)

DeepSpeech 0.9.1

04 Nov 16:56

General

This is the 0.9.1 release of Deep Speech, an open speech-to-text engine. In accord with semantic versioning, this version is not completely backwards compatible with earlier versions. However, models exported for 0.7.X and 0.8.X should work with this release. This is a bugfix release and retains compatibility with the 0.9.0 models. All model files included here are identical to the ones in the 0.9.0 release. As with previous releases, this release includes the source code:

v0.9.1.tar.gz

under the MPL-2.0 license, and the acoustic models:

deepspeech-0.9.1-models.pbmm
deepspeech-0.9.1-models.tflite

In addition, we're releasing experimental Mandarin Chinese acoustic models trained on an internal corpus composed of 2,000 hours of read speech:

deepspeech-0.9.1-models-zh-CN.pbmm
deepspeech-0.9.1-models-zh-CN.tflite

all under the MPL-2.0 license.

The model files with the ".pbmm" extension are memory mapped and thus memory efficient and fast to load. The model files with the ".tflite" extension are converted to use TensorFlow Lite, have post-training quantization enabled, and are more suitable for resource-constrained environments.

The acoustic models were trained on American English with synthetic noise augmentation, and the .pbmm model achieves a 7.06% word error rate on the LibriSpeech clean test corpus.

Note that the model currently performs best in low-noise environments with clear recordings and has a bias towards US male accents. This does not mean the model cannot be used outside of these conditions, but that accuracy may be lower. Some users may need to train the model further to meet their intended use-case.

In addition we release the scorer:

deepspeech-0.9.1-models.scorer

which takes the place of the language model and trie in older releases and which is also under the MPL-2.0 license.

There is also a corresponding scorer for the Mandarin Chinese model:

deepspeech-0.9.1-models-zh-CN.scorer

We also include example audio files:

audio-0.9.1.tar.gz

which can be used to test the engine, and checkpoint files for both the English and Mandarin models:

deepspeech-0.9.1-checkpoint.tar.gz
deepspeech-0.9.1-checkpoint-zh-CN.tar.gz

which are under the MPL-2.0 license and can be used as the basis for further fine-tuning.

Notable changes from the previous release

  • Fixed problem with documentation build on ReadTheDocs.org (#3399)

Training Regimen + Hyperparameters for fine-tuning

The hyperparameters used to train the model are useful for fine-tuning, so we document them here along with the training regimen, the hardware used (a server with 8 Quadro RTX 6000 GPUs, each with 24GB of VRAM), and our use of the cuDNN RNN backend.

In contrast to some previous releases, training for this release was a fine-tuning of the previous 0.8.2 checkpoint, with data augmentation options enabled. The following hyperparameters were used for the fine-tuning. See the 0.8.2 release notes for the hyperparameters used for the base model.

  • train_files Fisher, LibriSpeech, Switchboard, Common Voice English, and approximately 1700 hours of transcribed WAMU (NPR) radio shows explicitly licensed for use as training corpora.
  • dev_files LibriSpeech clean dev corpus.
  • test_files LibriSpeech clean test corpus.
  • train_batch_size 128
  • dev_batch_size 128
  • test_batch_size 128
  • n_hidden 2048
  • learning_rate 0.0001
  • dropout_rate 0.40
  • epochs 200
  • augment pitch[pitch=1~0.1]
  • augment tempo[factor=1~0.1]
  • augment overlay[p=0.9,source=${noise},layers=1,snr=12~4] (where ${noise} is a dataset of Freesound.org background noise recordings)
  • augment overlay[p=0.1,source=${voices},layers=10~2,snr=12~4] (where ${voices} is a dataset of audiobook snippets extracted from Librivox)
  • augment resample[p=0.2,rate=12000~4000]
  • augment codec[p=0.2,bitrate=32000~16000]
  • augment reverb[p=0.2,decay=0.7~0.15,delay=10~8]
  • augment volume[p=0.2,dbfs=-10~10]
  • cache_for_epochs 10

The weights with the best validation loss were selected at the end of 200 epochs using --noearly_stop.

The optimal lm_alpha and lm_beta values with respect to the LibriSpeech clean dev corpus remain unchanged from the previous release:

  • lm_alpha 0.931289039105002
  • lm_beta 1.1834137581510284

For the Mandarin Chinese model, the following values are recommended:

  • lm_alpha 0.6940122363709647
  • lm_beta 4.777924224113021

Bindings

This release also includes a Python-based command-line tool, deepspeech, installed through:

pip install deepspeech

Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux. (See below to find which GPUs are supported.) This is done by instead installing the GPU-specific package:

pip install deepspeech-gpu

On Linux, macOS and Windows, the DeepSpeech package does not use TFLite by default. A TFLite version of the package on those platforms is available as:

pip install deepspeech-tflite

The release also exposes bindings for the following languages:

  • Python (Versions 3.5, 3.6, 3.7 and 3.8) installed via

    pip install deepspeech

    Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux. (See below to find which GPUs are supported.) This is done by instead installing the GPU-specific package:

    pip install deepspeech-gpu

    On Linux (AMD64), macOS and Windows, the DeepSpeech package does not use TFLite by default. A TFLite version of the package on those platforms is available as:

    pip install deepspeech-tflite
  • NodeJS (Versions 10.x, 11.x, 12.x, 13.x, 14.x and 15.x) installed via

    npm install deepspeech
    

    Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux. (See below to find which GPUs are supported.) This is done by instead installing the GPU-specific package:

    npm install deepspeech-gpu
    

    On Linux (AMD64), macOS and Windows, the DeepSpeech package does not use TFLite by default. A TFLite version of the package on those platforms is available as:

    npm install deepspeech-tflite
  • ElectronJS versions 5.0, 6.0, 6.1, 7.0, 7.1, 8.0, 9.0, 9.1, 9.2, 10.0 and 10.1 are also supported

  • C, which requires the appropriate shared objects to be installed from native_client.tar.xz (see the section in the main README that describes native_client.tar.xz installation).

  • .NET, which is installed by following the instructions on the NuGet package page.

In addition, there are third-party bindings supported by external developers, for example:

  • Rust which is installed by following the instructions on the external Rust repo.
  • Go which is installed by following the instructions on the external Go repo.
  • V which is installed by following the instructions on the external Vlang repo.

Supported Platforms

  • Windows 8.1, 10, and Server 2012 R2 64-bit (at least AVX support; requires the Redistributable Visual C++ 2015 Update 3 (64-bit) runtime).

  • OS X 10.10, 10.11, 10.12, 10.13, 10.14, and 10.15

  • Linux x86 64 bit with a modern CPU (at least AVX/FMA)

  • Linux x86 64 bit with a modern CPU (at least AVX/FMA) + NVIDIA GPU (Compute Capability at least 3.0, see NVIDIA docs)

  • Raspbian Buster on Raspberry Pi 3, Pi 4

  • Linux/ARM64 built against Debian/ARMbian Buster and tested on LePotato boards

  • Java Android (7.0-11.0) bindings (+ demo app). Tested on Google Pixel 2; Sony Xperia Z Premium; Nokia 1.3 (TFLite model only).

  • iOS with Swift bindings (experimental). Tested on iPhone Xs.

  • TFLite Delegation API is here...


DeepSpeech 0.9.0

02 Nov 13:07

General

This is the 0.9.0 release of Deep Speech, an open speech-to-text engine. In accord with semantic versioning, this version is not completely backwards compatible with earlier versions. However, models exported for 0.7.X and 0.8.X should work with this release. As with previous releases, this release includes the source code:

v0.9.0.tar.gz

under the MPL-2.0 license, and the acoustic models:

deepspeech-0.9.0-models.pbmm
deepspeech-0.9.0-models.tflite

In addition, we're releasing experimental Mandarin Chinese acoustic models trained on an internal corpus composed of 2,000 hours of read speech:

deepspeech-0.9.0-models-zh-CN.pbmm
deepspeech-0.9.0-models-zh-CN.tflite

all under the MPL-2.0 license.

The model files with the ".pbmm" extension are memory mapped and thus memory efficient and fast to load. The model files with the ".tflite" extension are converted to use TensorFlow Lite, have post-training quantization enabled, and are more suitable for resource-constrained environments.

The acoustic models were trained on American English with synthetic noise augmentation, and the .pbmm model achieves a 7.06% word error rate on the LibriSpeech clean test corpus.

Note that the model currently performs best in low-noise environments with clear recordings and has a bias towards US male accents. This does not mean the model cannot be used outside of these conditions, but that accuracy may be lower. Some users may need to train the model further to meet their intended use-case.

In addition we release the scorer:

deepspeech-0.9.0-models.scorer

which takes the place of the language model and trie in older releases and which is also under the MPL-2.0 license.

There is also a corresponding scorer for the Mandarin Chinese model:

deepspeech-0.9.0-models-zh-CN.scorer

We also include example audio files:

audio-0.9.0.tar.gz

which can be used to test the engine, and checkpoint files for both the English and Mandarin models:

deepspeech-0.9.0-checkpoint.tar.gz
deepspeech-0.9.0-checkpoint-zh-CN.tar.gz

which are under the MPL-2.0 license and can be used as the basis for further fine-tuning.

Notable changes from the previous release

  • Fixed incorrect minimum OS version in macOS binaries (#3259)
  • Fixed bug in metadata output for Python package client (#3264)
  • Added ElectronJS v9.2 support (#3266)
  • Improved Bytes output mode documentation
  • (Optional) Layer Norm support in training
  • Add support for boosting scores for hot words during decoding (#3297); see the sketch after this list
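
The new hot-word boosting can be tried from the command-line client; a sketch, assuming the --hot_words flag shipped with this feature (comma-separated word:boost pairs) and a hypothetical audio file:

deepspeech --model deepspeech-0.9.0-models.pbmm \
    --scorer deepspeech-0.9.0-models.scorer \
    --hot_words "activate:10.0,deactivate:7.5" \
    --audio my_audio.wav

A positive boost makes a word more likely during decoding; a negative boost suppresses it.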

Training Regimen + Hyperparameters for fine-tuning

The hyperparameters used to train the model are useful for fine-tuning, so we document them here along with the training regimen, the hardware used (a server with 8 Quadro RTX 6000 GPUs, each with 24GB of VRAM), and our use of the cuDNN RNN backend.

In contrast to some previous releases, training for this release was a fine-tuning of the previous 0.8.2 checkpoint, with data augmentation options enabled. The following hyperparameters were used for the fine-tuning. See the 0.8.2 release notes for the hyperparameters used for the base model.

  • train_files Fisher, LibriSpeech, Switchboard, Common Voice English, and approximately 1700 hours of transcribed WAMU (NPR) radio shows explicitly licensed for use as training corpora.
  • dev_files LibriSpeech clean dev corpus.
  • test_files LibriSpeech clean test corpus.
  • train_batch_size 128
  • dev_batch_size 128
  • test_batch_size 128
  • n_hidden 2048
  • learning_rate 0.0001
  • dropout_rate 0.40
  • epochs 200
  • augment pitch[pitch=1~0.1]
  • augment tempo[factor=1~0.1]
  • augment overlay[p=0.9,source=${noise},layers=1,snr=12~4] (where ${noise} is a dataset of Freesound.org background noise recordings)
  • augment overlay[p=0.1,source=${voices},layers=10~2,snr=12~4] (where ${voices} is a dataset of audiobook snippets extracted from Librivox)
  • augment resample[p=0.2,rate=12000~4000]
  • augment codec[p=0.2,bitrate=32000~16000]
  • augment reverb[p=0.2,decay=0.7~0.15,delay=10~8]
  • augment volume[p=0.2,dbfs=-10~10]
  • cache_for_epochs 10

The weights with the best validation loss were selected at the end of 200 epochs using --noearly_stop.

The optimal lm_alpha and lm_beta values with respect to the LibriSpeech clean dev corpus remain unchanged from the previous release:

  • lm_alpha 0.931289039105002
  • lm_beta 1.1834137581510284

For the Mandarin Chinese model, the following values are recommended:

  • lm_alpha 0.6940122363709647
  • lm_beta 4.777924224113021

Bindings

This release also includes a Python-based command-line tool, deepspeech, installed through:

pip install deepspeech

Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux. (See below to find which GPUs are supported.) This is done by instead installing the GPU-specific package:

pip install deepspeech-gpu

On Linux, macOS and Windows, the DeepSpeech package does not use TFLite by default. A TFLite version of the package on those platforms is available as:

pip install deepspeech-tflite

The release also exposes bindings for the following languages:

  • Python (Versions 3.5, 3.6, 3.7 and 3.8) installed via

    pip install deepspeech

    Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux. (See below to find which GPUs are supported.) This is done by instead installing the GPU-specific package:

    pip install deepspeech-gpu

    On Linux (AMD64), macOS and Windows, the DeepSpeech package does not use TFLite by default. A TFLite version of the package on those platforms is available as:

    pip install deepspeech-tflite
  • NodeJS (Versions 10.x, 11.x, 12.x, 13.x, 14.x and 15.x) installed via

    npm install deepspeech
    

    Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux. (See below to find which GPUs are supported.) This is done by instead installing the GPU-specific package:

    npm install deepspeech-gpu
    

    On Linux (AMD64), macOS and Windows, the DeepSpeech package does not use TFLite by default. A TFLite version of the package on those platforms is available as:

    npm install deepspeech-tflite
  • ElectronJS versions 5.0, 6.0, 6.1, 7.0, 7.1, 8.0, 9.0, 9.1 and 9.2 are also supported

  • C, which requires the appropriate shared objects to be installed from native_client.tar.xz (see the section in the main README that describes native_client.tar.xz installation).

  • .NET, which is installed by following the instructions on the NuGet package page.

In addition, there are third-party bindings supported by external developers, for example:

  • Rust which is installed by following the instructions on the external Rust repo.
  • Go which is installed by following the instructions on the external Go repo.
  • V which is installed by following the instructions on the external Vlang repo.

Supported Platforms

  • Windows 8.1, 10, and Server 2012 R2 64-bit (at least AVX support; requires the Redistributable Visual C++ 2015 Update 3 (64-bit) runtime).
  • OS X 10.10, 10.11, 10.12, 10.13, 10.14, and 10.15
  • Linux x86 64 bit with a modern CPU (at least AVX/FMA)
  • Linux x86 64 bit with a modern CPU (at least AVX/FMA) + NVIDIA GPU (Compute Capability at least 3.0, see NVIDIA docs)
  • Raspbian Buster on Raspberry Pi 3, Pi 4
  • Linux/ARM64 built against Debian/ARMbian Buster and tested on LePotato boards
  • Java Android (7.0-11.0) bindings (+ demo app). Tested on Google Pixel 2; Sony Xperia Z Premium; Nokia 1.3 (TFLite model only).

v0.9.0-alpha.12

30 Oct 17:17
Pre-release
Bump VERSION to 0.9.0-alpha.12

v0.9.0-alpha.11

09 Oct 13:26
Pre-release
Bump VERSION to 0.9.0-alpha.11

v0.9.0-alpha.10

25 Sep 12:29
34a62bd
Pre-release
Merge pull request #3337 from lissyx/bump-0.9.0a10

Bump VERSION to 0.9.0-alpha.10

v0.9.0-alpha.9

21 Sep 11:30
Pre-release
Bump VERSION to 0.9.0-alpha.9

v0.9.0-alpha.8

10 Sep 08:14
ce95be1
Pre-release
Merge pull request #3315 from lissyx/bump-v0.9.0-alpha.8

Bump VERSION to v0.9.0-alpha.8