Transcription


The Transcription pipeline converts speech in audio files to text.

Example

The following shows a simple example using this pipeline.

  from txtai.pipeline import Transcription

  # Create and run pipeline
  transcribe = Transcription()
  transcribe("path to wav file")

See the link below for a more detailed example.

Notebook: Transcribe audio to text (Open In Colab)
Description: Convert audio files to text

Configuration-driven example

Pipelines are run with Python or configuration. Pipelines can be instantiated in configuration using the lower case name of the pipeline. Configuration-driven pipelines are run with workflows or the API.

config.yml

  # Create pipeline using lower case class name
  transcription:

  # Run pipeline with workflow
  workflow:
    transcribe:
      tasks:
        - action: transcription

Run with Workflows

  from txtai.app import Application

  # Create and run pipeline with workflow
  app = Application("config.yml")
  list(app.workflow("transcribe", ["path to wav file"]))

Run with API

  CONFIG=config.yml uvicorn "txtai.api:app" &

  curl \
    -X POST "http://localhost:8000/workflow" \
    -H "Content-Type: application/json" \
    -d '{"name":"transcribe", "elements":["path to wav file"]}'

Methods

Python documentation for the pipeline.

Source code in txtai/pipeline/audio/transcription.py

  def __init__(self, path=None, quantize=False, gpu=True, model=None, **kwargs):
      if not SOUNDFILE:
          raise ImportError("SoundFile library not installed or libsndfile not found")

      # Call parent constructor
      super().__init__("automatic-speech-recognition", path, quantize, gpu, model, **kwargs)
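
Since the constructor passes the model path through to the parent pipeline, a specific speech recognition model can be selected. The model name below is only an illustrative example, not a requirement of the pipeline.

  from txtai.pipeline import Transcription

  # Load a specific speech recognition model (model name is illustrative)
  transcribe = Transcription("openai/whisper-tiny")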

Transcribes audio files or data to text.

This method supports a single audio element or a list of audio. If the input is a single element, the return type is a string. If the input is a list, a list of strings is returned.

Parameters:

  audio: audio|list (required)
  rate: sample rate, only required with raw audio data (default: None)
  chunk: process audio in chunk second sized segments (default: 10)
  join: if True (default), combine each chunk back together into a single text output. When False, chunks are returned as a list of dicts, each having raw associated audio and sample rate in addition to text (default: True)

Returns:

  list of transcribed text
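
As a sketch of the parameters above: raw audio data can be passed alongside its sample rate, and join=False returns per-chunk results. The file path is a placeholder and the soundfile usage assumes a WAV file is available.

  import soundfile as sf

  from txtai.pipeline import Transcription

  transcribe = Transcription()

  # Read raw audio data (file path is a placeholder)
  raw, samplerate = sf.read("speech.wav")

  # Raw audio data requires the sample rate
  text = transcribe(raw, rate=samplerate)

  # join=False returns chunks as a list of dicts with text, raw audio and sample rate
  for segment in transcribe("speech.wav", chunk=10, join=False):
      print(segment)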

Source code in txtai/pipeline/audio/transcription.py

  def __call__(self, audio, rate=None, chunk=10, join=True):
      """
      Transcribes audio files or data to text.

      This method supports a single audio element or a list of audio. If the input is a single element,
      the return type is a string. If the input is a list, a list of strings is returned.

      Args:
          audio: audio|list
          rate: sample rate, only required with raw audio data
          chunk: process audio in chunk second sized segments
          join: if True (default), combine each chunk back together into a single text output.
                When False, chunks are returned as a list of dicts, each having raw associated audio
                and sample rate in addition to text

      Returns:
          list of transcribed text
      """

      # Convert single element to list
      values = [audio] if not isinstance(audio, list) else audio

      # Read input audio
      speech = self.read(values, rate)

      # Apply transformation rules and store results
      results = self.batchprocess(speech, chunk) if chunk and not join else self.process(speech, chunk)

      # Return single element if single element passed in
      return results[0] if not isinstance(audio, list) else results