---
title: "LocalWhisperTranscriber"
id: localwhispertranscriber
slug: "/localwhispertranscriber"
description: "Use `LocalWhisperTranscriber` to transcribe audio files with OpenAI's Whisper model running on your local machine."
---

# LocalWhisperTranscriber

Use `LocalWhisperTranscriber` to transcribe audio files with OpenAI's Whisper model running on your local machine.

<div className="key-value-table">

|  |  |
| --- | --- |
| **Most common position in a pipeline** | As the first component in an indexing pipeline |
| **Mandatory run variables** | `sources`: A list of paths or binary streams that you want to transcribe |
| **Output variables** | `documents`: A list of documents, one per transcribed audio file |
| **API reference** | [Audio](/reference/audio-api) |
| **GitHub link** | https://github.com/deepset-ai/haystack/blob/main/haystack/components/audio/whisper_local.py |

</div>

## Overview

`LocalWhisperTranscriber` needs to know which Whisper model to work with. Specify this in the `model` parameter when initializing the component. All transcription runs on the executing machine, and the audio is never sent to a third-party provider.

See other optional parameters you can specify in our [API documentation](/reference/audio-api).
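
For illustration, here's a minimal initialization sketch. The `model` parameter is described above; the `whisper_params` argument is an assumption based on the current API reference, so check the reference for the exact parameters available in your release:

```python
from haystack.components.audio import LocalWhisperTranscriber

# Choose any Whisper checkpoint size: "tiny" is fastest, "large" is most accurate.
transcriber = LocalWhisperTranscriber(
    model="medium",
    # Assumed optional parameter: a dict forwarded to Whisper's transcribe() call.
    whisper_params={"language": "en"},
)
# Loads (and, if necessary, downloads) the model before the first run.
transcriber.warm_up()
```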

See the [Whisper API documentation](https://platform.openai.com/docs/guides/speech-to-text) and the official Whisper [GitHub repo](https://github.com/openai/whisper) for the supported audio formats and languages.

To work with `LocalWhisperTranscriber`, first install torch and [Whisper](https://github.com/openai/whisper) with the following commands:

```bash
pip install 'transformers[torch]'
pip install -U openai-whisper
```
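
Whisper also relies on the `ffmpeg` command-line tool to decode audio, so make sure it is available on your system, for example:

```bash
# Ubuntu or Debian
sudo apt update && sudo apt install ffmpeg

# macOS with Homebrew
brew install ffmpeg
```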

## Usage

### On its own

Here’s an example of how to use `LocalWhisperTranscriber` on its own:

```python
import requests
from haystack.components.audio import LocalWhisperTranscriber

# Download a sample audio file to transcribe.
response = requests.get(
    "https://ia903102.us.archive.org/19/items/100-Best--Speeches/EK_19690725_64kb.mp3",
)
with open("kennedy_speech.mp3", "wb") as file:
    file.write(response.content)

# Load the Whisper model before running the component.
transcriber = LocalWhisperTranscriber(model="tiny")
transcriber.warm_up()

transcription = transcriber.run(sources=["./kennedy_speech.mp3"])
print(transcription["documents"][0].content)
```
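
You can also pass several files in one `run()` call; each audio file is transcribed into its own document. A quick sketch (the file names are placeholders, and the exact metadata keys may vary by version):

```python
results = transcriber.run(sources=["part1.mp3", "part2.mp3"])
for doc in results["documents"]:
    # Each source yields one document; its metadata records details of the transcription.
    print(doc.meta)
    print(doc.content[:100])
```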

### In a pipeline

The pipeline below fetches an audio file from a URL and transcribes it: `LinkContentFetcher` retrieves the audio file, `LocalWhisperTranscriber` transcribes it into text, and the script prints the resulting transcription.

```python
from haystack import Pipeline
from haystack.components.audio import LocalWhisperTranscriber
from haystack.components.fetchers import LinkContentFetcher

pipe = Pipeline()
pipe.add_component("fetcher", LinkContentFetcher())
pipe.add_component("transcriber", LocalWhisperTranscriber(model="tiny"))

# The fetcher's output streams are passed to the transcriber as its audio sources.
pipe.connect("fetcher", "transcriber")
result = pipe.run(
    data={
        "fetcher": {
            "urls": [
                "https://ia903102.us.archive.org/19/items/100-Best--Speeches/EK_19690725_64kb.mp3",
            ],
        },
    },
)
print(result["transcriber"]["documents"][0].content)
```
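
Since `LocalWhisperTranscriber` most often sits at the start of an indexing pipeline (see the table above), you can extend this example to store the transcriptions. The sketch below assumes the in-memory document store and default splitter settings are acceptable for your use case; it splits each transcription and writes the chunks to the store:

```python
from haystack import Pipeline
from haystack.components.audio import LocalWhisperTranscriber
from haystack.components.fetchers import LinkContentFetcher
from haystack.components.preprocessors import DocumentSplitter
from haystack.components.writers import DocumentWriter
from haystack.document_stores.in_memory import InMemoryDocumentStore

document_store = InMemoryDocumentStore()

indexing = Pipeline()
indexing.add_component("fetcher", LinkContentFetcher())
indexing.add_component("transcriber", LocalWhisperTranscriber(model="tiny"))
indexing.add_component("splitter", DocumentSplitter(split_by="word", split_length=200))
indexing.add_component("writer", DocumentWriter(document_store=document_store))

indexing.connect("fetcher", "transcriber")
indexing.connect("transcriber", "splitter")
indexing.connect("splitter", "writer")

indexing.run(
    data={
        "fetcher": {
            "urls": [
                "https://ia903102.us.archive.org/19/items/100-Best--Speeches/EK_19690725_64kb.mp3",
            ],
        },
    },
)
# The transcription is now split and stored, ready for retrieval.
print(document_store.count_documents())
```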

## Additional References

🧑‍🍳 Cookbook: [Multilingual RAG from a podcast with Whisper, Qdrant and Mistral](https://haystack.deepset.ai/cookbook/multilingual_rag_podcast)