---
title: "MistralChatGenerator"
id: mistralchatgenerator
slug: "/mistralchatgenerator"
description: "This component enables chat completion using Mistral’s text generation models."
---

# MistralChatGenerator

This component enables chat completion using Mistral’s text generation models.

<div className="key-value-table">

|  |  |
| --- | --- |
| **Most common position in a pipeline** | After a [ChatPromptBuilder](../builders/chatpromptbuilder.mdx) |
| **Mandatory init variables** | `api_key`: The Mistral API key. Can be set with the `MISTRAL_API_KEY` env var. |
| **Mandatory run variables** | `messages`: A list of [`ChatMessage`](../../concepts/data-classes/chatmessage.mdx) objects |
| **Output variables** | `replies`: A list of [`ChatMessage`](../../concepts/data-classes/chatmessage.mdx) objects <br /> <br />`meta`: A list of dictionaries with the metadata associated with each reply, such as token count, finish reason, and so on |
| **API reference** | [Mistral](/reference/integrations-mistral) |
| **GitHub link** | https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/mistral |

</div>

## Overview

This integration supports Mistral’s models provided through the generative endpoint. For a full list of available models, check out the [Mistral documentation](https://docs.mistral.ai/platform/endpoints/#generative-endpoints).

`MistralChatGenerator` needs a Mistral API key to work. You can set this key in:

- The `api_key` init parameter using [Secret API](../../concepts/secret-management.mdx)
- The `MISTRAL_API_KEY` environment variable (recommended)
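
For example, the following sketch shows both options. With no `api_key` argument, the component reads the `MISTRAL_API_KEY` environment variable; `Secret.from_token` is intended for keys already available in code and should not be used with hard-coded values in production.

```python
from haystack.utils import Secret
from haystack_integrations.components.generators.mistral import MistralChatGenerator

# Recommended: resolve the key from the environment variable
generator = MistralChatGenerator(api_key=Secret.from_env_var("MISTRAL_API_KEY"))

# Alternative: wrap a key you already hold in memory (avoid hard-coding it)
generator = MistralChatGenerator(api_key=Secret.from_token("<your-api-key>"))
```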

Currently, the available models are:

- `mistral-tiny` (default)
- `mistral-small`
- `mistral-medium` (soon to be deprecated)
- `mistral-large-latest`
- `codestral-latest`

This component needs a list of [`ChatMessage`](../../concepts/data-classes/chatmessage.mdx) objects to operate. `ChatMessage` is a data class that contains a message, a role (who generated the message, such as `user`, `assistant`, `system`, `function`), and optional metadata.

Refer to the [Mistral API documentation](https://docs.mistral.ai/api/#operation/createChatCompletion) for more details on the parameters supported by the Mistral API, which you can provide with `generation_kwargs` when running the component.
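
For example, you can build a short conversation and tune sampling parameters at run time. The `temperature` and `max_tokens` values below are illustrative; any parameter accepted by Mistral's chat completion API can be passed in `generation_kwargs`.

```python
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.mistral import MistralChatGenerator

generator = MistralChatGenerator(model="mistral-small")

messages = [
    ChatMessage.from_system("You are a concise assistant."),
    ChatMessage.from_user("Summarize what an embedding is in one sentence."),
]

# generation_kwargs are forwarded to the Mistral chat completions endpoint
result = generator.run(messages, generation_kwargs={"temperature": 0.2, "max_tokens": 100})
print(result["replies"][0].text)
```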

### Tool Support

`MistralChatGenerator` supports function calling through the `tools` parameter, which accepts flexible tool configurations:

- **A list of Tool objects**: Pass individual tools as a list
- **A single Toolset**: Pass an entire Toolset directly
- **Mixed Tools and Toolsets**: Combine multiple Toolsets with standalone tools in a single list

This allows you to organize related tools into logical groups while also including standalone tools as needed.

```python
from haystack.tools import Tool, Toolset
from haystack_integrations.components.generators.mistral import MistralChatGenerator

# Create individual tools (parameters and function arguments omitted for brevity)
weather_tool = Tool(name="weather", description="Get weather info", ...)
news_tool = Tool(name="news", description="Get latest news", ...)

# Group related tools into a Toolset
# (add_tool, subtract_tool, and multiply_tool are Tool objects defined elsewhere)
math_toolset = Toolset([add_tool, subtract_tool, multiply_tool])

# Pass a mix of Toolset and Tool objects to the generator
generator = MistralChatGenerator(
    tools=[math_toolset, weather_tool, news_tool]
)
```

For more details on working with tools, see the [Tool](../../tools/tool.mdx) and [Toolset](../../tools/toolset.mdx) documentation.
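
The following is a minimal, self-contained sketch of tool calling. The `get_weather` function, its JSON schema, and the user question are illustrative; when the model decides to call a tool, the proposed calls are available on the reply's `tool_calls` attribute.

```python
from haystack.dataclasses import ChatMessage
from haystack.tools import Tool
from haystack_integrations.components.generators.mistral import MistralChatGenerator

def get_weather(city: str) -> str:
    # Hypothetical helper used only for illustration
    return f"It is sunny in {city}."

weather_tool = Tool(
    name="weather",
    description="Get weather information for a city",
    parameters={
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
    function=get_weather,
)

llm = MistralChatGenerator(model="mistral-large-latest", tools=[weather_tool])
result = llm.run([ChatMessage.from_user("What's the weather in Paris?")])

# Inspect the tool calls proposed by the model, if any
print(result["replies"][0].tool_calls)
```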

### Streaming

This Generator supports [streaming](guides-to-generators/choosing-the-right-generator.mdx#streaming-support) the tokens from the LLM directly in the output. To do so, pass a function to the `streaming_callback` init parameter.
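
The ready-made `print_streaming_chunk` utility (used in the examples below) prints chunks as they arrive, but you can also supply your own callback. A minimal sketch, assuming each chunk is a `StreamingChunk` with a `content` string:

```python
from haystack.dataclasses import ChatMessage, StreamingChunk
from haystack_integrations.components.generators.mistral import MistralChatGenerator

def on_chunk(chunk: StreamingChunk):
    # Called once per streamed chunk; print it without a newline
    print(chunk.content, end="", flush=True)

generator = MistralChatGenerator(streaming_callback=on_chunk)
generator.run([ChatMessage.from_user("Write a haiku about the sea.")])
```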

## Usage

Install the `mistral-haystack` package to use the `MistralChatGenerator`:

```shell
pip install mistral-haystack
```

#### On its own

```python
from haystack_integrations.components.generators.mistral import MistralChatGenerator
from haystack.components.generators.utils import print_streaming_chunk
from haystack.dataclasses import ChatMessage
from haystack.utils import Secret

generator = MistralChatGenerator(
    api_key=Secret.from_env_var("MISTRAL_API_KEY"),
    streaming_callback=print_streaming_chunk,
)
message = ChatMessage.from_user("What's Natural Language Processing? Be brief.")
print(generator.run([message]))
```

With multimodal inputs:

```python
from haystack.dataclasses import ChatMessage, ImageContent
from haystack_integrations.components.generators.mistral import MistralChatGenerator

llm = MistralChatGenerator(model="pixtral-12b-2409")

image = ImageContent.from_file_path("apple.jpg")
user_message = ChatMessage.from_user(
    content_parts=["What does the image show? Max 5 words.", image],
)

response = llm.run([user_message])["replies"][0].text
print(response)

# Red apple on straw.
```

#### In a Pipeline

Below is an example RAG pipeline where we answer questions based on the contents of a URL. We add the fetched contents to our `messages` in the `ChatPromptBuilder` and generate an answer with the `MistralChatGenerator`.

```python
from haystack import Pipeline
from haystack.components.builders import ChatPromptBuilder
from haystack.components.generators.utils import print_streaming_chunk
from haystack.components.fetchers import LinkContentFetcher
from haystack.components.converters import HTMLToDocument
from haystack.dataclasses import ChatMessage

from haystack_integrations.components.generators.mistral import MistralChatGenerator

fetcher = LinkContentFetcher()
converter = HTMLToDocument()
prompt_builder = ChatPromptBuilder(variables=["documents"])
llm = MistralChatGenerator(
    streaming_callback=print_streaming_chunk,
    model="mistral-small",
)

message_template = """Answer the following question based on the contents of the article: {{query}}\n
               Article: {{documents[0].content}} \n
           """
messages = [ChatMessage.from_user(message_template)]

rag_pipeline = Pipeline()
rag_pipeline.add_component(name="fetcher", instance=fetcher)
rag_pipeline.add_component(name="converter", instance=converter)
rag_pipeline.add_component("prompt_builder", prompt_builder)
rag_pipeline.add_component("llm", llm)

rag_pipeline.connect("fetcher.streams", "converter.sources")
rag_pipeline.connect("converter.documents", "prompt_builder.documents")
rag_pipeline.connect("prompt_builder.prompt", "llm.messages")

question = "What are the capabilities of Mixtral?"

result = rag_pipeline.run(
    {
        "fetcher": {"urls": ["https://mistral.ai/news/mixtral-of-experts"]},
        "prompt_builder": {
            "template_variables": {"query": question},
            "template": messages,
        },
        "llm": {"generation_kwargs": {"max_tokens": 165}},
    },
)
```
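
Because the generator is configured with `print_streaming_chunk`, the answer is streamed to stdout as it is generated. The final reply is also returned in the pipeline result and can be read from the `llm` component's output, for example:

```python
print(result["llm"]["replies"][0].text)
```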

## Additional References

🧑‍🍳 Cookbook: [Web QA with Mixtral-8x7B-Instruct-v0.1](https://haystack.deepset.ai/cookbook/mixtral-8x7b-for-web-qa)