---
title: "STACKITChatGenerator"
id: stackitchatgenerator
slug: "/stackitchatgenerator"
description: "This component enables chat completions using the STACKIT API."
---

# STACKITChatGenerator

This component enables chat completions using the STACKIT API.

<div className="key-value-table">

|  |  |
| --- | --- |
| **Most common position in a pipeline** | After a [`ChatPromptBuilder`](../builders/chatpromptbuilder.mdx) |
| **Mandatory init variables** | `model`: The model used through the STACKIT API |
| **Mandatory run variables** | `messages`: A list of [`ChatMessage`](../../concepts/data-classes/chatmessage.mdx) objects |
| **Output variables** | `replies`: A list of [`ChatMessage`](../../concepts/data-classes/chatmessage.mdx) objects <br /> <br />`meta`: A list of dictionaries with the metadata associated with each reply (such as token count, finish reason, and so on) |
| **API reference** | [STACKIT](/reference/integrations-stackit) |
| **GitHub link** | https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/stackit |

</div>

## Overview

`STACKITChatGenerator` provides access to text generation models served by STACKIT through their API.

### Parameters

To use the `STACKITChatGenerator`, set a `STACKIT_API_KEY` environment variable. Alternatively, you can provide the API key through a different environment variable or as a token by setting the `api_key` init parameter and using Haystack's [secret management](../../concepts/secret-management.mdx).

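For example, here is a minimal sketch of reading the key from a custom environment variable through Haystack's `Secret` class (the variable name `MY_STACKIT_KEY` is hypothetical):

```python
from haystack.utils import Secret

from haystack_integrations.components.generators.stackit import STACKITChatGenerator

# Read the API key from a custom environment variable
# (MY_STACKIT_KEY is a hypothetical name; the default is STACKIT_API_KEY)
generator = STACKITChatGenerator(
    model="neuralmagic/Meta-Llama-3.1-70B-Instruct-FP8",
    api_key=Secret.from_env_var("MY_STACKIT_KEY"),
)
```
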
Set your preferred supported model with the `model` parameter when initializing the component. See the full list of supported models on the [STACKIT website](https://docs.stackit.cloud/stackit/en/models-licenses-319914532.html).

Optionally, you can change the default `api_base_url`, which is `"https://api.openai-compat.model-serving.eu01.onstackit.cloud/v1"`.

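As a sketch, overriding the base URL at init time could look like this (the URL shown is the documented default; substitute a different endpoint if needed):

```python
from haystack_integrations.components.generators.stackit import STACKITChatGenerator

# Point the component at a specific OpenAI-compatible endpoint;
# the URL below is the documented default
generator = STACKITChatGenerator(
    model="neuralmagic/Meta-Llama-3.1-70B-Instruct-FP8",
    api_base_url="https://api.openai-compat.model-serving.eu01.onstackit.cloud/v1",
)
```
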
You can pass any text generation parameters valid for the STACKIT Chat Completion API directly to this component with the `generation_kwargs` parameter in the init or run methods.

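For instance, a minimal sketch, assuming OpenAI-compatible parameters such as `max_tokens` and `temperature` (check the STACKIT API documentation for the exact set it accepts):

```python
from haystack.dataclasses import ChatMessage

from haystack_integrations.components.generators.stackit import STACKITChatGenerator

# Defaults set at init apply to every call
generator = STACKITChatGenerator(
    model="neuralmagic/Meta-Llama-3.1-70B-Instruct-FP8",
    generation_kwargs={"max_tokens": 512, "temperature": 0.3},
)

# Parameters passed at run time override the init-time defaults for that call
result = generator.run(
    [ChatMessage.from_user("Summarize the water cycle in two sentences.")],
    generation_kwargs={"temperature": 0.9},
)
```
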
The component needs a list of `ChatMessage` objects to run. `ChatMessage` is a data class that contains a message, a role (who generated the message, such as `user`, `assistant`, `system`, `function`), and optional metadata. Find out more in the [`ChatMessage` documentation](../../concepts/data-classes/chatmessage.mdx).

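For example, a short conversation could be assembled like this:

```python
from haystack.dataclasses import ChatMessage

# A system message to steer the model, followed by the user's question
messages = [
    ChatMessage.from_system("You are a concise, factual assistant."),
    ChatMessage.from_user("What is the capital of France?"),
]
```
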
### Streaming

This ChatGenerator supports [streaming](guides-to-generators/choosing-the-right-generator.mdx#streaming-support) the tokens from the LLM directly into the output. To do so, pass a function to the `streaming_callback` init parameter.

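For instance, you can use Haystack's built-in `print_streaming_chunk` helper, which prints each chunk to standard output as it arrives (a minimal sketch):

```python
from haystack.components.generators.utils import print_streaming_chunk
from haystack.dataclasses import ChatMessage

from haystack_integrations.components.generators.stackit import STACKITChatGenerator

# Any callable that accepts a StreamingChunk works as the callback
generator = STACKITChatGenerator(
    model="neuralmagic/Meta-Llama-3.1-70B-Instruct-FP8",
    streaming_callback=print_streaming_chunk,
)

result = generator.run([ChatMessage.from_user("Tell me a joke.")])
```
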
## Usage

Install the `stackit-haystack` package to use the `STACKITChatGenerator`:

```shell
pip install stackit-haystack
```

### On its own

```python
from haystack_integrations.components.generators.stackit import STACKITChatGenerator
from haystack.dataclasses import ChatMessage

generator = STACKITChatGenerator(model="neuralmagic/Meta-Llama-3.1-70B-Instruct-FP8")

result = generator.run([ChatMessage.from_user("Tell me a joke.")])
print(result)
```

With multimodal inputs:

```python
from haystack.dataclasses import ChatMessage, ImageContent
from haystack_integrations.components.generators.stackit import STACKITChatGenerator

llm = STACKITChatGenerator(model="meta-llama/Llama-3.2-11B-Vision-Instruct")

image = ImageContent.from_file_path("apple.jpg")
user_message = ChatMessage.from_user(
    content_parts=["What does the image show? Max 5 words.", image],
)

response = llm.run([user_message])["replies"][0].text
print(response)

# Red apple on straw.
```

### In a pipeline

You can also use `STACKITChatGenerator` in your pipeline.

```python
from haystack import Pipeline
from haystack.components.builders import ChatPromptBuilder
from haystack.dataclasses import ChatMessage

from haystack_integrations.components.generators.stackit import STACKITChatGenerator

prompt_builder = ChatPromptBuilder()
llm = STACKITChatGenerator(model="neuralmagic/Meta-Llama-3.1-70B-Instruct-FP8")

messages = [ChatMessage.from_user("Question: {{question}} \n")]

pipeline = Pipeline()
pipeline.add_component("prompt_builder", prompt_builder)
pipeline.add_component("llm", llm)

pipeline.connect("prompt_builder.prompt", "llm.messages")

result = pipeline.run(
    {
        "prompt_builder": {
            "template_variables": {"question": "Tell me a joke."},
            "template": messages,
        },
    },
)

print(result)
```

For an example of streaming in a pipeline, refer to the examples in the STACKIT integration [repository](https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/stackit/examples) and on its dedicated [integration page](https://haystack.deepset.ai/integrations/stackit).