---
title: "Migration Guide"
id: migration
slug: "/migration"
description: "Learn how to make the move to Haystack 2.x from Haystack 1.x."
---

# Migration Guide

Learn how to make the move to Haystack 2.x from Haystack 1.x.

This guide is for readers with previous Haystack experience who want to understand the differences between Haystack 1.x and Haystack 2.x. If you're new to Haystack, skip this page and go directly to the Haystack 2.x [documentation](get-started.mdx).

## Major Changes

Haystack 2.x represents a significant overhaul of Haystack 1.x, and it's important to note that certain key concepts outlined in this section don't map one-to-one between the two versions.

### Package Name

Haystack 1.x was distributed in a package called `farm-haystack`. To migrate your application, uninstall `farm-haystack` and install the new `haystack-ai` package for Haystack 2.x.

:::warning
Two versions of the project cannot coexist in the same Python environment.

If both packages are installed in the same environment, one option is to remove them both and then install only the one you need:

```bash
pip uninstall -y farm-haystack haystack-ai
pip install haystack-ai
```
:::

### Nodes

While Haystack 2.x continues to rely on the `Pipeline` abstraction, the elements linked in a pipeline graph are now referred to simply as _components_, replacing the terms _nodes_ and _pipeline components_ used in previous versions. The [_Migrating Components_](#migrating-components) section below outlines which component in Haystack 2.x can be used as a replacement for a specific 1.x node.

### Pipelines

Pipelines continue to serve as the fundamental structure of all Haystack applications. While the `Pipeline` abstraction remains consistent, Haystack 2.x introduces significant enhancements that address various limitations of its predecessor. For instance, pipelines now support loops, accept inputs that are no longer restricted to queries, and can route the output of a component to multiple recipients. This added flexibility, however, comes with notable differences in how pipelines are defined in Haystack 2.x compared to the previous version.

In Haystack 1.x, a pipeline was built by adding one node after the other. In the resulting pipeline graph, edges were automatically added to connect the nodes in the order they were added.

Building a pipeline in Haystack 2.x is a two-step process:

1. First, components are added to the pipeline in no particular order by calling the `add_component` method.
2. Then, the components must be explicitly connected by calling the `connect` method to define the final graph.

To migrate an existing pipeline, first go through the nodes and identify their counterparts in Haystack 2.x (see the following section, [_Migrating Components_](#migrating-components), for guidance). If all the nodes can be replaced by corresponding components, add them to the pipeline with `add_component` and connect them explicitly with the appropriate calls to `connect`. Here is an example:

**Haystack 1.x**

```python
pipeline = Pipeline()

node_1 = SomeNode()
node_2 = AnotherNode()

pipeline.add_node(node_1, name="Node_1", inputs=["Query"])
pipeline.add_node(node_2, name="Node_2", inputs=["Node_1"])
```

**Haystack 2.x**

```python
pipeline = Pipeline()

component_1 = SomeComponent()
component_2 = AnotherComponent()

pipeline.add_component("Comp_1", component_1)
pipeline.add_component("Comp_2", component_2)

pipeline.connect("Comp_1", "Comp_2")
```
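
The add-then-connect model boils down to building an explicit directed graph. As a rough, Haystack-free illustration of the idea (the `ToyPipeline` class below is our own sketch, not Haystack code):

```python
class ToyPipeline:
    """A toy sketch of the two-step add/connect model."""

    def __init__(self):
        self.components = {}   # name -> callable
        self.connections = []  # (sender, receiver) edges

    def add_component(self, name, component):
        # Step 1: register components in no particular order.
        self.components[name] = component

    def connect(self, sender, receiver):
        # Step 2: explicitly define the edges of the graph.
        self.connections.append((sender, receiver))

    def run(self, first, data):
        # Naive linear execution, enough to show the idea.
        result = self.components[first](data)
        current = first
        for sender, receiver in self.connections:
            if sender == current:
                result = self.components[receiver](result)
                current = receiver
        return result


pipe = ToyPipeline()
pipe.add_component("upper", str.upper)
pipe.add_component("exclaim", lambda s: s + "!")
pipe.connect("upper", "exclaim")
print(pipe.run("upper", "hello"))  # HELLO!
```

In the real `Pipeline`, connections can also address named input and output sockets, as the indexing example later in this guide shows.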

In case a specific replacement component is not available for one of your nodes, migrating the pipeline might still be possible by:

- Either [creating a custom component](../concepts/components/custom-components.mdx), or
- Changing the pipeline logic, as a last resort.

:::note
Check out the [Pipelines](../concepts/pipelines.mdx) section of our 2.x documentation for a more detailed look at how the new pipelines work.
:::

### Document Stores

The fundamental concept of Document Stores as gateways to text and metadata stored in a database didn't change in Haystack 2.x, but there are significant differences from Haystack 1.x.

In Haystack 1.x, Document Stores were a special type of node that you could use in two ways:

- As the last node in an indexing pipeline (that is, a pipeline whose ultimate goal is storing data in a database).
- As a normal Python instance passed to a Retriever node.

In Haystack 2.x, the Document Store is not a component, so to migrate the two use cases above to version 2.x, you can respectively:

- Replace the Document Store at the end of the pipeline with a [`DocumentWriter`](../pipeline-components/writers/documentwriter.mdx) component.
- Identify the right Retriever component and create it, passing the Document Store instance, just as in Haystack 1.x.

### Retrievers

Haystack 1.x provided a set of nodes that filter relevant documents from different data sources according to a given query. Each of those nodes implemented a certain retrieval algorithm and supported one or more types of Document Stores. For example, the `BM25Retriever` node in Haystack 1.x could work seamlessly with OpenSearch and Elasticsearch but not with Qdrant; the `EmbeddingRetriever`, on the contrary, could work with all three databases.

In Haystack 2.x, the concept is flipped: each Document Store provides one or more Retriever components, depending on which retrieval methods the underlying vector database supports. For example, the `OpenSearchDocumentStore` comes with [two Retriever components](../document-stores/opensearch-document-store.mdx#supported-retrievers), one relying on BM25 and the other on vector similarity.

To migrate a 1.x retrieval pipeline to 2.x, first identify the Document Store being used, then replace the Retriever node with the corresponding Haystack 2.x Retriever component for the Document Store of choice. For example, a `BM25Retriever` node using Elasticsearch in a Haystack 1.x pipeline should be replaced with the [`ElasticsearchBM25Retriever`](../pipeline-components/retrievers/elasticsearchbm25retriever.mdx) component.

### PromptNode

The `PromptNode` in Haystack 1.x was the gateway to any Large Language Model (LLM) inference provider, whether locally available or remote. Based on the name of the model, Haystack inferred the right provider to call and forwarded the query to it.

In Haystack 2.x, the task of using LLMs is assigned to [Generators](../pipeline-components/generators.mdx). These are a set of components that are highly specialized and tailored to each inference provider.

The first step when migrating a pipeline with a `PromptNode` is to identify the model provider used and replace the node with two components:

- A Generator component for the model provider of choice,
- A `PromptBuilder` or `ChatPromptBuilder` component to build the prompt to be used.

The [_Migration examples_](#migration-examples) section below shows how to port a `PromptNode` using OpenAI with a prompt template to a corresponding Haystack 2.x pipeline using the `OpenAIGenerator` in conjunction with a `PromptBuilder` component.
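
Conceptually, a prompt builder simply renders a template with the pipeline's inputs before the Generator is called. Here is a rough stand-in using only the standard library (Haystack's actual `PromptBuilder` uses Jinja templates, so this is an analogy, not its implementation):

```python
from string import Template

# A hypothetical RAG-style prompt template with two placeholders.
template = Template(
    "Answer the question using the context.\n"
    "Context: $documents\n"
    "Question: $query\n"
    "Answer:"
)


def build_prompt(documents: list, query: str) -> str:
    # Mirrors the builder step: turn structured inputs into one prompt string.
    return template.substitute(documents=" ".join(documents), query=query)


prompt = build_prompt(
    ["Paris is the capital of France."], "What is the capital of France?"
)
print(prompt)
```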

### Agents

The agentic approach makes it possible to answer questions that are significantly more complex than those typically addressed by extractive or generative question answering techniques.

Haystack 1.x provided Agents, enabling the use of LLMs in a loop.

Currently, in Haystack 2.x, you can build Agents using three main elements in a pipeline: Chat Generators, the `ToolInvoker` component, and Tools. A standalone Agent abstraction in Haystack 2.x is in an experimental phase.

:::note
Take a look at our 2.x [Agents](../concepts/agents.mdx) documentation page for more information and detailed examples.
:::

### REST API

Haystack 1.x enabled the deployment of pipelines through a RESTful API over HTTP. This feature was provided by a separate application named `rest_api`, available only as [source code on GitHub](https://github.com/deepset-ai/haystack/tree/v1.x/rest_api).

Haystack 2.x takes the same RESTful approach, but in this case, the application used to deploy pipelines is called [Hayhooks](../development/hayhooks.mdx) and can be installed with `pip install hayhooks`.

At the moment, porting an existing Haystack 1.x deployment based on the `rest_api` project to Hayhooks requires a complete rewrite of the application.

## Dependencies

To minimize runtime errors, Haystack 1.x was distributed as a rather large package that tried to set up the Python environment with as many dependencies as possible.

In contrast, Haystack 2.x strives for a more streamlined approach, shipping with a minimal set of dependencies out of the box. It features a system that issues a warning when an additional dependency is required, providing the user with the necessary installation instructions.
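
The pattern behind such deferred-dependency warnings can be sketched in plain Python. This is a generic illustration of the idea, not Haystack's actual lazy-import implementation:

```python
import importlib


def require_optional(module_name: str, install_hint: str):
    """Import an optional dependency, or fail with an actionable message."""
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"'{module_name}' is required for this feature. "
            f"Install it with: {install_hint}"
        ) from exc


# The stdlib 'json' module stands in for an optional dependency here.
json = require_optional("json", "pip install <package>")
print(json.dumps({"ok": True}))  # {"ok": true}
```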

To make sure all the dependencies are satisfied when migrating a Haystack 1.x application to version 2.x, a good strategy is to run end-to-end tests covering all the execution paths, ensuring every required dependency is available in the target Python environment.
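
As a complement to end-to-end tests, a quick smoke check can verify up front that the modules your pipelines import are present. The helper below is a generic sketch, and the module list is a placeholder rather than an actual Haystack requirement:

```python
from importlib.util import find_spec


def missing_modules(module_names: list) -> list:
    """Return the modules from the list that cannot be imported."""
    return [name for name in module_names if find_spec(name) is None]


# Replace with the modules your migrated pipelines actually use.
required = ["json", "pathlib"]
missing = missing_modules(required)
if missing:
    raise SystemExit(f"Missing dependencies: {missing}")
```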

## Migrating Components

This table outlines which component (or group of components) can replace a certain node when porting a Haystack 1.x pipeline to the latest 2.x version. Note that when a Haystack 2.x replacement is not available, this doesn't necessarily mean we are planning this feature.

If you need help migrating a 1.x node without a 2.x counterpart, open an [issue](https://github.com/deepset-ai/haystack/issues) in the Haystack GitHub repository.

### Data Handling

| Haystack 1.x | Description | Haystack 2.x |
| --- | --- | --- |
| Crawler | Scrapes text from websites. **Example usage:** To run searches on your website content. | Not Available |
| DocumentClassifier | Classifies documents by attaching metadata to them. **Example usage:** Labeling documents by their characteristics (for example, sentiment). | [TransformersZeroShotDocumentClassifier](../pipeline-components/classifiers/transformerszeroshotdocumentclassifier.mdx) |
| DocumentLanguageClassifier | Detects the language of the documents you pass to it and adds it to the document metadata. | [DocumentLanguageClassifier](../pipeline-components/classifiers/documentlanguageclassifier.mdx) |
| EntityExtractor | Extracts predefined entities out of a piece of text. **Example usage:** Named entity recognition (NER). | [NamedEntityExtractor](../pipeline-components/extractors/namedentityextractor.mdx) |
| FileClassifier | Distinguishes between text, PDF, Markdown, Docx, and HTML files. **Example usage:** Routing files to appropriate converters (for example, it routes PDF files to `PDFToTextConverter`). | [FileTypeRouter](../pipeline-components/routers/filetyperouter.mdx) |
| FileConverter | Extracts text from files in different formats. **Example usage:** In indexing pipelines, extracting text from a file and casting it into the Document class format. | [Converters](../pipeline-components/converters.mdx) |
| PreProcessor | Cleans and splits documents. **Example usage:** Normalizing white spaces, getting rid of headers and footers, splitting documents into smaller ones. | [PreProcessors](../pipeline-components/preprocessors.mdx) |

### Semantic Search

| Haystack 1.x | Description | Haystack 2.x |
| --- | --- | --- |
| Ranker | Orders documents based on how relevant they are to the query. **Example usage:** In a query pipeline, after a keyword-based Retriever to rank the documents it returns. | [Rankers](../pipeline-components/rankers.mdx) |
| Reader | Finds an answer by selecting a text span in documents. **Example usage:** In a query pipeline when you want to know the location of the answer. | [ExtractiveReader](../pipeline-components/readers/extractivereader.mdx) |
| Retriever | Fetches relevant documents from the Document Store. **Example usage:** Coupling a Retriever with a Reader in a query pipeline to speed up the search (the Reader only goes through the documents it gets from the Retriever). | [Retrievers](../pipeline-components/retrievers.mdx) |
| QuestionGenerator | When given a document, it generates questions this document can answer. **Example usage:** Auto-suggested questions in your search app. | Prompt [Builders](../pipeline-components/builders.mdx) with dedicated prompt, [Generators](../pipeline-components/generators.mdx) |
### Prompts and LLMs

| Haystack 1.x | Description | Haystack 2.x |
| --- | --- | --- |
| PromptNode | Uses large language models to perform various NLP tasks in a pipeline or on its own. **Example usage:** It's a very versatile component that can perform tasks like summarization, question answering, translation, and more. | Prompt [Builders](../pipeline-components/builders.mdx), [Generators](../pipeline-components/generators.mdx) |

### Routing

| Haystack 1.x | Description | Haystack 2.x |
| --- | --- | --- |
| QueryClassifier | Categorizes queries. **Example usage:** Distinguishing between keyword queries and natural language questions and routing them to the Retrievers that can handle them best. | [TransformersZeroShotTextRouter](../pipeline-components/routers/transformerszeroshottextrouter.mdx) <br />[TransformersTextRouter](../pipeline-components/routers/transformerstextrouter.mdx) |
| RouteDocuments | Routes documents to different branches of your pipeline based on their content type or metadata field. **Example usage:** Routing table data to `TableReader` and text data to `TransformersReader` for better handling. | [Routers](../pipeline-components/routers.mdx) |

### Utility Components

| Haystack 1.x | Description | Haystack 2.x |
| --- | --- | --- |
| DocumentMerger | Concatenates multiple documents into a single one. **Example usage:** Merging the documents to summarize in a summarization pipeline. | Prompt [Builders](../pipeline-components/builders.mdx) |
| Docs2Answers | Converts Documents into Answers. **Example usage:** When using the REST API for document retrieval. The REST API expects Answer objects as output, so you can use `Docs2Answers` as the last node to convert the retrieved documents to answers. | [AnswerBuilder](../pipeline-components/builders/answerbuilder.mdx) |
| JoinAnswers | Takes answers returned by multiple components and joins them in a single list of answers. **Example usage:** For running queries on different document types (for example, tables and text), where the documents are routed to different readers, and each reader returns a separate list of answers. | [AnswerJoiner](../pipeline-components/joiners/answerjoiner.mdx) |
| JoinDocuments | Takes documents returned by different components and joins them to form one list of documents. **Example usage:** In document retrieval pipelines, where there are different types of documents, each routed to a different Retriever. Each Retriever returns a separate list of documents, and you can join them into one list using `JoinDocuments`. | [DocumentJoiner](../pipeline-components/joiners/documentjoiner.mdx) |
| Shaper | Currently functions mostly as a `PromptNode` helper, making sure the `PromptNode` input or output is correct. **Example usage:** In a question answering pipeline using `PromptNode`, where the `PromptTemplate` expects questions as input, while Haystack pipelines use query. You can use Shaper to rename queries to questions. | Prompt [Builders](../pipeline-components/builders.mdx) |
| Summarizer | Creates an overview of a document. **Example usage:** To get a glimpse of the documents the Retriever is returning. | Prompt [Builders](../pipeline-components/builders.mdx) with dedicated prompt, [Generators](../pipeline-components/generators.mdx) |
| TransformersImageToText | Generates captions for images. **Example usage:** Automatically generating captions for a list of images that you can later use in your knowledge base. | [VertexAIImageQA](../pipeline-components/generators/vertexaiimageqa.mdx) |
| Translator | Translates text from one language into another. **Example usage:** Running searches on documents in other languages. | Prompt [Builders](../pipeline-components/builders.mdx) with dedicated prompt, [Generators](../pipeline-components/generators.mdx) |

### Extras

| Haystack 1.x | Description | Haystack 2.x |
| --- | --- | --- |
| AnswerToSpeech | Converts text answers into speech answers. **Example usage:** Improving accessibility of your search system by providing a way to have the answer and its context read out loud. | [ElevenLabs](https://haystack.deepset.ai/integrations/elevenlabs) Integration |
| DocumentToSpeech | Converts text documents to speech documents. **Example usage:** Improving accessibility of a document retrieval pipeline by providing the option to read documents out loud. | [ElevenLabs](https://haystack.deepset.ai/integrations/elevenlabs) Integration |

## Migration examples

:::note
This section might grow as we assist users with their use cases.
:::

### Indexing Pipeline

<details>

<summary>Haystack 1.x</summary>

```python
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes.file_classifier import FileTypeClassifier
from haystack.nodes.file_converter import TextConverter
from haystack.nodes.preprocessor import PreProcessor
from haystack.pipelines import Pipeline

# Initialize a DocumentStore
document_store = InMemoryDocumentStore()

# Indexing Pipeline
indexing_pipeline = Pipeline()

# Makes sure the file is a TXT file (FileTypeClassifier node)
classifier = FileTypeClassifier()
indexing_pipeline.add_node(classifier, name="Classifier", inputs=["File"])

# Converts a file into text and performs basic cleaning (TextConverter node)
text_converter = TextConverter(remove_numeric_tables=True)
indexing_pipeline.add_node(
    text_converter,
    name="Text_converter",
    inputs=["Classifier.output_1"],
)

# Pre-processes the text by performing splits and adding metadata to the text (PreProcessor node)
preprocessor = PreProcessor(
    clean_whitespace=True,
    clean_empty_lines=True,
    split_length=100,
    split_overlap=50,
    split_respect_sentence_boundary=True,
)
indexing_pipeline.add_node(preprocessor, name="Preprocessor", inputs=["Text_converter"])

# Writes the resulting documents into the document store
indexing_pipeline.add_node(
    document_store,
    name="Document_Store",
    inputs=["Preprocessor"],
)

# Then we run it with the file paths and their metadata as input
result = indexing_pipeline.run(file_paths=file_paths, meta=files_metadata)
```

</details>

<details>

<summary>Haystack 2.x</summary>

```python
from haystack import Pipeline
from haystack.components.routers import FileTypeRouter
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.converters import TextFileToDocument
from haystack.components.preprocessors import DocumentCleaner, DocumentSplitter
from haystack.components.writers import DocumentWriter

# Initialize a DocumentStore
document_store = InMemoryDocumentStore()

# Indexing Pipeline
indexing_pipeline = Pipeline()

# Makes sure the file is a TXT file (FileTypeRouter component)
classifier = FileTypeRouter(mime_types=["text/plain"])
indexing_pipeline.add_component("file_type_router", classifier)

# Converts a file into a Document (TextFileToDocument component)
text_converter = TextFileToDocument()
indexing_pipeline.add_component("text_converter", text_converter)

# Performs basic cleaning (DocumentCleaner component)
cleaner = DocumentCleaner(
    remove_empty_lines=True,
    remove_extra_whitespaces=True,
)
indexing_pipeline.add_component("cleaner", cleaner)

# Pre-processes the text by performing splits and adding metadata to the text (DocumentSplitter component)
preprocessor = DocumentSplitter(split_by="passage", split_length=100, split_overlap=50)
indexing_pipeline.add_component("preprocessor", preprocessor)

# Writes the resulting documents into the document store
indexing_pipeline.add_component("writer", DocumentWriter(document_store))

# Connect all the components
indexing_pipeline.connect("file_type_router.text/plain", "text_converter")
indexing_pipeline.connect("text_converter", "cleaner")
indexing_pipeline.connect("cleaner", "preprocessor")
indexing_pipeline.connect("preprocessor", "writer")

# Then we run it with the file paths as input
result = indexing_pipeline.run({"file_type_router": {"sources": file_paths}})
```

</details>

### Query Pipeline

<details>

<summary>Haystack 1.x</summary>

```python
from haystack.document_stores import InMemoryDocumentStore
from haystack.pipelines import ExtractiveQAPipeline
from haystack import Document
from haystack.nodes import BM25Retriever
from haystack.nodes import FARMReader

document_store = InMemoryDocumentStore(use_bm25=True)
document_store.write_documents(
    [
        Document(content="Paris is the capital of France."),
        Document(content="Berlin is the capital of Germany."),
        Document(content="Rome is the capital of Italy."),
        Document(content="Madrid is the capital of Spain."),
    ],
)

retriever = BM25Retriever(document_store=document_store)
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
extractive_qa_pipeline = ExtractiveQAPipeline(reader, retriever)

query = "What is the capital of France?"
result = extractive_qa_pipeline.run(
    query=query,
    params={"Retriever": {"top_k": 10}, "Reader": {"top_k": 5}},
)
```

</details>

<details>

<summary>Haystack 2.x</summary>

```python
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack import Document, Pipeline
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.components.readers import ExtractiveReader

document_store = InMemoryDocumentStore()
document_store.write_documents(
    [
        Document(content="Paris is the capital of France."),
        Document(content="Berlin is the capital of Germany."),
        Document(content="Rome is the capital of Italy."),
        Document(content="Madrid is the capital of Spain."),
    ],
)

retriever = InMemoryBM25Retriever(document_store)
reader = ExtractiveReader(model="deepset/roberta-base-squad2")
extractive_qa_pipeline = Pipeline()
extractive_qa_pipeline.add_component("retriever", retriever)
extractive_qa_pipeline.add_component("reader", reader)
extractive_qa_pipeline.connect("retriever", "reader")

query = "What is the capital of France?"
result = extractive_qa_pipeline.run(
    data={
        "retriever": {"query": query, "top_k": 3},
        "reader": {"query": query, "top_k": 2},
    },
)
```

</details>

### RAG Pipeline

<details>

<summary>Haystack 1.x</summary>

```python
from datasets import load_dataset

from haystack.pipelines import Pipeline
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import EmbeddingRetriever, PromptNode, PromptTemplate, AnswerParser

document_store = InMemoryDocumentStore(embedding_dim=384)
dataset = load_dataset("bilgeyucel/seven-wonders", split="train")
document_store.write_documents(dataset)
retriever = EmbeddingRetriever(
    embedding_model="sentence-transformers/all-MiniLM-L6-v2",
    document_store=document_store,
    top_k=2,
)
document_store.update_embeddings(retriever)

rag_prompt = PromptTemplate(
    prompt="""Synthesize a comprehensive answer from the following text for the given question.
              Provide a clear and concise response that summarizes the key points and information presented in the text.
              Your answer should be in your own words and be no longer than 50 words.
              \n\n Related text: {join(documents)} \n\n Question: {query} \n\n Answer:""",
    output_parser=AnswerParser(),
)

prompt_node = PromptNode(
    model_name_or_path="gpt-3.5-turbo",
    api_key=OPENAI_API_KEY,
    default_prompt_template=rag_prompt,
)

pipe = Pipeline()
pipe.add_node(component=retriever, name="retriever", inputs=["Query"])
pipe.add_node(component=prompt_node, name="prompt_node", inputs=["retriever"])

output = pipe.run(query="What does Rhodes Statue look like?")
```

</details>

<details>

<summary>Haystack 2.x</summary>

```python
from datasets import load_dataset

from haystack import Document, Pipeline
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.builders import PromptBuilder
from haystack.components.generators import OpenAIGenerator
from haystack.components.embedders import SentenceTransformersDocumentEmbedder
from haystack.components.embedders import SentenceTransformersTextEmbedder
from haystack.components.retrievers import InMemoryEmbeddingRetriever

document_store = InMemoryDocumentStore()
dataset = load_dataset("bilgeyucel/seven-wonders", split="train")
embedder = SentenceTransformersDocumentEmbedder(
    "sentence-transformers/all-MiniLM-L6-v2",
)
embedder.warm_up()
output = embedder.run([Document(**ds) for ds in dataset])
document_store.write_documents(output.get("documents"))

template = """
Given the following information, answer the question.

Context:
{% for document in documents %}
    {{ document.content }}
{% endfor %}

Question: {{question}}
Answer:
"""
prompt_builder = PromptBuilder(template=template)

retriever = InMemoryEmbeddingRetriever(document_store=document_store, top_k=2)
generator = OpenAIGenerator(model="gpt-3.5-turbo")
query_embedder = SentenceTransformersTextEmbedder(
    model="sentence-transformers/all-MiniLM-L6-v2",
)

basic_rag_pipeline = Pipeline()
basic_rag_pipeline.add_component("text_embedder", query_embedder)
basic_rag_pipeline.add_component("retriever", retriever)
basic_rag_pipeline.add_component("prompt_builder", prompt_builder)
basic_rag_pipeline.add_component("llm", generator)

basic_rag_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")
basic_rag_pipeline.connect("retriever", "prompt_builder.documents")
basic_rag_pipeline.connect("prompt_builder", "llm")

query = "What does Rhodes Statue look like?"
output = basic_rag_pipeline.run(
    {"text_embedder": {"text": query}, "prompt_builder": {"question": query}},
)
```

</details>

## Documentation and Tutorials for Haystack 1.x

You can access the old tutorials in the [GitHub history](https://github.com/deepset-ai/haystack-tutorials/tree/5917718cbfbb61410aab4121ee6fe754040a5dc7) and download the Haystack 1.x documentation as a [ZIP file](https://core-engineering.s3.eu-central-1.amazonaws.com/public/docs/haystack-v1-docs.zip).

The ZIP file contains documentation for all minor releases from version 1.0 to 1.26.

To download documentation for a specific release, replace the version number in the following URL: `https://core-engineering.s3.eu-central-1.amazonaws.com/public/docs/v1.26.zip`.