---
title: "DocumentMAPEvaluator"
id: documentmapevaluator
slug: "/documentmapevaluator"
description: "The `DocumentMAPEvaluator` evaluates documents retrieved by Haystack pipelines using ground truth labels. It checks whether the list of retrieved documents contains only relevant documents, as specified in the ground truth labels, or also non-relevant ones. This metric is called mean average precision (MAP)."
---

# DocumentMAPEvaluator

The `DocumentMAPEvaluator` evaluates documents retrieved by Haystack pipelines using ground truth labels. It checks whether the list of retrieved documents contains only relevant documents, as specified in the ground truth labels, or also non-relevant ones. This metric is called mean average precision (MAP).

<div className="key-value-table">

|  |  |
| --- | --- |
| **Most common position in a pipeline** | On its own or in an evaluation pipeline. To be used after a separate pipeline that has generated the inputs for the Evaluator. |
| **Mandatory run variables** | `ground_truth_documents`: A list of lists of ground truth documents, one list per question.  <br /> <br />`retrieved_documents`: A list of lists of retrieved documents, one list per question. |
| **Output variables** | A dictionary containing:  <br /> <br />- `score`: A number from 0.0 to 1.0 that represents the mean average precision  <br /> <br />- `individual_scores`: A list of individual average precision scores, each ranging from 0.0 to 1.0, one for each input pair of a list of retrieved documents and a list of ground truth documents |
| **API reference** | [Evaluators](/reference/evaluators-api) |
| **GitHub link** | https://github.com/deepset-ai/haystack/blob/main/haystack/components/evaluators/document_map.py |

</div>

## Overview

You can use the `DocumentMAPEvaluator` component to evaluate documents retrieved by a Haystack pipeline, such as a RAG pipeline, against ground truth labels. A higher mean average precision is better, indicating that the list of retrieved documents contains many relevant documents and only a few non-relevant documents or none at all.

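To make the scores easier to interpret, here is a minimal sketch of how average precision can be computed for a single query. This is an illustration of the metric, not the library's actual implementation: at every rank where a relevant document appears, we record the precision over the retrieved list so far, then average those values. MAP is the mean of the per-query scores.

```python
def average_precision(ground_truth: list[str], retrieved: list[str]) -> float:
    """Average of precision@k over the ranks k at which a relevant document appears."""
    relevant = set(ground_truth)
    hits = 0
    precisions = []
    for rank, doc in enumerate(retrieved, start=1):
        if doc in relevant:
            hits += 1
            # Precision over the top-`rank` retrieved documents.
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

# The two queries from the usage example below:
scores = [
    average_precision(["France"], ["France"]),
    average_precision(["9th century", "9th"], ["9th century", "10th century", "9th"]),
]
print(scores)                     # per-query scores: 1.0 and (1/1 + 2/3) / 2 ≈ 0.8333
print(sum(scores) / len(scores))  # MAP ≈ 0.9167
```

For the second query, the relevant documents sit at ranks 1 and 3, so precision is 1/1 at the first hit and 2/3 at the second, which averages to roughly 0.8333.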
`DocumentMAPEvaluator` requires no parameters at initialization.

## Usage

### On its own

Below is an example where we use a `DocumentMAPEvaluator` component to evaluate documents retrieved for two queries. For the first query, there is one ground truth document and one retrieved document. For the second query, there are two ground truth documents and three retrieved documents.

```python
from haystack import Document
from haystack.components.evaluators import DocumentMAPEvaluator

evaluator = DocumentMAPEvaluator()
result = evaluator.run(
    ground_truth_documents=[
        [Document(content="France")],
        [Document(content="9th century"), Document(content="9th")],
    ],
    retrieved_documents=[
        [Document(content="France")],
        [
            Document(content="9th century"),
            Document(content="10th century"),
            Document(content="9th"),
        ],
    ],
)
print(result["individual_scores"])
# [1.0, 0.8333333333333333]
print(result["score"])
# 0.9166666666666666
```

### In a pipeline

Below is an example where we use a `DocumentMAPEvaluator` and a `DocumentMRREvaluator` in a pipeline to evaluate two lists of retrieved documents against ground truth documents. Running a pipeline instead of the individual components simplifies calculating more than one metric.

```python
from haystack import Document, Pipeline
from haystack.components.evaluators import DocumentMRREvaluator, DocumentMAPEvaluator

pipeline = Pipeline()
mrr_evaluator = DocumentMRREvaluator()
map_evaluator = DocumentMAPEvaluator()
pipeline.add_component("mrr_evaluator", mrr_evaluator)
pipeline.add_component("map_evaluator", map_evaluator)

ground_truth_documents = [
    [Document(content="France")],
    [Document(content="9th century"), Document(content="9th")],
]
retrieved_documents = [
    [Document(content="France")],
    [
        Document(content="9th century"),
        Document(content="10th century"),
        Document(content="9th"),
    ],
]

result = pipeline.run(
    {
        "mrr_evaluator": {
            "ground_truth_documents": ground_truth_documents,
            "retrieved_documents": retrieved_documents,
        },
        "map_evaluator": {
            "ground_truth_documents": ground_truth_documents,
            "retrieved_documents": retrieved_documents,
        },
    },
)

for evaluator in result:
    print(result[evaluator]["individual_scores"])
# [1.0, 1.0]
# [1.0, 0.8333333333333333]
for evaluator in result:
    print(result[evaluator]["score"])
# 1.0
# 0.9166666666666666
```