---
title: "DocumentMAPEvaluator"
id: documentmapevaluator
slug: "/documentmapevaluator"
description: "The `DocumentMAPEvaluator` evaluates documents retrieved by Haystack pipelines using ground truth labels. It checks to what extent the list of retrieved documents contains only relevant documents, as specified in the ground truth labels, or also non-relevant documents. This metric is called mean average precision (MAP)."
---

# DocumentMAPEvaluator

The `DocumentMAPEvaluator` evaluates documents retrieved by Haystack pipelines using ground truth labels. It checks to what extent the list of retrieved documents contains only relevant documents, as specified in the ground truth labels, or also non-relevant documents. This metric is called mean average precision (MAP).

| | |
| --- | --- |
| **Most common position in a pipeline** | On its own or in an evaluation pipeline. To be used after a separate pipeline that has generated the inputs for the Evaluator. |
| **Mandatory run variables** | "ground_truth_documents": A list of lists of ground truth documents, with one list of ground truth documents per question. <br /> <br />"retrieved_documents": A list of lists of retrieved documents, with one list of retrieved documents per question. |
| **Output variables** | A dictionary containing: <br /> <br />- `score`: A number from 0.0 to 1.0 that represents the mean average precision. <br /> <br />- `individual_scores`: A list of individual average precision scores, each ranging from 0.0 to 1.0, one for each input pair of a list of retrieved documents and a list of ground truth documents. |
| **API reference** | [Evaluators](/reference/evaluators-api) |
| **GitHub link** | https://github.com/deepset-ai/haystack/blob/main/haystack/components/evaluators/document_map.py |

## Overview

You can use the `DocumentMAPEvaluator` component to evaluate documents retrieved by a Haystack pipeline, such as a RAG pipeline, against ground truth labels. A higher mean average precision is better, indicating that the list of retrieved documents contains many relevant documents and few or no non-relevant documents.

A `DocumentMAPEvaluator` requires no parameters to initialize.

## Usage

### On its own

Below is an example where we use a `DocumentMAPEvaluator` component to evaluate documents retrieved for two queries. For the first query, there is one ground truth document and one retrieved document. For the second query, there are two ground truth documents and three retrieved documents.

```python
from haystack import Document
from haystack.components.evaluators import DocumentMAPEvaluator

evaluator = DocumentMAPEvaluator()
result = evaluator.run(
    ground_truth_documents=[
        [Document(content="France")],
        [Document(content="9th century"), Document(content="9th")],
    ],
    retrieved_documents=[
        [Document(content="France")],
        [
            Document(content="9th century"),
            Document(content="10th century"),
            Document(content="9th"),
        ],
    ],
)
print(result["individual_scores"])
# [1.0, 0.8333333333333333]
print(result["score"])
# 0.9166666666666666
```
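For reference, here is how the second query's score of `0.8333…` comes about: the relevant documents sit at ranks 1 and 3 of the retrieved list, so the average precision is (1/1 + 2/3) / 2 ≈ 0.8333, and the overall MAP is the mean of the per-query scores, (1.0 + 0.8333) / 2 ≈ 0.9167. The sketch below reproduces this computation; the `average_precision` helper is illustrative and not a Haystack API, whose actual implementation lives in the linked source file.

```python
# Illustrative only: a hand-rolled average precision that matches the
# individual scores above. Haystack's own implementation is in
# haystack/components/evaluators/document_map.py.

def average_precision(ground_truth_contents: list[str], retrieved_contents: list[str]) -> float:
    relevant = set(ground_truth_contents)
    hits = 0
    precisions_at_hits = []
    for rank, content in enumerate(retrieved_contents, start=1):
        if content in relevant:
            hits += 1
            precisions_at_hits.append(hits / rank)  # precision at this rank
    # Average the precision values at the ranks where relevant documents occur.
    return sum(precisions_at_hits) / len(precisions_at_hits) if precisions_at_hits else 0.0

# Second query from the example: relevant documents at ranks 1 and 3.
print(average_precision(
    ["9th century", "9th"],
    ["9th century", "10th century", "9th"],
))
# (1/1 + 2/3) / 2 = 0.8333333333333333
```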
### In a pipeline

Below is an example where we use a `DocumentMAPEvaluator` and a `DocumentMRREvaluator` in a pipeline to evaluate two lists of retrieved documents against ground truth documents. Running a pipeline instead of the individual components simplifies calculating more than one metric.

```python
from haystack import Document, Pipeline
from haystack.components.evaluators import DocumentMRREvaluator, DocumentMAPEvaluator

pipeline = Pipeline()
mrr_evaluator = DocumentMRREvaluator()
map_evaluator = DocumentMAPEvaluator()
pipeline.add_component("mrr_evaluator", mrr_evaluator)
pipeline.add_component("map_evaluator", map_evaluator)

ground_truth_documents = [
    [Document(content="France")],
    [Document(content="9th century"), Document(content="9th")],
]
retrieved_documents = [
    [Document(content="France")],
    [
        Document(content="9th century"),
        Document(content="10th century"),
        Document(content="9th"),
    ],
]

result = pipeline.run(
    {
        "mrr_evaluator": {
            "ground_truth_documents": ground_truth_documents,
            "retrieved_documents": retrieved_documents,
        },
        "map_evaluator": {
            "ground_truth_documents": ground_truth_documents,
            "retrieved_documents": retrieved_documents,
        },
    },
)

for evaluator in result:
    print(result[evaluator]["individual_scores"])
# [1.0, 1.0]
# [1.0, 0.8333333333333333]
for evaluator in result:
    print(result[evaluator]["score"])
# 1.0
# 0.9166666666666666
```
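In practice, `retrieved_documents` typically comes from a retrieval pipeline that you run once per question. Below is a minimal sketch of that pattern, assuming an `InMemoryBM25Retriever` over an `InMemoryDocumentStore`; the indexed documents, questions, and ground truth labels are illustrative placeholders.

```python
from haystack import Document, Pipeline
from haystack.components.evaluators import DocumentMAPEvaluator
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

# Index a handful of illustrative documents in an in-memory store.
document_store = InMemoryDocumentStore()
document_store.write_documents([
    Document(content="France is a country in Europe."),
    Document(content="Normandy was settled in the 9th century."),
    Document(content="The 10th century followed the 9th century."),
])

retrieval_pipeline = Pipeline()
retrieval_pipeline.add_component("retriever", InMemoryBM25Retriever(document_store=document_store))

questions = ["Where is France?", "When was Normandy settled?"]
ground_truth_documents = [
    [Document(content="France is a country in Europe.")],
    [Document(content="Normandy was settled in the 9th century.")],
]

# Run the retrieval pipeline once per question, collecting one list of
# retrieved documents per question.
retrieved_documents = [
    retrieval_pipeline.run({"retriever": {"query": question}})["retriever"]["documents"]
    for question in questions
]

evaluator = DocumentMAPEvaluator()
result = evaluator.run(
    ground_truth_documents=ground_truth_documents,
    retrieved_documents=retrieved_documents,
)
print(result["score"])
```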