# 🧠 A2A Multi-Agent Fact Checker

This project demonstrates a **collaborative multi-agent system** built with the **Agent2Agent SDK** ([A2A])
and [OpenAI](https://platform.openai.com), where a top-level Auditor agent coordinates the workflow
to verify facts. The Critic agent gathers evidence via live internet searches using **DuckDuckGo** through
the Model Context Protocol (**MCP**), while the Reviser agent analyzes and refines the conclusion using
internal reasoning alone. The system showcases how agents with distinct roles and tools can
**collaborate under orchestration**.

> [!TIP]
> ✨ No configuration needed: run it with a single command.

<p align="center">
  <img src="demo.gif"
       alt="A2A Multi-Agent Fact Check Demo"
       width="500"
       style="border: 1px solid #ccc; border-radius: 8px;" />
</p>

# 🚀 Getting Started

### Requirements

+ **[Docker Desktop] 4.43.0+ or [Docker Engine]** installed.
+ **A laptop or workstation with a GPU** (e.g., a MacBook) for running open models locally. If you
  don't have a GPU, you can alternatively use **[Docker Offload]**.
+ If you're using [Docker Engine] on Linux or [Docker Desktop] on Windows, ensure that the
  [Docker Model Runner requirements] are met (specifically that GPU support is enabled) and the
  necessary drivers are installed.
+ If you're using Docker Engine on Linux, ensure you have [Docker Compose] 2.38.1 or later installed.
+ An [OpenAI API Key](https://platform.openai.com/api-keys) 🔑.

### Run the project

Create a `secret.openai-api-key` file with your OpenAI API key:

```plaintext
sk-...
```

Then run:

```sh
docker compose up --build
```

Everything runs in containers. Open `http://localhost:8080` in your browser and chat with
the agents.

# 🧠 Inference Options

By default, this project uses [OpenAI](https://platform.openai.com) to handle LLM inference.
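The `secret.openai-api-key` file created above typically reaches the app as a Compose secret. A minimal sketch of that wiring, assuming an `app` service and secret name that may differ from the actual `compose.yaml`:

```yaml
# Hypothetical excerpt of compose.yaml — service and secret names are illustrative.
services:
  app:
    build: .
    ports:
      - "8080:8080"
    secrets:
      - openai-api-key        # mounted in the container at /run/secrets/openai-api-key

secrets:
  openai-api-key:
    file: secret.openai-api-key   # the file you created above
```

With this wiring, the key never appears in the image or in environment listings; the app reads it from the mounted secret file at runtime.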
If you'd prefer to use a local LLM instead, run:

```sh
docker compose -f compose.dmr.yaml up
```

Using [**Docker Offload**](https://www.docker.com/products/docker-offload) with GPU support, you can run the
same demo with a larger model that takes advantage of a more powerful GPU on the remote instance:

```sh
docker compose -f compose.dmr.yaml -f compose.offload.yaml up --build
```

# ❓ What Can It Do?

This system performs multi-agent fact verification, coordinated by an **Auditor**:

+ 🧑‍⚖️ **Auditor**:
  * Orchestrates the process from input to verdict.
  * Delegates tasks to the Critic and Reviser agents.
+ 🧐 **Critic**:
  * Uses DuckDuckGo via MCP to gather real-time external evidence.
+ ✍️ **Reviser**:
  * Refines and verifies the Critic's conclusions using only reasoning.

**🧠 When running with the local option, all agents use Docker Model Runner for LLM-based inference.**

Example question:

> "Is the universe infinite?"

# 🧱 Project Structure

| **File/Folder** | **Purpose**                             |
| --------------- | --------------------------------------- |
| `compose.yaml`  | Launches app and MCP DuckDuckGo Gateway |
| `Dockerfile`    | Builds the agent container              |
| `src/AgentKit`  | Agent runtime                           |
| `agents/*.yaml` | Agent definitions                       |

# 🧠 Architecture Overview

```mermaid
flowchart TD
    input[🙋 User Question] --> auditor[🧑‍⚖️ Auditor Sequential Agent]
    auditor --> critic[🧐 Critic Agent]
    critic -->|uses| mcp[MCP Gateway<br/>DuckDuckGo Search]
    mcp --> duck[🌐 DuckDuckGo API]
    duck --> mcp --> critic
    critic --> reviser[(✍️ Reviser Agent<br/>No tools)]
    auditor --> reviser
    reviser --> auditor
    auditor --> result[✅ Final Answer]

    critic -->|inference| model[(🧠 Docker Model Runner<br/>LLM)]
    reviser -->|inference| model

    subgraph Infra
        mcp
        model
    end
```

+ The Auditor is a Sequential Agent; it coordinates the Critic and Reviser agents to verify user-provided claims.
+ The Critic agent performs live web searches through DuckDuckGo using an MCP-compatible gateway.
+ The Reviser agent refines the Critic's conclusions using internal reasoning alone.
+ With the local option, all agents run inference through Docker Model Runner, enabling fully containerized LLM reasoning.

# 🤖 Agent Roles

| **Agent**   | **Tools Used**        | **Role Description**                                                         |
| ----------- | --------------------- | ---------------------------------------------------------------------------- |
| **Auditor** | ❌ None               | Coordinates the entire fact-checking workflow and delivers the final answer. |
| **Critic**  | ✅ DuckDuckGo via MCP | Gathers evidence to support or refute the claim.                             |
| **Reviser** | ❌ None               | Refines and finalizes the answer without external input.                     |

# 🧹 Cleanup

To stop and remove containers and volumes:

```sh
docker compose down -v
```

# 🙏 Credits

+ [A2A]
+ [DuckDuckGo]
+ [Docker Compose]

[A2A]: https://github.com/a2aproject/a2a-python
[DuckDuckGo]: https://duckduckgo.com
[Docker Compose]: https://github.com/docker/compose
[Docker Desktop]: https://www.docker.com/products/docker-desktop/
[Docker Engine]: https://docs.docker.com/engine/
[Docker Model Runner requirements]: https://docs.docker.com/ai/model-runner/
[Docker Offload]: https://www.docker.com/products/docker-offload/
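For orientation, the `agents/*.yaml` files listed under Project Structure hold the agent definitions. A purely illustrative sketch of what the Critic's definition might look like; the field names below are assumptions for illustration, not the actual Agent2Agent SDK schema:

```yaml
# Hypothetical agents/critic.yaml — field names are illustrative only;
# the real schema is defined by the Agent2Agent SDK.
name: critic
description: Gathers evidence to support or refute the claim.
instructions: |
  Search the web for evidence about the user's claim and
  summarize what supports or contradicts it.
tools:
  - mcp: duckduckgo   # DuckDuckGo search exposed through the MCP Gateway
```

The pattern to note is that only the Critic declares a tool; the Auditor and Reviser definitions would omit the `tools` section entirely, matching the Agent Roles table above.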