# 🧠 A2A Multi-Agent Fact Checker

This project demonstrates a **collaborative multi-agent system** built with the **Agent2Agent SDK** ([A2A])
and [OpenAI](https://platform.openai.com), where a top-level Auditor agent coordinates the workflow
to verify facts. The Critic agent gathers evidence via live internet searches using **DuckDuckGo** through
the Model Context Protocol (**MCP**), while the Reviser agent analyzes and refines the conclusion using
internal reasoning alone. The system showcases how agents with distinct roles and tools can
**collaborate under orchestration**.
> [!Tip]
> ✨ No extra configuration needed: add your OpenAI API key and run it with a single command.

<p align="center">
  <img src="demo.gif"
       alt="A2A Multi-Agent Fact Check Demo"
       width="500"
       style="border: 1px solid #ccc; border-radius: 8px;" />
</p>

# 🚀 Getting Started

### Requirements

+ **[Docker Desktop] 4.43.0+ or [Docker Engine]** installed.
+ **A laptop or workstation with a GPU** (e.g., a MacBook) for running open models locally. If you
  don't have a GPU, you can alternatively use **[Docker Offload]**.
+ If you're using [Docker Engine] on Linux or [Docker Desktop] on Windows, ensure that the
  [Docker Model Runner requirements] are met (specifically that GPU
  support is enabled) and the necessary drivers are installed.
+ If you're using Docker Engine on Linux, ensure you have [Docker Compose] 2.38.1 or later installed.
+ An [OpenAI API Key](https://platform.openai.com/api-keys) 🔑.
### Run the project

Create a `secret.openai-api-key` file with your OpenAI API key:

```plaintext
sk-...
```
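
For example, from the project root (the key below is a placeholder; substitute your own):

```shell
# Write your OpenAI API key into the secret file.
# printf avoids the portability quirks of echo with escape sequences.
printf '%s\n' 'sk-...' > secret.openai-api-key
```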

Then run:

```sh
docker compose up --build
```

Everything runs in containers. Open `http://localhost:8080` in your browser and chat with
the agents.
# 🧠 Inference Options

By default, this project uses [OpenAI](https://platform.openai.com) to handle LLM inference. If you'd prefer
to use a local LLM instead, run:

```sh
docker compose -f compose.dmr.yaml up
```

With [**Docker Offload**](https://www.docker.com/products/docker-offload) and GPU support, you can run the
same demo with a larger model on a more powerful remote GPU:

```sh
docker compose -f compose.dmr.yaml -f compose.offload.yaml up --build
```
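
The contents of `compose.dmr.yaml` are not reproduced here. Purely as an illustrative sketch (service and model names below are hypothetical, not this project's actual configuration), a Compose file can bind a service to a Docker Model Runner model via the top-level `models` element:

```yaml
# Hypothetical sketch only; not the project's real compose.dmr.yaml.
services:
  agents:
    build: .
    models:
      - llm              # injects connection details for the model below

models:
  llm:
    model: ai/smollm2    # any model pullable by Docker Model Runner
```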

# ❓ What Can It Do?

This system performs multi-agent fact verification, coordinated by an **Auditor**:

+ 🧑‍⚖️ **Auditor**:
  * Orchestrates the process from input to verdict.
  * Delegates tasks to the Critic and Reviser agents.
+ 🧠 **Critic**:
  * Uses DuckDuckGo via MCP to gather real-time external evidence.
+ ✍️ **Reviser**:
  * Refines and verifies the Critic's conclusions using only reasoning.

**🧠 All agents use OpenAI by default, or the Docker Model Runner for fully local LLM inference.**

Example question:

> "Is the universe infinite?"

# 🧱 Project Structure

| **File/Folder** | **Purpose**                             |
| --------------- | --------------------------------------- |
| `compose.yaml`  | Launches app and MCP DuckDuckGo Gateway |
| `Dockerfile`    | Builds the agent container              |
| `src/AgentKit`  | Agent runtime                           |
| `agents/*.yaml` | Agent definitions                       |

# 🔧 Architecture Overview

```mermaid
flowchart TD
    input[📝 User Question] --> auditor[🧑‍⚖️ Auditor Sequential Agent]
    auditor --> critic[🧠 Critic Agent]
    critic -->|uses| mcp[MCP Gateway<br/>DuckDuckGo Search]
    mcp --> duck[🌐 DuckDuckGo API]
    duck --> mcp --> critic
    critic --> reviser[✍️ Reviser Agent<br/>No tools]
    auditor --> reviser
    reviser --> auditor
    auditor --> result[✅ Final Answer]

    critic -->|inference| model[(🧠 Docker Model Runner<br/>LLM)]
    reviser -->|inference| model

    subgraph Infra
      mcp
      model
    end
```

+ The Auditor is a Sequential Agent: it coordinates the Critic and Reviser agents to verify user-provided claims.
+ The Critic agent performs live web searches through DuckDuckGo using an MCP-compatible gateway.
+ The Reviser agent refines the Critic's conclusions using internal reasoning alone.
+ All agents run inference through OpenAI by default, or through a Docker-hosted Model Runner for fully containerized LLM reasoning.

# 🤝 Agent Roles

| **Agent**   | **Tools Used**       | **Role Description**                                                         |
| ----------- | -------------------- | ---------------------------------------------------------------------------- |
| **Auditor** | ❌ None              | Coordinates the entire fact-checking workflow and delivers the final answer. |
| **Critic**  | ✅ DuckDuckGo via MCP | Gathers evidence to support or refute the claim.                            |
| **Reviser** | ❌ None              | Refines and finalizes the answer without external input.                     |

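The roles above are defined declaratively in `agents/*.yaml` (see Project Structure). Purely as an illustrative sketch (every field name below is hypothetical, not this project's actual schema), a definition for the Critic might look like:

```yaml
# Hypothetical sketch; field names are illustrative, not the real schema.
name: critic
description: Gathers evidence for or against a claim via web search.
model: gpt-4o-mini           # or a local Docker Model Runner model
instruction: |
  Search for evidence supporting or refuting the user's claim and
  summarize your findings with sources.
tools:
  - mcp: duckduckgo          # resolved through the MCP Gateway
```
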
# 🧹 Cleanup

To stop and remove containers and volumes:

```sh
docker compose down -v
```

# 📎 Credits

+ [A2A]
+ [DuckDuckGo]
+ [Docker Compose]

[A2A]: https://github.com/a2aproject/a2a-python
[DuckDuckGo]: https://duckduckgo.com
[Docker Compose]: https://github.com/docker/compose
[Docker Desktop]: https://www.docker.com/products/docker-desktop/
[Docker Engine]: https://docs.docker.com/engine/
[Docker Model Runner requirements]: https://docs.docker.com/ai/model-runner/
[Docker Offload]: https://www.docker.com/products/docker-offload/