# MCP Python SDK

<div align="center">

<strong>Python implementation of the Model Context Protocol (MCP)</strong>

[![PyPI][pypi-badge]][pypi-url]
[![MIT licensed][mit-badge]][mit-url]
[![Python Version][python-badge]][python-url]
[![Documentation][docs-badge]][docs-url]
[![Specification][spec-badge]][spec-url]
[![GitHub Discussions][discussions-badge]][discussions-url]

</div>

<!-- omit in toc -->
## Table of Contents

- [MCP Python SDK](#mcp-python-sdk)
  - [Overview](#overview)
  - [Installation](#installation)
    - [Adding MCP to your Python project](#adding-mcp-to-your-python-project)
    - [Running the standalone MCP development tools](#running-the-standalone-mcp-development-tools)
  - [Quickstart](#quickstart)
  - [What is MCP?](#what-is-mcp)
  - [Core Concepts](#core-concepts)
    - [Server](#server)
    - [Resources](#resources)
    - [Tools](#tools)
    - [Prompts](#prompts)
    - [Images](#images)
    - [Context](#context)
  - [Running Your Server](#running-your-server)
    - [Development Mode](#development-mode)
    - [Claude Desktop Integration](#claude-desktop-integration)
    - [Direct Execution](#direct-execution)
    - [Mounting to an Existing ASGI Server](#mounting-to-an-existing-asgi-server)
  - [Examples](#examples)
    - [Echo Server](#echo-server)
    - [SQLite Explorer](#sqlite-explorer)
  - [Advanced Usage](#advanced-usage)
    - [Low-Level Server](#low-level-server)
    - [Writing MCP Clients](#writing-mcp-clients)
    - [MCP Primitives](#mcp-primitives)
    - [Server Capabilities](#server-capabilities)
  - [Tool Composition Patterns](#tool-composition-patterns)
  - [Error Recovery Strategies](#error-recovery-strategies)
  - [Resource Selection Optimization](#resource-selection-optimization)
  - [Documentation](#documentation)
  - [Contributing](#contributing)
  - [License](#license)

[pypi-badge]: https://img.shields.io/pypi/v/mcp.svg
[pypi-url]: https://pypi.org/project/mcp/
[mit-badge]: https://img.shields.io/pypi/l/mcp.svg
[mit-url]: https://github.com/modelcontextprotocol/python-sdk/blob/main/LICENSE
[python-badge]: https://img.shields.io/pypi/pyversions/mcp.svg
[python-url]: https://www.python.org/downloads/
[docs-badge]: https://img.shields.io/badge/docs-modelcontextprotocol.io-blue.svg
[docs-url]: https://modelcontextprotocol.io
[spec-badge]: https://img.shields.io/badge/spec-spec.modelcontextprotocol.io-blue.svg
[spec-url]: https://spec.modelcontextprotocol.io
[discussions-badge]: https://img.shields.io/github/discussions/modelcontextprotocol/python-sdk
[discussions-url]: https://github.com/modelcontextprotocol/python-sdk/discussions

## Overview

The Model Context Protocol allows applications to provide context for LLMs in a standardized way, separating the concerns of providing context from the actual LLM interaction. This Python SDK implements the full MCP specification, making it easy to:

- Build MCP clients that can connect to any MCP server
- Create MCP servers that expose resources, prompts and tools
- Use standard transports like stdio and SSE
- Handle all MCP protocol messages and lifecycle events
## Installation

### Adding MCP to your Python project

We recommend using [uv](https://docs.astral.sh/uv/) to manage your Python projects. In a uv-managed project, add mcp to your dependencies with:

```bash
uv add "mcp[cli]"
```

Alternatively, for projects using pip for dependencies:

```bash
pip install mcp
```

### Running the standalone MCP development tools

To run the mcp command with uv:

```bash
uv run mcp
```

## Quickstart

Let's create a simple MCP server that exposes a calculator tool and some data:

```python
# server.py
from mcp.server.fastmcp import FastMCP

# Create an MCP server
mcp = FastMCP("Demo")


# Add an addition tool
@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b


# Add a dynamic greeting resource
@mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
    """Get a personalized greeting"""
    return f"Hello, {name}!"
```

You can install this server in [Claude Desktop](https://claude.ai/download) and interact with it right away by running:

```bash
mcp install server.py
```

Alternatively, you can test it with the MCP Inspector:

```bash
mcp dev server.py
```

## What is MCP?

The [Model Context Protocol (MCP)](https://modelcontextprotocol.io) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. MCP servers can:

- Expose data through **Resources** (think of these sort of like GET endpoints; they are used to load information into the LLM's context)
- Provide functionality through **Tools** (sort of like POST endpoints; they are used to execute code or otherwise produce a side effect)
- Define interaction patterns through **Prompts** (reusable templates for LLM interactions)
- And more!

## Core Concepts

### Server

The FastMCP server is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing:

```python
# Add lifespan support for startup/shutdown with strong typing
from contextlib import asynccontextmanager
from collections.abc import AsyncIterator
from dataclasses import dataclass

from fake_database import Database  # Replace with your actual DB type

from mcp.server.fastmcp import Context, FastMCP

# Create a named server
mcp = FastMCP("My App")

# Specify dependencies for deployment and development
mcp = FastMCP("My App", dependencies=["pandas", "numpy"])


@dataclass
class AppContext:
    db: Database


@asynccontextmanager
async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
    """Manage application lifecycle with type-safe context"""
    # Initialize on startup
    db = await Database.connect()
    try:
        yield AppContext(db=db)
    finally:
        # Cleanup on shutdown
        await db.disconnect()


# Pass lifespan to server
mcp = FastMCP("My App", lifespan=app_lifespan)


# Access type-safe lifespan context in tools
@mcp.tool()
def query_db(ctx: Context) -> str:
    """Tool that uses initialized resources"""
    # The lifespan yielded an AppContext dataclass, so access it by attribute
    db = ctx.request_context.lifespan_context.db
    return db.query()
```

### Resources

Resources are how you expose data to LLMs. They're similar to GET endpoints in a REST API - they provide data but shouldn't perform significant computation or have side effects:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("My App")


@mcp.resource("config://app")
def get_config() -> str:
    """Static configuration data"""
    return "App configuration here"


@mcp.resource("users://{user_id}/profile")
def get_user_profile(user_id: str) -> str:
    """Dynamic user data"""
    return f"Profile data for user {user_id}"
```

### Tools

Tools let LLMs take actions through your server. Unlike resources, tools are expected to perform computation and have side effects:

```python
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("My App")


@mcp.tool()
def calculate_bmi(weight_kg: float, height_m: float) -> float:
    """Calculate BMI given weight in kg and height in meters"""
    return weight_kg / (height_m**2)


@mcp.tool()
async def fetch_weather(city: str) -> str:
    """Fetch current weather for a city"""
    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://api.weather.com/{city}")
        return response.text
```

### Prompts

Prompts are reusable templates that help LLMs interact with your server effectively:

```python
from mcp.server.fastmcp import FastMCP
from mcp.server.fastmcp.prompts import base

mcp = FastMCP("My App")


@mcp.prompt()
def review_code(code: str) -> str:
    return f"Please review this code:\n\n{code}"


@mcp.prompt()
def debug_error(error: str) -> list[base.Message]:
    return [
        base.UserMessage("I'm seeing this error:"),
        base.UserMessage(error),
        base.AssistantMessage("I'll help debug that. What have you tried so far?"),
    ]
```

### Images

FastMCP provides an `Image` class that automatically handles image data:

```python
import io

from mcp.server.fastmcp import FastMCP, Image
from PIL import Image as PILImage

mcp = FastMCP("My App")


@mcp.tool()
def create_thumbnail(image_path: str) -> Image:
    """Create a thumbnail from an image"""
    img = PILImage.open(image_path)
    img.thumbnail((100, 100))
    # Encode as PNG; img.tobytes() would return raw pixel data, not PNG bytes
    buffer = io.BytesIO()
    img.save(buffer, format="PNG")
    return Image(data=buffer.getvalue(), format="png")
```

### Context

The Context object gives your tools and resources access to MCP capabilities:

```python
from mcp.server.fastmcp import FastMCP, Context

mcp = FastMCP("My App")


@mcp.tool()
async def long_task(files: list[str], ctx: Context) -> str:
    """Process multiple files with progress tracking"""
    for i, file in enumerate(files):
        await ctx.info(f"Processing {file}")
        await ctx.report_progress(i, len(files))
        data, mime_type = await ctx.read_resource(f"file://{file}")
    return "Processing complete"
```

## Running Your Server

### Development Mode

The fastest way to test and debug your server is with the MCP Inspector:

```bash
mcp dev server.py

# Add dependencies
mcp dev server.py --with pandas --with numpy

# Mount local code
mcp dev server.py --with-editable .
```

### Claude Desktop Integration

Once your server is ready, install it in Claude Desktop:

```bash
mcp install server.py

# Custom name
mcp install server.py --name "My Analytics Server"

# Environment variables
mcp install server.py -v API_KEY=abc123 -v DB_URL=postgres://...
mcp install server.py -f .env
```

### Direct Execution

For advanced scenarios like custom deployments:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("My App")

if __name__ == "__main__":
    mcp.run()
```

Run it with:

```bash
python server.py
# or
mcp run server.py
```

### Mounting to an Existing ASGI Server

You can mount the SSE server to an existing ASGI server using the `sse_app` method, which lets you serve MCP alongside other ASGI applications.

```python
from starlette.applications import Starlette
from starlette.routing import Host, Mount

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("My App")

# Mount the SSE server to the existing ASGI server
app = Starlette(
    routes=[
        Mount('/', app=mcp.sse_app()),
    ]
)

# or dynamically mount as host
app.router.routes.append(Host('mcp.acme.corp', app=mcp.sse_app()))
```

For more information on mounting applications in Starlette, see the [Starlette documentation](https://www.starlette.io/routing/#submounting-routes).

## Examples

### Echo Server

A simple server demonstrating resources, tools, and prompts:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Echo")


@mcp.resource("echo://{message}")
def echo_resource(message: str) -> str:
    """Echo a message as a resource"""
    return f"Resource echo: {message}"


@mcp.tool()
def echo_tool(message: str) -> str:
    """Echo a message as a tool"""
    return f"Tool echo: {message}"


@mcp.prompt()
def echo_prompt(message: str) -> str:
    """Create an echo prompt"""
    return f"Please process this message: {message}"
```

### SQLite Explorer

A more complex example showing database integration:

```python
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("SQLite Explorer")


@mcp.resource("schema://main")
def get_schema() -> str:
    """Provide the database schema as a resource"""
    conn = sqlite3.connect("database.db")
    schema = conn.execute("SELECT sql FROM sqlite_master WHERE type='table'").fetchall()
    return "\n".join(sql[0] for sql in schema if sql[0])


@mcp.tool()
def query_data(sql: str) -> str:
    """Execute SQL queries safely"""
    conn = sqlite3.connect("database.db")
    try:
        result = conn.execute(sql).fetchall()
        return "\n".join(str(row) for row in result)
    except Exception as e:
        return f"Error: {str(e)}"
```

## Advanced Usage

### Low-Level Server

For more control, you can use the low-level server implementation directly. This gives you full access to the protocol and allows you to customize every aspect of your server, including lifecycle management through the lifespan API:

```python
from contextlib import asynccontextmanager
from collections.abc import AsyncIterator

from fake_database import Database  # Replace with your actual DB type

from mcp.server import Server


@asynccontextmanager
async def server_lifespan(server: Server) -> AsyncIterator[dict]:
    """Manage server startup and shutdown lifecycle."""
    # Initialize resources on startup
    db = await Database.connect()
    try:
        yield {"db": db}
    finally:
        # Clean up on shutdown
        await db.disconnect()


# Pass lifespan to server
server = Server("example-server", lifespan=server_lifespan)


# Access lifespan context in handlers
@server.call_tool()
async def query_db(name: str, arguments: dict) -> list:
    ctx = server.request_context
    db = ctx.lifespan_context["db"]
    return await db.query(arguments["query"])
```

The lifespan API provides:

- A way to initialize resources when the server starts and clean them up when it stops
- Access to initialized resources through the request context in handlers
- Type-safe context passing between lifespan and request handlers

```python
import mcp.server.stdio
import mcp.types as types
from mcp.server.lowlevel import NotificationOptions, Server
from mcp.server.models import InitializationOptions

# Create a server instance
server = Server("example-server")


@server.list_prompts()
async def handle_list_prompts() -> list[types.Prompt]:
    return [
        types.Prompt(
            name="example-prompt",
            description="An example prompt template",
            arguments=[
                types.PromptArgument(
                    name="arg1", description="Example argument", required=True
                )
            ],
        )
    ]


@server.get_prompt()
async def handle_get_prompt(
    name: str, arguments: dict[str, str] | None
) -> types.GetPromptResult:
    if name != "example-prompt":
        raise ValueError(f"Unknown prompt: {name}")

    return types.GetPromptResult(
        description="Example prompt",
        messages=[
            types.PromptMessage(
                role="user",
                content=types.TextContent(type="text", text="Example prompt text"),
            )
        ],
    )


async def run():
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="example",
                server_version="0.1.0",
                capabilities=server.get_capabilities(
                    notification_options=NotificationOptions(),
                    experimental_capabilities={},
                ),
            ),
        )


if __name__ == "__main__":
    import asyncio

    asyncio.run(run())
```

### Writing MCP Clients

The SDK provides a high-level client interface for connecting to MCP servers:

```python
from mcp import ClientSession, StdioServerParameters, types
from mcp.client.stdio import stdio_client

# Create server parameters for stdio connection
server_params = StdioServerParameters(
    command="python",  # Executable
    args=["example_server.py"],  # Optional command line arguments
    env=None,  # Optional environment variables
)


# Optional: create a sampling callback
async def handle_sampling_message(
    message: types.CreateMessageRequestParams,
) -> types.CreateMessageResult:
    return types.CreateMessageResult(
        role="assistant",
        content=types.TextContent(
            type="text",
            text="Hello, world! from model",
        ),
        model="gpt-4.1-mini",
        stopReason="endTurn",
    )


async def run():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(
            read, write, sampling_callback=handle_sampling_message
        ) as session:
            # Initialize the connection
            await session.initialize()

            # List available prompts
            prompts = await session.list_prompts()

            # Get a prompt
            prompt = await session.get_prompt(
                "example-prompt", arguments={"arg1": "value"}
            )

            # List available resources
            resources = await session.list_resources()

            # List available tools
            tools = await session.list_tools()

            # Read a resource
            content, mime_type = await session.read_resource("file://some/path")

            # Call a tool
            result = await session.call_tool("tool-name", arguments={"arg1": "value"})


if __name__ == "__main__":
    import asyncio

    asyncio.run(run())
```

### MCP Primitives

The MCP protocol defines three core primitives that servers can implement:

| Primitive | Control                | Description                                       | Example Use                  |
|-----------|------------------------|---------------------------------------------------|------------------------------|
| Prompts   | User-controlled        | Interactive templates invoked by user choice      | Slash commands, menu options |
| Resources | Application-controlled | Contextual data managed by the client application | File contents, API responses |
| Tools     | Model-controlled       | Functions exposed to the LLM to take actions      | API calls, data updates      |

### Server Capabilities

MCP servers declare capabilities during initialization:

| Capability  | Feature Flag                  | Description                     |
|-------------|-------------------------------|---------------------------------|
| `prompts`   | `listChanged`                 | Prompt template management      |
| `resources` | `subscribe`<br/>`listChanged` | Resource exposure and updates   |
| `tools`     | `listChanged`                 | Tool discovery and execution    |
| `logging`   | -                             | Server logging configuration    |
| `completion`| -                             | Argument completion suggestions |

## Tool Composition Patterns

When building complex workflows with MCP, you often need to chain tools together so that the output of one becomes the input of the next:

```python
from mcp.server.fastmcp import FastMCP, Context

mcp = FastMCP("Analytics Pipeline")


@mcp.tool()
async def fetch_data(source: str, date_range: str, ctx: Context) -> str:
    """Fetch raw data from a source for analysis"""
    # Fetch operation that might be slow
    await ctx.report_progress(0.3, 1.0)
    return f"Data from {source} for {date_range}"


@mcp.tool()
def transform_data(raw_data: str, format_type: str = "json") -> dict:
    """Transform raw data into structured format"""
    # Data transformation logic
    return {"processed": raw_data, "format": format_type}


@mcp.tool()
def analyze_data(data: dict, metric: str) -> str:
    """Analyze transformed data with specific metrics"""
    # Analysis logic
    return f"Analysis of {metric}: Result based on {data['processed']}"


# Usage pattern (for LLMs):
# 1. First fetch the raw data
# 2. Transform the fetched data
# 3. Then analyze the transformed result
```

**Pattern: Sequential Dependency Chain**
```
fetch_data → transform_data → analyze_data
```
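
From the client side, the same chain can be driven with successive tool calls. The sketch below is illustrative: `call` stands in for something like `session.call_tool`, and `fake_call` is a stand-in transport that merely mimics the example tools above:

```python
import asyncio


async def run_pipeline(call, source: str, date_range: str, metric: str) -> str:
    """Drive the fetch → transform → analyze chain through a call function."""
    raw = await call("fetch_data", {"source": source, "date_range": date_range})
    structured = await call("transform_data", {"raw_data": raw})
    return await call("analyze_data", {"data": structured, "metric": metric})


async def fake_call(name: str, args: dict):
    """Stand-in transport mimicking the example tools (illustration only)."""
    if name == "fetch_data":
        return f"Data from {args['source']} for {args['date_range']}"
    if name == "transform_data":
        return {"processed": args["raw_data"], "format": "json"}
    return f"Analysis of {args['metric']}: Result based on {args['data']['processed']}"


result = asyncio.run(run_pipeline(fake_call, "sales_db", "last_week", "revenue"))
```

Passing the call function in keeps the chaining logic testable independently of any real MCP session.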

**Pattern: Parallel Processing with Aggregation**
```python
import asyncio


@mcp.tool()
async def parallel_process(sources: list[str], ctx: Context) -> dict:
    """Process multiple sources concurrently and aggregate results"""

    async def process(source: str) -> tuple[str, dict]:
        # Get data for each source (these could be separate tool calls)
        data = await fetch_data(source, "last_week", ctx)
        return source, transform_data(data)

    # Run all sources concurrently, then aggregate into one dict
    pairs = await asyncio.gather(*(process(source) for source in sources))
    await ctx.report_progress(1.0, 1.0)
    return dict(pairs)
```

## Error Recovery Strategies

When tools fail or return unexpected results, LLMs should follow these recovery patterns:

**Strategy: Retry with Backoff**
```python
import asyncio


@mcp.tool()
async def resilient_operation(resource_id: str, ctx: Context) -> str:
    """Example of resilient operation with retry logic"""
    MAX_ATTEMPTS = 3
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            # Attempt the operation (placeholder for real work that may raise)
            return f"Successfully processed {resource_id}"
        except Exception as e:
            if attempt == MAX_ATTEMPTS:
                # If final attempt, report the failure clearly
                await ctx.warning(f"Operation failed after {MAX_ATTEMPTS} attempts: {e}")
                return f"ERROR: Could not process {resource_id} - {e}"
            # For earlier attempts, log and retry
            await ctx.info(f"Attempt {attempt} failed, retrying...")
            await asyncio.sleep(2**attempt)  # Exponential backoff
```

**Strategy: Fallback Chain**
```python
@mcp.tool()
async def get_data_with_fallbacks(
    primary_source: str, fallback_sources: list[str] | None = None
) -> dict:
    """Try multiple data sources in order until one succeeds"""
    sources = [primary_source] + (fallback_sources or [])

    errors = []
    for source in sources:
        try:
            # Try to get data from this source (placeholder for a real fetch)
            result = {"source": source, "data": f"Data from {source}"}
            return result
        except Exception as e:
            # Record the error and try the next source
            errors.append(f"{source}: {e}")

    # If all sources failed, return a clear error with history
    return {"error": "All sources failed", "attempts": errors}
```

**Error Reporting Best Practices**
- Always return structured error information (not just exception text)
- Include specific error codes when possible
- Provide actionable suggestions for recovery
- Log detailed error context for debugging

## Resource Selection Optimization

Efficiently managing resources within context limits requires strategic selection:

**Progressive Loading Pattern**
```python
@mcp.tool()
async def analyze_document(doc_uri: str, ctx: Context) -> str:
    """Analyze a document with progressively loaded sections"""
    # First load metadata for quick access
    metadata = await ctx.read_resource(f"{doc_uri}/metadata")

    # Based on metadata, selectively load relevant sections
    # (identify_relevant_sections is a placeholder for your own logic)
    relevant_sections = identify_relevant_sections(metadata)

    # Only load sections that are actually needed
    section_data = {}
    for section in relevant_sections:
        section_data[section] = await ctx.read_resource(f"{doc_uri}/sections/{section}")

    # Process with only the necessary context
    return f"Analysis of {len(section_data)} relevant sections"
```

**Context Budget Management**
```python
@mcp.tool()
async def summarize_large_dataset(dataset_uri: str, ctx: Context) -> str:
    """Summarize a large dataset while respecting context limits"""
    # Get total size to plan the approach
    metadata = await ctx.read_resource(f"{dataset_uri}/metadata")
    total_size = metadata.get("size_kb", 0)

    if total_size > 100:  # Arbitrary threshold
        # For large datasets, use a chunking approach
        chunks = await ctx.read_resource(f"{dataset_uri}/summary_chunks")
        return f"Summary of {len(chunks)} chunks: {', '.join(chunks)}"
    else:
        # For smaller datasets, process everything at once
        full_data = await ctx.read_resource(dataset_uri)
        return f"Complete analysis of {dataset_uri}"
```

**Resource Relevance Filtering**
- Focus on the most recent/relevant data first
- Filter resources to match the specific query intent
- Use metadata to decide which resources to load
- Prefer sampling representative data over loading everything
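
These filtering rules can be combined into one selection pass. The helper below is a hypothetical sketch: `resources` is a list of `(uri, metadata)` pairs, and the metadata fields (`tags`, `updated`, `size_kb`) are illustrative, not an MCP schema:

```python
def select_resources(
    resources: list[tuple[str, dict]],
    query_tags: set[str],
    budget_kb: int,
    max_items: int = 5,
) -> list[str]:
    """Pick the most recently updated resources matching the query, within a size budget."""
    # Filter to resources whose tags overlap the query intent
    matching = [
        (uri, meta)
        for uri, meta in resources
        if set(meta.get("tags", [])) & query_tags
    ]
    # Most recent first
    matching.sort(key=lambda item: item[1].get("updated", 0), reverse=True)

    selected: list[str] = []
    used = 0
    for uri, meta in matching[:max_items]:
        size = meta.get("size_kb", 0)
        if used + size > budget_kb:
            continue  # skip resources that would blow the context budget
        selected.append(uri)
        used += size
    return selected


catalog = [
    ("report://2024-q1", {"tags": ["sales"], "updated": 3, "size_kb": 40}),
    ("report://2024-q2", {"tags": ["sales"], "updated": 7, "size_kb": 80}),
    ("notes://misc", {"tags": ["ops"], "updated": 9, "size_kb": 5}),
]
chosen = select_resources(catalog, {"sales"}, budget_kb=100)
```

Here only `report://2024-q2` is chosen: `notes://misc` fails the tag filter, and adding the older Q1 report would exceed the 100 KB budget.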

## Documentation

- [Model Context Protocol documentation](https://modelcontextprotocol.io)
- [Model Context Protocol specification](https://spec.modelcontextprotocol.io)
- [Officially supported servers](https://github.com/modelcontextprotocol/servers)

## Contributing

We are passionate about supporting contributors of all levels of experience and would love to see you get involved in the project. See the [contributing guide](CONTRIBUTING.md) to get started.

## License

This project is licensed under the MIT License - see the LICENSE file for details.