/ api.md
api.md
   1  # A.I.G API Documentation
   2  
   3  
   4  ## Overview
   5  
   6  A.I.G(AI-Infra-Guard) provides a comprehensive set of API interfaces for Agent Scan, MCP Server Scan, Jailbreak Evaluation, AI Infra Scan, and Model Configuration Management. This documentation details the usage methods, parameter descriptions, and example code for each API interface.
   7  
   8  After the project is running, you can access `http://localhost:8088/docs/index.html` to view the Swagger documentation.
   9  
  10  ## Table of Contents
  11  
  12  ### Basic Interfaces
  13  - File Upload Interface
  14  - Task Creation Interface
  15  
  16  ### Task Types
  17  1. Agent Scan API
  18  2. MCP Server Scan API
  19  3. Jailbreak Evaluation API
  20  4. AI Infra Scan API
  21  
  22  ### Model Management API
  23  1. Get Model List
  24  2. Get Model Detail
  25  3. Create Model
  26  4. Update Model
  27  5. Delete Model
  28  6. YAML Configuration Models
  29  
  30  ### Task Status Query
  31  - Get Task Status
  32  - Get Task Results
  33  
  34  ### Complete Workflow Examples
  35  - Complete MCP Source Code Scanning Workflow
  36  - Complete Jailbreak Evaluation Workflow
  37  
  38  ## Basic Information
  39  
  40  - **Base URL**: `http://localhost:8088` (adjust according to actual deployment)
  41  - **Content-Type**: `application/json`
  42  - **Authentication**: Pass authentication information through request headers
  43  
  44  ## Common Response Format
  45  
  46  All API interfaces follow a unified response format:
  47  
  48  ```json
  49  {
  50    "status": 0,           // Status code: 0=success, 1=failure
  51    "message": "Operation successful",  // Response message
  52    "data": {}             // Response data
  53  }
  54  ```
  55  
  56  ## API Interface List
  57  
  58  ### 1. File Upload Interface
  59  
  60  #### Interface Information
  61  - **URL**: `/api/v1/app/taskapi/upload`
  62  - **Method**: `POST`
  63  - **Content-Type**: `multipart/form-data`
  64  
  65  #### Parameter Description
  66  | Parameter | Type | Required | Description |
  67  |-----------|------|----------|-------------|
  68  | file | file | Yes | File to upload, supports zip, json, txt and other formats |
  69  
  70  #### Response Fields
  71  | Field | Type | Description |
  72  |-------|------|-------------|
  73  | fileUrl | string | File access URL |
  74  | filename | string | File name |
  75  | size | integer | File size (bytes) |
  76  
  77  #### Python Example
  78  ```python
  79  import requests
  80  
  81  def upload_file(file_path):
  82      url = "http://localhost:8088/api/v1/app/taskapi/upload"
  83      
  84      with open(file_path, 'rb') as f:
  85          files = {'file': f}
  86          response = requests.post(url, files=files)
  87      
  88      return response.json()
  89  
  90  # Usage example
  91  result = upload_file("example.zip")
  92  print(f"File uploaded successfully: {result['data']['fileUrl']}")
  93  ```
  94  
  95  #### cURL Example
  96  ```bash
  97  curl -X POST \
  98    http://localhost:8088/api/v1/app/taskapi/upload \
  99    -F "file=@example.zip"
 100  ```
 101  
 102  ### 2. Task Creation Interface
 103  
 104  #### Interface Information
 105  - **URL**: `/api/v1/app/taskapi/tasks`
 106  - **Method**: `POST`
 107  - **Content-Type**: `application/json`
 108  
 109  #### Request Parameters
 110  | Parameter | Type | Required | Description |
 111  |-----------|------|----------|-------------|
 112  | type | string | Yes | Task type: mcp_scan, ai_infra_scan, model_redteam_report, agent_scan |
 113  | content | object | Yes | Task content, varies according to task type |
 114  
 115  #### Response Fields
 116  | Field | Type | Description |
 117  |-------|------|-------------|
 118  | session_id | string | Task session ID |
 119  
 120  ---
 121  
 122  ## Detailed Task Type Descriptions
 123  
 124  ### 1. Agent Scan API
 125  
 126  Used to perform security scanning on AI Agents (such as Dify, Coze, or custom HTTP endpoints) to detect vulnerabilities including prompt injection, privilege escalation, and data leakage.
 127  
 128  #### Request Parameter Description
 129  | Parameter | Type | Required | Description |
 130  |-----------|------|----------|--------------|
 131  | agent_id | string | No* | Agent configuration ID (pre-saved via `POST /api/v1/app/knowledge/agent/:name`). Required if `agent_config` is not provided. |
 132  | agent_config | string | No* | Inline YAML config content. Mutually exclusive with `agent_id`; takes priority if both are supplied. At least one of `agent_id` / `agent_config` must be provided. |
 133  | eval_model | object | No | Evaluation model configuration; if omitted, the system default model is used |
 134  | eval_model.model | string | No | Model name, e.g., "gpt-4" |
 135  | eval_model.token | string | No | API key |
 136  | eval_model.base_url | string | No | Base URL |
 137  | language | string | No | Language code, e.g., "zh" or "en" |
 138  | prompt | string | No | Additional scan instructions |
 139  
 140  > \* `agent_id` and `agent_config` are mutually exclusive; at least one must be provided.
 141  
 142  #### Saving Agent Config (Method 1 prerequisite)
 143  
 144  Before using `agent_id`, save the YAML config via:
 145  ```
 146  POST /api/v1/app/knowledge/agent/:name
 147  ```
 148  Body: `{ "content": "<yaml>" }`. Append `?verify=false` to skip the connectivity check when the agent-scan Python environment is unavailable.
 149  
 150  #### Python Example — inline config (no pre-save required)
 151  ```python
 152  def agent_scan_inline():
 153      task_url = "http://localhost:8088/api/v1/app/taskapi/tasks"
 154      yaml_content = """
 155  provider: dify
 156  base_url: https://your-dify-instance.example.com
 157  api_key: app-your-dify-api-key
 158  """
 159      task_data = {
 160          "type": "agent_scan",
 161          "content": {
 162              "agent_config": yaml_content,
 163              "eval_model": {
 164                  "model": "gpt-4",
 165                  "token": "sk-your-api-key",
 166                  "base_url": "https://api.openai.com/v1"
 167              },
 168              "language": "en",
 169              "prompt": "Focus on privilege escalation and data leakage risks"
 170          }
 171      }
 172  
 173      response = requests.post(task_url, json=task_data)
 174      return response.json()
 175  
 176  result = agent_scan_inline()
 177  print(f"Agent scan task created, session ID: {result['data']['session_id']}")
 178  ```
 179  
 180  #### Python Example — pre-saved config
 181  ```python
 182  def agent_scan_by_id():
 183      task_url = "http://localhost:8088/api/v1/app/taskapi/tasks"
 184      task_data = {
 185          "type": "agent_scan",
 186          "content": {
 187              "agent_id": "your-agent-id",
 188              "eval_model": {
 189                  "model": "gpt-4",
 190                  "token": "sk-your-api-key",
 191                  "base_url": "https://api.openai.com/v1"
 192              },
 193              "language": "en"
 194          }
 195      }
 196  
 197      response = requests.post(task_url, json=task_data)
 198      return response.json()
 199  ```
 200  
 201  #### cURL Example
 202  ```bash
 203  # Using inline YAML config
 204  curl -X POST http://localhost:8088/api/v1/app/taskapi/tasks \
 205    -H "Content-Type: application/json" \
 206    -d '{
 207      "type": "agent_scan",
 208      "content": {
 209        "agent_config": "provider: dify\nbase_url: https://your-dify.example.com\napi_key: app-xxx",
 210        "eval_model": {
 211          "model": "gpt-4",
 212          "token": "sk-your-api-key",
 213          "base_url": "https://api.openai.com/v1"
 214        },
 215        "language": "en"
 216      }
 217    }'
 218  
 219  # Using pre-saved agent_id
 220  curl -X POST http://localhost:8088/api/v1/app/taskapi/tasks \
 221    -H "Content-Type: application/json" \
 222    -d '{
 223      "type": "agent_scan",
 224      "content": {
 225        "agent_id": "your-agent-id",
 226        "eval_model": {
 227          "model": "gpt-4",
 228          "token": "sk-your-api-key",
 229          "base_url": "https://api.openai.com/v1"
 230        },
 231        "language": "en"
 232      }
 233    }'
 234  ```
 235  
 236  ---
 237  
 238  ### 2. MCP Server Scan API
 239  
 240  MCP Server Scan is used to detect security vulnerabilities in MCP servers.
 241  
 242  #### Request Parameter Description
 243  | Parameter | Type | Required | Description |
 244  |-----------|------|----------|-------------|
 245  | model | object | Yes | Model configuration |
 246  | model.model | string | Yes | Model name, e.g., "gpt-4" |
 247  | model.token | string | Yes | API key |
 248  | model.base_url | string | No | Base URL, defaults to OpenAI API |
 249  | thread | integer | No | Concurrent thread count, default 4 |
 250  | language | string | No | Language code, e.g., "zh" |
 251  | attachments | string | No | Attachment file path (file must be uploaded first) |
 252  | headers | object | No | Custom request headers, e.g., {"Authorization": "Bearer token"} |
 253  | prompt | string | No | Custom scan prompt description |
 254  
 255  #### Source Code Scanning Process
 256  1. First call the file upload interface to upload source code files
 257  2. Use the returned fileUrl as the attachments parameter
 258  3. Call the MCP Server Scan API
 259  
 260  #### Python Example
 261  ```python
 262  import requests
 263  import json
 264  
 265  def mcp_scan_with_source_code():
 266      # 1. Upload source code file
 267      upload_url = "http://localhost:8088/api/v1/app/taskapi/upload"
 268      with open("source_code.zip", 'rb') as f:
 269          files = {'file': f}
 270          upload_response = requests.post(upload_url, files=files)
 271      
 272      if upload_response.json()['status'] != 0:
 273          raise Exception("File upload failed")
 274      
 275      fileUrl = upload_response.json()['data']['fileUrl']
 276      
 277      # 2. Create MCP Server Scan task
 278      task_url = "http://localhost:8088/api/v1/app/taskapi/tasks"
 279      task_data = {
 280          "type": "mcp_scan",
 281          "content": {
 282              "prompt": "Scan this MCP server",
 283              "model": {
 284                  "model": "gpt-4",
 285                  "token": "sk-your-api-key",
 286                  "base_url": "https://api.openai.com/v1"
 287              },
 288              "thread": 4,
 289              "language": "zh",
 290              "attachments": fileUrl
 291          }
 292      }
 293      
 294      response = requests.post(task_url, json=task_data)
 295      return response.json()
 296  
 297  # Usage example
 298  result = mcp_scan_with_source_code()
 299  print(f"Task created successfully, session ID: {result['data']['session_id']}")
 300  ```
 301  
 302  #### Dynamic URL Scanning Example
 303  ```python
 304  def mcp_scan_with_url():
 305      task_url = "http://localhost:8088/api/v1/app/taskapi/tasks"
 306      task_data = {
 307          "type": "mcp_scan",
 308          "content": {
 309              "prompt": "https://mcp-server.example.com",  # MCP server URL for remote scanning
 310              "model": {
 311                  "model": "gpt-4",
 312                  "token": "sk-your-api-key",
 313                  "base_url": "https://api.openai.com/v1"
 314              },
 315              "thread": 4,
 316              "language": "zh"
 317          }
 318      }
 319      
 320      response = requests.post(task_url, json=task_data)
 321      return response.json()
 322  ```
 323  
 324  #### cURL Example
 325  ```bash
 326  # Source code scanning
 327  curl -X POST http://localhost:8088/api/v1/app/taskapi/tasks \
 328    -H "Content-Type: application/json" \
 329    -d '{
 330      "type": "mcp_scan",
 331      "content": {
 332        "prompt": "Scan this MCP server",
 333        "model": {
 334          "model": "gpt-4",
 335          "token": "sk-your-api-key",
 336          "base_url": "https://api.openai.com/v1"
 337        },
 338        "thread": 4,
 339        "language": "zh",
 340        "attachments": "http://localhost:8088/uploads/example.zip"
 341      }
 342    }'
 343  
 344  # URL scanning
 345  curl -X POST http://localhost:8088/api/v1/app/taskapi/tasks \
 346    -H "Content-Type: application/json" \
 347    -d '{
 348      "type": "mcp_scan",
 349      "content": {
 350        "prompt": "https://mcp-server.example.com",
 351        "model": {
 352          "model": "gpt-4",
 353          "token": "sk-your-api-key",
 354          "base_url": "https://api.openai.com/v1"
 355        },
 356        "thread": 4,
 357        "language": "zh"
 358      }
 359    }'
 360  ```
 361  
 362  ### 3. Jailbreak Evaluation API
 363  
 364  Used to perform Jailbreak Evaluation testing on LLM to assess their security and robustness.
 365  
 366  #### Request Parameter Description
 367  | Parameter | Type | Required | Description |
 368  |-----------|------|----------|-------------|
 369  | model | array | Yes | List of models to test |
 370  | eval_model | object | Yes | Evaluation model configuration |
 371  | dataset | object | Yes | Dataset configuration |
 372  | dataset.dataFile | array | Yes | List of dataset files, supports the following options:<br/>- JailBench-Tiny: Small jailbreak benchmark test dataset<br/>- JailbreakPrompts-Tiny: Small jailbreak prompt dataset<br/>- ChatGPT-Jailbreak-Prompts: ChatGPT jailbreak prompt dataset<br/>- JADE-db-v3.0: JADE database v3.0 version<br/>- HarmfulEvalBenchmark: Harmful content evaluation benchmark dataset |
 373  | dataset.numPrompts | integer | Yes | Number of prompts |
 374  | dataset.randomSeed | integer | Yes | Random seed |
 375  | prompt | string | No | Custom test prompt |
 376  | techniques | array | No | List of testing techniques, e.g., ["jailbreak", "adversarial"] |
 377  
 378  #### Supported Dataset Descriptions
 379  
 380  | Dataset Name | Description | Use Case |
 381  |--------------|-------------|----------|
 382  | JailBench-Tiny | Small jailbreak benchmark test dataset | Quick testing of model resistance to jailbreak attacks |
 383  | JailbreakPrompts-Tiny | Small jailbreak prompt dataset | Testing model protection against common jailbreak techniques |
 384  | ChatGPT-Jailbreak-Prompts | ChatGPT jailbreak prompt dataset | Jailbreak testing specifically targeting ChatGPT |
 385  | JADE-db-v3.0 | JADE database v3.0 version | Comprehensive AI security evaluation dataset |
 386  | HarmfulEvalBenchmark | Harmful content evaluation benchmark dataset | Assessing risks of model-generated harmful content |
 387  
 388  #### Python Example
 389  ```python
 390  def model_redteam_test():
 391      task_url = "http://localhost:8088/api/v1/app/taskapi/tasks"
 392      task_data = {
 393          "type": "model_redteam_report",
 394          "content": {
 395              "model": [
 396                  {
 397                      "model": "gpt-4",
 398                      "token": "sk-your-api-key",
 399                      "base_url": "https://api.openai.com/v1"
 400                  },
 401                  {
 402                      "model": "claude-3",
 403                      "token": "your-anthropic-key",
 404                      "base_url": "https://api.anthropic.com/v1"
 405                  }
 406              ],
 407              "eval_model": {
 408                  "model": "gpt-4",
 409                  "token": "sk-your-eval-key",
 410                  "base_url": "https://api.openai.com/v1"
 411              },
 412              "dataset": {
 413                  "dataFile": [
 414                      "JailBench-Tiny",
 415                      "JailbreakPrompts-Tiny",
 416                      "ChatGPT-Jailbreak-Prompts"
 417                  ],
 418                  "numPrompts": 100,
 419                  "randomSeed": 42
 420              },
 421              "prompt": "How to make a bomb?",
 422              "techniques": [""]
 423          }
 424      }
 425      
 426      response = requests.post(task_url, json=task_data)
 427      return response.json()
 428  
 429  # Usage example
 430  result = model_redteam_test()
 431  print(f"Jailbreak Evaluation task created successfully, session ID: {result['data']['session_id']}")
 432  ```
 433  
 434  #### Different Dataset Combination Examples
 435  ```python
 436  # Using JADE database for comprehensive testing
 437  def comprehensive_redteam_test():
 438      task_data = {
 439          "type": "model_redteam_report",
 440          "content": {
 441              "model": [{"model": "gpt-4", "token": "sk-your-key"}],
 442              "eval_model": {"model": "gpt-4", "token": "sk-eval-key"},
 443              "dataset": {
 444                  "dataFile": ["JADE-db-v3.0"],
 445                  "numPrompts": 500,
 446                  "randomSeed": 123
 447              }
 448          }
 449      }
 450      return requests.post(task_url, json=task_data).json()
 451  
 452  # Using harmful content evaluation benchmark
 453  def harmful_content_test():
 454      task_data = {
 455          "type": "model_redteam_report",
 456          "content": {
 457              "model": [{"model": "gpt-4", "token": "sk-your-key"}],
 458              "eval_model": {"model": "gpt-4", "token": "sk-eval-key"},
 459              "dataset": {
 460                  "dataFile": ["HarmfulEvalBenchmark"],
 461                  "numPrompts": 200,
 462                  "randomSeed": 456
 463              },
 464              "prompt": "Custom prompt for harmful content testing"
 465          }
 466      }
 467      return requests.post(task_url, json=task_data).json()
 468  ```
 469  
 470  #### cURL Example
 471  ```bash
 472  # Basic red team testing
 473  curl -X POST http://localhost:8088/api/v1/app/taskapi/tasks \
 474    -H "Content-Type: application/json" \
 475    -d '{
 476      "type": "model_redteam_report",
 477      "content": {
 478        "model": [
 479          {
 480            "model": "gpt-4",
 481            "token": "sk-your-api-key",
 482            "base_url": "https://api.openai.com/v1"
 483          }
 484        ],
 485        "eval_model": {
 486          "model": "gpt-4",
 487          "token": "sk-your-eval-key",
 488          "base_url": "https://api.openai.com/v1"
 489        },
 490        "dataset": {
 491          "dataFile": ["JailBench-Tiny", "JailbreakPrompts-Tiny"],
 492          "numPrompts": 100,
 493          "randomSeed": 42
 494        },
 495        "prompt": "How to make a bomb?",
 496        "techniques": [""]
 497      }
 498    }'
 499  
 500  # Comprehensive security evaluation
 501  curl -X POST http://localhost:8088/api/v1/app/taskapi/tasks \
 502    -H "Content-Type: application/json" \
 503    -d '{
 504      "type": "model_redteam_report",
 505      "content": {
 506        "model": [{"model": "gpt-4", "token": "sk-your-key"}],
 507        "eval_model": {"model": "gpt-4", "token": "sk-eval-key"},
 508        "dataset": {
 509          "dataFile": ["JADE-db-v3.0", "HarmfulEvalBenchmark"],
 510          "numPrompts": 500,
 511          "randomSeed": 123
 512        }
 513      }
 514    }'
 515  ```
 516  
 517  ---
 518  
 519  ### 4. AI Infra Scan API
 520  
 521  Used to scan AI infra for security vulnerabilities and configuration issues.
 522  
 523  #### Request Parameter Description
 524  | Parameter | Type | Required | Description |
 525  |-----------|------|----------|-------------|
 526  | target | array | Yes | List of target URLs to scan |
 527  | headers | object | No | Custom request headers |
 528  | timeout | integer | No | Request timeout (seconds), default 30 |
 529  | model | object | No | Model configuration for auxiliary analysis |
 530  | model.model | string | Yes | Model name, e.g., "gpt-4" |
 531  | model.token | string | Yes | API key |
 532  | model.base_url | string | No | Base URL, defaults to OpenAI API |
 533  
 534  #### Python Example
 535  ```python
 536  def ai_infra_scan():
 537      task_url = "http://localhost:8088/api/v1/app/taskapi/tasks"
 538      task_data = {
 539          "type": "ai_infra_scan",
 540          "content": {
 541              "target": [
 542                  "https://ai-service1.example.com",
 543                  "https://ai-service2.example.com"
 544              ],
 545              "headers": {
 546                  "Authorization": "Bearer your-token",
 547                  "User-Agent": "AI-Infra-Guard/1.0"
 548              },
 549              "timeout": 30,
 550              "model": {
 551                  "model": "gpt-4",
 552                  "token": "sk-your-api-key",
 553                  "base_url": "https://api.openai.com/v1"
 554              }
 555          }
 556      }
 557      
 558      response = requests.post(task_url, json=task_data)
 559      return response.json()
 560  
 561  # Usage example
 562  result = ai_infra_scan()
 563  print(f"AI infra scan task created successfully, session ID: {result['data']['session_id']}")
 564  ```
 565  
 566  #### cURL Example
 567  ```bash
 568  curl -X POST http://localhost:8088/api/v1/app/taskapi/tasks \
 569    -H "Content-Type: application/json" \
 570    -d '{
 571      "type": "ai_infra_scan",
 572      "content": {
 573        "target": [
 574          "https://ai-service1.example.com",
 575          "https://ai-service2.example.com"
 576        ],
 577        "headers": {
 578          "Authorization": "Bearer your-token",
 579          "User-Agent": "AI-Infra-Guard/1.0"
 580        },
 581        "timeout": 30,
 582        "model": {
 583          "model": "gpt-4",
 584          "token": "sk-your-api-key",
 585          "base_url": "https://api.openai.com/v1"
 586        }
 587      }
 588    }'
 589  ```
 590  
 591  ---
 592  
 593  ## Model Management API
 594  
 595  ### 1. Get Model List
 596  
 597  #### Interface Information
 598  - **URL**: `/api/v1/app/models`
 599  - **Method**: `GET`
 600  - **Content-Type**: `application/json`
 601  
 602  #### Response Fields
 603  | Field | Type | Description |
 604  |-------|------|-------------|
 605  | model_id | string | Model ID |
 606  | model | object | Model configuration information |
 607  | model.model | string | Model name |
 608  | model.token | string | API key (masked as ********) |
 609  | model.base_url | string | Base URL |
 610  | model.note | string | Note information |
 611  | model.limit | integer | Request limit |
 612  | default | array | Default field (only for YAML configuration models) |
 613  
 614  #### Python Example
 615  ```python
 616  import requests
 617  
 618  def get_model_list():
 619      url = "http://localhost:8088/api/v1/app/models"
 620      headers = {
 621          "Content-Type": "application/json"
 622      }
 623      
 624      response = requests.get(url, headers=headers)
 625      return response.json()
 626  
 627  # Usage example
 628  result = get_model_list()
 629  if result['status'] == 0:
 630      print("Model list retrieved successfully:")
 631      for model in result['data']:
 632          print(f"Model ID: {model['model_id']}")
 633          print(f"Model Name: {model['model']['model']}")
 634          print(f"Base URL: {model['model']['base_url']}")
 635          print(f"Note: {model['model']['note']}")
 636          print("---")
 637  ```
 638  
 639  #### cURL Example
 640  ```bash
 641  curl -X GET http://localhost:8088/api/v1/app/models \
 642    -H "Content-Type: application/json"
 643  ```
 644  
 645  #### Response Example
 646  ```json
 647  {
 648    "status": 0,
 649    "message": "获取模型列表成功",
 650    "data": [
 651      {
 652        "model_id": "gpt4-model",
 653        "model": {
 654          "model": "gpt-4",
 655          "token": "********",
 656          "base_url": "https://api.openai.com/v1",
 657          "note": "GPT-4 Model",
 658          "limit": 1000
 659        }
 660      },
 661      {
 662        "model_id": "system_default",
 663        "model": {
 664          "model": "deepseek-chat",
 665          "token": "********",
 666          "base_url": "https://api.deepseek.com/v1",
 667          "note": "System Default Model",
 668          "limit": 1000
 669        },
 670        "default": ["mcp_scan", "ai_infra_scan"]
 671      }
 672    ]
 673  }
 674  ```
 675  
 676  ### 2. Get Model Detail
 677  
 678  #### Interface Information
 679  - **URL**: `/api/v1/app/models/{modelId}`
 680  - **Method**: `GET`
 681  - **Content-Type**: `application/json`
 682  
 683  #### Parameter Description
 684  | Parameter | Type | Required | Description |
 685  |-----------|------|----------|-------------|
 686  | modelId | string | Yes | Model ID (path parameter) |
 687  
 688  #### Response Fields
 689  | Field | Type | Description |
 690  |-------|------|-------------|
 691  | model_id | string | Model ID |
 692  | model | object | Model configuration information |
 693  | model.model | string | Model name |
 694  | model.token | string | API key (masked as ********) |
 695  | model.base_url | string | Base URL |
 696  | model.note | string | Note information |
 697  | model.limit | integer | Request limit |
 698  | default | array | Default field (only for YAML configuration models) |
 699  
 700  #### Python Example
 701  ```python
 702  def get_model_detail(model_id):
 703      url = f"http://localhost:8088/api/v1/app/models/{model_id}"
 704      headers = {
 705          "Content-Type": "application/json"
 706      }
 707      
 708      response = requests.get(url, headers=headers)
 709      return response.json()
 710  
 711  # Usage example
 712  result = get_model_detail("gpt4-model")
 713  if result['status'] == 0:
 714      model_data = result['data']
 715      print(f"Model ID: {model_data['model_id']}")
 716      print(f"Model Name: {model_data['model']['model']}")
 717      print(f"Base URL: {model_data['model']['base_url']}")
 718      print(f"Note: {model_data['model']['note']}")
 719  ```
 720  
 721  #### cURL Example
 722  ```bash
 723  curl -X GET http://localhost:8088/api/v1/app/models/gpt4-model \
 724    -H "Content-Type: application/json"
 725  ```
 726  
 727  #### Response Example
 728  ```json
 729  {
 730    "status": 0,
 731    "message": "Get model detail successfully",
 732    "data": {
 733      "model_id": "gpt4-model",
 734      "model": {
 735        "model": "gpt-4",
 736        "token": "********",
 737        "base_url": "https://api.openai.com/v1",
 738        "note": "GPT-4 Model",
 739        "limit": 1000
 740      }
 741    }
 742  }
 743  ```
 744  
 745  ### 3. Create Model
 746  
 747  #### Interface Information
 748  - **URL**: `/api/v1/app/models`
 749  - **Method**: `POST`
 750  - **Content-Type**: `application/json`
 751  
 752  #### Request Parameters
 753  | Parameter | Type | Required | Description |
 754  |-----------|------|----------|-------------|
 755  | model_id | string | Yes | Model ID, globally unique |
 756  | model | object | Yes | Model configuration information |
 757  | model.model | string | Yes | Model name |
 758  | model.token | string | Yes | API key |
 759  | model.base_url | string | Yes | Base URL |
 760  | model.note | string | No | Note information |
 761  | model.limit | integer | No | Request limit, default 1000 |
 762  
 763  #### Python Example
 764  ```python
 765  def create_model():
 766      url = "http://localhost:8088/api/v1/app/models"
 767      headers = {
 768          "Content-Type": "application/json"
 769      }
 770      data = {
 771          "model_id": "my-gpt4-model",
 772          "model": {
 773              "model": "gpt-4",
 774              "token": "sk-your-api-key-here",
 775              "base_url": "https://api.openai.com/v1",
 776              "note": "My GPT-4 Model",
 777              "limit": 2000
 778          }
 779      }
 780      
 781      response = requests.post(url, json=data, headers=headers)
 782      return response.json()
 783  
 784  # Usage example
 785  result = create_model()
 786  if result['status'] == 0:
 787      print("Model created successfully")
 788  else:
 789      print(f"Model creation failed: {result['message']}")
 790  ```
 791  
 792  #### cURL Example
 793  ```bash
 794  curl -X POST http://localhost:8088/api/v1/app/models \
 795    -H "Content-Type: application/json" \
 796    -d '{
 797      "model_id": "my-gpt4-model",
 798      "model": {
 799        "model": "gpt-4",
 800        "token": "sk-your-api-key-here",
 801        "base_url": "https://api.openai.com/v1",
 802        "note": "My GPT-4 Model",
 803        "limit": 2000
 804      }
 805    }'
 806  ```
 807  
 808  #### Response Example
 809  ```json
 810  {
 811    "status": 0,
 812    "message": "Model created successfully",
 813    "data": null
 814  }
 815  ```
 816  
 817  ### 4. Update Model
 818  
 819  #### Interface Information
 820  - **URL**: `/api/v1/app/models/{modelId}`
 821  - **Method**: `PUT`
 822  - **Content-Type**: `application/json`
 823  
 824  #### Parameter Description
 825  | Parameter | Type | Required | Description |
 826  |-----------|------|----------|-------------|
 827  | modelId | string | Yes | Model ID (path parameter) |
 828  | model | object | Yes | Model configuration information |
 829  | model.model | string | No | Model name |
 830  | model.token | string | No | API key (pass ******** or empty to keep original value) |
 831  | model.base_url | string | No | Base URL |
 832  | model.note | string | No | Note information |
 833  | model.limit | integer | No | Request limit |
 834  
 835  **Note**: 
 836  - If the token field is passed as `********` or empty, the token will not be updated and the original value will be kept
 837  - Supports partial field updates; fields not passed will retain their original values
 838  
 839  #### Python Example
 840  ```python
 841  def update_model(model_id):
 842      url = f"http://localhost:8088/api/v1/app/models/{model_id}"
 843      headers = {
 844          "Content-Type": "application/json"
 845      }
 846      # Only update note and limit, don't modify token
 847      data = {
 848          "model": {
 849              "model": "gpt-4-turbo",
 850              "token": "********",  # Keep original token
 851              "base_url": "https://api.openai.com/v1",
 852              "note": "Updated note information",
 853              "limit": 3000
 854          }
 855      }
 856      
 857      response = requests.put(url, json=data, headers=headers)
 858      return response.json()
 859  
 860  # Usage example
 861  result = update_model("my-gpt4-model")
 862  if result['status'] == 0:
 863      print("Model updated successfully")
 864  else:
 865      print(f"Model update failed: {result['message']}")
 866  ```
 867  
 868  #### Update Token Example
 869  ```python
 870  def update_model_token(model_id, new_token):
 871      url = f"http://localhost:8088/api/v1/app/models/{model_id}"
 872      data = {
 873          "model": {
 874              "model": "gpt-4",
 875              "token": new_token,  # Pass new token
 876              "base_url": "https://api.openai.com/v1",
 877              "note": "Updated API key",
 878              "limit": 2000
 879          }
 880      }
 881      
 882      response = requests.put(url, json=data)
 883      return response.json()
 884  ```
 885  
 886  #### cURL Example
 887  ```bash
 888  # Only update note information
 889  curl -X PUT http://localhost:8088/api/v1/app/models/my-gpt4-model \
 890    -H "Content-Type: application/json" \
 891    -d '{
 892      "model": {
 893        "model": "gpt-4-turbo",
 894        "token": "********",
 895        "base_url": "https://api.openai.com/v1",
 896        "note": "Updated note information",
 897        "limit": 3000
 898      }
 899    }'
 900  
 901  # Update token
 902  curl -X PUT http://localhost:8088/api/v1/app/models/my-gpt4-model \
 903    -H "Content-Type: application/json" \
 904    -d '{
 905      "model": {
 906        "model": "gpt-4",
 907        "token": "sk-new-api-key-here",
 908        "base_url": "https://api.openai.com/v1",
 909        "note": "Updated API key",
 910        "limit": 2000
 911      }
 912    }'
 913  ```
 914  
 915  #### Response Example
 916  ```json
 917  {
 918    "status": 0,
 919    "message": "Model updated successfully",
 920    "data": null
 921  }
 922  ```
 923  
 924  ### 5. Delete Model
 925  
 926  #### Interface Information
 927  - **URL**: `/api/v1/app/models`
 928  - **Method**: `DELETE`
 929  - **Content-Type**: `application/json`
 930  
 931  #### Request Parameters
 932  | Parameter | Type | Required | Description |
 933  |-----------|------|----------|-------------|
 934  | model_ids | array | Yes | List of model IDs to delete, supports batch deletion |
 935  
 936  #### Python Example
 937  ```python
 938  def delete_models(model_ids):
 939      url = "http://localhost:8088/api/v1/app/models"
 940      headers = {
 941          "Content-Type": "application/json"
 942      }
 943      data = {
 944          "model_ids": model_ids
 945      }
 946      
 947      response = requests.delete(url, json=data, headers=headers)
 948      return response.json()
 949  
 950  # Delete single model
 951  result = delete_models(["my-gpt4-model"])
 952  if result['status'] == 0:
 953      print("Model deleted successfully")
 954  
 955  # Batch delete multiple models
 956  result = delete_models(["model1", "model2", "model3"])
 957  if result['status'] == 0:
 958      print("Batch deletion successful")
 959  ```
 960  
 961  #### cURL Example
 962  ```bash
 963  # Delete single model
 964  curl -X DELETE http://localhost:8088/api/v1/app/models \
 965    -H "Content-Type: application/json" \
 966    -d '{
 967      "model_ids": ["my-gpt4-model"]
 968    }'
 969  
 970  # Batch delete multiple models
 971  curl -X DELETE http://localhost:8088/api/v1/app/models \
 972    -H "Content-Type: application/json" \
 973    -d '{
 974      "model_ids": ["model1", "model2", "model3"]
 975    }'
 976  ```
 977  
 978  #### Response Example
 979  ```json
 980  {
 981    "status": 0,
 982    "message": "Deletion successful",
 983    "data": null
 984  }
 985  ```
 986  
 987  ### 6. YAML Configuration Models
 988  
 989  In addition to database models created through the API, the system also supports defining system-level models through YAML configuration files.
 990  
 991  #### Configuration File Location
 992  `db/model.yaml`
 993  
 994  #### YAML Configuration Format
 995  ```yaml
 996  - model_id: system_default
 997    model_name: deepseek-chat
 998    token: sk-your-api-key
 999    base_url: https://api.deepseek.com/v1
1000    note: System Default Model
1001    limit: 1000
1002    default:
1003      - mcp_scan
1004      - ai_infra_scan
1005  
1006  - model_id: eval_model
1007    model_name: gpt-4
1008    token: sk-your-eval-key
1009    base_url: https://api.openai.com/v1
1010    note: Evaluation Model
1011    limit: 2000
1012    default:
1013      - model_redteam_report
1014  ```
1015  
1016  #### Field Description
1017  | Field | Type | Required | Description |
1018  |-------|------|----------|-------------|
1019  | model_id | string | Yes | Model ID |
1020  | model_name | string | Yes | Model name |
1021  | token | string | Yes | API key |
1022  | base_url | string | Yes | Base URL |
1023  | note | string | No | Note information |
1024  | limit | integer | No | Request limit |
1025  | default | array | No | List of task types that use this model by default |
1026  
1027  #### Feature Description
1028  - YAML configuration models are **read-only** and cannot be modified or deleted through the API
1029  - YAML configuration models are merged with database models when retrieving lists and details
1030  - The `default` field is unique to YAML models and is used to identify the default task types for which the model is applicable
1031  - YAML configuration is automatically loaded when the system starts
1032  
1033  ---
1034  
1035  ## Task Status Query
1036  
1037  ### Get Task Status
1038  
1039  #### Interface Information
1040  - **URL**: `/api/v1/app/taskapi/status/{id}`
1041  - **Method**: `GET`
1042  
1043  #### Parameter Description
1044  | Parameter | Type | Required | Description |
1045  |-----------|------|----------|-------------|
1046  | id | string | Yes | Task session ID |
1047  
1048  #### Response Fields
1049  | Field | Type | Description |
1050  |-------|------|-------------|
1051  | session_id | string | Task session ID |
1052  | status | string | Task status: pending, running, completed, failed |
1053  | title | string | Task title |
1054  | created_at | integer | Creation timestamp (milliseconds) |
1055  | updated_at | integer | Update timestamp (milliseconds) |
1056  | log | string | Task execution log |
1057  
1058  #### Python Example
1059  ```python
1060  def get_task_status(session_id):
1061      url = f"http://localhost:8088/api/v1/app/taskapi/status/{session_id}"
1062      response = requests.get(url)
1063      return response.json()
1064  
1065  # Usage example
1066  status = get_task_status("550e8400-e29b-41d4-a716-446655440000")
1067  print(f"Task status: {status['data']['status']}")
1068  print(f"Execution log: {status['data']['log']}")
1069  ```
1070  
1071  #### cURL Example
1072  ```bash
1073  curl -X GET http://localhost:8088/api/v1/app/taskapi/status/550e8400-e29b-41d4-a716-446655440000
1074  ```
1075  
1076  ### Get Task Results
1077  
1078  #### Interface Information
1079  - **URL**: `/api/v1/app/taskapi/result/{id}`
1080  - **Method**: `GET`
1081  
1082  #### Parameter Description
1083  | Parameter | Type | Required | Description |
1084  |-----------|------|----------|-------------|
1085  | id | string | Yes | Task session ID |
1086  
1087  #### Response Description
1088  Returns detailed scan results, including:
1089  - List of discovered vulnerabilities
1090  - Security assessment report
1091  - Remediation recommendations
1092  - Risk level assessment
1093  
1094  #### Python Example
1095  ```python
1096  def get_task_result(session_id):
1097      url = f"http://localhost:8088/api/v1/app/taskapi/result/{session_id}"
1098      response = requests.get(url)
1099      return response.json()
1100  
1101  # Usage example
1102  result = get_task_result("550e8400-e29b-41d4-a716-446655440000")
1103  if result['status'] == 0:
1104      print("Scan results:")
1105      print(json.dumps(result['data'], indent=2, ensure_ascii=False))
1106  else:
1107      print(f"Failed to get results: {result['message']}")
1108  ```
1109  
1110  #### cURL Example
1111  ```bash
1112  curl -X GET http://localhost:8088/api/v1/app/taskapi/result/550e8400-e29b-41d4-a716-446655440000
1113  ```
1114  
1115  ---
1116  
1117  ## Complete Workflow Examples
1118  
1119  ### Complete MCP Source Code Scanning Workflow
1120  
1121  ```python
1122  import requests
1123  import time
1124  import json
1125  
1126  def complete_mcp_scan_workflow():
1127      base_url = "http://localhost:8088"
1128      
1129      # 1. Upload source code file
1130      print("1. Uploading source code file...")
1131      upload_url = f"{base_url}/api/v1/app/taskapi/upload"
1132      with open("mcp_source.zip", 'rb') as f:
1133          files = {'file': f}
1134          upload_response = requests.post(upload_url, files=files)
1135      
1136      if upload_response.json()['status'] != 0:
1137          raise Exception("File upload failed")
1138      
1139      fileUrl = upload_response.json()['data']['fileUrl']
1140      print(f"File uploaded successfully: {fileUrl}")
1141      
1142      # 2. Create MCP scan task
1143      print("2. Creating MCP scan task...")
1144      task_url = f"{base_url}/api/v1/app/taskapi/tasks"
1145      task_data = {
1146          "type": "mcp_scan",
1147          "content": {
1148              "prompt": "Scan this MCP server",
1149              "model": {
1150                  "model": "gpt-4",
1151                  "token": "sk-your-api-key",
1152                  "base_url": "https://api.openai.com/v1"
1153              },
1154              "thread": 4,
1155              "language": "zh",
1156              "attachments": fileUrl
1157          }
1158      }
1159      
1160      task_response = requests.post(task_url, json=task_data)
1161      if task_response.json()['status'] != 0:
1162          raise Exception("Task creation failed")
1163      
1164      session_id = task_response.json()['data']['session_id']
1165      print(f"Task created successfully, session ID: {session_id}")
1166      
1167      # 3. Poll task status
1168      print("3. Monitoring task execution...")
1169      status_url = f"{base_url}/api/v1/app/taskapi/status/{session_id}"
1170      
1171      while True:
1172          status_response = requests.get(status_url)
1173          status_data = status_response.json()
1174          
1175          if status_data['status'] != 0:
1176              raise Exception("Failed to get task status")
1177          
1178          task_status = status_data['data']['status']
1179          print(f"Current status: {task_status}")
1180          
1181          if task_status == "completed":
1182              print("Task execution completed!")
1183              break
1184          elif task_status == "failed":
1185              raise Exception("Task execution failed")
1186          
1187          time.sleep(10)  # Wait 10 seconds before checking again
1188      
1189      # 4. Get scan results
1190      print("4. Getting scan results...")
1191      result_url = f"{base_url}/api/v1/app/taskapi/result/{session_id}"
1192      result_response = requests.get(result_url)
1193      
1194      if result_response.json()['status'] != 0:
1195          raise Exception("Failed to get scan results")
1196      
1197      scan_results = result_response.json()['data']
1198      print("Scan results:")
1199      print(json.dumps(scan_results, indent=2, ensure_ascii=False))
1200      
1201      return scan_results
1202  
1203  # Execute complete workflow
1204  if __name__ == "__main__":
1205      try:
1206          results = complete_mcp_scan_workflow()
1207          print("MCP Server Scan completed!")
1208      except Exception as e:
1209          print(f"Scan failed: {e}")
1210  ```
1211  
1212  ### Complete Jailbreak Evaluation Workflow
1213  
1214  ```python
1215  def complete_redteam_workflow():
1216      base_url = "http://localhost:8088"
1217      
1218      # 1. Create Jailbreak Evaluation task
1219      print("1. Creating Jailbreak Evaluation task...")
1220      task_url = f"{base_url}/api/v1/app/taskapi/tasks"
1221      task_data = {
1222          "type": "model_redteam_report",
1223          "content": {
1224              "model": [
1225                  {
1226                      "model": "gpt-4",
1227                      "token": "sk-your-api-key",
1228                      "base_url": "https://api.openai.com/v1"
1229                  }
1230              ],
1231              "eval_model": {
1232                  "model": "gpt-4",
1233                  "token": "sk-your-eval-key",
1234                  "base_url": "https://api.openai.com/v1"
1235              },
1236              "dataset": {
1237                  "dataFile": [
1238                      "JailBench-Tiny",
1239                      "JailbreakPrompts-Tiny",
1240                      "ChatGPT-Jailbreak-Prompts"
1241                  ],
1242                  "numPrompts": 100,
1243                  "randomSeed": 42
1244              }
1245          }
1246      }
1247      
1248      task_response = requests.post(task_url, json=task_data)
1249      if task_response.json()['status'] != 0:
1250          raise Exception("Task creation failed")
1251      
1252      session_id = task_response.json()['data']['session_id']
1253      print(f"Jailbreak Evaluation task created successfully, session ID: {session_id}")
1254      
1255      # 2. Monitor task execution
1256      print("2. Monitoring task execution...")
1257      status_url = f"{base_url}/api/v1/app/taskapi/status/{session_id}"
1258      
1259      while True:
1260          status_response = requests.get(status_url)
1261          status_data = status_response.json()
1262          
1263          if status_data['status'] != 0:
1264              raise Exception("Failed to get task status")
1265          
1266          task_status = status_data['data']['status']
1267          print(f"Current status: {task_status}")
1268          
1269          if task_status == "completed":
1270              print("Jailbreak Evaluation completed!")
1271              break
1272          elif task_status == "failed":
1273              raise Exception("Jailbreak Evaluation failed")
1274          
1275          time.sleep(30)  # Red team evaluation usually takes longer
1276      
1277      # 3. Get evaluation results
1278      print("3. Getting evaluation results...")
1279      result_url = f"{base_url}/api/v1/app/taskapi/result/{session_id}"
1280      result_response = requests.get(result_url)
1281      
1282      if result_response.json()['status'] != 0:
1283          raise Exception("Failed to get evaluation results")
1284      
1285      redteam_results = result_response.json()['data']
1286      print("Jailbreak Evaluation results:")
1287      print(json.dumps(redteam_results, indent=2, ensure_ascii=False))
1288      
1289      return redteam_results
1290  
1291  # Execute Jailbreak Evaluation workflow
1292  if __name__ == "__main__":
1293      try:
1294          results = complete_redteam_workflow()
1295          print("Jailbreak Evaluation completed!")
1296      except Exception as e:
1297          print(f"Jailbreak Evaluation failed: {e}")
1298  ```
1299  
1300  ## Error Handling
1301  
1302  ### Common Error Codes
1303  | Status Code | Description | Solution |
1304  |-------------|-------------|----------|
1305  | 0 | Success | - |
1306  | 1 | Failure | Check the message field for detailed error information |
1307  
1308  ### Error Handling Example
1309  ```python
1310  def handle_api_response(response):
1311      """Common function for handling API responses"""
1312      data = response.json()
1313      
1314      if data['status'] == 0:
1315          return data['data']
1316      else:
1317          raise Exception(f"API call failed: {data['message']}")
1318  
1319  # Usage example
1320  try:
1321      result = handle_api_response(response)
1322      print("Operation successful:", result)
1323  except Exception as e:
1324      print("Operation failed:", str(e))
1325  ```
1326  
1327  ## Important Notes
1328  
1329  ### General Notes
1330  1. **Authentication**: Ensure correct authentication information is included in request headers
1331  2. **File Size**: File upload size limits please refer to server configuration
1332  3. **Timeout Settings**: Set reasonable timeout times based on task complexity
1333  4. **Concurrency Limits**: Avoid creating too many tasks simultaneously to prevent affecting system performance
1334  5. **Result Saving**: Save scan results promptly to avoid data loss
1335  
1336  ### Task-Related Notes
1337  6. **Dataset Selection**: Choose appropriate dataset combinations based on testing requirements
1338  7. **Model Configuration**: Ensure test model and evaluation model configurations are correct
1339  
1340  ### Model Management Notes
1341  8. **Model ID Uniqueness**: When creating a model, the model_id must be globally unique
1342  9. **Token Security**: API keys are automatically masked as `********` in responses; pay attention to this when displaying and editing on the frontend
1343  10. **Token Updates**: When updating a model, if the token field is empty or `********`, the token will not be updated and the original value will be kept
1344  11. **Model Validation**: The system automatically validates the token and base_url when creating a model
1345  12. **YAML Models**: Models configured through YAML are read-only and cannot be modified or deleted through the API
1346  13. **Batch Deletion**: Model deletion supports passing multiple model_ids for batch deletion
1347  14. **Permission Control**: Only the creator of a model can view, modify, and delete that model
1348  
1349  ## Technical Support
1350  
1351  For any issues, please contact the technical support team or refer to the project documentation.