# Quickstart

## **1. Install RagaAI Catalyst**

To install the RagaAI Catalyst package, run the following command in your terminal:

```bash
pip install ragaai-catalyst
```

## **2. Set Up Authentication Keys**

### **How to Get Your API Keys**
1. Log in to your account at [RagaAI Catalyst](https://catalyst.raga.ai/).
2. Navigate to **Profile Settings** → **Authentication**.
3. Click **Generate New Key** to obtain your **Access Key** and **Secret Key**.

![Generate Key]

### **Initialize the SDK**

To begin using Catalyst, initialize it as follows:

```python
from ragaai_catalyst import RagaAICatalyst

catalyst = RagaAICatalyst(
    access_key="YOUR_ACCESS_KEY",  # Replace with your access key
    secret_key="YOUR_SECRET_KEY",  # Replace with your secret key
    base_url="BASE_URL"
)
```

## **3. Create Your First Project**

Create a new project and choose a use case from the available options:

```python
# Create a new project
project = catalyst.create_project(
    project_name="Project_Name",
    usecase="Q/A"  # Options: Chatbot, Q/A, Others, Agentic Application
)

# List available use cases
print(catalyst.project_use_cases())
```

![Create Project]

### **Add a Dataset**

Initialize the dataset manager and create a dataset from a CSV file, DataFrame, or JSONL file.

Define a **schema mapping** for the dataset:

```python
from ragaai_catalyst import Dataset

# Initialize dataset manager
dataset_manager = Dataset(project_name="Project_Name")

# Create dataset from a CSV file
dataset_manager.create_from_csv(
    csv_path="path/to/your.csv",
    dataset_name="MyDataset",
    schema_mapping={
        'column1': 'schema_element1',
        'column2': 'schema_element2'
    }
)

# View dataset schema
print(dataset_manager.get_schema_mapping())
```

![Dataset]

## **4. Trace Your Application**

### **Auto-Instrumentation**

Auto-instrumentation traces your application automatically once the appropriate tracer is initialized.

#### **Implementation**

```python
from ragaai_catalyst import init_tracing, Tracer

# Initialize the tracer
tracer = Tracer(
    project_name="Project_Name",
    dataset_name="Dataset_Name",
    tracer_type="agentic/langgraph"
)

# Enable auto-instrumentation
init_tracing(catalyst=catalyst, tracer=tracer)
```

#### **Supported Tracer Types**

Choose the supported tracer type that matches your framework:

- `agentic/langgraph`
- `agentic/langchain`
- `agentic/smolagents`
- `agentic/openai_agents`
- `agentic/llamaindex`
- `agentic/haystack`

---

### Custom Tracing

You can enable custom tracing in two ways:

1. Using the tracer as a context manager with `with tracer():`.
2. Manually starting and stopping the tracer with `tracer.start()` and `tracer.stop()`.

```python
from ragaai_catalyst import Tracer

# Initialize production tracer
tracer = Tracer(
    project_name="Project_Name",
    dataset_name="tracer_dataset_name",
    tracer_type="tracer_type"
)

# Start a trace recording (Option 1)
with tracer():
    pass  # Your code here

# Start a trace recording (Option 2)
tracer.start()

# Your code here

# Stop the trace recording
tracer.stop()

# Verify data capture
print(tracer.get_upload_status())
```

![Trace]

## **5. Evaluation Framework**

1. Import `Evaluation` from `ragaai_catalyst`.
2. Configure evaluation metrics.
3. Add metrics from the available options.
4. Check the status and retrieve results after running the evaluation.
```python
from ragaai_catalyst import Evaluation

# Initialize evaluation engine
evaluation = Evaluation(
    project_name="Project_Name",
    dataset_name="MyDataset"
)

# Define schema mapping
schema_mapping = {
    'Query': 'prompt',
    'response': 'response',
    'Context': 'context',
    'expectedResponse': 'expected_response'
}

evaluation.add_metrics(
    metrics=[
        {
            "name": "Faithfulness",
            "config": {"model": "gpt-4o-mini", "provider": "openai", "threshold": {"gte": 0.232323}},
            "column_name": "Faithfulness_v1",
            "schema_mapping": schema_mapping
        }
    ]
)

# Get status and results
print(f"Status: {evaluation.get_status()}")
print(f"Results: {evaluation.get_results()}")
```
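The schema mapping above assumes a dataset whose columns are named `Query`, `response`, `Context`, and `expectedResponse`. As a minimal sketch using only the standard library (the filename and sample rows are placeholders, not part of the Catalyst API), you could prepare a matching CSV before uploading it with `create_from_csv`:

```python
import csv

# Rows whose keys match the columns referenced in schema_mapping:
# 'Query' -> prompt, 'response' -> response,
# 'Context' -> context, 'expectedResponse' -> expected_response
rows = [
    {
        "Query": "What does the tracer record?",
        "response": "It records spans for each traced call in your application.",
        "Context": "The tracer captures application activity between start and stop.",
        "expectedResponse": "Spans for each call made while tracing is active.",
    },
]

# Write a CSV with a header row matching the schema mapping's column names
with open("eval_rows.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(
        f, fieldnames=["Query", "response", "Context", "expectedResponse"]
    )
    writer.writeheader()
    writer.writerows(rows)
```

The resulting `eval_rows.csv` can then be passed as `csv_path` in the dataset step, with each column name mapped to its schema element.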