/ .kamaji_index.json
{
  "created_at": "2025-10-29T02:19:01.780272360+00:00",
  "documents": [
    "=== README.md ===\n# LangChain with Ollama - Complete Setup\n\nThis folder contains examples of using LangChain with your Ollama server at `http://192.222.50.154:11434`.\n\n## Setup\n\n### 1. Activate Virtual Environment\n\n```bash\nsource venv/bin/activate\n```\n\n### 2. Install Dependencies (already done)\n\n```bash\npip install -r requirements.txt\n```\n\n## Examples\n\n### 1. Basic Client (`1_basic_client.py`)\n\nSimple LLM interactions demonstrating:\n- Basic queries\n- Code generation\n- Streaming responses\n\n**Run it:**\n```bash\npython 1_basic_client.py\n```\n\n### 2. RAG Example (`2_rag_example.py`)\n\nRetrieval Augmented Generation (RAG) demonstrating:\n- Loading text documents\n- Creating vector embeddings\n- Querying documents with natural language\n- Retrieving source documents\n\n**Run it:**\n```bash\npython 2_rag_example.py\n```\n\n**What is RAG?**\nRAG allows the LLM to answer questions based on your specific documents, not just its training data. Perfect for:\n- Company knowledge bases\n- Product documentation\n- Research papers\n- Customer support\n\n### 3. Conversational Memory (`3_conversational_memory.py`)\n\nBuilding chatbots that remember conversation history:\n- Buffer memory (stores entire conversation)\n- Summary memory (for long conversations)\n- Context-aware responses\n\n**Run it:**\n```bash\npython 3_conversational_memory.py\n```\n\n**Use cases:**\n- Customer service chatbots\n- Personal assistants\n- Educational tutors\n\n### 4. Agent with Tools (`4_agent_with_tools.py`)\n\nAI agents that can use tools to accomplish tasks:\n- Calculator tool\n- String manipulation tools\n- ReAct reasoning pattern\n- Tool selection and execution\n\n**Run it:**\n```bash\npython 4_agent_with_tools.py\n```\n\n**Use cases:**\n- Task automation\n- Data analysis\n- Complex problem solving\n- API integrations\n\n### 5. PDF RAG (`5_pdf_rag.py`)\n\nQuery PDF documents using RAG:\n- Load PDF files\n- Create searchable indexes\n- Save/load indexes for reuse\n- Extract information from documents\n\n**Setup:**\n1. Place a PDF file in the Language folder\n2. Update the `pdf_path` variable in the script\n3. Run the script\n\n**Use cases:**\n- Document Q&A\n- Research paper analysis\n- Legal document review\n- Contract analysis\n\n## Project Structure\n\n```\nLanguage/\n├── venv/ # Virtual environment\n├── requirements.txt # Python dependencies\n├── README.md # This file\n├── 1_basic_client.py # Basic LLM usage\n├── 2_rag_example.py # RAG with text documents\n├── 3_conversational_memory.py # Chatbot with memory\n├── 4_agent_with_tools.py # AI agent with tools\n└── 5_pdf_rag.py # RAG with PDF files\n```\n\n## Key Concepts\n\n### LangChain Components\n\n1. **LLMs** - Large Language Models (your Ollama server)\n2. **Prompts** - Templates for structuring inputs\n3. **Chains** - Combining LLMs with other components\n4. **Agents** - LLMs that can use tools and make decisions\n5. **Memory** - Storing conversation history\n6. **Vector Stores** - Storing and searching document embeddings\n7. **Retrievers** - Finding relevant documents\n\n### When to Use What\n\n| Task | Example | Best Choice |\n|------|---------|-------------|\n| Simple Q&A | \"What is Python?\" | Basic Client |\n| Document Q&A | \"What does our policy say about X?\" | RAG |\n| Multi-turn conversation | Customer support chat | Conversational Memory |\n| Task automation | \"Calculate X and search for Y\" | Agent with Tools |\n| Large document analysis | Query 100-page PDF | PDF RAG |\n\n## Server Configuration\n\nAll examples connect to your Ollama server:\n- **URL:** `http://192.222.50.154:11434`\n- **Model:** `gpt-oss:120b`\n\nTo change the server or model, update these values in each Python file.\n\n## Common Patterns\n\n### Changing the Model\n\n```python\nllm = Ollama(\n model=\"your-model-name\", # Change this\n base_url=\"http://192.222.50.154:11434\"\n)\n```\n\n### Adjusting Temperature\n\n```python\nllm = Ollama(\n model=\"gpt-oss:120b\",\n base_url=\"http://192.222.50.154:11434\",\n temperature=0.7 # 0 = deterministic, 1 = creative\n)\n```\n\n### Adding More Documents to RAG\n\n```python\nmore_docs = [\n \"Your text here...\",\n \"More text...\",\n]\nvectorstore = create_vector_db_from_text(more_docs)\n```\n\n## Next Steps\n\n1. **Run the examples** to understand each concept\n2. **Modify them** for your specific use case\n3. **Combine patterns** (e.g., RAG + Conversational Memory)\n4. **Add your own tools** to the agent\n5. **Load your own documents** for RAG\n\n## Troubleshooting\n\n### Connection Issues\n```bash\n# Test if server is accessible\ncurl http://192.222.50.154:11434/api/generate -d '{\"model\": \"gpt-oss:120b\", \"prompt\": \"test\"}'\n```\n\n### Import Errors\n```bash\n# Reinstall dependencies\npip install -r requirements.txt --force-reinstall\n```\n\n### Memory Issues\n- For large documents, use FAISS instead of Chroma\n- Reduce chunk size in text splitters\n- Use summary memory for long conversations\n\n## Resources\n\n- [LangChain Documentation](https://python.langchain.com/docs/get_started/introduction)\n- [Ollama Documentation](https://ollama.ai/docs)\n- [LangChain RAG Tutorial](https://python.langchain.com/docs/use_cases/question_answering/)\n- [LangChain Agents Guide](https://python.langchain.com/docs/modules/agents/)\n\n## Tips\n\n1. **Start simple** - Begin with `1_basic_client.py`\n2. **Use RAG** for domain-specific knowledge\n3. **Agents** are powerful but complex - use for multi-step tasks\n4. **Memory** is essential for conversational applications\n5. **Save indexes** when working with large documents\n",
    "=== Cargo.toml ===\n[package]\nname = \"kamaji\"\nversion = \"1.1.0\"\nedition = \"2021\"\n\n[[bin]]\nname = \"kamaji\"\npath = \"src/main.rs\"\n\n[dependencies]\nratatui = \"0.24\"\ncrossterm = \"0.27\"\ntokio = { version = \"1.0\", features = [\"full\"] }\nserde = { version = \"1.0\", features = [\"derive\"] }\nserde_json = \"1.0\"\nreqwest = { version = \"0.11\", features = [\"json\", \"rustls-tls\", \"stream\"], default-features = false }\nanyhow = \"1.0\"\nclap = { version = \"4.0\", features = [\"derive\"] }\ndirs = \"5.0\"\nuuid = { version = \"1.0\", features = [\"v4\"] }\nfutures = \"0.3\"\nchrono = { version = \"0.4\", features = [\"serde\"] }\n"
  ],
  "files": [
    "README.md",
    "Cargo.toml"
  ],
  "total_chars": 6087
}