Prerequisites

  • Python 3.12+
  • Git with Git LFS
  • API keys for OpenAI and/or Google (Gemini)

Installation

1. Install from PyPI (recommended)

pip install leeroo-kapso
2. Or install from source (for wiki knowledge data)

git clone https://github.com/leeroo-ai/kapso.git
cd kapso

# Pull Git LFS files (wiki knowledge data)
git lfs install
git lfs pull

# Create conda environment (recommended)
conda create -n kapso_conda python=3.12
conda activate kapso_conda

# Install in development mode
pip install -e .
3. Set up API keys

Create a .env file in the project root:
OPENAI_API_KEY=your-openai-api-key
GOOGLE_API_KEY=your-google-api-key       # For Gemini models
ANTHROPIC_API_KEY=your-anthropic-api-key # For Claude Code
4. Connect Leeroopedia MCP (optional)

Give Kapso access to curated best practices from 1000+ ML/AI frameworks:
pip install leeroopedia-mcp
Sign up at app.leeroopedia.com for an API key ($20 free credit), then add it to your .env:
LEEROOPEDIA_API_KEY=kpsk_your_key_here

Run Your First Experiment

Option 1: Python API

from kapso.kapso import Kapso, DeployStrategy

# Initialize Kapso
kapso = Kapso()

# Build a solution; Kapso runs experiments automatically.
# The developer agent builds the evaluation dynamically, and the feedback
# generator validates results and decides when to stop.
solution = kapso.evolve(
    goal="Build a random forest classifier for the Iris dataset with accuracy > 0.9",
    max_iterations=10,
)

# Deploy and run
deployed_program = kapso.deploy(solution, strategy=DeployStrategy.LOCAL)
result = deployed_program.run({"data_path": "./test.csv"})

# Cleanup
deployed_program.stop()

Option 2: CLI

# Basic usage
kapso evolve --goal "Build a random forest classifier for the Iris dataset"

# With options
kapso evolve \
    --goal "Build a feature engineering pipeline for tabular data" \
    --iterations 10 \
    --coding-agent aider

# List available options
kapso --list-agents

Expected Output

============================================================
EVOLVING: Build a random forest classifier for the Iris dataset
============================================================
  Max iterations: 10
  Coding agent: from config

Running experiments...
Experiment 1: Developer agent implementing solution...
Experiment 1: Running evaluation in kapso_evaluation/...
Experiment 1: Feedback generator validating results...
Experiment 1 completed with cumulative cost: $0.125
####################################################################################################
Experiment with score 0.92:
# Solution: Random forest with GridSearchCV hyperparameter tuning...
# Feedback: Good progress! Accuracy 0.92 meets the > 0.9 target. Goal achieved.
####################################################################################################

============================================================
Evolution Complete
============================================================
Solution at: ./workspace
Experiments run: 1
Total cost: $0.125
Goal achieved: Yes

The system creates a git branch for each experiment and outputs the path to the best solution.

With Knowledge Graph (Optional)

For domain-specific context, index a knowledge graph first:
1. Start infrastructure

# Start Weaviate + Neo4j (required for KG)
./scripts/start_infra.sh
2. Index wiki pages (one-time setup)

from kapso.kapso import Kapso

kapso = Kapso()
kapso.index_kg(
    wiki_dir="data/wikis_llm_finetuning",
    save_to="data/indexes/llm_finetuning.index",
)
3. Use the indexed KG

from kapso.kapso import Kapso

# Load from existing index
kapso = Kapso(kg_index="data/indexes/llm_finetuning.index")

solution = kapso.evolve(
    goal="Fine-tune Llama-3.1-8B with QLoRA, target loss < 0.5",
    output_path="./models/qlora_v1",
)

Web Research (Optional)

Kapso can do deep web research before evolving:
from kapso.kapso import Kapso

kapso = Kapso()

# Research returns ResearchFindings with ideas and implementations
# mode: "idea" | "implementation" | "both" (default: "both")
# depth: "light" | "deep" (default: "deep")

findings = kapso.research(
    "unsloth FastLanguageModel example",
    mode="both",
    depth="deep",
)

# Use research as context for evolving
solution = kapso.evolve(
    goal="Fine-tune a model with Unsloth + LoRA",
    context=[findings.to_string()],
    output_path="./models/unsloth_v1",
)

Understanding the Output

After evolve() completes, you get a SolutionResult:
solution.goal             # Original goal
solution.code_path        # Path to generated code
solution.experiment_logs  # List of experiment summaries
solution.final_feedback   # FeedbackResult with stop decision and score
solution.metadata         # Cost, iterations, final evaluation

# Check if goal was achieved
if solution.succeeded:
    print(f"Goal achieved with score: {solution.final_score}")

The code is in a git repository with a branch for each experiment:
cd ./models/iris_v1
git branch -a
# * experiment_2  (best solution)
#   experiment_1
#   experiment_0
#   main
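The starred line in `git branch` output marks the checked-out branch, which here is the best solution. If you want to read it programmatically, a small parser sketch:

```python
def current_branch(branch_output: str) -> str:
    """Return the checked-out branch from `git branch -a` output (the '*' line)."""
    for line in branch_output.splitlines():
        if line.startswith("* "):
            return line[2:].strip()
    raise ValueError("no checked-out branch in output")
```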

Next Steps