Capstone projects built across our cohorts.
These projects show the kind of applied work learners complete across engineering, product, and no-code AI tracks. Explore the work, the learner context, and the cohorts that shaped each capstone.
Projects span advanced engineering, product strategy, and no-code workflow building.
Each capstone is designed to be a tangible proof point learners can discuss professionally.
Every project connects back to a specific cohort path, learning arc, and final output expectation.
Featured Technical Executions
A curated archive of learner work across our cohorts. Each card shows who built the project, what problem it solves, and which program it came out of.
The program where the project was developed and completed.
The builder's role or background to help you understand who the project was for.
A quick read on what was built, why it matters, and which tools were used.
Autonomous RFP Responder
Automates complex RFP drafting and review for enterprise teams.
Product Manager
An advanced multi-agent orchestration system designed to automate the high-stakes B2B RFP (Request for Proposal) process. The system uses a recursive retrieval-augmented generation (RAG) strategy to ingest thousands of legacy documentation pages and technical specs, synthesizing compliant, context-aware responses in minutes. It includes a verification loop in which a 'Reviewer Agent' audits each draft for technical accuracy against golden datasets.
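The draft-then-verify loop can be sketched roughly as below. Everything here, `draft_response`, `audit_draft`, and `GOLDEN_FACTS`, is a hypothetical stand-in for the RAG-backed drafter agent, the Reviewer Agent, and the project's golden dataset:

```python
# Rough sketch of a draft/verify loop; functions and facts are
# invented stand-ins for the LLM-backed agents described above.

GOLDEN_FACTS = {"uptime_sla": "99.9%"}

def draft_response(question: str, facts: dict) -> str:
    # In the real system this is a RAG call over legacy docs and specs.
    return f"Our platform guarantees {facts['uptime_sla']} uptime."

def audit_draft(draft: str, facts: dict) -> bool:
    # The Reviewer Agent verifies the draft against golden facts.
    return all(value in draft for value in facts.values())

def respond(question: str, max_retries: int = 2) -> str:
    # Redraft until the reviewer approves, or give up.
    for _ in range(max_retries + 1):
        draft = draft_response(question, GOLDEN_FACTS)
        if audit_draft(draft, GOLDEN_FACTS):
            return draft
    raise RuntimeError("Reviewer rejected all drafts")
```

The key design point is that the reviewer gates every draft, so an inaccurate response is redrafted rather than shipped.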
Miniature Code Assistant (1.5B)
Builds a specialized code-generation model for IDE completion workflows.
Backend Engineer
A custom-trained 1.5B-parameter transformer model specialized for high-throughput Python code generation. Rahul executed the entire pipeline, from writing a custom BPE tokenizer to curating a 10GB 'Fine-Code' dataset. The model was trained with Distributed Data Parallel (DDP) across 4× A100 GPUs and uses Flash Attention 2 for efficient long-context inference during real-time IDE completion tasks.
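As an illustration of the tokenizer step, here is a minimal byte-pair-encoding merge loop in pure Python. The toy corpus and merge count are invented for the example; a production tokenizer adds byte-level handling, special tokens, and a serialized merge table:

```python
from collections import Counter

def pair_counts(vocab):
    # Count adjacent symbol pairs, weighted by word frequency.
    counts = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for pair in zip(symbols, symbols[1:]):
            counts[pair] += freq
    return counts

def merge_pair(pair, vocab):
    # Replace every occurrence of the pair with its merged symbol.
    a, b = pair
    merged = {}
    for word, freq in vocab.items():
        symbols, out, i = word.split(), [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == (a, b):
                out.append(a + b)
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[" ".join(out)] = freq
    return merged

# Toy corpus: words as space-separated characters with an end-of-word mark.
vocab = {"d e f </w>": 10, "d e l </w>": 5, "r e t u r n </w>": 8}
for _ in range(3):
    counts = pair_counts(vocab)
    vocab = merge_pair(max(counts, key=counts.get), vocab)
```

After three merges the frequent keyword `def` collapses into a single token, which is exactly why BPE suits code corpora with heavily repeated identifiers.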
Customer Support Auto-Triage
Routes and classifies large-scale support traffic without manual triage.
Operations Lead
A production-grade autonomous triage system that manages a 50,000+ monthly ticket volume without human intervention. Built entirely using no-code state machines, it performs semantic classification of incoming Zendesk tickets, extracts high-priority entities (Order IDs, SLA tiers), and routes them to specialized AI sub-agents. The system includes an automated 'Human-in-the-loop' handoff for high-churn risk scenarios.
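The production system expresses its logic as no-code state-machine steps, but the routing it encodes can be illustrated in Python. The rules, queue names, and Order ID format below are all hypothetical:

```python
import re

# Hypothetical sub-agent queues; the real system defines these
# as no-code state-machine branches, not Python.
SUB_AGENTS = {"billing": "billing-agent", "shipping": "shipping-agent"}

def triage(ticket: str) -> dict:
    # Extract high-priority entities such as Order IDs (invented format).
    order_id = re.search(r"\bORD-\d{6}\b", ticket)
    # Semantic classification is an LLM step in production;
    # keyword matching stands in for it here.
    text = ticket.lower()
    if "refund" in text or "charge" in text:
        queue = SUB_AGENTS["billing"]
    elif "delivery" in text:
        queue = SUB_AGENTS["shipping"]
    else:
        queue = "human-review"  # human-in-the-loop handoff
    return {"queue": queue, "order_id": order_id.group() if order_id else None}
```

The fallback branch mirrors the handoff described above: anything the classifier cannot place confidently escalates to a human rather than being misrouted.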
Financial Report Extractor
Extracts financial reporting data for downstream analysis and dashboards.
Data Scientist
A specialized RAG engine designed to parse and structure complex SEC 10-K and 10-Q filings. Unlike standard RAG, this system uses LlamaParse to preserve table hierarchies and accurately extract nested financial metrics. A custom evaluation pipeline built with G-Eval ensures that numeric extractions have zero margin for error before being pushed to downstream investment dashboards.
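The zero-margin policy amounts to exact matching against golden values. A hypothetical sketch of such a check follows; the metric names and figures are invented, and the real pipeline uses G-Eval rather than this hand-rolled comparison:

```python
def exact_numeric_check(extracted: dict, golden: dict) -> list:
    # Zero-margin policy: any missing or mismatched metric fails the eval.
    return [m for m, expected in golden.items() if extracted.get(m) != expected]

# Invented example figures; real inputs come from parsed 10-K tables.
golden = {"revenue": 1_250_000, "operating_income": 310_000}
good = {"revenue": 1_250_000, "operating_income": 310_000}
bad = {"revenue": 1_250_001, "operating_income": 310_000}
```

An empty failure list gates the push to dashboards; even a one-unit drift, as in `bad`, blocks the release.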
Multi-Agent Negotiation Env
Explores negotiation behavior and strategic reasoning between LLM agents.
AI Researcher
A research environment exploring emergent game-theoretic behaviors in LLM-to-LLM negotiations. Samir implemented two distinct personas, a 'Skeptical Buyer' and an 'Aggressive Seller', that compete over contract terms within a sandboxed environment. The project tracks 'Tactical Shifts' and calculates 'Utility Gain' for each party, providing a rich visualization of how different prompt-tuning strategies affect multi-turn strategic reasoning.
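A utility-gain calculation of the kind described can be sketched with a toy concession loop. The fixed concession fractions below are invented stand-ins for the prompted LLM turns, and the reservation prices are arbitrary:

```python
def utility_gain(price: float, reservation: float, role: str) -> float:
    # Buyer gains by paying under its reservation price;
    # seller gains by selling above its own.
    return reservation - price if role == "buyer" else price - reservation

def negotiate(buyer_offer: float, seller_ask: float, rounds: int = 5) -> float:
    # Invented dynamics: the 'Skeptical Buyer' concedes slowly,
    # the 'Aggressive Seller' anchors high but moves faster once pressured.
    for _ in range(rounds):
        if buyer_offer >= seller_ask:
            break
        gap = seller_ask - buyer_offer
        buyer_offer += 0.25 * gap
        seller_ask -= 0.40 * gap
    return (buyer_offer + seller_ask) / 2  # settle at the midpoint
```

Comparing each party's utility gain across runs with different concession behavior is the same shape of analysis the environment performs over prompt-tuning strategies.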
Local RAG for Offline Docs
Provides secure document Q&A for offline and air-gapped environments.
Systems Engineer
An air-gapped, privacy-first technical documentation assistant. Designed for enterprise environments where cloud data leakage is a critical risk, this system runs entirely on local hardware using Ollama and ChromaDB. It features a custom 'Context Compression' layer that allows 8B parameter models to reason over massive local knowledge bases without exceeding 16GB of VRAM, maintaining high-fidelity answers offline.
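The idea behind context compression, fitting a large knowledge base into a small model's window, can be sketched as budget-bounded greedy selection. The `compress_context` function and its scoring are hypothetical simplifications, not the project's actual layer:

```python
def compress_context(chunks, query_terms, token_budget):
    # Hypothetical sketch: rank chunks by query-term overlap,
    # then greedily keep the best ones that fit the token budget.
    scored = sorted(
        chunks,
        key=lambda c: sum(term in c.lower() for term in query_terms),
        reverse=True,
    )
    kept, used = [], 0
    for chunk in scored:
        cost = len(chunk.split())  # crude word-count token estimate
        if used + cost <= token_budget:
            kept.append(chunk)
            used += cost
    return kept
```

Capping the selected context is what keeps an 8B model's prompt, and therefore its VRAM footprint, bounded regardless of how large the local knowledge base grows.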
Each capstone starts with a specific learning path.
These projects are not detached portfolio pieces. They are the kinds of final outputs learners build through the structure, review cycles, and technical expectations inside our cohorts.
Model training, inference systems, deployment, and advanced AI infrastructure work.
AI product framing, evaluation planning, UX decisions, and launch-ready PRD thinking.
Workflow automation, retrieval setups, and business-facing assistants built without heavy custom code.
Explore the cohorts behind these projects.
Review the learning paths, weekly structure, and capstone expectations that lead to work like this.
View Open Cohorts