Workshop

AI Engineering Acceleration Bootcamp

Thursday, 7 May 2026, Gurten

Description

While many people experiment with prompting large language models (LLMs), building reliable, domain-specific systems requires knowing when to prompt, retrieve, fine-tune, or deploy agents. In this workshop, we’ll explore the full LLM development workflow, from understanding model behavior, limitations, and security to customizing and extending models through retrieval, fine-tuning, and reinforcement learning. You’ll learn how to evaluate generations and retrieval quality, and how to design workflows and agents that align with real-world use cases. Through practical case studies, we’ll bridge theory and application, helping you build intuition for choosing the right technique, ensuring reliability, and staying current in the fast-moving LLM ecosystem.

Part 1: Foundational Knowledge and Using LLMs: Limitations & Weaknesses, Prompting, Security, Modalities
Part 2: Building on Top of LLMs: Model Selection, Customization, Databases, Retrieval, Fine-Tuning, Distillation, RAG, CAG, RL, When to Use Each Technique…
Part 3: Everything About Evaluations (Retrieval Quality, LLM Generations)
Part 4: Workflows and Agents: an introduction to “agents” and “workflows”, with practical case studies to help choose the best solution for a given scenario
Part 5: A Word on Vibe Coding and How to Stay Up to Date…

Requirements: Basic coding skills in Python.
Target Audience: Anyone who wants to learn how best to implement LLMs in applications. No previous experience with AI is necessary.

Speakers

Joshua Starmer
Louis-François Bouchard
Co-Founder, Towards AI