From Scratch to Scale: Turning LLM Code into Architecture Insights

Sebastian Raschka

Keynote
Python Skill: Intermediate
Domain Expertise: Intermediate
Tuesday 10:30

Python has been at the center of my work in machine learning and AI for more than a decade. It is where I start from scratch, experiment with ideas, and build systems that help me understand how large language models really work.

In this keynote, I will look at what it means to build and study LLMs in Python today. Starting from small, from-scratch implementations, I will show how Python and PyTorch help us understand modern model architectures, compare new designs against reference code, and learn details that papers often leave out. I will then connect those implementation lessons to current LLM trends, especially the push to reduce inference costs and KV-cache pressure as reasoning models and agentic workflows need longer contexts. At the end, I will also share a practical roadmap of libraries, open projects, and learning resources for going from first principles to real-world LLM development.
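The KV-cache pressure mentioned above comes from autoregressive decoding: each generated token's key and value projections are stored so past tokens never need to be recomputed, which makes memory grow linearly with context length. A minimal sketch of this idea in plain Python (all names are illustrative; real implementations use tensors and batched attention, e.g. in PyTorch):

```python
# Minimal illustration of a KV cache for autoregressive decoding.
# All names here are illustrative, not from any particular library.
import math

def attend(q, keys, values):
    """Scaled dot-product attention for a single query over cached keys/values."""
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]  # softmax over cached positions
    out = [0.0] * len(values[0])
    for w, v in zip(weights, values):
        for i, vi in enumerate(v):
            out[i] += w * vi
    return out

class KVCache:
    """Append-only cache: each decoding step adds one key/value pair,
    so earlier tokens are never re-projected. Memory grows linearly
    with context length -- the pressure long contexts create."""
    def __init__(self):
        self.keys, self.values = [], []

    def step(self, q, k, v):
        self.keys.append(k)
        self.values.append(v)
        return attend(q, self.keys, self.values)

cache = KVCache()
out1 = cache.step(q=[1.0, 0.0], k=[1.0, 0.0], v=[1.0, 2.0])
out2 = cache.step(q=[1.0, 0.0], k=[0.0, 1.0], v=[3.0, 4.0])
```

With only one cached position, the attention weight is 1 and `out1` equals the first value vector; after the second step the output is a convex combination of both cached values. Techniques the talk's theme points at, such as grouped-query attention or cache quantization, are different ways of shrinking exactly this per-token storage.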

Sebastian Raschka

Sebastian is an LLM Research Engineer with over a decade of experience in artificial intelligence. His work bridges academia and industry, including roles as a senior engineer at Lightning AI and a statistics professor at the University of Wisconsin–Madison.

He is also the author of Build a Large Language Model (From Scratch).

His expertise lies in LLM research and the development of high-performance AI systems, with a strong focus on practical, code-driven implementations.