Deterministic LLM programming
Learn to build production-grade LLM systems using AWS Bedrock, local inference toolchains, and systematic quality evaluation. You will explore retrieval-augmented generation (RAG) on AWS, configuring Bedrock knowledge bases with S3 data sources for document-grounded responses, and building Rust applications that interact with Bedrock model APIs. The course covers tokenization fundamentals, multi-model architectures for routing requests to appropriate foundation models, and the Bedrock knowledge agent workflow from data ingestion to response generation. You will compile llama.cpp with hardware-…
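The multi-model routing idea above can be sketched as a small Rust function that picks a Bedrock model ID from rough request characteristics. This is a minimal illustration only: the model IDs, task categories, and token threshold are placeholder assumptions, not the course's actual configuration or real Bedrock identifiers.

```rust
// Sketch of multi-model routing: choose a foundation-model ID based on
// the task type and prompt size. IDs and the 2000-token threshold are
// illustrative placeholders, not real Bedrock model identifiers.

#[derive(Debug)]
enum Task {
    Summarize,
    CodeGen,
    Chat,
}

fn route(task: &Task, prompt_tokens: usize) -> &'static str {
    match task {
        // Code generation and long summarization go to a larger model;
        // short conversational turns stay on a cheaper one.
        Task::CodeGen => "placeholder.large-model",
        Task::Summarize if prompt_tokens > 2000 => "placeholder.large-model",
        _ => "placeholder.small-model",
    }
}

fn main() {
    assert_eq!(route(&Task::Chat, 50), "placeholder.small-model");
    assert_eq!(route(&Task::CodeGen, 50), "placeholder.large-model");
    assert_eq!(route(&Task::Summarize, 5000), "placeholder.large-model");
    println!("routing sketch ok");
}
```

In a real system the router would also consult latency budgets and per-model cost, and would pass the chosen ID to the Bedrock runtime client when invoking the model.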
Watch on Coursera ↗
DeepCamp AI