Disclaimer: This course is independently developed and not affiliated with Microsoft. It covers concepts and skills that closely align with the objectives of Microsoft’s DP-3028: Implement Generative AI engineering with Azure Databricks training, making it a strong preparatory or complementary learning experience.
This one-day, intermediate-level custom training empowers data scientists and Data & AI engineers with the practical skills to design, fine-tune, evaluate, and operationalize generative AI solutions using Azure Databricks. Learners will explore the full lifecycle of Large Language Models (LLMs), from foundational concepts to responsible deployment, leveraging Apache Spark’s scalability and Databricks’ collaborative environment.
Before enrolling in this custom training, participants should be familiar with core Azure Databricks concepts and foundational AI principles.
Get started with language models in Azure Databricks: Introduces LLMs and their applications in NLP tasks like summarization, translation, and classification. Learners explore key components and use cases through hands-on exercises.
Implement Retrieval Augmented Generation (RAG): Covers how RAG enhances generative models by integrating external data retrieval, improving contextual relevance and accuracy of outputs.
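The core RAG pattern this module covers can be sketched in a few lines. This is a hypothetical, framework-free illustration, not the Databricks Vector Search API: it retrieves the most relevant document by simple keyword overlap (a real system would use vector similarity over embeddings) and grounds the prompt in it before the LLM is called.

```python
# RAG sketch (hypothetical): retrieve context first, then augment the prompt.

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query.
    A production system would use embedding similarity instead."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the question in retrieved context before calling an LLM."""
    context = retrieve(query, docs)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

docs = [
    "Databricks clusters run Apache Spark workloads.",
    "Vector indexes store embeddings for similarity search.",
]
prompt = build_prompt("How do vector indexes work?", docs)
```

Because the model answers from retrieved context rather than from its parameters alone, outputs stay current and citable, which is the contextual-relevance gain the module describes.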
Implement multi-stage reasoning: Teaches how to break down complex problems into structured reasoning stages, enabling more systematic and interpretable AI workflows.
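The staged decomposition idea can be sketched as a pipeline of small functions, each transforming a shared state. This is a hypothetical illustration (the stage names and stub logic are invented, not a Databricks API); in practice each stage would wrap an LLM call, but the chaining and inspectability work the same way.

```python
# Multi-stage reasoning sketch (hypothetical): each stage transforms the
# running state, so every intermediate step can be logged and inspected.

def decompose(state):
    # Stub decomposition: split a compound request into subtasks.
    state["subtasks"] = state["question"].split(" and ")
    return state

def solve(state):
    # Stub solver: a real stage would call an LLM per subtask.
    state["answers"] = [f"answer({t})" for t in state["subtasks"]]
    return state

def combine(state):
    state["final"] = "; ".join(state["answers"])
    return state

def run_pipeline(question, stages):
    state = {"question": question}
    for stage in stages:
        state = stage(state)  # intermediate states form an audit trail
    return state

result = run_pipeline("summarize the doc and list key terms",
                      [decompose, solve, combine])
```

Breaking the workflow into explicit stages is what makes the reasoning systematic and interpretable: a failure can be traced to the exact stage that produced it.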
Fine-tune language models: Demonstrates how to adapt pre-trained LLMs to specific tasks or domains, improving performance and efficiency without training from scratch.
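The "adapt without training from scratch" idea can be shown with a deliberately tiny sketch. This is a hypothetical, framework-free analogy, not a real LLM fine-tune: the "base model" is frozen and only one small added parameter is trained on task data, mirroring parameter-efficient fine-tuning, where most pre-trained weights stay fixed.

```python
# Parameter-efficient fine-tuning sketch (hypothetical toy, no ML framework):
# freeze the base model, train only a small added bias on task data.

def base_model(x):
    return 2.0 * x  # stands in for frozen pre-trained weights

def fine_tune(data, lr=0.1, steps=100):
    bias = 0.0  # the only trainable parameter
    for _ in range(steps):
        for x, y in data:
            pred = base_model(x) + bias
            bias -= lr * 2 * (pred - y)  # gradient of squared error w.r.t. bias
    return bias  # base_model was never updated

# Task data is shifted by +1 relative to the frozen base model.
bias = fine_tune([(1.0, 3.0), (2.0, 5.0)])
```

Training a small add-on instead of all weights is the efficiency point the module makes: the task-specific delta is learned cheaply while the pre-trained behavior is preserved.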
Evaluate language models: Explores evaluation metrics, challenges, and automated techniques like LLM-as-a-judge to assess model quality and reliability.
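The LLM-as-a-judge technique mentioned here can be sketched as follows. The rubric text and the judge stub are hypothetical; in practice the stub would be replaced by a call to a strong LLM whose reply is parsed into a score.

```python
# LLM-as-a-judge sketch (hypothetical): a grading rubric is sent to a judge
# model along with the question and candidate answer; the reply is a score.

RUBRIC = ("Rate the answer from 1 to 5 for factual accuracy and relevance "
          "to the question. Reply with a single integer.")

def judge_stub(prompt: str) -> str:
    # Placeholder for a real LLM call; returns a fixed score for illustration.
    return "4"

def evaluate(question: str, answer: str, judge=judge_stub) -> int:
    prompt = f"{RUBRIC}\nQuestion: {question}\nAnswer: {answer}"
    return int(judge(prompt))

score = evaluate("What runs Spark jobs?", "Databricks clusters run Spark.")
```

Automating grading this way scales evaluation beyond what human review allows, though judge models bring their own reliability challenges, which the module also addresses.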
Review responsible AI principles: Focuses on ethical implementation of LLMs, risk mitigation, and security tooling to ensure responsible AI practices in Databricks.
Implement LLMOps: Guides learners through deploying and managing LLMs at scale, covering lifecycle management, versioning, and operational best practices.
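One LLMOps concern, versioned promotion of models from staging to production, can be sketched with a toy registry. This is a hypothetical illustration of the pattern that tools such as MLflow's model registry implement, not their actual API.

```python
# LLMOps sketch (hypothetical toy registry): models are registered as
# versions and promoted through stages, so rollback is just re-promotion.

class ModelRegistry:
    def __init__(self):
        self.versions = {}  # version -> {"uri": ..., "stage": ...}
        self._next = 1

    def register(self, uri):
        v = self._next
        self.versions[v] = {"uri": uri, "stage": "Staging"}
        self._next += 1
        return v

    def promote(self, version):
        self.versions[version]["stage"] = "Production"

    def production_uri(self):
        for meta in self.versions.values():
            if meta["stage"] == "Production":
                return meta["uri"]

reg = ModelRegistry()
v1 = reg.register("models:/llm/1")
reg.promote(v1)
```

Keeping every version addressable, rather than overwriting a single deployed model, is the lifecycle-management discipline this module teaches.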