Azure OpenAI Fundamentals Hackathon

This hackathon teaches attendees to experiment with prompt engineering and machine learning best practices to generate effective responses from ChatGPT and OpenAI models.


Course Description


This hack is for anyone who wants to gain hands-on experience experimenting with prompt engineering and machine learning best practices and applying them to generate effective responses from ChatGPT and OpenAI models. 

Participants will learn how to:

  • Compare OpenAI models and choose the best one for a scenario 
  • Use prompt engineering techniques on complex tasks 
  • Manage large amounts of data within token limits, including the use of chunking and chaining techniques 
  • Ground models to avoid hallucinations or false information 
  • Implement embeddings using search retrieval techniques 
  • Evaluate models for truthfulness and monitor model interactions for PII 
 

About this Course

Challenge 00: Prerequisites – Ready, Set, GO! 

  • Prepare your workstation to work with Azure. 

Challenge 01: Prompt Engineering 

  • What’s possible through Prompt Engineering 
  • Best practices when using OpenAI text and chat models 
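To give a flavor of this challenge, here is a minimal sketch of a system message plus a few-shot example sent to an Azure OpenAI chat deployment. The endpoint, key, API version, and deployment name ("gpt-35-turbo") are placeholder assumptions, not values prescribed by the hack.

    # Minimal prompt-engineering sketch using the openai package's AzureOpenAI client.
    # Endpoint, key, API version, and deployment name are placeholders --
    # substitute the values for your own Azure OpenAI resource.
    import os
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )

    # The system message sets the role and constraints; a few-shot example steers the format.
    messages = [
        {"role": "system", "content": "You are a support assistant. Answer in two sentences or fewer."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings > Security and choose 'Reset password'. A reset link is emailed to you."},
        {"role": "user", "content": "How do I enable multi-factor authentication?"},
    ]

    response = client.chat.completions.create(
        model="gpt-35-turbo",   # your chat model deployment name
        messages=messages,
        temperature=0.2,        # lower temperature for more deterministic answers
        max_tokens=150,
    )
    print(response.choices[0].message.content)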

Challenge 02: OpenAI Models & Capabilities 

  • What are the capabilities of each Azure OpenAI model? 
  • How to select the right model for your application 

Challenge 03: Grounding, Chunking, and Embedding 

  • Why is grounding important and how can you ground a Large Language Model (LLM)? 
  • What is a token limit? How can you work within token limits? What are common chunking techniques? (See the chunking sketch below.) 
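As an illustration of token-aware chunking, here is a minimal sketch using the tiktoken package. The cl100k_base encoding, the 500-token chunk size, and the input file name are illustrative assumptions rather than requirements of the hack.

    # Minimal token-aware chunking sketch using the tiktoken package.
    import tiktoken

    def chunk_by_tokens(text: str, max_tokens: int = 500) -> list[str]:
        encoding = tiktoken.get_encoding("cl100k_base")
        tokens = encoding.encode(text)
        # Slice the token list into windows of at most max_tokens,
        # then decode each window back to text.
        return [
            encoding.decode(tokens[i : i + max_tokens])
            for i in range(0, len(tokens), max_tokens)
        ]

    chunks = chunk_by_tokens(open("policy_document.txt").read())  # placeholder file
    print(f"{len(chunks)} chunks, each within the token budget")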

Challenge 04: Retrieval Augmented Generation (RAG) 

  • How do we create ChatGPT-like experiences over enterprise data? In other words, how do we “ground” powerful LLMs primarily in our own data? (See the retrieval sketch below.) 
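As a rough illustration of the RAG pattern, here is a minimal sketch that embeds a handful of documents, retrieves the one closest to the question, and grounds the chat model's answer in it. The deployment names, sample documents, and in-memory cosine-similarity search are placeholder assumptions; a production solution would typically use a vector store such as Azure AI Search.

    # Minimal RAG sketch: embed documents, retrieve the best match for the question,
    # and ground the chat model's answer in that context.
    import os
    import numpy as np
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )

    docs = [
        "Contoso employees accrue 20 vacation days per year.",
        "Expense reports must be filed within 30 days of travel.",
    ]

    def embed(text: str) -> np.ndarray:
        result = client.embeddings.create(model="text-embedding-ada-002", input=text)
        return np.array(result.data[0].embedding)

    doc_vectors = [embed(d) for d in docs]

    question = "How many vacation days do I get?"
    q = embed(question)
    # Cosine similarity picks the best-matching chunk as grounding context.
    scores = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))) for v in doc_vectors]
    context = docs[int(np.argmax(scores))]

    answer = client.chat.completions.create(
        model="gpt-35-turbo",
        messages=[
            {"role": "system", "content": "Answer only from the provided context. If the answer is not there, say you don't know."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    print(answer.choices[0].message.content)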

Challenge 05: Responsible AI 

  • What are services and tools to identify and evaluate harms and data leakage in LLMs? (A PII screening sketch follows this list.) 
  • What are ways to evaluate truthfulness and reduce hallucinations? 
  • What are methods to evaluate a model if you don’t have a ground truth dataset for comparison? 
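As one example of monitoring model interactions for PII, here is a minimal sketch using the Azure AI Language PII detection API (azure-ai-textanalytics package). The endpoint, key, and sample completion text are placeholder assumptions; in practice you would scan prompts and completions before logging or displaying them.

    # Minimal PII-screening sketch using the Azure AI Language service.
    import os
    from azure.core.credentials import AzureKeyCredential
    from azure.ai.textanalytics import TextAnalyticsClient

    client = TextAnalyticsClient(
        endpoint=os.environ["AZURE_LANGUAGE_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["AZURE_LANGUAGE_KEY"]),
    )

    completion = "Sure, I've emailed the invoice to maria@contoso.com and called 555-0100."
    result = client.recognize_pii_entities([completion])[0]

    if not result.is_error:
        # Report each detected entity and the service's redacted version of the text.
        for entity in result.entities:
            print(f"{entity.category}: {entity.text} (confidence {entity.confidence_score:.2f})")
        print("Redacted:", result.redacted_text)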

 

Duration: 1 Day

Prerequisites: None

Level: Intermediate

Technology: Azure OpenAI

Role: AI Engineer

Need to Train a Team?

Contact us to schedule a dedicated Microsoft Hackathon for your team.