DP-3012: Implementing a Data Analytics Solution with Azure Synapse Analytics
Course Overview
Course Description
Deliver powerful, end-to-end data analytics solutions with Azure Synapse Analytics. In this intensive, instructor-led one-day course, data professionals will learn to use both serverless SQL pools and Apache Spark pools to ingest, transform, and model data. You'll explore Delta Lake, build robust data pipelines with Synapse Pipelines, design data warehouses, and implement governance and security, all tailored to modern lakehouse analytics.
Target Audience
Ideal for:
Data Engineers, Data Analysts, and BI Professionals looking to build scalable analytics solutions using Azure Synapse
Data-oriented developers familiar with SQL, Python, notebooks (e.g., Spark, Databricks), and Azure data integration tools
Prerequisites:
Experience with data scripting in SQL/Python and working in notebook environments
Familiarity with Azure tools like Data Factory and data lake storage
Course Outline
Module 1: Introduction to Azure Synapse Analytics
Understand Azure Synapse Analytics architecture and core capabilities
Identify when to use Synapse for lakehouse and analytics workloads
Module 2: Query Data Lakes with Serverless SQL Pools
Use serverless SQL pools to query CSV, JSON, and Parquet files directly from the data lake
Define external tables and lake databases for managed querying (see the query sketch below)
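To illustrate what this module covers: the serverless SQL pool exposes a standard T-SQL endpoint, so files in the lake can be queried in place with OPENROWSET from any SQL client. A minimal sketch in Python, assuming a hypothetical workspace endpoint and storage path, with pyodbc and the Microsoft ODBC Driver 18 for SQL Server installed:

```python
# Minimal sketch: query Parquet files in the lake through a Synapse serverless SQL pool.
# The workspace endpoint, storage account, and folder path below are placeholders.
import pyodbc

SERVER = "myworkspace-ondemand.sql.azuresynapse.net"  # hypothetical serverless endpoint
CONNECTION_STRING = (
    "Driver={ODBC Driver 18 for SQL Server};"
    f"Server={SERVER};Database=master;"
    "Authentication=ActiveDirectoryInteractive;Encrypt=yes;"
)

# OPENROWSET reads the files in place; nothing is loaded into the pool.
QUERY = """
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://mystorageaccount.dfs.core.windows.net/datalake/sales/*.parquet',
    FORMAT = 'PARQUET'
) AS sales;
"""

with pyodbc.connect(CONNECTION_STRING) as conn:
    for row in conn.cursor().execute(QUERY):
        print(row)
```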
Module 3: Analyze Big Data with Apache Spark Pools
Configure Spark pools in Synapse for large-scale processing
Perform data transformation and visualization using Spark notebooks (see the notebook sketch below)
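A minimal sketch of the kind of notebook cell this module works through, assuming the `spark` session provided by the Synapse notebook runtime and a hypothetical abfss:// path and column names (SaleDate, Amount):

```python
# Read raw CSV from the lake, aggregate it, and chart the result in a Spark notebook.
from pyspark.sql import functions as F

raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("abfss://datalake@mystorageaccount.dfs.core.windows.net/raw/sales/*.csv"))

# Aggregate revenue per month.
monthly = (raw
           .withColumn("month", F.date_trunc("month", F.to_timestamp("SaleDate")))
           .groupBy("month")
           .agg(F.sum("Amount").alias("revenue"))
           .orderBy("month"))

# Small aggregates can be pulled to the driver and plotted with pandas/matplotlib.
monthly.toPandas().plot(x="month", y="revenue", kind="line")
```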
Module 4: Leverage Delta Lake in Synapse
Implement Delta Lake for ACID transactions, schema enforcement, and time travel
Use Delta tables from both Spark and SQL environments (see the sketch below)
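As a rough picture of the Delta Lake workflow, here is a sketch assuming a Synapse Spark pool with Delta Lake available and a placeholder abfss:// path:

```python
# Write a Delta table from Spark, register it for SQL access, then read an earlier
# version back (time travel). Path and table names are hypothetical.
delta_path = "abfss://datalake@mystorageaccount.dfs.core.windows.net/delta/customers"

df = spark.createDataFrame(
    [(1, "Avery", "NL"), (2, "Sam", "DE")],
    ["CustomerId", "Name", "Country"],
)

# The initial write creates the Delta transaction log and fixes the table schema.
df.write.format("delta").mode("overwrite").save(delta_path)

# Register the table in the metastore so it is also queryable from SQL cells.
spark.sql(f"CREATE TABLE IF NOT EXISTS Customers USING DELTA LOCATION '{delta_path}'")

# An update produces a new table version...
spark.sql("UPDATE Customers SET Country = 'BE' WHERE CustomerId = 2")

# ...and time travel reads the table as of an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(delta_path)
v0.show()
```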
Module 5: Build and Optimize a Relational Data Warehouse
Design warehouse schemas (star and snowflake) and load data into dedicated SQL pools (see the sketch below)
Manage compute, scaling, optimization, security, and monitoring
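One way to picture the schema-and-load step: create a distributed fact table in the dedicated SQL pool and bulk-load it from the lake with COPY INTO. A minimal sketch, with placeholder server, database, table, and storage names, and authentication/credential options omitted for brevity:

```python
# Create a hash-distributed fact table in a dedicated SQL pool and load Parquet into it.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=myworkspace.sql.azuresynapse.net;Database=SalesDW;"
    "Authentication=ActiveDirectoryInteractive;Encrypt=yes;",
    autocommit=True,
)

# Star-schema fact table: hash distribution on the join key spreads rows across the
# pool's distributions; clustered columnstore is the default analytics storage format.
conn.execute("""
CREATE TABLE dbo.FactSales
(
    SaleKey     BIGINT        NOT NULL,
    CustomerKey INT           NOT NULL,
    DateKey     INT           NOT NULL,
    Amount      DECIMAL(18,2) NOT NULL
)
WITH (DISTRIBUTION = HASH(CustomerKey), CLUSTERED COLUMNSTORE INDEX);
""")

# COPY INTO bulk-loads files from the data lake into the warehouse table.
conn.execute("""
COPY INTO dbo.FactSales
FROM 'https://mystorageaccount.dfs.core.windows.net/datalake/curated/sales/*.parquet'
WITH (FILE_TYPE = 'PARQUET');
""")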
Module 6: Orchestrate End-to-End Data Pipelines with Synapse Pipelines
Use Synapse Pipelines (built on the same data integration engine as Azure Data Factory) and Spark notebook activities for ETL/ELT
Implement and monitor pipelines, and handle data flows, triggers, and transformations (see the schematic pipeline definition below)
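The following is a schematic sketch of the JSON shape of such a pipeline, shown as a Python dict: a Copy activity followed by a Spark notebook activity. The dataset, notebook, and Spark pool names are hypothetical, and individual property names may differ from an exported pipeline definition:

```python
# Schematic pipeline definition: copy raw files, then run a Spark notebook to transform them.
pipeline_definition = {
    "name": "IngestAndTransformSales",
    "properties": {
        "activities": [
            {
                "name": "CopyRawSales",
                "type": "Copy",
                "inputs": [{"referenceName": "RawSalesCsv", "type": "DatasetReference"}],
                "outputs": [{"referenceName": "LakeSalesParquet", "type": "DatasetReference"}],
            },
            {
                "name": "TransformWithSpark",
                "type": "SynapseNotebook",
                "dependsOn": [
                    {"activity": "CopyRawSales", "dependencyConditions": ["Succeeded"]}
                ],
                "typeProperties": {
                    "notebook": {"referenceName": "TransformSales", "type": "NotebookReference"},
                    "sparkPool": {"referenceName": "sparkpool01", "type": "BigDataPoolReference"},
                },
            },
        ]
    },
}
```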
Hands-On Experience
Approximately 40–50% of the course is hands-on: participants configure Synapse pools, run Spark jobs, build Delta Lake tables, design warehouse schemas, and orchestrate data flows, reinforcing the skills needed to deliver Azure Synapse Analytics solutions.
Skills You’ll Gain
By completing DP-3012, you'll be able to:
Query data lakes using serverless SQL pools and manage external tables
Analyze big data using Spark pools and transform datasets in notebooks
Implement Delta Lake to ensure data reliability and versioning
Design and maintain relational warehouse schemas in dedicated SQL pools
Build and monitor data pipelines using Synapse Pipelines for automated analytics
Ready to Get Started?
Join thousands of professionals who have advanced their careers with our training programs.
Join Scheduled Training
Find upcoming sessions for this course and register for instructor-led training with other professionals.
View Schedule
Custom Training Solution
Need training for your team? We'll create a customized program that fits your organization's specific needs.
Get Custom Quote