Data Engineering with Azure Databricks: Design, build, and optimize scalable data pipelines and analytics solutions with Azure Databricks
Master end-to-end data engineering on Azure Databricks. From data ingestion and Delta Lake to CI/CD and real-time streaming, build secure, scalable, and performant data solutions with Spark, Unity Catalog, and ML tools.

Key Features

  • Build scalable data pipelines using Apache Spark and Delta Lake
  • Automate workflows and manage data governance with Unity Catalog (sketched in code after this list)
  • Learn real-time processing and structured streaming with practical use cases
  • Implement CI/CD, DevOps, and security for production-ready data solutions
  • Explore Databricks-native ML, AutoML, and Generative AI integration
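
As a taste of the Unity Catalog governance mentioned above, here is a minimal sketch you might run from a Databricks notebook. The catalog, schema, table, and group names (main, sales, orders, analysts) are hypothetical placeholders:

    # Grant a data-analyst group read access to one catalog, schema, and table.
    # All names below are hypothetical.
    spark.sql("GRANT USE CATALOG ON CATALOG main TO `analysts`")
    spark.sql("GRANT USE SCHEMA ON SCHEMA main.sales TO `analysts`")
    spark.sql("GRANT SELECT ON TABLE main.sales.orders TO `analysts`")

    # Verify the grants took effect.
    spark.sql("SHOW GRANTS ON TABLE main.sales.orders").show()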

Book Description

"Data Engineering with Azure Databricks" is your essential guide to building scalable, secure, and high-performing data pipelines using the powerful Databricks platform on Azure. Designed for data engineers, architects, and developers, this book demystifies the complexities of Spark-based workloads, Delta Lake, Unity Catalog, and real-time data processing. Beginning with the foundational role of Azure Databricks in modern data engineering, you’ll explore how to set up robust environments, manage data ingestion with Auto Loader, optimize Spark performance, and orchestrate complex workflows using tools like Azure Data Factory and Airflow. The book offers deep dives into structured streaming, Delta Live Tables, and Delta Lake’s ACID features for data reliability and schema evolution. You’ll also learn how to manage security, compliance, and access controls using Unity Catalog, and gain insights into managing CI/CD pipelines with Azure DevOps and Terraform. With a special focus on machine learning and generative AI, the final chapters guide you in automating model workflows, leveraging MLflow, and fine-tuning large language models on Databricks. Whether you're building a modern data lakehouse or operationalizing analytics at scale, this book provides the tools and insights you need.

What you will learn

  • Set up a full-featured Azure Databricks environment
  • Implement batch and streaming ingestion using Auto Loader
  • Optimize Spark jobs with partitioning and caching
  • Build real-time pipelines with structured streaming and DLT (see the sketch after this list)
  • Manage data governance using Unity Catalog
  • Orchestrate production workflows with jobs and ADF
  • Apply CI/CD best practices with Azure DevOps and Git
  • Secure data with RBAC, encryption, and compliance standards
  • Use MLflow and Feature Store for ML pipelines
  • Build generative AI applications in Databricks
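
As a sketch of the Delta Live Tables style referenced in the list above, here is a minimal two-table pipeline in Python. It assumes it runs inside a DLT pipeline (where the dlt module is available); the paths and table names are hypothetical:

    import dlt
    from pyspark.sql import functions as F

    # Bronze: raw orders streamed in from cloud storage via Auto Loader.
    @dlt.table(comment="Raw orders ingested from cloud storage")
    def orders_bronze():
        return (spark.readStream
                .format("cloudFiles")
                .option("cloudFiles.format", "json")
                .load("/mnt/landing/orders"))

    # Silver: cleaned orders; rows failing the expectation are dropped.
    @dlt.table(comment="Orders with basic quality filtering")
    @dlt.expect_or_drop("valid_amount", "amount > 0")
    def orders_silver():
        return (dlt.read_stream("orders_bronze")
                .withColumn("ingested_at", F.current_timestamp()))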

Who this book is for

This book is for data engineers, solution architects, cloud professionals, and software engineers seeking to build robust and scalable data pipelines using Azure Databricks. Whether you're migrating legacy systems, implementing a modern lakehouse architecture, or optimizing data workflows for performance, this guide will help you leverage the full power of Databricks on Azure. A basic understanding of Python, Spark, and cloud infrastructure is recommended.

Product Details

ISBN-13: 9781806106370
Publisher: Packt Publishing
Publication date: 04/10/2026

About the Author

Dmitry Foshin is a business intelligence team leader whose main goal is delivering business insights to management through data engineering, analytics, and visualization. He has led and executed complex full-stack BI solutions (from ETL processes to building data warehouses and reporting) using Azure technologies, Data Lake, Data Factory, Databricks, MS Office 365, Power BI, and Tableau. He has also successfully launched numerous data analytics projects, both on-premises and in the cloud, that help international FMCG, banking, and manufacturing companies achieve their corporate goals.

Dmitry Anoshin is a data-centric technologist and a recognized expert in building and implementing big data and analytics solutions. He has a successful track record of implementing business and digital intelligence projects in numerous industries, including retail, finance, marketing, and e-commerce. Dmitry possesses in-depth knowledge of digital/business intelligence, ETL, data warehousing, and big data technologies. He has extensive experience with data integration and is proficient in various data warehousing methodologies. Dmitry has consistently exceeded project expectations in the financial, machine tool, and retail industries, and has completed a number of multinational full BI/DI solution life cycle implementations. Alongside his expertise in data modeling, Dmitry has hands-on business experience with multiple relational databases, OLAP systems, and NoSQL databases. He is also an active speaker at data conferences and helps people adopt cloud analytics.

Tonya Chernyshova is an experienced data engineer with over 10 years in the field, including time at Amazon. Specializing in data modeling, automation, cloud computing (AWS and Azure), and data visualization, she has a strong track record of delivering scalable, maintainable data products that turn cloud technologies into data-driven insights and business growth.

Xenia Ireton is a Senior Software Engineer at Microsoft, with extensive experience building distributed services, data pipelines, and data warehouses.

Table of Contents

  1. The role of Azure Databricks in modern data engineering
  2. Setting up an end-to-end Azure Databricks environment
  3. Data ingestion strategies for Azure Databricks
  4. Deep dive into Apache Spark on Azure Databricks
  5. Streaming architectures with structured streaming
  6. Working with Delta Lake: ACID transactions & schema evolution
  7. Automating data pipelines with Delta Live Tables (DLT)
  8. Orchestrating data workflows: from notebooks to production
  9. CI/CD and DevOps for Azure Databricks
  10. Optimizing query performance and cost management
  11. Security, compliance, and data governance
  12. Machine learning, AutoML, and generative AI in Databricks
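
To hint at the hands-on style of chapters such as 6, here is a minimal Delta Lake upsert sketch in PySpark; Delta's transaction log makes the merge atomic. The table, path, and column names are hypothetical placeholders:

    from delta.tables import DeltaTable

    # Merge a batch of updates into an existing Delta table (upsert).
    # All names below are hypothetical.
    target = DeltaTable.forName(spark, "main.silver.customers")
    updates = spark.read.format("json").load("/mnt/landing/customer_updates")

    (target.alias("t")
       .merge(updates.alias("u"), "t.customer_id = u.customer_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())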