Understand the complexities of modern-day data engineering platforms and
explore strategies to deal with them through use case scenarios led by an
industry expert in big data.

Key Features
- Become well-versed with the core concepts of Apache Spark and Delta Lake for building data platforms
- Learn how to ingest, process, and analyze data that can later be used for training machine learning models
- Understand how to operationalize data models in production using curated data

Book Description

In the world of ever-changing data and schemas, it is important to build
data pipelines that can auto-adjust to changes. This book will help you
build scalable data platforms that managers, data scientists, and data
analysts can rely on.

Starting with an introduction to data engineering, along with its key
concepts and architectures, this book will show you how to use Microsoft
Azure cloud services effectively for data engineering. You'll cover data
lake design patterns and the different stages through which the data
needs to flow in a typical data lake. Once you've explored the main
features of Delta Lake to build data lakes with fast performance and
governance in mind, you'll advance to implementing the lambda
architecture using Delta Lake. Packed with practical examples and code
snippets, this book takes you through real-world examples based on
production scenarios faced by the author in his 10 years of experience
working with big data. Finally, you'll cover data lake deployment
strategies that play an important role in provisioning cloud resources
and deploying data pipelines in a repeatable and continuous way.

By the end of this data engineering book, you'll know how to effectively
deal with ever-changing data and create scalable data pipelines to
streamline data science, ML, and artificial intelligence (AI) tasks.

What you will learn
- Discover the challenges you may face in the data engineering world
- Add ACID transactions to Apache Spark using Delta Lake (see the sketch after this list)
- Understand effective design strategies to build enterprise-grade data lakes
- Explore architectural and design patterns for building efficient data ingestion pipelines
- Orchestrate a data pipeline for preprocessing data using Apache Spark and Delta Lake APIs
- Automate deployment and monitoring of data pipelines in production
- Get to grips with securing, monitoring, and managing data pipelines efficiently
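
To make the ACID bullet above concrete, here is a minimal, illustrative
PySpark sketch. It is not taken from the book; the app name, the table
path /tmp/delta/events, and the column names are assumptions, and it
assumes the delta-spark package is installed (pip install delta-spark).

```python
from delta import configure_spark_with_delta_pip
from pyspark.sql import SparkSession

# Enable Delta Lake's SQL extensions and catalog on a local Spark session.
builder = (
    SparkSession.builder.appName("delta-acid-sketch")
    .config("spark.sql.extensions",
            "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Write a DataFrame as a Delta table. Each write is an atomic commit to
# the Delta transaction log, so readers never see a partial version.
df = spark.createDataFrame([(1, "bronze"), (2, "silver")], ["id", "stage"])
df.write.format("delta").mode("overwrite").save("/tmp/delta/events")

# Append in a second transaction; concurrent readers keep seeing the
# previously committed snapshot until this commit completes.
spark.createDataFrame([(3, "gold")], ["id", "stage"]) \
    .write.format("delta").mode("append").save("/tmp/delta/events")

spark.read.format("delta").load("/tmp/delta/events").show()
```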

Who this book is for

This book is for aspiring data engineers and data analysts who are new to
the world of data engineering and are looking for a practical guide to
building scalable data platforms. If you already work with PySpark and
want to use Delta Lake for data engineering, you'll find this book useful.
Basic knowledge of Python, Spark, and SQL is expected.
The book Data Engineering with Apache Spark, Delta Lake, and Lakehouse:
Create scalable pipelines that ingest, curate, and aggregate complex data
in a timely and secure way by Manoj Kukreja and Danil Zburivsky can also
be purchased at the following link: