
The Azure Data Lakehouse Toolkit: Building and Scaling Data Lakehouses on Azure with Delta Lake, Apache Spark, Databricks, Synapse Analytics, and Snowflake

Author: Ron L'Esteve
In English | Paperback – 14 July 2022
Design and implement a modern data lakehouse on the Azure Data Platform using Delta Lake, Apache Spark, Azure Databricks, Azure Synapse Analytics, and Snowflake. This book teaches you the intricate details of the Data Lakehouse Paradigm and how to efficiently design a cloud-based data lakehouse using highly performant, cutting-edge Apache Spark capabilities on Azure Databricks, Azure Synapse Analytics, and Snowflake. You will learn to write efficient PySpark code for batch and streaming ELT jobs on Azure, and you will follow along with practical, scenario-based examples showing how to apply the capabilities of Delta Lake and Apache Spark to optimize performance and to secure, share, and manage high-volume, high-velocity, and high-variety data in your lakehouse with ease.
The patterns of success that you acquire from reading this book will help you hone your skills to build high-performing and scalable ACID-compliant lakehouses using flexible and cost-efficient decoupled storage and compute capabilities. Extensive coverage of Delta Lake ensures that you are aware of and can benefit from all that this new, open source storage layer can offer. In addition to the deep examples on Databricks in the book, there is coverage of alternative platforms such as Synapse Analytics and Snowflake so that you can make the right platform choice for your needs.
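
For a flavor of the kind of streaming ELT job described above, here is a minimal PySpark sketch (not code from the book). It assumes Databricks Auto Loader (the cloudFiles source, the subject of Chapter 17), and the ADLS Gen2 paths and column names are hypothetical placeholders.

  from pyspark.sql import SparkSession
  from pyspark.sql.functions import col, to_date

  spark = SparkSession.builder.getOrCreate()

  # Hypothetical ADLS Gen2 locations -- replace with your own containers.
  source_path = "abfss://raw@mystorageaccount.dfs.core.windows.net/events/"
  sink_path = "abfss://curated@mystorageaccount.dfs.core.windows.net/events_delta/"

  # Incrementally ingest newly arriving JSON files with Auto Loader,
  # derive a date column, and append the results to a Delta table,
  # using a checkpoint for exactly-once progress tracking.
  (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "json")
      .option("cloudFiles.schemaLocation", sink_path + "_schema")
      .load(source_path)
      .withColumn("event_date", to_date(col("event_ts")))  # event_ts is a hypothetical field
      .writeStream
      .format("delta")
      .option("checkpointLocation", sink_path + "_checkpoint")
      .outputMode("append")
      .start(sink_path))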

After reading this book, you will be able to implement Delta Lake capabilities, including Schema Evolution, Change Feed, Live Tables, Sharing, and Clones to enable better business intelligence and advanced analytics on your data within the Azure Data Platform.
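
As a brief, hedged illustration of two of these capabilities, the sketch below appends with schema evolution and reads Delta Lake's Change Data Feed. The sales table and its columns are hypothetical placeholders, not examples taken from the book.

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()

  # Create a hypothetical Delta table with the Change Data Feed enabled.
  spark.sql("""
      CREATE TABLE IF NOT EXISTS sales (id BIGINT, order_date STRING)
      USING delta
      TBLPROPERTIES (delta.enableChangeDataFeed = true)
  """)

  # Hypothetical incoming batch with a "channel" column the table lacks.
  new_batch = spark.createDataFrame(
      [(1, "2022-07-14", "web")], ["id", "order_date", "channel"])

  # Schema evolution: mergeSchema lets the append add the new column
  # to the table schema instead of failing on the mismatch.
  (new_batch.write
      .format("delta")
      .mode("append")
      .option("mergeSchema", "true")
      .saveAsTable("sales"))

  # Change Data Feed: read row-level changes (inserts, updates, deletes)
  # committed since table version 0.
  changes = (spark.read
      .format("delta")
      .option("readChangeFeed", "true")
      .option("startingVersion", 0)
      .table("sales"))
  changes.show()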

What You Will Learn
  • Implement the Data Lakehouse Paradigm on Microsoft’s Azure cloud platform
  • Benefit from the new Delta Lake open-source storage layer for data lakehouses 
  • Take advantage of schema evolution, change feeds, live tables, and more
  • Write functional PySpark code for data lakehouse ELT jobs
  • Optimize Apache Spark performance through partitioning, indexing, and other tuning options (illustrated in the sketch after this list)
  • Choose between alternatives such as Databricks, Synapse Analytics, and Snowflake
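
As a hedged illustration of the partitioning and Z-ordering options mentioned above, here is a minimal PySpark sketch. It assumes Databricks (or another Delta Lake environment that supports OPTIMIZE), and the table and column names (orders, event_date, customer_id) are hypothetical, not drawn from the book.

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()

  # Hypothetical fact data: a low-cardinality date column to partition by
  # and a high-cardinality customer_id to Z-order by.
  df = spark.createDataFrame(
      [(1, "2022-07-14", 42), (2, "2022-07-15", 7)],
      ["order_id", "event_date", "customer_id"])

  # Partitioning: scans that filter on event_date read only the
  # matching partitions.
  (df.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("event_date")
      .saveAsTable("orders"))

  # Z-ordering: co-locate rows by customer_id within files so Delta's
  # data-skipping statistics can prune files for selective filters.
  spark.sql("OPTIMIZE orders ZORDER BY (customer_id)")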

Who This Book Is For

Data, analytics, and AI professionals at all levels, including practicing data architects and data engineers. The book also serves data professionals seeking proven patterns of success for building scalable data lakehouses for organizations and customers migrating to the modern Azure Data Platform.

Price: 311.89 lei

Old price: 389.87 lei (-20%) | New

Express Points: 468

Estimated price in other currencies:
€59.68 / $62.79 / £49.41

Book in stock

Economy delivery: 24 December 2024 – 07 January 2025

Order line: 021 569.72.76

Specifications

ISBN-13: 9781484282328
ISBN-10: 1484282329
Pages: 465
Illustrations: XXII, 465 p., 365 illus.
Dimensions: 178 x 254 x 28 mm
Weight: 0.84 kg
Edition: 1st ed.
Publisher: Apress
Series: Apress
Place of publication: Berkeley, CA, United States

Contents

Part I: Getting Started
  Chapter 1: The Data Lakehouse Paradigm
Part II: Data Platforms
  Chapter 2: Snowflake
  Chapter 3: Databricks
  Chapter 4: Synapse Analytics
Part III: Apache Spark ELT
  Chapter 5: Pipelines and Jobs
  Chapter 6: Notebook Code
Part IV: Delta Lake
  Chapter 7: Schema Evolution
  Chapter 8: Change Feed
  Chapter 9: Clones
  Chapter 10: Live Tables
  Chapter 11: Sharing
Part V: Optimizing Performance
  Chapter 12: Dynamic Partition Pruning for Querying Star Schemas
  Chapter 13: Z-Ordering & Data Skipping
  Chapter 14: Adaptive Query Execution
  Chapter 15: Bloom Filter Index
  Chapter 16: Hyperspace
Part VI: Advanced Capabilities
  Chapter 17: Auto Loader
  Chapter 18: Python Wheels
  Chapter 19: Security & Controls

Biographical Note

Ron C. L’Esteve is a professional author, trusted technology leader, and digital innovation strategist residing in Chicago, IL, USA. He is well-known for his impactful books and award-winning article publications about Azure Data & AI Architecture and Engineering. He possesses deep technical skills and experience in designing, implementing, and delivering modern Azure Data & AI projects for numerous clients around the world.
Having earned several Azure Data, AI, and Lakehouse certifications, Ron has been a go-to technical advisor for some of the largest and most impactful Azure implementation projects on the planet. He has been responsible for scaling key data architectures, defining the road map and strategy for the future of data and business intelligence needs, and challenging customers to grow by thoroughly understanding fluid business opportunities and translating them into high-quality, sustainable technical solutions that solve complex challenges and promote digital innovation and transformation.

Ron is a gifted presenter and trainer, known for his innate ability to clearly articulate and explain complex topics to audiences of all skill levels. He applies a practical and business-oriented approach by taking transformational ideas from concept to scale. He is a true enabler of positive and impactful change by championing a growth mindset.

Features

  • Shows data lakehouse design using Apache Spark on Azure
  • Teaches performance optimization techniques for Spark queries
  • Provides hands-on PySpark and Delta Lake examples for lakehouse ELT jobs