
Practical Apache Spark: Using the Scala API

Authors: Subhashini Chellappan, Dharanitharan Ganesan
Language: English | Paperback, 13 December 2018
Work with Apache Spark using Scala to deploy and set up single-node, multi-node, and high-availability clusters. This book discusses various components of Spark such as Spark Core, DataFrames, Datasets and SQL, Spark Streaming, Spark MLlib, and R on Spark, with the help of practical code snippets for each topic. Practical Apache Spark also covers the integration of Apache Spark with Kafka, with examples. You’ll follow a learn-by-doing approach: learn the concepts, practice the code snippets in Scala, and complete the assignments to gain overall exposure.

On completion, you’ll have knowledge of the functional programming aspects of Scala and hands-on expertise in various Spark components. You’ll also become familiar with machine learning algorithms and their real-time usage.

What You Will Learn
  • Discover the functional programming features of Scala
  • Understand the complete architecture of Spark and its components
  • Integrate Apache Spark with Hive and Kafka 
  • Use Spark SQL, DataFrames, and Datasets to process data using traditional SQL queries (see the sketch after this list)
  • Work with different machine learning concepts and libraries using Spark's MLlib packages
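
To give a flavor of the Spark SQL and DataFrame work referred to above, here is a minimal sketch in Scala (not taken from the book). It assumes a local single-node Spark session; the object name, column names, and sample rows are invented for illustration.

  // Minimal Spark SQL sketch: build a DataFrame, register it as a view,
  // and query it with a traditional SQL statement.
  import org.apache.spark.sql.SparkSession

  object SparkSqlSketch {
    def main(args: Array[String]): Unit = {
      // Local single-node session (assumption for the example).
      val spark = SparkSession.builder()
        .appName("spark-sql-sketch")
        .master("local[*]")
        .getOrCreate()
      import spark.implicits._

      // Small in-memory DataFrame; the data is illustrative only.
      val orders = Seq(
        ("o1", "books", 283.73),
        ("o2", "books", 120.00),
        ("o3", "music", 55.50)
      ).toDF("orderId", "category", "amount")

      // Expose the DataFrame to SQL and run a traditional aggregate query.
      orders.createOrReplaceTempView("orders")
      val totals = spark.sql(
        "SELECT category, SUM(amount) AS total FROM orders GROUP BY category ORDER BY total DESC"
      )
      totals.show()

      spark.stop()
    }
  }

Run it with spark-submit or inside spark-shell; the same pattern extends to Hive tables once Hive support is enabled on the session.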

Who This Book Is For

Developers and professionals who deal with batch and stream data processing. 



Price: 283.73 lei

Old price: 354.66 lei
-20% New

Express Points: 426

Estimated price in foreign currency:
54.30€ • 56.34$ • 45.38£

Book available

Economy delivery: 24 February to 10 March

Phone orders: 021 569.72.76

Specifications

ISBN-13: 9781484236512
ISBN-10: 1484236513
Pages: 390
Illustrations: XVI, 280 p. 303 illus.
Dimensions: 178 x 254 x 17 mm
Weight: 0.52 kg
Edition: 1st ed.
Publisher: Apress
Series: Apress
Place of publication: Berkeley, CA, United States

Contents

1. Scala - Functional Programming Aspects
2. Single & Multi-node Cluster Setup
3. Introduction to Apache Spark and Spark Core
4. Spark SQL, DataFrames & Datasets
5. Introduction to Spark Streaming
6. Spark Structured Streaming
7. Spark Streaming with Kafka
8. Spark Machine Learning Library
9. Working with SparkR
10. Spark - Real-time Use Case


About the Authors

Subhashini Chellappan is a technology enthusiast with expertise in the big data and cloud space. She has rich experience in both academia and the software industry. Her areas of interest and expertise are centered on business intelligence, big data analytics and cloud computing.

Dharanitharan Ganesan is a senior analyst with five years of experience in IT. He has extensive exposure to and experience with big data technologies, including Apache Hadoop, Apache Spark, and various Hadoop ecosystem components. He has a proven track record of improving efficiency and productivity through the automation of routine and administrative functions in business intelligence and big data technologies. His areas of interest and expertise are centered on machine learning algorithms, statistical modelling, and predictive analysis.



Features

  • Contains extensive coverage of machine-learning algorithms with real-time code implementation using Spark MLlib (see the sketch below)
  • Explains the SparkR real-time module with code implementation
  • Covers Spark Streaming and Spark integration examples with other big data components such as Kafka
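
As a companion to the MLlib feature above, here is a minimal sketch in Scala (not taken from the book) of fitting a classifier with Spark's DataFrame-based MLlib API. The object name, parameter values, and the tiny labeled dataset are invented for illustration.

  // Minimal MLlib sketch: fit a logistic regression model on a toy DataFrame.
  import org.apache.spark.ml.classification.LogisticRegression
  import org.apache.spark.ml.linalg.Vectors
  import org.apache.spark.sql.SparkSession

  object MllibSketch {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("mllib-sketch")
        .master("local[*]")   // local single-node run (assumption for the example)
        .getOrCreate()
      import spark.implicits._

      // Toy labeled data: (label, features); values are illustrative only.
      val training = Seq(
        (1.0, Vectors.dense(0.0, 1.1, 0.1)),
        (0.0, Vectors.dense(2.0, 1.0, -1.0)),
        (0.0, Vectors.dense(2.0, 1.3, 1.0)),
        (1.0, Vectors.dense(0.0, 1.2, -0.5))
      ).toDF("label", "features")

      // Configure and fit the estimator, then inspect the fitted model.
      val lr = new LogisticRegression().setMaxIter(10).setRegParam(0.01)
      val model = lr.fit(training)
      println(s"Coefficients: ${model.coefficients}  Intercept: ${model.intercept}")

      spark.stop()
    }
  }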