Data is bigger, arrives faster, and comes in a variety of formats, and it all needs to be processed at scale for analytics or machine learning. But how can you process such varied workloads efficiently? Enter Apache Spark.
Updated to include Spark 3.0, this second edition shows data engineers and data scientists why structure and unification in Spark matters. Specifically, this book explains how to perform simple and complex data analytics and employ machine learning algorithms. Through step-by-step walk-throughs, code snippets, and notebooks, you'll be able to:
- Learn the high-level Structured APIs in Python, SQL, Scala, or Java
- Understand Spark operations and the SQL engine
- Inspect, tune, and debug Spark operations with Spark configurations and the Spark UI
- Connect to data sources: JSON, Parquet, CSV, Avro, ORC, Hive, S3, or Kafka
- Perform analytics on batch and streaming data using Structured Streaming
- Build reliable data pipelines with open source Delta Lake and Spark
- Develop machine learning pipelines with MLlib and productionize models using MLflow
Table of contents
1. Introduction to Apache Spark: A Unified Analytics Engine
2. Downloading Apache Spark and Getting Started
3. Apache Spark’s Structured APIs
4. Spark SQL and DataFrames: Introduction to Built-in Data Sources
5. Spark SQL and DataFrames: Interacting with External Data Sources
6. Spark SQL and Datasets
7. Optimizing and Tuning Spark Applications
8. Structured Streaming
9. Building Reliable Data Lakes with Apache Spark
10. Machine Learning with MLlib
11. Managing, Deploying, and Scaling Machine Learning Pipelines with Apache Spark
12. Epilogue: Apache Spark 3.0