
DENG-256: Optimizing Apache Spark Applications
This hands-on training course delivers the key concepts and expertise developers need to improve the performance of their Apache Spark applications. During the course, participants will learn how to identify common sources of poor performance in Spark applications, techniques for avoiding or solving them, and best practices for monitoring Spark applications. Apache Spark Application Performance Tuning presents the architecture and concepts behind Apache Spark and the underlying data platform, then builds on this foundational understanding by teaching students how to tune Spark application code. The course format emphasizes instructor-led demonstrations that illustrate both performance issues and the techniques that address them, followed by hands-on exercises that give students an opportunity to practice what they've learned in an interactive notebook environment. The course applies to Spark 2.4, but also introduces the Spark 3.0 Adaptive Query Execution framework.
Students who successfully complete this course will be able to:

- Understand Apache Spark's architecture, job execution, and how techniques such as lazy execution and pipelining can improve runtime performance
- Evaluate the performance characteristics of core data structures such as RDDs and DataFrames
- Select the file formats that will provide the best performance for your application
- Identify and resolve performance problems caused by data skew
- Use partitioning, bucketing, and join optimizations to improve Spark SQL performance
- Understand the performance overhead of Python-based RDDs, DataFrames, and user-defined functions
- Take advantage of caching for better application performance
- Understand how the Catalyst and Tungsten optimizers work
- Understand how Workload XM can help troubleshoot and proactively monitor Spark application performance
- Learn about the new features in Spark 3.0, and specifically how the Adaptive Query Execution engine improves performance
Spark Architecture
- RDDs
- DataFrames and Datasets
- Lazy Evaluation
- Pipelining

Data Sources and Formats
- Available Formats Overview
- Impact on Performance
- The Small Files Problem

Inferring Schemas
- The Cost of Inference
- Mitigating Tactics

Dealing With Skewed Data
- Recognizing Skew
- Mitigating Tactics

Catalyst and Tungsten Overview
- Catalyst Overview
- Tungsten Overview

Mitigating Spark Shuffles
- Denormalization
- Broadcast Joins
- Map-Side Operations
- Sort Merge Joins

Partitioned and Bucketed Tables
- Partitioned Tables
- Bucketed Tables
- Impact on Performance

Improving Join Performance
- Skewed Joins
- Bucketed Joins
- Incremental Joins

PySpark Overhead and UDFs
- PySpark Overhead
- Scalar UDFs
- Vector UDFs Using Apache Arrow
- Scala UDFs

Caching Data for Reuse
- Caching Options
- Impact on Performance
- Caching Pitfalls

Workload XM (WXM) Introduction
- WXM Overview
- WXM for Spark Developers

What's New in Spark 3.0?
- Adaptive Number of Shuffle Partitions
- Skew Joins
- Convert Sort Merge Joins to Broadcast Joins
- Dynamic Partition Pruning
- Dynamic Coalesce Shuffle Partitions
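The Spark 3.0 behaviors listed in the final module are driven by configuration. The following fragment sketches the relevant properties as they could appear in spark-defaults.conf; the property names follow the Spark 3.0 configuration documentation, and the values are illustrative rather than course-prescribed defaults:

```
# Turn on Adaptive Query Execution (Spark 3.0+)
spark.sql.adaptive.enabled                           true
# Dynamically coalesce small shuffle partitions after each stage
spark.sql.adaptive.coalescePartitions.enabled        true
# Detect and split skewed partitions in sort-merge joins
spark.sql.adaptive.skewJoin.enabled                  true
# Prune partitions at runtime using filters from the other join side
spark.sql.optimizer.dynamicPartitionPruning.enabled  true
```

With AQE enabled, Spark can also convert a sort-merge join to a broadcast join at runtime when a shuffle stage's actual output turns out to be small enough.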
This course is designed for software developers, engineers, and data scientists who have experience developing Spark applications and want to learn how to improve the performance of their code. This is not an introduction to Spark. Spark examples and hands-on exercises are presented in Python, and the ability to program in this language is required. Basic familiarity with the Linux command line is assumed, and basic knowledge of SQL is helpful.



