San Francisco
June 30 - July 2, 2014

Spark Summit 2014 brought the Apache Spark community together on June 30 - July 2, 2014 at The Westin St. Francis in San Francisco. It featured production users of Spark, Shark, Spark Streaming, and related projects.


Spark Summit 2014
Sparkling: Identification of Task Skew and Speculative Partition of Data for Spark Applications
Peilong Li (University of Massachusetts Lowell, University of Wisconsin Milwaukee)

Apache Spark has demonstrated its advantages over Hadoop’s MapReduce computation engine, in terms of both runtime performance and the broader range of computation workloads it can handle. In this work, we make two significant contributions to the Spark community: (1) we build a web tool called Sparkling that offers better execution visualization and statistical analysis; with Sparkling, we are able to identify the task skew problem in a number of biomedical multimedia analytics applications; and (2) we propose two methods to address the unbalanced task execution problem: application-aware data partitioning and greedy (speculative) task scheduling. Our methods improve the execution time of biomedical multimedia analytics applications by up to 20%.

To help users and developers understand the performance of an application running on Spark, Spark provides a web UI and several external instrumentation tools such as Ganglia. However, these tools do not provide adequate insight into why an application does not yield the expected performance. Sparkling enhances Spark application development in three ways: a) it modifies Spark’s original metrics system so that more useful analytical information can be obtained; b) it visualizes detailed activities on each task or node and summarizes the overall performance with statistics; and c) it gives developers insight into how much better a program execution could be if “data skew” and “small tasks” were avoided.
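As a rough illustration of the kind of per-task timing data such analysis relies on (not Sparkling’s actual implementation), the sketch below uses Spark’s SparkListener API to record each task’s duration and report how far the slowest task in a stage deviates from the average; the class and application names are hypothetical.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}
import scala.collection.mutable.ArrayBuffer

// Collect the run time of every completed task so that skewed
// (unusually long) tasks can be spotted once the job finishes.
class TaskSkewListener extends SparkListener {
  val durations = ArrayBuffer[(Int, Long)]()  // (stageId, task duration in ms)

  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = synchronized {
    durations += ((taskEnd.stageId, taskEnd.taskInfo.duration))
  }

  // Report, per stage, how far the slowest task deviates from the average.
  def report(): Unit = durations.groupBy(_._1).foreach { case (stage, ds) =>
    val times = ds.map(_._2)
    val avg = times.sum.toDouble / times.size
    println(f"stage $stage: max=${times.max}ms avg=$avg%.1fms skew=${times.max / avg}%.2fx")
  }
}

object SkewDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("skew-demo").setMaster("local[4]"))
    val listener = new TaskSkewListener
    sc.addSparkListener(listener)

    // A deliberately skewed workload: partition 0 does far more work than the others.
    sc.parallelize(0 until 4, 4)
      .map(p => (0 until (if (p == 0) 5000000 else 50000)).map(math.sqrt(_)).sum)
      .collect()

    listener.report()
    sc.stop()
  }
}
```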

We also propose two methods to mitigate data skew in Spark applications. Spark already provides a speculative execution mechanism to limit the effect of stragglers (slow nodes). However, cloud applications such as those for biomedical multimedia processing usually exhibit significant computational skew: the execution time of each task depends not only on input data size but also on data processing time, which makes the skew difficult to minimize. We argue that a data skew mitigation mechanism is of vital importance in Spark, since our experimental data demonstrates a potential reduction of execution time by 20% with optimized data partitioning. We also find it important for the skew mitigator to be transparent to programmers and effective at run time. Thus, we propose two ways to address the skew. First, we partition the input data into slices with decreasing workload, since the start time of the longest task often determines whether there is any idle time between stages. Second, using profiling information from an input data profiler, we partition the input based on domain knowledge so that “tough” data is grouped into smaller workload slices. Both methods yield better performance than Spark’s naive partition algorithm, as shown in our experiments.
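For illustration only, the sketch below shows one way a profile-driven, application-aware partitioner could be expressed in Spark: a custom Partitioner that routes keys flagged as expensive by a hypothetical offline profile (heavyKeys) into dedicated partitions, so heavy records do not pile up in a single task. This is an assumption-laden example, not the paper’s implementation, and it omits the decremental slicing scheme.

```scala
import org.apache.spark.{Partitioner, SparkConf, SparkContext}

// Route keys that profiling flagged as expensive into their own partitions
// so that no single task accumulates most of the heavy records.
// `heavyKeys` stands in for a hypothetical profile produced ahead of time.
class SkewAwarePartitioner(normalPartitions: Int, heavyKeys: Set[String])
    extends Partitioner {

  // One dedicated partition per heavy key, plus the normal hash partitions.
  override def numPartitions: Int = normalPartitions + heavyKeys.size

  private val heavyIndex = heavyKeys.toSeq.sorted.zipWithIndex.toMap

  override def getPartition(key: Any): Int = key match {
    case k: String if heavyIndex.contains(k) => normalPartitions + heavyIndex(k)
    case k => math.abs(k.hashCode % normalPartitions)
  }
}

object SkewAwareDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("skew-aware").setMaster("local[4]"))

    // Synthetic pair RDD where the key "tough" carries disproportionate work.
    val records = sc.parallelize(
      Seq.fill(1000)(("tough", 1)) ++ (1 to 1000).map(i => (s"k$i", 1)))

    // Repartition using the profile-aware partitioner.
    val partitioned = records.partitionBy(new SkewAwarePartitioner(4, Set("tough")))

    // Print how many records landed in each partition.
    partitioned.mapPartitionsWithIndex { (idx, it) => Iterator((idx, it.size)) }
      .collect().foreach(println)
    sc.stop()
  }
}
```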

Peilong Li received the BS degree in electrical engineering from Qingdao University of Science and Technology, China, in 2007. He is currently a Ph.D. candidate in Electrical and Computer Engineering at the University of Massachusetts Lowell, where he is a research assistant in the Computer Architecture and Network System Lab. His research interests include power-efficient cloud and mobile computer architecture, and big data analysis.

Slides PDF | Video