SparkOscope: Enabling Apache Spark Optimization through Cross Stack Monitoring

During the last year, the team at IBM Research Ireland has been using Apache Spark to perform analytics on large volumes of sensor data. These applications need to run on a daily basis, so it was essential for the team to understand Spark resource utilization. They found it cumbersome to manually collect and inspect the CSV files of metrics generated at the Spark worker nodes.
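For context, those CSV files come from Spark's built-in CsvSink, typically enabled on each node through conf/metrics.properties; a minimal configuration (the directory path here is illustrative) looks like:

    # conf/metrics.properties: dump all metrics to CSV every 10 seconds
    *.sink.csv.class=org.apache.spark.metrics.sink.CsvSink
    *.sink.csv.period=10
    *.sink.csv.unit=seconds
    *.sink.csv.directory=/tmp/spark-metrics

Each instance (master, worker, driver, executor) then writes one CSV file per metric into that directory, which is what makes manual inspection tedious at scale.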

Although an external monitoring system like Ganglia would automate this process, it would still leave them unable to derive temporal associations between system-level metrics (e.g. CPU utilization) and job-level metrics (e.g. job or stage ID) reported by Spark. For instance, they could not trace a peak in HDFS reads or CPU usage back to the code in their Spark application that was causing the bottleneck.
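To make the missing association concrete, here is a sketch (not SparkOscope's actual implementation) of how stage boundaries can be captured with a SparkListener so that externally sampled system metrics can be sliced per stage by timestamp; StageWindowListener and samplesForStage are hypothetical names:

    import org.apache.spark.scheduler.{SparkListener, SparkListenerStageCompleted, SparkListenerStageSubmitted}
    import scala.collection.mutable

    // Records a wall-clock window per stage; register via sc.addSparkListener(...)
    class StageWindowListener extends SparkListener {
      val windows = mutable.Map.empty[Int, (Long, Long)]   // stageId -> (startMs, endMs)
      private val starts = mutable.Map.empty[Int, Long]

      override def onStageSubmitted(s: SparkListenerStageSubmitted): Unit =
        starts(s.stageInfo.stageId) = System.currentTimeMillis()

      override def onStageCompleted(s: SparkListenerStageCompleted): Unit =
        windows(s.stageInfo.stageId) =
          (starts.getOrElse(s.stageInfo.stageId, 0L), System.currentTimeMillis())
    }

    // Attribute (timestampMs, cpuPercent) samples from an OS-level monitor
    // to the stage whose execution window contains them.
    def samplesForStage(stageId: Int,
                        windows: collection.Map[Int, (Long, Long)],
                        samples: Seq[(Long, Double)]): Seq[(Long, Double)] =
      windows.get(stageId) match {
        case Some((start, end)) => samples.filter { case (t, _) => t >= start && t <= end }
        case None               => Seq.empty
      }

This per-timestamp join across the system and job layers is exactly what is hard to do by hand between Ganglia and Spark's own logs.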

To overcome these limitations, they developed SparkOscope. To take advantage of the job-level information already exposed by the Spark Web UI, and to minimize source-code pollution, SparkOscope uses that Web UI to monitor and visualize job-level metrics of a Spark application (e.g. completion time). More importantly, it extends the Web UI with a palette of system-level metrics for the server/VM/container that each of the Spark job's executors ran on. Using SparkOscope, you can navigate to any completed application and identify application-logic bottlenecks by inspecting in-depth time-series plots of all relevant system-level metrics for the Spark executors, while easily associating them with the stages, jobs, and even source-code lines incurring the bottleneck.

They have made SparkOscope available as a standalone module, and have also extended the available metrics sinks (MongoDB, MySQL).
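As a rough illustration of what such a sink extension involves (a sketch, not the actual SparkOscope source; the constructor signature Spark expects when loading sinks reflectively varies slightly across versions), a custom sink implements Spark's Sink trait and is then wired up in metrics.properties:

    package org.apache.spark.metrics.sink  // Sink is private[spark], so custom sinks live in this package

    import java.util.Properties
    import com.codahale.metrics.MetricRegistry

    // Hypothetical skeleton of a database-backed sink, e.g. a MongoDB sink.
    class CustomDbSink(property: Properties, registry: MetricRegistry) extends Sink {
      private val pollPeriodSecs = property.getProperty("period", "10").toInt

      override def start(): Unit  = { /* open the DB connection, schedule periodic report() calls */ }
      override def stop(): Unit   = { /* flush pending metrics and close the connection */ }
      override def report(): Unit = { /* walk registry.getGauges/getCounters and persist them */ }
    }

It would then be enabled like any other sink, e.g. *.sink.customdb.class=org.apache.spark.metrics.sink.CustomDbSink.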

Session hashtag: #SFdev16

Yiannis Gkoufas, Software Engineer at IBM

About Yiannis

Yiannis Gkoufas has worked as a Research Software Engineer at IBM Research and Development in Dublin since December 2012. He received his Bachelor's and Master's degrees from the Athens University of Economics and Business. He has worked mainly with Java-based technologies on the backend, and in the past few years he has been exploring Hadoop-related frameworks for large-scale batch data processing.