Philly ETE 2017 #40 – Scaling with Apache Spark (or a lesson in unintended consequences) – H. Karau

Apache Spark has become one of the most popular general-purpose distributed systems of the past few years. It has APIs in Scala, Java, and Python, and more recently a few different attempts to provide support for R, C#, and Julia. This talk looks at Apache Spark from a performance/scaling point of view and at the work we need to do to handle large datasets. In essence, parts of this talk could be considered “the impact of design decisions made years ago and how to work around them.” It’s not all doom and gloom though: we will explore the new APIs and the exciting new things we can do with them, with a brief detour into how to work around some of their trade-offs – but mostly we will focus on the new, exciting, shiny things we can play with. A basic background with Apache Spark will probably make the talk more exciting, or more depressing depending on your point of view, but for those new to Apache Spark just enough to understand what’s going on will be covered at the start. The presenter would of course encourage you to buy and read her books on the topic (“Learning Spark” & “High Performance Spark”), because which presenter doesn’t do that?
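
To give a flavor of the “new APIs” the abstract alludes to, here is a minimal sketch (not code from the talk) contrasting the classic RDD API with the DataFrame/Dataset API introduced around Spark 2.x. The file names "people.txt" and "people.json" and the Person schema are illustrative assumptions, not part of the original abstract.

```scala
import org.apache.spark.sql.SparkSession

object NewApiSketch {
  // Case class used to give the Dataset a typed schema (assumed schema for illustration).
  case class Person(name: String, age: Long)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("new-api-sketch")
      .getOrCreate()
    import spark.implicits._

    // Classic RDD style: opaque lambdas the engine cannot inspect or optimize.
    val rddCount = spark.sparkContext
      .textFile("people.txt")
      .filter(_.contains("Holden"))
      .count()

    // DataFrame/Dataset style: declarative operations the Catalyst optimizer
    // can see, reorder, and push down, which usually matters on large inputs.
    val people = spark.read.json("people.json").as[Person]
    val adults = people.filter($"age" >= 21).count()

    println(s"rdd count: $rddCount, dataset count: $adults")
    spark.stop()
  }
}
```

The rough idea, and part of what the talk covers, is that the structured APIs give the engine visibility into the computation, at the cost of some of the flexibility (and pitfalls) of raw RDD lambdas.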