
Announcing Apache Toree

I’m pleased to announce that in late 2015, the Spark Kernel project was accepted into the Apache Incubator. As part of this transition, the project was renamed Apache Toree. As an incubating project, Toree will continue to focus on enabling interactive workloads between applications and a Spark cluster.

But, what are interactive workloads and what are these applications?

I find it easier to explain the concept of interactive applications by explaining what they are not. Let’s take Hadoop Map/Reduce as an example. Applications and end users of Hadoop Map/Reduce typically run jobs by

  1. taking some code,
  2. packaging it up,
  3. submitting the package to a JobServer,
  4. monitoring the state of the job,
  5. waiting for the job to complete, and then
  6. reading the results from wherever the job stored them.
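The lifecycle above can be sketched in a few lines of Python. This is purely illustrative: `JobServer`, `submit`, `status`, and `read_results` are hypothetical names standing in for whatever job-submission API a real cluster exposes, not an actual library.

```python
import time


def run_batch_job(client, package_path):
    """Run one batch job end to end, following the steps above."""
    job_id = client.submit(package_path)        # submit the package to a JobServer
    while client.status(job_id) != "COMPLETE":  # monitor the state of the job
        time.sleep(0.1)                         # wait for the job to complete
    return client.read_results(job_id)          # read the results from storage


class FakeJobServer:
    """Stand-in client so the sketch runs without a real cluster (hypothetical API)."""

    def submit(self, package_path):
        self._polls = 0
        return "job-1"

    def status(self, job_id):
        self._polls += 1
        return "COMPLETE" if self._polls >= 2 else "RUNNING"

    def read_results(self, job_id):
        return ["result-row"]


print(run_batch_job(FakeJobServer(), "analysis.jar"))  # → ['result-row']
```

Note that every step is a round trip through packaging, a job queue, and persistent storage, which is exactly the friction discussed next.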

This style of workload is referred to as a Batch job. In many cases, a sequence of Batch jobs is connected by reading from and writing to a persistent store, such as HDFS. This is a very common and valid workload for many use cases, and it is one that is also often applied to Spark.

When it comes to applications that enable data exploration and ad-hoc analytics, Batch workloads introduce a lot of friction. For example, packaging and submitting jobs, as well as reading from and writing to storage, introduce unwanted latency. Spark alleviates some of this friction by holding data in memory and by using a SparkContext to tie jobs together. What is missing from Spark is a way for applications to interact with a long-lived SparkContext. This is where Toree comes into play.

Toree provides a communication channel between an application and a SparkContext that allows access to the entire Spark API in a variety of languages, including Scala, Python, R and soon JavaScript. Through this channel, the application interacts with Spark by exchanging code and data.
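Concretely, Toree speaks the Jupyter messaging protocol: an application sends code to the kernel as an `execute_request` message and receives results back as reply and output messages. A rough sketch of what such a message looks like, with illustrative field values (the message shape follows the Jupyter protocol; the code snippet and session details are made up):

```python
import uuid
from datetime import datetime, timezone


def execute_request(code):
    """Build a Jupyter-protocol execute_request carrying a snippet of Spark code."""
    return {
        "header": {
            "msg_id": str(uuid.uuid4()),
            "msg_type": "execute_request",
            "session": str(uuid.uuid4()),
            "username": "demo",                      # illustrative value
            "date": datetime.now(timezone.utc).isoformat(),
            "version": "5.0",
        },
        "parent_header": {},
        "metadata": {},
        "content": {
            "code": code,          # e.g. Scala run against the shared SparkContext
            "silent": False,
            "store_history": True,
        },
    }


msg = execute_request("sc.parallelize(1 to 10).sum()")
print(msg["header"]["msg_type"])  # → execute_request
```

Because the channel carries plain code and data rather than packaged jobs, each exchange reuses the same long-lived SparkContext instead of paying the Batch round trip.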

The Jupyter Notebook is a great example of an application that, with the help of Toree, can provide the full power of Spark to its end user. Toree can be installed out of the box into Jupyter to enable support for Spark.

I look forward to growing the Toree community, both users of and contributors to the project. I’m very interested in seeing what new types of applications you build with Toree. Visit the site and join the community.

Spark Technology Center
