Apache Spark™ 2.0: MQTT as a Source for Structured Streaming

Apache Spark™ 2.0 introduced Structured Streaming as an alpha release, and Spark contributors have already created a library for reading data from MQTT servers that allows Spark users to process Internet-of-Things data. Specifically, the library allows developers to create a SQL stream from data received through an MQTT server using sqlContext.readStream.

The work is being done in Apache Bahir™, which "provides extensions to distributed analytic platforms such as Apache Spark". For more detail about Bahir, and to get started with MQTT Structured Streaming, check out the step-by-step post from Luciano Resende.
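As a minimal sketch of what this looks like in Scala (assuming the Bahir spark-sql-streaming-mqtt artifact is on the classpath, and using a placeholder broker URL and topic name):

```scala
// Read an MQTT topic as a streaming DataFrame.
// Requires the org.apache.bahir:spark-sql-streaming-mqtt dependency.
val lines = sqlContext.readStream
  .format("org.apache.bahir.sql.streaming.mqtt.MQTTStreamSourceProvider")
  .option("topic", "sensors/temperature") // hypothetical topic name
  .load("tcp://localhost:1883")           // placeholder broker URL
```

Each incoming MQTT message arrives as a row in the resulting streaming DataFrame, which can then be queried with the usual Spark SQL operations.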

What is MQTT?

For those who might be unfamiliar with MQTT (Message Queuing Telemetry Transport), mqtt.org defines it as "a machine-to-machine (M2M)/Internet of Things connectivity protocol. It was designed as an extremely lightweight publish/subscribe messaging transport." The protocol is standardized at OASIS as MQTT v3.1.1.

MQTT has the potential to be a good fit for Spark because of its efficiency, low networking overhead, and a message protocol governed by Quality of Service (QoS) levels that establish different delivery guarantees. Originally authored in 1999, MQTT is already in wide use and its various implementations feature easy distribution, high performance, and high availability.
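To make the QoS levels concrete, here is a small publisher sketch using the Eclipse Paho Java client from Scala; the broker URL, topic, and payload are placeholders:

```scala
import org.eclipse.paho.client.mqttv3.{MqttClient, MqttMessage}

// Connect to a (hypothetical) local broker and publish one sensor reading.
val client = new MqttClient("tcp://localhost:1883", MqttClient.generateClientId())
client.connect()

val msg = new MqttMessage("23.5".getBytes("UTF-8"))
msg.setQos(1) // QoS 1: at-least-once; 0 would be at-most-once, 2 exactly-once
client.publish("sensors/temperature", msg)

client.disconnect()
```

The chosen QoS level determines how hard the broker and client work to deliver the message, which in turn shapes the delivery guarantees a downstream consumer like Spark can rely on.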

High throughput and fault tolerance for structured streaming sources

Currently, Spark provides fault tolerance in the sense that a streaming source can reliably replay the entire sequence of incoming messages. This is true of streaming sources like Kafka, S3, HDFS, and even local filesystems. However, it may not be the case with MQTT. (And it is definitely not the case with native socket-based sources.)

For now, all of the available sources in Structured Streaming are backed by a filesystem (S3, HDFS, etc.). In these cases, all executors can request offsets from the filesystem source and process data in parallel.

However, in cases where a streaming source offers publish/subscribe as the only option for receiving the stream, supporting high throughput becomes a challenge. MQTT does not currently support reading in parallel, and so it cannot sustain high throughput.

Most message queues, including MQTT, lack built-in support for replaying the entire sequence of messages. That deficiency might not be an issue, since not all workloads require the entire sequence to be retained on the server. In such cases, there should ideally be a limit on when a source can purge stored data without compromising any "exactly once" delivery guarantees. (Note that this is an outstanding issue; see SPARK-16963.)

For our current release, we implement a minimal fault-tolerance guarantee by storing all incoming messages locally on disk. This is an intermediate solution, since local disk capacity limits how many messages the source can buffer and replay.
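A complete query over an MQTT-backed stream might look like the following sketch; setting a checkpoint location lets Spark record which offsets have been processed so the query can recover after a restart (all paths, URLs, and topic names here are placeholders):

```scala
// Build the MQTT-backed streaming DataFrame, then write it to the console,
// checkpointing progress so the query can resume where it left off.
val mqttStream = sqlContext.readStream
  .format("org.apache.bahir.sql.streaming.mqtt.MQTTStreamSourceProvider")
  .option("topic", "sensors/temperature") // hypothetical topic
  .load("tcp://localhost:1883")           // placeholder broker URL

val query = mqttStream.writeStream
  .outputMode("append")
  .format("console")
  .option("checkpointLocation", "/tmp/mqtt-checkpoint") // placeholder path
  .start()

query.awaitTermination()
```

Because the source also persists incoming messages to local disk, a restarted query can replay messages that arrived after the last committed checkpoint, within the limits described above.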

For source code and documentation, visit the related GitHub repository, or visit the Bahir website to learn about other extensions available within the project.
