ElasticSearch App on Apache Spark Released to Open Source

Several months ago, designers and Apache Spark coders teamed up to create an app to demonstrate the power of Spark when integrated with other technologies, in particular ElasticSearch.

They called the project RedRock, and the goal was to put the power of big data analysis in the hands of everyday users. By providing just a word or hashtag, those users can do deep searches of Twitter history: RedRock sifts through billions of tweets and returns an analysis of the matches. Users can then tune the results by including or excluding additional terms.

The team has now open-sourced the complete code for the application’s back end: https://github.com/SparkTC/redrock

The team paired the RedRock back end with an iPad front end that communicates with it via REST calls. The iPad app is not currently open source, but the back end includes a REST API that users can hook up to any front end they like.
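As a sketch of what talking to such a REST API might look like, here is a minimal helper that builds a search URL with include/exclude filters. The host, port, route, and parameter names below are illustrative assumptions, not the actual RedRock API; consult the repository's documentation for the real routes.

```python
from urllib.parse import urlencode

# Hypothetical base URL -- the real host, port, and route are defined
# in the open-sourced RedRock back end, not here.
BASE_URL = "http://localhost:16666/search"

def build_search_url(term, include=(), exclude=()):
    """Build a RedRock-style search URL for a term, with optional
    include/exclude filters (parameter names are illustrative)."""
    params = {"searchTerm": term}
    if include:
        params["includeTerms"] = ",".join(include)
    if exclude:
        params["excludeTerms"] = ",".join(exclude)
    return BASE_URL + "?" + urlencode(params)

# A front end would issue an HTTP GET against a URL like this one.
url = build_search_url("ibm", include=("spark",), exclude=("stock",))
```

Any front end, mobile or web, only needs to be able to issue HTTP requests like this and render the JSON that comes back.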

To get the gears turning, the team provided examples of how data returned from RedRock can be parsed and displayed. Let’s say a user searches for “ibm”. By selecting different icons in the bottom menu bar, the user gets a set of sophisticated views of the data:

Cluster: Shows relationships among the 20 terms most closely related to your search. Each colored group represents a different topic, and the size of each circle represents the volume of occurrences.

[Image: Cluster view]

Twitter Feed: The top 100 tweets related to your search, sorted by the most influential users (measured by follower count).

Found Tweets: Total tweets matching the search.

Found Users: Total unique users within tweets that matched your search.

Total Tweets: Total tweets in the database.

[Image: Network Graph view]

Network Graph: The 20 terms most closely related to the search term. The size of each circle indicates its frequency, and the length of each line indicates how closely that term relates to the original search term.

[Image: Sentiment Bar Chart view]

Sentiment Bar Chart: Positive and negative sentiment related to the search term over time.

[Image: Profession Treemap view]

Profession Treemap: The careers of users tweeting about the given topic.

[Image: World Map view]

World Map: Shows the volume of tweets from each country over time.
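To make the parsing side of this concrete, here is a small sketch of how a front end might pull the data behind a Cluster view out of a JSON response. The response shape and field names below are invented for illustration; the actual schema is defined by the open-sourced back end.

```python
import json

# Illustrative response in the spirit of the views above -- the real
# RedRock schema lives in the back end, not in this sketch.
sample_response = json.loads("""
{
  "totaltweets": 1200000,
  "totalfilteredtweets": 4521,
  "totalusers": 3876,
  "cluster": [
    {"term": "watson",    "topic": 0, "count": 812},
    {"term": "cloud",     "topic": 1, "count": 640},
    {"term": "analytics", "topic": 0, "count": 455}
  ]
}
""")

def top_cluster_terms(response, n=20):
    """Return (term, count) pairs for the most frequent related terms --
    the numbers a Cluster view would use to size its circles."""
    terms = sorted(response["cluster"], key=lambda t: t["count"], reverse=True)
    return [(t["term"], t["count"]) for t in terms[:n]]

print(top_cluster_terms(sample_response, n=2))
# → [('watson', 812), ('cloud', 640)]
```

The same response could feed the summary counters (Found Tweets, Found Users, Total Tweets) directly from its top-level fields.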

From the beginning, the team set up the RedRock project as if they were using Spark from a notebook. This meant that every time a user searched for a term, a Spark job would run. That model worked fine for one user, but to make the application scale, the team decided to use Spark Streaming to process and annotate the tweets as they arrived. This meant that Spark would extract the information it needed from each tweet, do some analysis, and then save the annotated tweet in ElasticSearch. (Managing text searches is one of ElasticSearch’s core strengths.)
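The annotate-then-index idea can be sketched in plain Python (the real pipeline runs as Scala/Spark Streaming code and uses proper NLP; the toy lexicon and field names here are assumptions for illustration only):

```python
import re

# Tiny illustrative sentiment lexicon -- stands in for the real
# analysis the Spark Streaming job performs on each tweet.
POSITIVE = {"great", "love", "awesome"}
NEGATIVE = {"bad", "hate", "awful"}

def annotate(tweet_text):
    """Extract the fields the analysis needs from one tweet, producing
    a document ready to be indexed into ElasticSearch."""
    words = re.findall(r"[#@]?\w+", tweet_text.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return {
        "text": tweet_text,
        "hashtags": [w for w in words if w.startswith("#")],
        "sentiment": "positive" if score > 0
                     else "negative" if score < 0
                     else "neutral",
    }

doc = annotate("Love the new #spark demo from #ibm")
```

Because each tweet is annotated once, at arrival time, a user's search becomes a fast ElasticSearch query over pre-computed fields rather than a fresh Spark job.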

Combining the two technologies meant better analysis and faster response times than the team could have gotten with either technology individually.

Originally assembled quickly as a demo, RedRock benefited greatly from Spark — in particular its easy setup, support for algorithm development, and ability to scale.

The team now shares it with the world in the hope that using Spark will help you tackle the big data that is standing between you and world domination.

Let the team know what you think.

