
0 to Life-Changing App with Apache Spark™: The difficulty in finding good data.

Difficulty in finding good data?

What? But data is growing exponentially as you read this. Why is it so difficult to find it and what constitutes good data?

In a time that Beena Ammanath, at a recent UC Berkeley DataEdge conference, deemed the age of data hoarding, and a time when data and data science programs are popping up everywhere, how could it be that data takes effort to find?

There are several reasons for this. While a data scientist's role includes lots of awesome tasks like machine learning, programming, communication, and visualization, any data scientist can tell you that the majority of their time is spent finding and cleaning data. It's not very glamorous, but it's the truth. You've been warned.

Though not glamorous, data is a data scientist's currency. It is necessary and essential. It's why we all have jobs, and it is what makes this field so interesting. So what makes it good? Well, in this project, I define good data as data that is big enough and important enough for my team and me to make an impact. You know, saving-the-world data. (See my last blog post for more information about what I'm doing.)

Specifically, what kind of delightful data am I looking for?

  1. Open data. I want this project to be open to the public. I want people to be able to follow along and learn how to build an app with SystemML while I'm learning how to build an app with SystemML.

  2. I want the data to be big enough for a machine learning problem. I want to be able to gain insight on a problem big enough to make a difference.

  3. I also want the data to be big enough to use artificial neural networks.

  4. And most important of all, I want the data to be relevant. I want to be able to find a solution to a problem that actually affects people. I'm changing lives, remember?!

Finding this data and finding the problem I want to answer is not easy, especially given my preconditions; it's also far from a step-by-step process. In fact, the entire act of building an app is nonlinear, and its components are not mutually exclusive. As a data scientist, you end up jumping back and forth: looking at and exploring the data, then reconstructing your ideas and approach. You try to figure out what insights you could draw if you moved forward with one particular data set, or what would happen if you combined two of them. More often than not, the data is not clean, or it is not the right type, or it is not what you thought you were going to be looking at, and you realize you need something else.
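To make that triage concrete, here is a minimal sketch of the kind of first-pass check you can run on a candidate dataset with PySpark. The file name and columns are hypothetical stand-ins, not one of our actual leads; the point is simply to confirm schema, size, and cleanliness before committing to a dataset.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dataset-triage").getOrCreate()

# Hypothetical candidate file; swap in whatever open dataset is under review.
df = spark.read.csv("candidate_dataset.csv", header=True, inferSchema=True)

df.printSchema()             # Did the column types come through as expected?
print("rows:", df.count())   # Big enough for machine learning?

# Missing values per column -- a quick read on how much cleaning lies ahead.
df.select([
    F.count(F.when(F.col(c).isNull(), c)).alias(c)
    for c in df.columns
]).show()
```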

That is exactly what I'm facing now. After a week and a half of non-stop brainstorming and searching for data, we have narrowed our project down to a few leads, though nothing has been finalized. Well, nothing is ever really finalized, at least until you run some confirmatory data analysis, and even then there's always more information coming in, not to mention real-time data!

So if you're following along and hoping you can sit back and watch lives being changed, don't be discouraged; just remember that this job takes patience and persistence. Soon enough we will be applying SystemML and Apache Spark™ to our dataset and doing some amazing, life-changing things! I have full faith that we will all be able to save the world soon enough, right after we find that quality data.
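If you'd like a head start while we hunt, here is a minimal sketch of how SystemML plugs into Spark through its MLContext API. The DML script here is a toy placeholder (a random matrix and a sum), not our eventual model, and the example assumes the systemml Python package is installed alongside PySpark.

```python
from pyspark.sql import SparkSession
from systemml import MLContext, dml  # assumes: pip install systemml

spark = SparkSession.builder.appName("systemml-hello").getOrCreate()
ml = MLContext(spark.sparkContext)

# Toy DML script: build a random 1000x100 matrix and print the sum of its
# entries. The real project will substitute an actual algorithm and dataset.
script = dml("X = rand(rows=1000, cols=100); print(sum(X))")
ml.execute(script)
```

Once we land on real data, the same MLContext can pass Spark DataFrames in and out of DML scripts, so a few lines like these are roughly where the life-changing part will begin.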
