
Monday, June 13, 2016

Don't Always Believe Your Data

As contradictory as that statement might seem, not trusting your data might be the most prudent thing you do. In this age of data-driven development, data has become a precious commodity. Unlocking the valuable information that businesses hold about customer interactions has become crucial to business success, and so investment in better ways to store and extract data has grown more over the last few years than it had over the previous two decades.

The options are endless now, with experts recommending vast arrays of storage and analysis strategies. But it takes more than the sum of all knowledge about these alternatives to make you a Data Scientist. The reason we store data in the first place is to provide information that removes ambiguity from our decision-making process. That requires a broader outlook, investigating industry trends and changes in technology, on top of expertise in how well we persist and describe our business data.

There was a time when relational data stores ruled the earth, and with them came the ability to analyse large amounts of data using OLAP cubes. But as the need grew for a more efficient Extract Transform Load (ETL) process, the limitations became obvious. Analysing only records that had been aggregated overnight, because there was "too much data" to extract, was no longer a legitimate excuse. Businesses want it now, and they want a lot of it.

With that, storage capabilities evolved to scale horizontally, with the map-reduce patterns of document stores like MongoDB, key-value stores like Riak, graph stores like Neo4j, and columnar stores like Cassandra. Add into the mix big data storage such as Hadoop, and event streams for real-time processing like Kafka and Amazon Kinesis. Although competition is always a welcome dynamic in any industry, with every contender vying for feature capabilities over its competitors, the choice becomes difficult.

Storage is just one part of the puzzle, however. Next comes analysis and forecasting. Once a business makes sense of the data it holds using a myriad of analysis techniques, predicting change comes packaged for us with machine learning. Apache Spark jumped on this bandwagon early commercially with its MLlib offering, and Google's TensorFlow has also been a popular choice. There is a degree of expertise required in handling these tools, including understanding the type of learning you want your machine to perform: supervised learning using labelled datasets (for fraud detection and recommendations), unsupervised and semi-supervised learning with unlabelled datasets (for image and voice recognition), or reinforcement learning (for artificial intelligence).
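
To make the supervised case concrete, here is a minimal sketch using Spark's DataFrame-based ML API (the successor to the original MLlib RDD API). The transaction features and labels are invented purely for illustration, not taken from any real fraud model.

```python
from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("fraud-example").getOrCreate()

# Toy labelled dataset: label 1.0 = fraudulent, 0.0 = legitimate.
# Features are simply [transaction amount, transactions in the last hour].
training = spark.createDataFrame([
    (0.0, Vectors.dense([25.0, 1.0])),
    (0.0, Vectors.dense([80.0, 2.0])),
    (1.0, Vectors.dense([4200.0, 15.0])),
    (1.0, Vectors.dense([3900.0, 22.0])),
], ["label", "features"])

# Fit a logistic regression classifier on the labelled examples.
model = LogisticRegression(maxIter=10, regParam=0.01).fit(training)

# Score transactions with the trained model.
model.transform(training).select("label", "prediction", "probability").show()
```

The same pipeline shape applies whether the labels mark fraud, likely recommendations, or any other outcome the business already knows about; the hard part is having trustworthy labels in the first place.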

There is more to it than knowing your alternatives, though. Your data can only tell you as much as your business keeps about itself, and so Data Science is not just about how to unlock your data with the available tools; it's about causation. It's about why.

Identifying peaks or troughs in a graph only guarantees that your tools can surface patterns or changes, not the reasons why those patterns and changes exist. And thus starts the real investigation.

This was a lesson that Andrea Burbank from Pinterest learnt, as she explained at the YOW! conference held in Sydney in late 2016, where she was kind enough to share her challenges in trying to formulate behavioural trends from Pinterest's massive data store. The results of her findings were very interesting.

By measuring "daily active users", Pinterest was able to determine which unique customers engaged with its site. Using Kafka, it streamed logs of response data that told each user's story, but that was just basic counting.
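
As a rough illustration of that basic counting, the sketch below tallies unique users per day from response-log records. The field names (user_id, date, endpoint) and the home-feed path are assumptions made for the example; in production the records would be consumed from a Kafka topic rather than held in a Python list.

```python
from collections import defaultdict

def daily_active_users(records):
    """Count unique users seen per day -- the 'basic counting' described above."""
    users_by_day = defaultdict(set)
    for record in records:
        users_by_day[record["date"]].add(record["user_id"])
    return {day: len(users) for day, users in users_by_day.items()}

# Hypothetical response-log records.
logs = [
    {"user_id": "u1", "date": "2013-10-20", "endpoint": "/v3/home_feed"},
    {"user_id": "u1", "date": "2013-10-20", "endpoint": "/v3/search"},
    {"user_id": "u2", "date": "2013-10-21", "endpoint": "/v3/home_feed"},
]
print(daily_active_users(logs))  # {'2013-10-20': 1, '2013-10-21': 1}
```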

In late October of 2013, they found a sudden step change in growth rate, specifically among iPhone users, which seemed likely to be the result of Pinterest being featured in the App Store. What they found, though, was that the timing was merely a coincidence: not long before, iOS 7 had introduced a feature called "Background App Refresh", which lets apps fetch content in the background rather than hang in suspended animation.

Burbank realised that the statistics they generated weren't telling the full story, and after digging further she found that one of their endpoints, in particular the home feed endpoint, was receiving unnatural background hits even when the app wasn't appearing in the multitasking tray. Those hits said nothing about genuine daily active users.
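
A sketch of the kind of filter that separates those background hits from genuine activity might look like the following; the foreground flag and the home-feed endpoint path are hypothetical names, standing in for whatever signal the real logs carried.

```python
from collections import defaultdict

def genuine_daily_active_users(records):
    """Count unique users per day, ignoring background home-feed refreshes."""
    users_by_day = defaultdict(set)
    for record in records:
        # Skip hits produced by Background App Refresh: home-feed requests
        # made while the app was not in the foreground.
        if record["endpoint"] == "/v3/home_feed" and not record.get("foreground", True):
            continue
        users_by_day[record["date"]].add(record["user_id"])
    return {day: len(users) for day, users in users_by_day.items()}

# Hypothetical records: u1's hit is a background refresh, u2's is a real visit.
logs = [
    {"user_id": "u1", "date": "2013-10-20", "endpoint": "/v3/home_feed", "foreground": False},
    {"user_id": "u2", "date": "2013-10-20", "endpoint": "/v3/search", "foreground": True},
]
print(genuine_daily_active_users(logs))  # {'2013-10-20': 1} -- the background hit is ignored
```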

This led to what Burbank called the "Spectrum of Certainty", where correlation sits at one end of the spectrum and causation at the other. Most of the time Data Scientists believe they are closer to the causation end of the spectrum, when in actual fact they sit very close to the correlation side.

The goal is to ensure data analysis moves towards causal inference, which in effect requires far more investigation of events that occur outside the business domain yet strongly affect how data is represented within the business itself. Only then can you truly believe your data, at least to a certain degree.

Reference: Andrea Burbank (Pinterest), "Data science as software", YOW! 2016 Conference, Sydney.
