I was recently on the partner panel for some of the Microsoft Transformation IT Roadshows, where I was honored to speak on Advanced Analytics. For those of you who attended, I mentioned I would capture some of the Q&A and post it on our blog for your convenience. For those unable to attend, I wanted to share the content with you, too! If you have any additional questions, or if you’d like to talk more about any of these topics, email our Client Services team. I look forward to connecting with you!
And, if you’re looking for more information on Advanced Analytics, be sure to attend our complimentary webinar on March 10th at 9am PT / 12pm ET. Register here.
Partner Panel Q&A
Here are some of my responses from the Q&A. I’m going to stick to my own answers, since I’m working from memory and don’t want to mischaracterize the answers of the other panelists.
Question: What does “the Modern Data Platform” mean to you?
Although many of the other panelists’ answers leaned toward hardware and software refresh schedules, I felt my experience lent itself to a different answer. Here was my response:
“I feel a bit out of place with my answer, since I think you can have a ‘modern data platform’ even with old hardware and software, although I’m sure that makes it a bit harder. To me, a ‘modern data platform’ is one where both transactional and non-transactional data play a role.
Traditionally, people have thought of important, decision-supporting data as belonging in a traditional SQL-based transactional store or data warehouse. That still plays an extremely important role, especially for data that needs to be analyzed repeatedly and quickly – your hot data. But I believe a ‘modern data platform’ also includes semi-structured and unstructured data, and that this data is used to provide business insight. Think web logs, click streams, user reviews, and the like – sometimes this is cold, rarely accessed data. All of these contribute meaningfully to understanding your business, but they don’t belong in a traditional structured database.
To be fair to my colleagues who talked about new hardware and software, I believe this is much more easily achieved with the right tools. For instance, SQL Server 2014 and 2016 allow integration of unstructured data stored in Hadoop (and many other unstructured data storage technologies) into SQL queries. SQL Server 2016 ups the game considerably with direct integration of powerful data analytics capabilities, such as running R scripts in-process. Plus, of course, all those wonderful features you saw demonstrated in the last presentation.”
Question: What recommendation would you make for someone just starting to think about data analytics?
“I have two recommendations: start storing your data now, and look at your existing dashboards with an eye toward predictive recommendations.
First, start saving your data now. There’s nothing more frustrating than coming up with a fantastic, potentially valuable idea, but then having to wait 6 months to get the data you need. Instead, store it now. All of it. Stuff you don’t think you’ll ever use or need. Unless you’re absolutely sure, store it. Storage in Azure is now less than 2 cents per gigabyte per month. Just save it there in piles. Once you have an idea, you’ll be glad you did.
Second, look at your existing executive dashboards. Chances are they show what has happened, and you’re using those to make business decisions today. Now, ask yourself: if the reports showed what will likely happen, would that be valuable? If so, how? Now you’re on your way to sketching out some ideas for getting real business value from your predictive analytics. By starting with dashboards and reports that already provide value, you’re more likely to deliver business value faster. You can leverage those successes to justify looking for deeper, potentially more valuable, insights.”
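To illustrate how small the step from “what happened” to “what will likely happen” can be, here’s a minimal sketch: take a dashboard’s historical series and project one period ahead with a simple linear trend. The monthly figures are invented, and a real forecast would obviously use something more robust than a straight line – this is just the shape of the idea.

```python
import numpy as np

# Hypothetical "what happened" series from an existing dashboard.
monthly_sales = [100.0, 110.0, 120.0, 130.0, 140.0, 150.0]
months = np.arange(len(monthly_sales))

# Fit a straight line to the history and project one month ahead --
# the "what will likely happen" number the dashboard doesn't show.
slope, intercept = np.polyfit(months, monthly_sales, deg=1)
next_month_forecast = slope * len(monthly_sales) + intercept
print(round(next_month_forecast, 1))  # → 160.0
```

Even a naive projection like this can start the conversation about which dashboards are worth making predictive.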
Question: We have all heard about Big Data and Advanced Analytics. Can you help make this real by sharing some examples of tangible solutions that you have delivered for your customers?
“Over the past several years, we’ve seen exponential growth in the amount of data our customers are generating – but not in the amount they are saving. And you don’t need a massive amount of data to do ‘Big Data’. We often find success with megabytes and gigabytes of data – not terabytes and petabytes. Let me give a couple of examples:
A business-to-business retailer recently ran the entire purchase history of every customer through a simple recommendation engine. They then created up-sell opportunities for each of their 6,000 customers, allowing their sales team to prioritize their work. Total time investment: one day. Total Azure cost: $1.89. Yes, one dollar and 89 cents. Can we make the recommendations better? Of course, but sometimes it doesn’t take rocket science – just a couple gigabytes of data and a good engine.
Another company is using Azure Machine Learning to predict the probability that leads will close successfully. Are they 100% guaranteed to close? Absolutely not, but we’re getting closer and closer to understanding which leads are substantially better and should be pursued.
There are so many good stories, and so many different opportunities to do something fast to deliver value. The key is to pick something small, and run with it. Do something. It won’t be perfect the first time through, but you’re not looking for perfection, you’re looking for improvement. Then use that proof to justify another small step.”
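For readers curious what a “simple recommendation engine” over purchase histories might look like, here’s a toy sketch using item co-occurrence. This is not the engine from the retailer story – the customers, products, and counts are all invented – but it shows how little machinery a first pass can require.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: customer -> set of products bought.
purchase_history = {
    "cust_a": {"widget", "gasket", "bolt"},
    "cust_b": {"widget", "gasket"},
    "cust_c": {"gasket", "bolt"},
}

# Count how often each ordered pair of products is bought together.
co_occurrence = Counter()
for items in purchase_history.values():
    for a, b in combinations(sorted(items), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def recommend(customer, top_n=1):
    """Suggest products the customer hasn't bought, ranked by how
    often they co-occur with products the customer has bought."""
    owned = purchase_history[customer]
    scores = Counter()
    for item in owned:
        for (x, y), count in co_occurrence.items():
            if x == item and y not in owned:
                scores[y] += count
    return [product for product, _ in scores.most_common(top_n)]

print(recommend("cust_b"))  # → ['bolt']
```

At real scale you’d swap the in-memory counters for a distributed job or a library implementation, but the business output – a ranked up-sell list per customer for the sales team – is the same.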
There were a few more questions, but these hit the key points during the launches. One last thing I’d recommend: if you’re looking to see how LEGO and data are related, check out our latest video and then attend one of our advanced analytics events – webcast or in person!