Four Big Themes from #GartnerBI


I spent this week at Gartner’s Business Intelligence & Analytics Summit. It was the first time I had gone to a Gartner event, let alone spoken at one, and I found the couple of days I was there to be both interesting and fun. For those of you who were not there, four recurring themes worked their way into many of the talks and analyses:

Corporate IT needs to be more responsive to—if not outright enlist—business users to make systems more supportive of business operations.

There was a shared sense that business units, or business analysts, were exerting greater sway over corporate IT systems and technologies. This is a topic that I have written about previously (“The Inexplicable Fragility of Analytic Systems”) and, coincidentally, was one of the themes that I discussed in my talk at Gartner: analysts want a degree of flexibility that, if not deftly handled, can cause real concern among the professionals charged with protecting data integrity.

More, and more sophisticated, algorithms will play a central role in our professional lives.

Perhaps the most fascinating story I heard on this front came from Gartner analyst Doug Laney, who talked about how one chain was using sensors in its drive-through to determine which items to feature on the menu: long lines meant featuring items with short preparation times, while short lines meant featuring items with higher profit margins but potentially longer cooking times. I have previously spent a lot of time thinking about algorithms, and IT systems in general, as “cognitive aids.” While I still think about them this way, it’s clear that we all need to think about which functions we simply offload to algorithms and what sort of alerting mechanisms we, as analysts, will want and need.
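
To make that decision rule concrete, here is a minimal sketch of the kind of logic Laney described; the talk did not cover implementation details, so the menu items, fields, and threshold below are all invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class MenuItem:
        name: str
        prep_seconds: int  # how long the kitchen needs to make it
        margin: float      # profit per unit sold

    # Illustrative menu; none of these values come from the talk.
    MENU = [
        MenuItem("burger", prep_seconds=180, margin=1.10),
        MenuItem("fries", prep_seconds=90, margin=0.80),
        MenuItem("premium_salad", prep_seconds=240, margin=2.50),
        MenuItem("soft_drink", prep_seconds=20, margin=1.60),
    ]

    LONG_LINE = 6  # assumed threshold: cars detected by the drive-through sensors

    def featured_items(cars_in_line, top_n=2):
        """Long lines favor fast items to keep the queue moving;
        short lines favor higher-margin items, even if slower to cook."""
        if cars_in_line >= LONG_LINE:
            ranked = sorted(MENU, key=lambda m: m.prep_seconds)
        else:
            ranked = sorted(MENU, key=lambda m: m.margin, reverse=True)
        return [m.name for m in ranked[:top_n]]

    print(featured_items(8))  # ['soft_drink', 'fries']
    print(featured_items(2))  # ['premium_salad', 'soft_drink']

Trivial as it looks, this is the sort of rule that quietly gets offloaded to a system, which is exactly why the alerting question matters: who notices when the rule stops matching reality?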

If I thought anything was missing from this conversation, it was how these algorithms might converge with hardware to create what I used to call “wildly different user experiences.” Given the ubiquity of virtual personal assistants, and the looming arrival of virtual and augmented reality headsets on consumer markets, I kept wondering what life might look like if I, as an analyst, could walk through the data or rely on auditory cues.

“Data scientists” are likely to give way to a more diverse array of analysts dubbed “citizen data scientists.”

I am not a huge fan of trying to push coined words and phrases into the common vernacular; I still wince when I hear SME (“subject matter expert”). That said, I agree that far more people than those we currently call data scientists are likely to be capable of extracting interesting, important, and arguably valuable insights from large volumes of disparate data, and that we need to think about the technologies, interfaces, and user experiences that will better support this more diverse population.

What I struggled with was the notion of “versatilists,” and not just because it is another coined word. There seemed to be some cognitive dissonance at play: on the one hand, the variety of people gleaning insight from big data is likely to grow; on the other hand, the skillset required of analysts is also likely to grow. I am not sure that works. I tend to think that teams of diverse experts are a more effective solution, largely because the number of unicorns out there with the “right” combination of skills is likely to be too small to be viable for the foreseeable future . . . and I tend to think that algorithms, interfaces, and user experiences should converge in ways that, if anything, shift the desired skillset for analysts in different directions.

Opportunities to monetize data, or build new services on data, continue to grow.

This is, admittedly, the topic that I was least familiar with going into the Summit, and I have only a moderately better understanding of it after the Summit. Instinctively, I get that we live in a sensor-rich world and that the value of the data derived from those sensors (even if the “sensor” is a point-of-sale scanner or the device that captures loyalty card data) continues not only to grow but also to give birth to new products and services (e.g., Waze). Those are just examples in the retail space: data that can be monetized can also come from many other sources (e.g., mobile apps, cookies, etc.). As an analyst, though, I think monetization involves at least one of three preconditions:

  1. Uniqueness. Like any commodity, uniqueness of data seems to be a critical facet of any discussion about monetization . . . and, based on conversations with my colleagues here, it seems as though one can often sketch out several different analytic strategies, using different data and methodologies, to achieve similar insights. That said, there might be some cases where a specific type of unique data allows for a more elegant solution or higher-resolution insights.
  2. Optimization for Analysis. The last piece I wrote on this blog (“How to (Maybe) Get Data Science for Free”) looked at the cost of data clean-up, including structuring that data to make it more analytically friendly (effectively lowering the cost of curiosity): optimizing data for analysis has value, particularly in light of all the open government and open data initiatives. Optimized data allows analysts to more easily construct complex datasets, which I think are not only more interesting, from an analytic point of view, but more useful. This optimization typically involves teasing the data apart into more columns and making it easier to join datasets together (a toy sketch of this appears after the list).
  3. Controlled Distribution. Gathering data is one thing; packaging it, another. My colleagues at 1010data identified a third differentiator: the controls and distribution mechanisms necessary to sell data. This is often part and parcel of uniqueness. Selling data requires a team to market the data, a platform to share it, and governance mechanisms to control data access, usage, and lineage.
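
On precondition 2, here is a toy pandas sketch of what “teasing the data apart into more columns” buys you; the feed format, column names, and values are all invented for illustration:

    import pandas as pd

    # Raw feed: one overloaded text column crams store, date, and SKU together.
    raw = pd.DataFrame({
        "txn": ["store042_2016-03-14_SKU123", "store007_2016-03-14_SKU456"],
        "amount": [19.99, 4.50],
    })

    # Tease the compound column apart into separate, typed columns.
    parts = raw["txn"].str.split("_", expand=True)
    parts.columns = ["store_id", "txn_date", "sku"]
    sales = pd.concat([parts, raw[["amount"]]], axis=1)
    sales["txn_date"] = pd.to_datetime(sales["txn_date"])

    # With a clean "sku" column, joining reference data becomes a one-liner.
    products = pd.DataFrame({
        "sku": ["SKU123", "SKU456"],
        "category": ["grocery", "snacks"],
    })
    enriched = sales.merge(products, on="sku", how="left")
    print(enriched)

The same idea scales up: the more a raw feed is decomposed into well-typed, joinable columns, the lower the marginal cost of each new question an analyst wants to ask.
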
In short, the “big data” space remains very active and is likely to continue evolving across data, analytic platforms, analytic methodologies (especially machine learning algorithms), and, in my opinion, user interfaces and experiences. If you would like to see my stream-of-consciousness thinking during the event, check out my personal Twitter account (@StratGleeson); for the blow-by-blow of my talk (until the video is posted), check out @1010data.