Can You Feel The Force?

Last week, I was fortunate enough to have a “customer success story” published by our friends at Apigee. It’s not something our organization tends to do very often, but in cases like this one, where we feel there is enough industry-relevant and genuinely interesting content to stir other people’s imaginations, we give the opportunity a great deal of consideration before agreeing to “go public” with what we’re up to as our journey to, above, and beyond the cloud continues – all as a direct result of the insatiable demand from our business.

I received some great feedback on the content. Some people are keen to understand why a massive organization would once again turn to the consumer internet leaders (yes, that includes you, Netflix) for inspiration on how to attack specific business challenges differently than before (the why), while others are more interested in the technology applied and how we approached the end-to-end design and delivery of APIs and mobile applications (the how).

It was, however, a question I received somewhat anonymously that gave me cause to consider what we are working on and whether it can be “pigeonholed” within the current nomenclature of cloud.

The question was pretty simple and seemingly innocuous.

Do you consider what you have done with your API strategy to be “Data Virtualization”?

My initial reaction was “Eh? Umm, not really” – but, like all good geeks, I turned to Wikipedia to help me wrap my mind around how data virtualization is currently classified:

Data virtualization describes the process of abstracting disparate systems (databases, applications, file repositories, websites, data services vendors, etc.) through a single data access layer (which may be any of several data access mechanisms).
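To make that definition a little more concrete, here is a minimal sketch of what such a single data access layer might look like – the source systems, class names and fields are entirely hypothetical, chosen purely to illustrate the abstraction rather than to describe anything we have actually built:

# Minimal sketch of a single data access layer (a "data virtualization" facade).
# All source names and fields below are hypothetical, for illustration only.

class CrmSource:
    """Stands in for a relational CRM database."""
    def get_customer(self, customer_id):
        return {"id": customer_id, "name": "Acme Corp"}

class ErpSource:
    """Stands in for an ERP application that owns order history."""
    def get_open_orders(self, customer_id):
        return [{"order_id": "SO-1001", "status": "shipped"}]

class DataAccessLayer:
    """Single access point that hides which back-end system the data lives in."""
    def __init__(self):
        self._crm = CrmSource()
        self._erp = ErpSource()

    def customer_summary(self, customer_id):
        # Aggregate across silos and return one consumer-friendly view.
        customer = self._crm.get_customer(customer_id)
        customer["open_orders"] = self._erp.get_open_orders(customer_id)
        return customer

if __name__ == "__main__":
    print(DataAccessLayer().customer_summary("C-42"))

The consumer of that layer never needs to know, or care, which silo each field came from – which, as far as I can tell, is all the definition above is really saying.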

The more I thought about it (and specifically with Dave McCrory’s recent concept of Data Gravity still rattling around my intracranial space), the more I wondered whether this approach is a new form of data virtualization – employing the API to deliver the functionality the mobile UX requires by abstracting both subsets of functionality and their corresponding, sometimes aggregated, data from the typical silo of their complex “parent” application. Hmm.

To help understand our overall approach a little better, I offer the following extract from the success story mentioned above:

In our traditional application deployments, we often assumed that each user might need to understand 100% of how any given application worked and trained them on all functionality, irrespective of their role. Across our overall application portfolio, it became clear that we have two types of “users” – those who create data and those who consume information. In some cases, the percentage distribution of those who consume versus those who create can be as high as 80/20.

Using this new understanding of how our users used their applications, we began to consider how we might fulfill the growing demand by “deconstructing” the incumbent applications and delivering subsets of the application’s functionality across smartphone and tablet devices. This approach included the creation of standardized APIs to provide an alternative access method to the back-end systems and the development of new, intuitive user interfaces (or lightweight applications) that could be delivered, monitored, managed and analyzed via a combination of our Apigee infrastructure and private enterprise app store.

The key to our strategy is that we have very deliberately, and with crystal-clear intent, not set out to replicate the entire functionality of the incumbent applications on smartphone / tablet devices. That, I believe, would be a recipe for disaster. We are (initially) concentrating on the information consumers – the people who demand information as and when they need it, as and when it suits them – and the APIs and the functionality of the mobile UI reflect this: lightweight, agile, responsive and able to aggregate disparate data sources where necessary. Nobody here has the time, nor the inclination, to boil any one of the world’s five oceans – we’re much too schedule-driven for that.
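To illustrate the shape of what I mean (and only the shape – the framework, route and back-end calls below are hypothetical stand-ins, not our production design), a lightweight, read-oriented API for those information consumers might look something like this:

# A minimal sketch of a lightweight, read-only API for "information consumers".
# Flask is used purely for illustration; the route, back-end calls and payload
# are hypothetical, not a description of our actual implementation.
from flask import Flask, jsonify

app = Flask(__name__)

def fetch_status_from_erp(project_id):
    """Placeholder for a call into the monolithic system of record."""
    return {"project": project_id, "status": "on track"}

def fetch_milestones_from_pm_tool(project_id):
    """Placeholder for a second, separately siloed data source."""
    return [{"name": "Design review", "due": "2012-03-01"}]

@app.route("/projects/<project_id>/summary")
def project_summary(project_id):
    # Expose only the small, read-oriented subset the mobile UI needs,
    # aggregating across sources rather than replicating either application.
    summary = fetch_status_from_erp(project_id)
    summary["milestones"] = fetch_milestones_from_pm_tool(project_id)
    return jsonify(summary)

if __name__ == "__main__":
    app.run()

The point is the shape of the thing: a thin, read-oriented service that aggregates what the consumer needs, rather than a pocket-sized re-implementation of the parent application.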

While I am largely unconvinced that anything we are doing is beyond the grasp of other enterprises looking toward mobility as a major strategy for 2012 and beyond, I am utterly convinced that if we had not started our cloud efforts back in 2007 (hey, look – another story here) and consolidated-then-cloudified what we had in our global estate, we would have found it almost impossible to layer an efficient API strategy on top. Call it dumb luck, I guess.

Interestingly, the real benefit of the API strategy for us has been the ability to unlock our vast swathes of siloed data and turn them into information. This is only truly achievable because that data, despite being tied to monolithic systems, resides in a very limited number of physical locations today. It’s in those very same locations that we’ve begun to build these next-generation enabling services – each positioned and deployed adjacent to the ever-growing mass of data we own.

Does Data Gravity drive Data Virtualization? It would appear so for us, but I guess we’d better let McCrory answer that – after all, it’s his idea, isn’t it?
