Yes, it’s getting harder and harder to stay oblivious to the impact of AI, with implications ranging from the geopolitical to the mundane to the positively creepy. It’s getting harder to miss the growing impact of IoT on everything from our homes to the way hospitals deliver care, autonomous cars are driven, factories are run, and smart cities are managed. And the arrival of GDPR, which takes effect in 2018, is forcing organizations to confront the privacy and national sovereignty implications of the data sitting in everything from transaction databases to data lakes and cloud storage.
But beneath the surface, we’re seeing the beginnings of tectonic shifts in how enterprises manage their cloud, streaming analytics, and data lake strategies.
MULTI-CLOUD MOVES TO THE FRONT BURNER
For our look ahead, we’re focusing on how the data is being managed. Rewind the tape to a year ago and we stated that “increasingly, Big Data, whether from IoT or more traditional sources, is going to live and be processed in the cloud.” Last year, we forecast that 35-40 percent of new big data workloads would be deployed in the cloud, and that by year end 2018, new deployments would pass the 50 percent threshold.
Our predictions weren’t far off the mark; Ovum’s latest global survey shows that 27.5 percent of all big data workloads are already deployed in the cloud. And according to Ovum research, big data is hardly an outlier in enterprise cloud adoption, which ranges from 26-30 percent across different workloads.
Through inertia, most organizations have ended up with the same polyglot environments in the cloud that characterize their data centers. Most organizations use more than one cloud provider, just as on premises they often have one of everything. Like history repeating itself, this is the consequence of a combination of top-down policies mandating a corporate standard and departmental decisions made for expedience.