Life was simple back then: data was attached to a single application, and both were hosted on a singular marvel of modern information technology engineering – the SAN. We had redundancies piled upon redundancies: compression, deduplication, snapshots, clones, incrementals and differentials, inline and out-of-band, among others. In short, we had it all!
But time marches inevitably on, and soon enough we started to want to share data across and between appliances, and with applications we didn’t write, and potentially didn’t even host. Instead of keeping all of our data centrally in a lovingly crafted – and expensive – bespoke repository, we started keeping it here, there, and everywhere, like so much loose change.
Then an era of new, non-infrastructure data owners sprang up. Steeped in the dark arts of corporate administration, they promised better performance for local apps, simplified access for their remote (sometimes third-party) teams, and cheaper storage costs.
John Annand is a research director at Info-Tech Research Group; he shares his perspective in this article.