“The cloud is just someone else’s computer” is a common phrase in tech circles. An otherwise excellent article last week on Opensource.com opened with this line: “A personal web server is ‘the cloud,’ except you own and control it as opposed to a large corporation.” Let me be unambiguous here: that’s bullshit.
The context of the “someone else’s computer” saying is generally one of data ownership. Why let someone else own your data when you can own it yourself? I’m sympathetic to that point, but it glosses over a very questionable assumption. Namely, that people have the skills and desire to run the services themselves. That may be true in the tech sector, but it’s certainly not going to be true in the population at large.
What’s even more frustrating is the comparison of a Raspberry Pi to a multi-replica distributed environment. A Raspberry Pi has no redundancy, so if a component fails, you’re out of luck until you can replace it. If your house floods, sorry about your data. Granted, you can address these issues yourself by having redundant hardware and an offsite copy, but the effort goes up dramatically with each layer of protection you build in. Maybe it’s worth the effort to you. And maybe you have the skills necessary to do it. Good for you.
It’s absolutely a good thing to make sure people are aware of the costs and benefits of any technology solution. But one of the benefits of cloud offerings is that some portion of the stack is maintained by competent professionals who can aggregate the demands of individual customers to build a pretty robust and reliable offering. You know why it’s big news when Amazon Web Services has a major outage? 1. Because it’s rare. 2. Because their services are good enough that a lot of people have said “it doesn’t make sense for us to do this ourselves.”
I liken “the cloud is just someone else’s computer” to saying “the grocery store is just someone else’s farm.”
Public cloud services are well-known for hitting the middle of the bell curve. General purpose hardware can be replaced by virtual machines in any cloud offering. It makes sense to target this beefy middle. While the margins might not be the best, there’s an unbelievable amount of volume.
But Oracle’s recent earnings call got me thinking about the ends. Oracle continues to insist, despite a lack of evidence, that they’re a legitimate player in Infrastructure-as-a-Service. Oracle is cutting jobs in their hardware business, which has led some to believe that SPARC processors will primarily be used in Oracle’s cloud offering.
If that is indeed the case, it signals a surrender of sorts. x86 rules the CPU market these days, and Oracle must see the writing on the wall. As SPARC-based hardware reaches end-of-life, many customers will look to other options. A SPARC cloud gives Oracle a way to convince customers to stay on the platform, at least for a little while, and also helps drive IaaS usage.
This is similar to how I perceive the VMware/Amazon partnership announced last year. Customers are given a gentle exit ramp for a technology that is becoming less relevant, while the vendor gets to extract as much revenue as possible. Public cloud services can serve the tail end of the market, so long as there’s enough usage to keep a minimum offering.
But the cloud can also be where new hardware becomes mainstream. For example, both Amazon and Microsoft Azure have brought field-programmable gate array offerings to market. FPGAs are not new, but they’re certainly not mainstream. Amazon and Microsoft both see them as a worthwhile investment. With easy access to try them out, customers who might never have tried using FPGAs may soon find them indispensable. Once a new technology has a sufficient toehold, a public cloud provider can give it the boost it needs to reach a mainstream audience. Or it may flop. That can happen, too.
In any case, the bulk of public cloud offerings will sensibly remain focused on the mass market. But don’t be surprised if offerings at either end become more numerous.