Why XaaS will alter business models forever, despite a slew of problems ranging from security to inconsistent bandwidth.
Everything as a service promises to simplify our lives, from cutting-edge business to consumer applications. It is too early to tell how far it will go, but the concept of everything moving to the cloud raises some interesting issues, from bandwidth to security.
Who would have guessed that in 2015, launching a business would require virtually no physical assets? You simply turn on your computer and everything you need is done in the cloud. The age of anything and everything as a service has arrived.
Typically referred to as Everything-as-a-Service (EaaS), or Anything-as-a-Service (XaaS), it promises to create a new paradigm for how to start and run a business. On the consumer side, it promises to move everything, from your recipe book to your daily programs and the slew of applications you rely on, into the cloud.
But the implications are much more significant for business, where the focus is on ROI. And XaaS is not restricted to intellectual-property businesses such as consulting and legal firms, where the value is typically blue sky. Every business can benefit from XaaS — or so the pundits say.
“If you had asked me that five years ago, I would have said we are going to keep processing locally for a number of things,” said Paul Kocher, president and chief scientist at Cryptography Research, a division of Rambus. “I have modified my views a bit since then.”
The reason is that a lot of progress has been made on the technology frontier in the last five years, and cloud resources are mushrooming. So are the cloud players. For example, two very visible players in this domain are Intuit and Microsoft. Both have made forays into the XaaS area with products that traditionally were housed on local servers or individual computers. Intuit has put its QuickBooks business line into the cloud, while Microsoft has done the same with Office 365.
Neither is flourishing yet. Like any new venture, it takes time for the platform to mature. Experience will reshape the applications over time to be cloud-agile, and many of the issues that exist with the software will be resolved. But there are problems with simply dropping applications built to run on dedicated local machines onto cloud clusters.
It is still early in the game, and much of this is a learning experience. “QuickBooks seems to be prevailing as a service, in part simply for business model reasons rather than technical ones,” says Kocher. “To a security individual, that can be frightening. When you start looking at the security issues around a service that aggregates everyone’s data, the security risks tend to be the sum of all of the participants’ information, kind of in the same way that the risks around the gold in Fort Knox tend to reflect the amount of gold kept there.”
Security was never near the top of the business model priorities, and that has to change if XaaS is going to work the way its advocates envision.
What is XaaS?
XaaS is a collective term used to define the emerging essence of cloud computing — that the cloud can provide anything, including outsourced services. This acronym is attempting to define the burgeoning number of services that are being offered and can be delivered via the Internet, as opposed to on-site or in the enterprise.
However, before everyone skips happily off into the cloud, thinking everything can be rented or leased, the fact is that so far XaaS is mostly a concept with a few players testing the waters. There are leading elements already working to some degree. For example, GSA’s Cloud EaaS blanket purchase agreements (BPAs) provide a broad range of e-mail as a service solutions for government. Encryption as a Service is being offered by a company called Cloudlink. Amazon and Rackspace offer on-demand servers that scale to meet hosting requirements for running as large an application set as necessary. Other companies like Mailchimp and Sendgrid will run a company’s mail servers at high performance and low cost. On the payment end, PayPal, Stripe, and Braintree make payment processing painless and automatic.
There is a lot of discussion in a lot of areas. Education as a Service (EdaaS) has been discussed. So has Software as a Service (SaaS), which already is in use. Others include Unified Communications as a Service (UCaaS), Delivery as a Service (DaaS), Monitoring/Management as a Service (MaaS), Network as a Service (NaaS), Backup as a Service (BaaS), Infrastructure as a Service (IaaS), Platforms as a Service (PaaS), Communications as a Service (CaaS), even Food and Grocery as a Service (F/GaaS). The list keeps growing as we speak. Figure 1 shows an example of some of these services.
Perhaps one of the most shining examples of this is from a company called Instacart, whose mission is to build the largest grocery chain to date without a single brick and mortar store. How it plans to do this is quite simple, and it exemplifies what forward thinking and creativity can accomplish in new waters.
Instacart set up a virtual grocery store, bypassing the traditional process of building and stocking warehouses and instead using applications to handle inventory, shipping, and other logistics. It simply hired a group of individuals to go into grocery stores and purchase one of everything on the shelves. Each item was then logged into a database. It took a while to put this together, but it certainly was much quicker than building and stocking physical stores.
Customers log onto Instacart’s Web site and pick stores within their area. The selection varies by location, but includes big chains such as Whole Foods, Safeway, Costco and even Petco. For some stores, Instacart even guarantees that its prices match in-store prices. Once a store is chosen, its items pop up, can be added to the cart, and in most cases are delivered in less than two hours.
This model is a particularly practical example of XaaS. Of course, there are some caveats. You can’t examine the produce or meat, for example, and there is a delivery charge. But the business appears successful, at least for now, and it exemplifies the shape of things to come in the XaaS world.
A new paradigm
Picture an artificially intelligent and ubiquitous cloud platform with services that will be able to anticipate your needs. This is possible because they will have a real-time awareness of functions such as location, time of day, and preferences based on learned behavior — your behavior.
Given that this is artificial intelligence, it will radically change how one interacts with these services. Obtaining data and information will be done for you, not by you, in a preemptive, almost surreal environment. There will be a seamless, real-time, and consistent experience that permeates each and every device one owns, regardless of platform. It will be tailored to the individual and will be available on demand, 24/7.
But there are some challenges that have to be addressed before XaaS becomes the norm. First of all, before any of this can even come close to happening, there will need to be a new generation of core components, the first of which will have to be a seamless, fast and agile cloud. The massive infrastructure of servers will, pretty much, have to function as one enormous resource and be able to deliver services anywhere, anytime and quickly. The question of who owns what will have to be resolved and, generally, everybody will have to play nice.
Next, the Internet will have to deliver a consistent 1 Gbps of bandwidth. Pipelines across the Internet will have to be wide, agile, and relatively immune to loading imbalances. The wireless infrastructure will have the same requirements as the Internet – pervasive and ubiquitous. There is talk of super Wi-Fi, WiGig, and other unlicensed wideband technologies that will live in the 40GHz and 120GHz bands with extremely wide bandwidth. And there will need to be ways to make the existing spectrum much more efficient. This is extremely important because mobile data will constitute more than 70% of all data in the next few years.
There is quite a bit of traction in XaaS, but it still has a long way to go. For one thing, the digital and physical worlds need to converge. That’s the basis of the IoE, and the fact is that much of what will be in the IoE will be virtual. In this world, your location will always be known – it has to be, because just about everything delivered to you will be location-dependent. The system will even assess elements such as the weather, whether you are moving and in what direction, your social environment, your physical environment, and more.
Another will be the demise of device-centric computing. It will become connectivity-centric. As location becomes the No. 1 element for context, connectivity will be a requirement. The issue will no longer be that one device will do everything. Instead, all devices can do anything, with the common thread being the cloud.
A third will be Big Data. Once the fog lifts around what Big Data really is and how to use it, the benefits will be massive. Your medical records will be available at a moment’s notice. You will be able to do financial transactions, instantaneously, from any device, anywhere. And that is just the tip of the iceberg.
This last one has huge ramifications for the enterprise. Melding structured and unstructured data will reshape the business model. This has the potential to finely tune business decisions with information in brand new ways. The ability to have this enormous resource base on hand for the decision-making process, coupled with real-time, location-based data, will put the enterprise on the leading edge.
Security and other issues
There are other issues that will have to be addressed, as well.
One has to do with commonality. If your data is distributed and one system crashes, there are no ramifications for other systems. Even on a larger scale, such as a bank, for example, if its system goes down, none of the other banks is affected. While this may affect a large number of users, it is restricted to that one element.
With the cloud, that becomes a much larger issue. If there is a breakdown in cloud systems, it becomes common across everything, and unless new methodologies are developed to isolate and reroute, the effect can be devastating, perhaps even on a global scale. Rambus’ Kocher compares it to an automobile. While a car may break down, that has little effect on anything except perhaps the other vehicles that have to go around it. But the cloud is akin to having every vehicle handled by the same service. And if it breaks down, then so will all of the vehicles.
This marks a change in direction for security, where the network historically has been viewed as the most vulnerable point of entry. “Network security is not the issue,” said Zach Shelby, vice president of marketing for the Internet of Things at ARM. “The issue is really large systems connected to clouds. What has to happen is that people have to realize the Internet really is the Internet.”
Another issue involves the security around the service being provided. One key question is who runs the service: you or the provider? There are some interesting ramifications here, as well. For example, there is some discussion about the user running the service without any assistance from the provider. That raises questions such as how maintenance-free this approach is, and whether it can be run securely so that the provider has no access to it. Today, creating a virtual server is child’s play in most cases. But if this is the approach, how does that affect liability?
Liability is an issue unto itself. Where are the lines drawn? Is the service provider liable on a grand scale, even if the encroachment comes from the user? And just how much liability does the provider have? Is it liable for all data that is compromised by a user? Can the provider assign liability to users, or require them to work within a box? These are issues that are still up in the air.
And what about cloud security? “While it may seem that these concentrated places, such as in the cloud, have greater security resources than any typical, individual distributed locations, it turns out that such concentrations actually become very attractive to all sorts of attackers, upping the ante for top-level security profiles,” says Kocher.
It is bad enough when all one has to worry about are the ordinary criminal types. But with so much data accessible from one access point, even some of the various intelligence agencies have been trying to hack it, looking for signs of terrorism. The fact remains that such ubiquitous and voluminous collective data are a real goldmine for those on the dark side, and maybe even those not on the dark side. Securing it is not just a matter of scaling present-day solutions. It will require a new paradigm, just as Big Data requires a radically new approach to data analysis.
There is not much that will change that risk profile in the short term, either, although there is work underway such as Intel’s Software Guard Extensions (SGX), a set of CPU instructions that allow applications to isolate private regions of code and data, called enclaves. ARM has a similar strategy with its TrustZone and mbed.
There also seems to be more traction to the movement to isolate the software from the OS. That approach can immediately reduce some of the attack vectors that can use the commonality to compromise the whole.
On the encryption side, there are a number of options, as well. One of the more interesting approaches is to put encryption on the devices so the only decryption is done at the receiving end. That takes the cloud out of the loop because all the data in it is encrypted, moving the vulnerability out to the end points where encryption is much more practical and available, especially in hardware.
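As a toy sketch of that endpoint-encryption model, the flow below uses the third-party `cryptography` package (an assumption; any symmetric cipher would do, and a real end-to-end system would add key exchange). The point is that the cloud relay only ever handles opaque ciphertext:

```python
# Toy sketch of endpoint encryption: the "cloud" only ever sees ciphertext.
# Assumes the third-party `cryptography` package; names are illustrative.
from cryptography.fernet import Fernet

# The key lives only at the endpoints; the cloud never sees it.
key = Fernet.generate_key()
sender = Fernet(key)
receiver = Fernet(key)

# Sender encrypts on-device before upload.
ciphertext = sender.encrypt(b"quarterly revenue: $4.2M")

# The cloud stores and relays only opaque bytes.
cloud_storage = {"doc-1": ciphertext}

# Receiver decrypts on-device after download.
plaintext = receiver.decrypt(cloud_storage["doc-1"])
print(plaintext.decode())
```

A compromise of `cloud_storage` in this sketch yields only ciphertext, which is exactly the shift in vulnerability from the cloud to the endpoints described above.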
In the end, though, there also is the issue of price. How much are people willing to spend on security? “There’s already a security standard in microcontrollers,” said Eduardo Montañez, global systems and architecture manager at Freescale. “We can use asymmetrical, symmetrical, hashing, tamper-resistant, TrustZone, and lock the software to an external component. And you get this all in a $1 or $2 microcontroller. But how much more are you willing to pay to secure this environment?”
As with many emerging technologies and platforms, XaaS tends to be overhyped and under-delivered, at least at first. Migrating traditionally device-centric applications, which had all the local horsepower they needed and where bloat didn’t matter, to a lean and mean virtual system that resides across a plethora of devices will take a major rework.
Applications will have to slim down and be optimized for functionality. One solution might be a modular approach: each module has a specific function, and modules are loaded and unloaded as required, integrating when necessary to provide advanced functionality. This approach is a much better fit for virtual platforms and has been experimented with in the past.
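A minimal sketch of that modular idea, with hypothetical names: a registry that builds a feature module only on first use and can unload it to free resources, rebuilding it on demand.

```python
# Minimal sketch of on-demand module loading/unloading (hypothetical names).
from typing import Callable, Dict


class ModularApp:
    """Holds feature modules that are loaded lazily and can be unloaded."""

    def __init__(self):
        self._factories: Dict[str, Callable[[], Callable]] = {}
        self._loaded: Dict[str, Callable] = {}

    def register(self, name: str, factory: Callable[[], Callable]) -> None:
        # The factory builds the module only when it is first requested.
        self._factories[name] = factory

    def call(self, name: str, *args):
        if name not in self._loaded:          # lazy load on first use
            self._loaded[name] = self._factories[name]()
        return self._loaded[name](*args)

    def unload(self, name: str) -> None:
        # Drop the module to free resources; it is rebuilt on the next call.
        self._loaded.pop(name, None)


app = ModularApp()
app.register("tax", lambda: (lambda amount: round(amount * 1.08, 2)))
print(app.call("tax", 100))   # loads the module, then computes
app.unload("tax")             # freed; reloaded automatically on next call
print(app.call("tax", 50))
```

The design choice here is the factory indirection: the application carries only lightweight descriptions of its features, which matches the slimmed-down, load-what-you-need model described above.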
As we go forward, processing power certainly will continue to increase at the device level so connection-centric devices will be able to do more and more, faster and faster. Likely the two directions will meet in the middle somewhere. This could well mean rethinking what goes into the chips rather than assuming they will be used to process huge amounts of software.