Cloud Computing is One of the Most Disruptive Forces in IT History

It’s no secret that cloud computing has revolutionized how businesses buy and use technology. This year alone, market research firm Gartner Inc. forecasts that $111 billion worth of IT spending will shift to cloud services. And that number is expected to grow: Gartner believes it will almost double to $216 billion by 2020. It’s no wonder Gartner’s analysts described cloud computing as “one of the most disruptive forces of IT spending” in their report.

Companies of all sizes are being swept up in this shift. For everyone from startups to Fortune 500 companies, buying and maintaining in-house servers, storage, and networking gear is becoming a thing of the past. Most companies are finding they would rather have someone else handle it for them.

A number of companies have answered the call for cloud services, most notably Amazon Web Services, Microsoft, Google, and IBM. Instead of investing in expensive equipment that needs to be regularly updated, patched, or replaced, businesses can now pay for exactly the computing, storage, and networking power they use. Renting is the new buying.

“Cloud-first strategies are the foundation for staying relevant in a fast-paced world,” said Ed Anderson, Gartner research vice president. “The market for cloud services has grown to such an extent that it is now a notable percentage of total IT spending, helping to create a new generation of start-ups and ‘born in the cloud’ providers.”

Businesses can choose between public and private cloud models. The difference is similar to that between renting an apartment in a complex and renting a stand-alone home. The public cloud is a multi-tenant environment, where you rent a portion of capacity alongside other clients. Private cloud hosting, on the other hand, is a single-tenant environment where the hardware, storage, and networking power are dedicated to a single client or company.

Public cloud models come with their own benefits and tradeoffs. Their multi-tenant nature allows for a pay-as-you-go model, where businesses can pay by the hour for resources without a contract. A notable limitation is that because every server shares the same hardware, storage, and network devices as all other tenants in the cloud, meeting compliance requirements such as PCI or SOX can be difficult or impossible. For this reason, most public cloud deployments are used for web servers or development systems, where the security and compliance requirements of larger organizations and their customers are not an issue. You also don’t have control over hardware performance, since that is managed by the provider.
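
To put the pay-as-you-go idea in concrete terms, here is a back-of-the-envelope sketch in Python; the hourly rate and usage hours are illustrative assumptions, not actual provider pricing.

    # Back-of-the-envelope pay-as-you-go math.
    # The rate and hours below are illustrative assumptions, not real provider pricing.
    hourly_rate = 0.05           # assumed cost of a small instance, in dollars per hour
    always_on_hours = 24 * 30    # roughly 720 hours if the server runs all month
    work_hours = 8 * 22          # roughly 176 hours if it only runs during working hours

    print(f"Always on:     ${hourly_rate * always_on_hours:.2f} per month")  # $36.00
    print(f"Working hours: ${hourly_rate * work_hours:.2f} per month")       # $8.80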

Private cloud models similarly have their own benefits and tradeoffs. The most obvious differentiator is customizability. Because the hardware, storage, and network are dedicated to a single client, performance can be specified and tuned to ensure certain levels of security and to meet complex compliance requirements. Private models also allow for hybrid deployments, for cases in which a dedicated server is required to run a high-speed database application: dedicated hardware can be integrated into the private cloud, splitting the solution between virtual servers and dedicated servers. The tradeoff for that customizability is that corporate customers must still own their own data center equipment.

AWS was one of the first major proponents of cloud-first strategies back in 2006. Today, it is the largest public cloud company, followed by big players like Microsoft Azure, Google Cloud Platform, and IBM. These companies have completely disrupted the traditional model of selling operating systems and software to users on a one-to-one basis.

While companies of all sizes have been rapidly shifting toward the cloud, we are beginning to see a backlash from larger corporations. Smaller companies, used to spending most of their funding on servers and software, see public cloud deployment as a blessing. Startups, typically strapped for cash, can now operate on competitive computing power for just a few cents an hour. However, the public cloud often loses its economic benefits once a company hits a certain size. Companies processing a lot of data may find their public cloud service getting so expensive that they opt to move back in-house.

Dropbox has been a prime example of this tension. Earlier this year, the company acknowledged that it had been progressively moving data off of AWS and into its own data centers over the past few years. Akhil Gupta, Dropbox’s VP of Infrastructure, cited the need for more customization of hardware as a chief motivator: the company wanted to change the proportion of storage to compute to networking in order to lower costs. Because the public cloud model doesn’t allow for these hardware changes, Dropbox decided it was time for a change.

Gupta told Fortune earlier this year, “We wanted to build jumbo super storage servers that could hold immense amounts of data with a small amount of compute.” For Dropbox, holding and routing user files requires a lot of storage but relatively little computing power, so it makes sense that the company would benefit so greatly from this specific customization.

This doesn’t mean, obviously, that all companies of a certain size should or will move away from the public cloud. But it does point to a legitimate outlier that doesn’t fit the public-cloud-for-all narrative.

 

What does “serverless computing” really mean?

“Serverless computing” might sound impossible, and that’s because it is. There will always be servers hanging around somewhere. Serverless computing doesn’t refer to the total absence of servers. Instead it refers to how developers think about servers, namely, that they don’t have to think about them.

As Chad Arimura, CEO of the startup Iron.io and a vocal advocate of serverless computing, told InfoWorld, “What we’ve seen is that the atomic unit of scale has been changing from the virtual machine to the container, and if you take this one step further, we’re starting to see something called a function… a single-purpose block of code. It’s something you can conceptualize very easily: Process an image. Transform a piece of data. Encode a piece of video.”

Serverless computing has less to do with the physicality of servers, and more to do with the modern developer’s perspective. Developers can effectively grab functions from a library without having to consider server infrastructure as they create an application.
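
To make the “single-purpose block of code” idea concrete, here is a minimal sketch in Python of a function that transforms one piece of data; the record fields are hypothetical, and the point is simply that the function does one job and knows nothing about servers or scaling.

    def transform_record(record: dict) -> dict:
        """Single-purpose function: normalize one raw record into a clean shape.

        The field names here are hypothetical; the function does one job and
        has no knowledge of the servers it will eventually run on.
        """
        return {
            "id": str(record["id"]),
            "email": record.get("email", "").strip().lower(),
            "plan": record.get("plan", "free"),
        }

    print(transform_record({"id": 42, "email": "  Jane@Example.COM "}))
    # {'id': '42', 'email': 'jane@example.com', 'plan': 'free'}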

Amazon’s AWS Lambda is the best-known example of serverless computing. When a developer uploads code to Lambda, the service takes care of everything required to make it run and scale. Issues like capacity, scaling, patching, and administration of the infrastructure disappear. You can even set up your code to be triggered automatically by other AWS services, or call it directly from any web or mobile app.
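
For a sense of what this looks like in practice, here is a minimal sketch of a Lambda-style handler in Python; the handler follows the common lambda_handler(event, context) convention, but the event field and response shape are illustrative rather than taken from any particular deployment.

    import json

    def lambda_handler(event, context):
        # Lambda invokes this entry point with the triggering payload (event) and
        # runtime metadata (context); there is no server for the developer to manage.
        # The "name" field and the response shape below are illustrative.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}"}),
        }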

Serverless systems allow developers to build complex systems much more quickly and ensure that they spend most of their time on core business problems rather than infrastructure and administrative duties. These systems can scale, grow, and evolve without developers or solution architects having to patch a web server ever again.

Serverless computing is, most importantly, about developer efficiency. It allows developers to forget about infrastructure concerns and lets them refocus on the heart of their jobs: writing code.

It should be noted that while serverless computing is powerful, it has limitations, and not every application can be implemented the serverless way today, especially when it comes to legacy systems and public cloud services. For the right business, though, it can take efficiency to a new level.
