Friday, October 31, 2014

Cloud Computing: Origins

Julio Fernandez Vilas, Ph.D.
Any comments? jfvilas@gmail.com



The arrival of cloud computing technologies to the IT market did not happen in big-bang fashion.
Unlike other technologies that stem more directly from the research and innovation activities of IT manufacturers (such as the use of flash technology in storage systems, higher recording densities on tape, or FCoE), cloud computing is not a technology in itself; it is rather a "way of doing things".

In my view, it all started in the mid-90s, with the emergence of ISPs (Internet Service Providers) and ASPs (Application Service Providers). These were the years of the birth of the Internet Age. The need for an Internet connection to consume content, and the need for a "presence" on the Internet to become a content generator, were quickly covered by ISPs (those that provided access) and ASPs (those that provided the infrastructure needed to publish content).

Later, as businesses acquired knowledge of the new Internet technologies, service providers began to lose relevance: it was the companies themselves, seeing the potential of the Internet as a new channel to open new lines of business (or expand existing ones), that took charge of managing their own online presence. In fact, web applications that had been mere information portals became sites where e-commerce activities could be carried out, both B2C (the most successful business of all time) and B2B.
 
Obviously, these business execution capabilities may be beyond any company's reach, which left a niche for the ASPs (ISPs were gradually absorbed by the telcos). But in the second era of the Internet, where "being on the Internet" was no longer just about disseminating information (it was mainly about doing business), the ASPs had to evolve: static websites were not here to stay, and a new way of working eventually emerged, and with it a new way of doing business online. Then came the "dot-com" era.

Everyone wanted to do business very quickly, so there was no time to create IT departments or build a datacenter; time-to-market was critical. How was this problem of faster deployment solved? A new type of service provider appeared: the hosters (continuing the story, we might call them HSPs, Hosting Service Providers).

With an HSP, companies no longer have to worry about the purely technological side; they just need to focus on defining a good business strategy. Once it is defined, if a company has some development capacity, it can implement the solution itself and it is ready. If not, it just has to find a company to build the solution and deploy it to the chosen HSP.

Alongside all the evolution we have experienced in recent years around "the web" and mobile, the infrastructure world has also undergone major changes. From IBM's mainframe virtualizer of the 1960s (now renamed z/VM) to today's KVM, Xen, vSphere, Hyper-V, etc., the world of virtualization has evolved at a dizzying pace, providing a new vision and a new way of managing infrastructure.

Workload consolidation, cost reduction, datacenter simplification, high availability, and proper utilization of installed resources are some of the concepts we have been migrating toward, almost without realizing it, simply by virtualizing our hardware.

But virtualization is not only a valuable asset for companies with their own datacenter. Those taking the greatest advantage of advances in virtualization technologies are, without a doubt, the HSPs.

HSPs have finally been able to get rid of a big problem: "power and cooling". At the time of the birth of the HSPs, one operating system instance was installed on one physical machine (those were the years when "pizza box" servers became popular: 1 server = 1U, i.e., 42 servers per standard rack), and powering and cooling the hosters' datacenters was a real headache.

With virtualization, HSPs consolidate operating system instances onto far fewer physical machines, allowing them to drastically lower their costs and improve their service offerings.

And we are not just talking about physical space. Power consumption decreases dramatically, which in turn reduces the need for cooling. In other words, virtualization has a double impact on cost savings: beyond the savings inherent in consolidating services, it also helps reduce the datacenter's PUE (Power Usage Effectiveness).
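The arithmetic behind that double saving can be sketched with rough numbers. All the figures below (server counts, wattages, cooling overheads) are invented for illustration, not measured data:

```python
# Illustrative sketch of how consolidation affects power, cooling, and PUE.
# Every number here is an invented assumption, not measured data.

def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Before virtualization: 42 pizza-box servers at ~300 W each.
servers_before = 42
it_kw_before = servers_before * 0.3           # 12.6 kW of IT load
total_before = it_kw_before * 2.0             # cooling roughly doubles the bill

# After consolidation: the same workloads on 6 virtualization hosts at ~500 W.
hosts_after = 6
it_kw_after = hosts_after * 0.5               # 3.0 kW of IT load
total_after = it_kw_after * 1.6               # denser, better-matched cooling

print(f"PUE before: {pue(total_before, it_kw_before):.2f}")   # 2.00
print(f"PUE after:  {pue(total_after, it_kw_after):.2f}")     # 1.60
print(f"Total power: {total_before:.1f} kW -> {total_after:.1f} kW")
```

The double saving is visible in the two factors: the IT load itself shrinks (fewer boxes), and the overhead multiplier on top of it shrinks too (less cooling per watt of IT), which is exactly the PUE improvement.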

HSPs such as Rackspace, SoftLayer, Terremark, etc., can offer hosting services at a much lower price, with a much quicker provisioning process.

Infrastructure virtualization has changed the way we manage our hardware, and the next step is already knocking on the doors of our datacenters: network virtualization.

We see it ourselves in our datacenters: provisioning a virtual machine takes minutes, but configuring the network side is a series of manual tasks that must be added to the total provisioning time.
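What "software-defined" promises is that those manual network steps become data plus an API call, handled in the same pass as the VM itself. A minimal sketch of the idea (the request schema and the provision function are hypothetical illustrations, not any vendor's API):

```python
# Sketch of the software-defined idea: the network configuration is expressed
# as data and applied programmatically alongside VM provisioning.
# The schema and function below are hypothetical, not a real vendor API.

vm_request = {
    "name": "web-01",
    "cpus": 2,
    "memory_gb": 4,
    "network": {                      # the part that is manual today
        "vlan": 120,
        "subnet": "10.0.120.0/24",
        "firewall": ["allow tcp/443 from any"],
    },
}

def provision(request):
    """Derive compute and network steps from one declarative description."""
    steps = [f"create VM {request['name']} "
             f"({request['cpus']} vCPU, {request['memory_gb']} GB)"]
    net = request["network"]
    # With network virtualization these become API calls, not tickets
    # to the network team:
    steps.append(f"attach to VLAN {net['vlan']} on subnet {net['subnet']}")
    steps.extend(f"apply firewall rule: {rule}" for rule in net["firewall"])
    return steps

for step in provision(vm_request):
    print(step)
```

The point of the sketch is only the shape of the workflow: once the network part is data in the same request, it inherits the minutes-not-days provisioning time of the compute part.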

The new trends known as "software-defined" are called upon to finish simplifying all provisioning processes and operations, and at the same time they will represent the next step in the design and management of high-availability (HA) and disaster-recovery (DR) systems.

Software-Defined Networking (SDN), the Software-Defined Data Center (SDDC), and Software-Defined Storage (SDS) are some of the initiatives (better described as strategies than as pure technologies) that are beginning to land on the desks of CIOs and datacenter architects, and where the next battle for control of the datacenter, between computer hardware manufacturers and communications hardware manufacturers, will be fought. The software-defined trends will be responsible for completing the virtualization process, leading us to what I call the Virtual Datacenter.
