Monday, December 8, 2014

Cloud Computing: The 3 layer model

Ph. D. Julio Fernandez Vilas
Any comments? jfvilas@gmail.com



So far we have covered everything that relates to the architecture of the cloud-based technology model. With regard to the implementation or working model, we can distinguish at least three entities (a small sketch after the list illustrates how they relate), although each of them may be subdivided depending on business needs:

a)        The Infrastructure Provider is the entity that provides services in the cloud.
b)        The Service Provider is the entity that consumes those services in order to offer them to a third party.
c)        The Service Client is the final consumer.
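As a minimal, purely illustrative sketch (the class names and the "web shop" example are assumptions of mine, not taken from the figure), the chain between the three entities could be modeled like this:

```python
# Illustrative sketch of the three-entity working model.
# The infrastructure provider exposes raw cloud services, the service provider
# builds an offering on top of them, and the service client consumes the result.

class InfrastructureProvider:
    """Owns the datacenter and exposes cloud services (e.g. VMs, storage)."""
    def provision(self, resource: str) -> str:
        return f"provisioned {resource}"

class ServiceProvider:
    """Consumes infrastructure services and packages them for third parties."""
    def __init__(self, infra: InfrastructureProvider):
        self.infra = infra
    def offer_service(self, name: str) -> str:
        backing = self.infra.provision("vm + storage")
        return f"service '{name}' built on ({backing})"

class ServiceClient:
    """Final consumer of the service; never sees the infrastructure directly."""
    def consume(self, provider: ServiceProvider, name: str) -> str:
        return provider.offer_service(name)

# The client only ever talks to the service provider.
client = ServiceClient()
print(client.consume(ServiceProvider(InfrastructureProvider()), "web shop"))
```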

Illustrated with a real example, this would look something like this:


According to the Gartner report on the current state of cloud technologies (http://www.gartner.com/document/2515316, access rights may be required), today we can distinguish four modes according to "how" the customer manages the infrastructure used in the cloud (a small sketch after the list illustrates the split of responsibilities):

a)        IaaS + Middleware. In this mode the client (subscriber) manages the infrastructure and is responsible for deploying their own middleware (or for using pre-built middleware models that the service provider makes available).
b)        IaaS Cloud-Enabled. For the customer this remains an IaaS mode, but the provider handles the scaling part (platform elasticity).
c)        PaaS Cloud-Based. This is where the great change in the way we manage things occurs. In PaaS mode the client only handles the functionality of the platform, and it is the provider who manages the hardware and software infrastructure.
d)        PaaS Cloud-Native. The operating mode is the same as in the previous model, with the addition that the middleware layer is designed and configured to work in cloud mode.
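As a rough, hypothetical sketch (the mode names follow the list above, but the exact split of responsibilities is my own simplification of the Gartner modes), the "who manages what" question could be expressed as:

```python
# Hypothetical simplification of the four management modes described above.
# For each mode, list which layers the subscriber still manages themselves;
# everything else is assumed to be handled by the provider.

LAYERS = ["hardware", "virtualization", "operating_system", "middleware", "application"]

SUBSCRIBER_MANAGES = {
    "IaaS + Middleware":  ["operating_system", "middleware", "application"],
    "IaaS Cloud-Enabled": ["operating_system", "middleware", "application"],  # provider adds elasticity
    "PaaS Cloud-Based":   ["application"],
    "PaaS Cloud-Native":  ["application"],  # middleware itself is designed for the cloud
}

def provider_manages(mode: str) -> list[str]:
    """Layers the provider takes care of in a given mode."""
    return [layer for layer in LAYERS if layer not in SUBSCRIBER_MANAGES[mode]]

for mode in SUBSCRIBER_MANAGES:
    print(f"{mode}: provider manages {provider_manages(mode)}")
```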

In the figure below we can see the different degrees of management that the subscriber of the different cloud services must perform, depending on the contracted mode.



Saturday, November 8, 2014

Cloud Computing: Strata

Ph. D. Julio Fernandez Vilas
Any comments? jfvilas@gmail.com

There is no standard definition of what "Cloud Computing" is, and two reasons explain this lack:
  1. There are different implementation models (infrastructure, application, software...), so there is no single precise definition.
  2. Provider offerings are very diverse, so a strict definition would not cover all of them.

But if I had to define what Cloud Computing is, I would choose the definition included in Wikipedia (read the interesting Wikipedia entry here):
"Cloud computing Relies on sharing of resources and coherence to Achieve Economies of scale, similar to a utility (like the electricity grid) over a network. At the foundation of cloud computing is the concept of Broader Converged Infrastructure and shared services. " The figure below is a good graphic illustration of generic architecture of cloud computing, where we can observe 3 clearly defined levels:


Here we can see the main "three flavors" of cloud computing offerings: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). In the market and in the specialized press we can find other types of cloud offerings focused on specific areas of technology, such as Security as a Service (SECaaS), Storage as a Service (STaaS), etc., but ultimately any cloud offering will fall into one of the three categories we have mentioned:

  1. Infrastructure (IaaS) refers to the possibility of consuming infrastructure services in the cloud, such as computing power, storage, connectivity, etc. At this level, management by the customer (subscriber) is performed at the virtual machine or storage unit (LUN) level, for example, while the infrastructure service provider is "only" responsible for managing the hardware (plus all the cloud-related concerns, such as scalability, high availability, etc.).
  2. Platform (PaaS). When a client wants to get rid of the costs and tasks associated with managing the infrastructure (machine maintenance, procurement of space, bandwidth allocation, etc.), they have the option of consuming services in a "platform" mode, where the level of management the subscriber must perform is much lower than the one the provider takes on. In this model the client focuses on developing applications, that is, the real solutions to their business problems. As an example, on the Azure platform it is possible to develop a .NET web application and deploy a packaged version to a cloud server without installing anything: it is Microsoft itself that is responsible for configuring and managing the machines and application servers, and clients only have to worry about generating software packages deployable on the platform (see the sketch after this list). At this level of the cloud computing stack is where we would frame the old ASPs (Application Service Providers).
  3. Software / Application (SaaS). At the highest level of service provision in the cloud we find Software as a Service. It is the easiest to understand, since it is the stratum closest to the business needs, and therefore the farthest from the technical details. It can be easily understood with examples such as Office 365 or Gmail.
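To make the difference in management level more tangible, here is a minimal, hypothetical sketch: the IaaSClient and PaaSClient classes are invented for illustration and do not correspond to any real SDK, but they show how in IaaS the subscriber still builds the whole stack, while in PaaS they only push a packaged application.

```python
# Hypothetical illustration of the subscriber's work at IaaS vs PaaS level.
# These classes are invented for the example and do not match any real provider SDK.

class IaaSClient:
    """IaaS: the subscriber manages the VM, the storage and everything above them."""
    def deploy_app(self, package: str) -> None:
        print("create VM (2 vCPU, 8 GB RAM)")        # subscriber sizes the machine
        print("attach 100 GB LUN")                   # subscriber manages storage units
        print("install OS and application server")   # subscriber installs OS + middleware
        print(f"copy and start {package}")           # finally, the application itself

class PaaSClient:
    """PaaS: the provider manages machines and middleware; the subscriber only ships code."""
    def deploy_app(self, package: str) -> None:
        print(f"push {package} to the platform")     # e.g. a packaged .NET web application

IaaSClient().deploy_app("shop.zip")
PaaSClient().deploy_app("shop.zip")
```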

Friday, October 31, 2014

Cloud Computing: Origins

Ph. D. Julio Fernandez Vilas
Any comments? jfvilas@gmail.com



The arrival of cloud computing technologies in the IT market did not happen in "big bang" mode.
Unlike other technologies more directly tied to the research and innovation activities of IT manufacturers (such as the use of flash technology in storage systems, higher recording densities on tape, or FCoE), Cloud Computing is not a technology in itself, but rather a "way of doing things".

In my view, it all started in the mid-90s, with the emergence of ISPs (Internet Service Providers) and ASPs (Application Service Providers). Those were the years of the birth of the Internet Age. The need for an Internet connection to be a consumer of content, and the need to have a "presence" on the Internet to become a content generator, were quickly covered by ISPs (those that provided access) and ASPs (those that provided the infrastructure needed to publish content).

Thereafter, as businesses acquired knowledge of the new Internet technologies, Service Providers began to lose prominence, and it was the companies themselves, seeing the potential of the Internet as a new channel to open new lines of business (or expand existing ones), that took care of managing their online presence personally. In fact, web applications that had been mere information portals became sites where e-commerce activities could be carried out, both B2C (the most successful business of all time) and B2B.
 
Obviously, these execution capabilities may be beyond a company's own means, which left room for ASPs to survive (ISPs were gradually absorbed by the telcos). But in the second era of the Internet, where "being on the Internet" was no longer just about disseminating information (it was mainly about doing business), ASPs had to evolve: static websites were not here to stay, and this eventually generated a new way of working, and therefore a new way of doing business online. Then we came to the "dot-com" era.

Everyone wanted to do business very quickly, so there was no time to create departments or build a datacenter; time-to-market was critical. How was this problem of achieving quicker deployments solved? A new type of service provider appeared: the hosters (as a continuation of the story, we might call them HSPs, Hosting Service Providers).

With HSPs, companies no longer have to worry about the more purely technological side. They just need to focus on defining a good business strategy. Once it is defined, if a company has some development capacity, it can implement the solution itself and it is ready. If not, it just has to find a company that will do the development and deploy it to the chosen HSP.

Alongside all the evolution we have been living through in recent years in everything related to "the web" and mobile, the world of infrastructure has also undergone major changes. From the IBM mainframe virtualizer of the 60s (now renamed z/VM) to today's KVM, Xen, vSphere, Hyper-V, etc., the world of virtualization has evolved at a dizzying pace, providing a new vision and a new way of managing infrastructure.

Workload consolidation, cost reduction, simplification of the datacenter, high availability, and proper utilization of the installed resources are some of the benefits towards which we have been migrating almost without realizing it, simply by virtualizing our hardware.

But virtualization is not only a valuable asset for companies with their own datacenters. Those taking the greatest advantage of developments in virtualization technologies are undoubtedly the HSPs.

HSPs have finally been able to get rid of a big problem: power and cooling. At the time of the birth of the HSPs, one operating system instance was installed on one physical machine (those were the years when "pizza box" servers became popular: 1 server = 1U, i.e., 42 servers per standard rack), and powering and cooling the hosters' datacenters was a real headache.

With virtualization, HSPs consolidate operating system instances onto far fewer physical machines, allowing them to lower their costs dramatically and improve their service offerings.

And we are not just talking about physical space. Power consumption decreases dramatically, which in turn decreases the need for cooling. That is, virtualization has a double impact on cost savings: besides what consolidating services saves in itself, it also helps reduce the datacenter's PUE.
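For reference, PUE (Power Usage Effectiveness) is simply total facility power divided by IT equipment power, so lower is better. A minimal sketch of the calculation, with made-up figures:

```python
# PUE = total facility power / IT equipment power (1.0 would be a perfect datacenter).
# The numbers below are made up, purely to illustrate the calculation.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

print(pue(total_facility_kw=900, it_equipment_kw=500))   # 1.8 before improvements
print(pue(total_facility_kw=560, it_equipment_kw=400))   # 1.4 with less load and less cooling
```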

HSPs such as Rackspace, SoftLayer, Terremark, etc., can offer hosting services at a much lower price, and with a much quicker provisioning process.

Infrastructure virtualization has changed the way we manage our hardware, and the next step is already knocking on the doors of our datacenters: network virtualization.

We see it ourselves in our datacenters: the software side of provisioning a virtual machine takes minutes to set up, but configuring the network part is a series of manual tasks that must be added to the total provisioning time.

New trends in what is known as "software-defined" are called on to finish simplifying all provisioning processes and operations, and at the same time will mean the next step in the design and management of high availability (HA) and disaster recovery (DR) systems.

Software-Defined Networking (SDN), the Software-Defined Data Center (SDDC), and Software-Defined Storage (SDS) are some of the initiatives (best described as pure technology strategies) that are beginning to reach the desks of CIOs and datacenter architects, and where the next battle for control of the datacenter between computer hardware manufacturers and communications hardware manufacturers will be fought, since the software-defined trends will be responsible for completing the virtualization process and leading us to what I call the Virtual Datacenter.

Thursday, October 23, 2014

Cloud Application Layer Part III - Up & Down

Ph. D. Julio Fernandez Vilas
Any comments? jfvilas@gmail.com

Let's recall now the concepts of public cloud and private cloud, which are easy to understand if we really understand the relationship between the Internet (public) and the Intranet (private). When a company uses the public cloud, and then publishes services internally within the organization using the same technologies, i.e., in a private cloud, the company is acting both as a client consuming cloud services and as a provider offering services to its own IT users.

This is important when assessing costs, as it allows consumption to be billed individually, charging each business unit for the costs associated with the services it uses.
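A minimal sketch of that idea (the business units, rates and usage figures below are made up): if consumption is metered per business unit, costs can be charged back individually.

```python
# Made-up example of per-business-unit chargeback: meter consumption,
# then bill each unit only for the services it actually used.

RATES = {"vm_hour": 0.08, "gb_storage_month": 0.03}      # hypothetical prices

usage = {
    "retail_banking": {"vm_hour": 1200, "gb_storage_month": 500},
    "marketing":      {"vm_hour": 300,  "gb_storage_month": 2000},
}

for unit, consumed in usage.items():
    cost = sum(RATES[item] * amount for item, amount in consumed.items())
    print(f"{unit}: {cost:.2f} EUR this month")
```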

Moreover, we must consider that the company is a living entity, constantly changing, and the fact of offering services from a cloud, whether private or public, is a key factor when making changes to services. That is, we endow the company with a considerable amount of flexibility, since internal service consumers do not know whether those services are being offered from inside or outside the organization.

In the last step of the cloud adoption process, it is very important to have taken the trouble to design well the services consumed in the form of a private cloud. If the design is good, the migration to a public cloud service can be done easily. It is important to focus on a key concept for this migration to the cloud to be done easily: STANDARDIZATION.

We need to design our private cloud services so that they are compatible with the public cloud. When we talk about compatibility or standardization, it is important to have a clear understanding of which layer of cloud technology we are focusing on. For example, if we are thinking about IaaS, we have to make sure we use only x86 architectures, restrict ourselves to Windows and Linux, and use standard virtualization technologies, iSCSI, etc.
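As a hedged sketch of what "designing for compatibility" might mean at the IaaS level (the allowed values below are just the ones mentioned in the text, not an authoritative list), a workload description could be checked against a set of standard choices before it is considered cloud-portable:

```python
# Illustrative only: check that a workload sticks to the "standard" choices
# mentioned above (x86, Windows/Linux, common virtualization, iSCSI storage),
# so that moving it from a private cloud to a public IaaS provider stays easy.

STANDARD = {
    "architecture": {"x86_64"},
    "os": {"windows", "linux"},
    "hypervisor": {"kvm", "xen", "vsphere", "hyper-v"},
    "storage_protocol": {"iscsi"},
}

def is_portable(workload: dict) -> bool:
    return all(workload.get(key) in allowed for key, allowed in STANDARD.items())

print(is_portable({"architecture": "x86_64", "os": "linux",
                   "hypervisor": "kvm", "storage_protocol": "iscsi"}))   # True
print(is_portable({"architecture": "sparc", "os": "solaris",
                   "hypervisor": "ldom", "storage_protocol": "fc"}))     # False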

If we are thinking about SaaS (cloud email, for example), what is standardized is functionality. That is, we can migrate our on-premises platform (Exchange, Lotus, etc.) to an email platform in the cloud (Gmail, Outlook.com, etc.), provided it assures us that the functionality we will get on the new platform is at least the same as what we have on the current one. We should note that in SaaS projects we will always find a CAPEX amount in the project corresponding to the migration.
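Following the same logic at the SaaS level, what we compare is functionality rather than infrastructure. A minimal, made-up sketch of a feature-parity check before migrating an email platform (the feature names and platform names are invented for the example):

```python
# Made-up feature-parity check for a SaaS migration: the target platform must
# cover at least the functionality we rely on in the current on-premises one.

required = {"mail", "calendar", "shared_mailboxes", "archiving"}
candidate_platforms = {
    "cloud_mail_a": {"mail", "calendar", "shared_mailboxes", "archiving", "chat"},
    "cloud_mail_b": {"mail", "calendar"},
}

for name, features in candidate_platforms.items():
    missing = required - features
    print(f"{name}: {'OK' if not missing else f'missing {sorted(missing)}'}")
```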

And all of this concerns going up to the cloud. What about coming down from the cloud? If we have taken into account everything said so far, the process of coming down (or even transferring to another provider) is exactly the same. Remember that we are talking about going up to the cloud from a starting point that assumes we are already working on a private cloud.