Friday, February 14, 2020

Cloud Computing: elastic and orchestrated

Ph. D. Julio Fernandez Vilas
LinkedIn
Any comments? jfvilas@gmail.com

After years of talking about scalability (I remember discussing this back in the 90s), in less than five years the IT world has moved from talking about scalability (scale-up and scale-out) to talking about elasticity.

And we not only care about growing; we are also concerned about shrinking, because that is what has the greatest negative impact on costs when the business scenario changes.

If IT cannot grow, IT managers take the risk of not servicing the business; that is, the company will not be able to sell more. Yes, that is a problem, but it is not the worst situation you can face. Now suppose your company has entered a “low-sales” scenario (due to external factors, like a crisis or a cycle change); if you cannot scale IT costs down to the new company size (because IT does not follow business sales), you could put the company in a risky financial situation because of those high IT costs.

What do I need to provide optimal IT systems according to business needs? Well, as mentioned, we have to focus on elasticity, not scalability. While scalability focuses on solving problems such as provisioning or keeping production running, elasticity adds to scalability the ability to adjust systems to real production demand (whether that means increasing or decreasing capacity).

Moreover, elasticity also means a change in time horizon with respect to scalability. In cloud technologies, elasticity is measured in minutes or seconds, unlike traditional scalability, which focuses more on supply-chain problems and therefore works on a horizon of days or weeks.

Having identified the problem, all that remains is to find the solution. How do I get resilient systems, and how do I reconfigure my ability to “run the business” in minutes?

The solution is not new: it is orchestration. By orchestration we mean a set of elementary tasks that follow a predefined script. So far nothing new; what matters here is having previously "written" that set of tasks, putting into the script all the steps needed to provision or deprovision a service (whether infrastructure, application, database data, etc.).

Orchestration, scripting in fact, is what allows instant execution of provisioning processes, which are obviously predefined. The advantages of orchestrating are clear: you can make your IT capacity conform PERFECTLY to the needs of the business, both to grow and to shrink, and this is done instantly and automatically.
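As a minimal sketch of what such a predefined script can look like (the step names are invented and not tied to any real orchestration product), provisioning is an ordered list of steps and deprovisioning is its mirror image:

```python
# A minimal, hypothetical sketch of an orchestration "script": provisioning is
# an ordered list of predefined steps, and deprovisioning is its mirror image.
# The step names are illustrative only; a real orchestrator would call
# infrastructure APIs instead of printing.

def run(script, target):
    """Execute every step of a predefined script, in order."""
    for action in script:
        print(f"[{target}] {action}")

PROVISION = [
    "create virtual machine",
    "attach storage (LUN)",
    "configure network",
    "deploy application",
    "register in load balancer",
]

DEPROVISION = [
    "deregister from load balancer",
    "undeploy application",
    "release network configuration",
    "detach storage (LUN)",
    "delete virtual machine",
]

run(PROVISION, "web-07")    # grow
run(DEPROVISION, "web-07")  # shrink
```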

As we will discuss in other business cases on this blog, having farms that grow and shrink automatically with the company's production is key to reducing costs (as long as we are running under a “pay per use” operating model).
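A rough sketch of that idea, with invented thresholds, prices and load figures (no real provider or monitoring API is implied): the farm is resized to follow measured demand, so under pay per use the bill follows production.

```python
import math

# Hypothetical elasticity loop: resize a server farm so that capacity tracks
# real demand. Capacity per server, the pay-per-use rate and the load samples
# are invented for illustration; a real setup would read monitoring data and
# trigger the orchestration scripts shown above.

REQUESTS_PER_SERVER = 100      # assumed capacity of one server (requests/s)
PRICE_PER_SERVER_HOUR = 0.20   # assumed pay-per-use rate

def resize_farm(current_servers: int, load: float) -> int:
    """Return the number of servers needed for the measured load."""
    needed = max(1, math.ceil(load / REQUESTS_PER_SERVER))
    if needed != current_servers:
        direction = "out" if needed > current_servers else "in"
        print(f"scale {direction}: {current_servers} -> {needed} servers")
    return needed

servers = 4
for load in [350, 900, 420, 120]:   # measured demand over four sampling periods
    servers = resize_farm(servers, load)
    print(f"current hourly cost: {servers * PRICE_PER_SERVER_HOUR:.2f}")
```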

Consider that what you get with this way of working (standardizing and orchestrating) is to move the provisioning problem to the supplier side, which impacts the prices providers set. That is, the service provider must absorb elastic peak demand, which is something the consumer of services is spared in the cloud. For the business to remain profitable for providers there is only one explanation: economies of scale.

BOTTOM LINE
The enabler/enhancer of elasticity is orchestration, that is, automating all provisioning, deployment, activation and deactivation processes. Elasticity is in turn the basis for reducing costs and accelerating time to market.


Wednesday, March 18, 2015

Cloud Computing: The key is standardization

Ph. D. Julio Fernandez Vilas
LinkedIn
Any comments? jfvilas@gmail.com

And suddenly there came a cloud, and many thought: why not move all my workload to the cloud and get rid of a management and systems infrastructure problem that has nothing to do with the core of my business?

This is the question that many CIOs and CEOs have surely asked themselves over the last 4-5 years. And of course, technology is often not perceived as a competitive advantage, but as a necessary means to "run the business", one that carries significant costs and requires expertise that contributes very little to the lines of business.

But what if technology is key to my business? And I'm not talking about an IT company, but about a company that uses the Internet as a sales channel. Think of Zara: apart from managing 100% of its e-commerce infrastructure, it will shortly open a store on Alibaba to reach its Asian customers more easily. Or think of Marks & Spencer (we will come back to this later), the largest UK retailer.

There is a clear trend in the market, especially in the specialized press, to compare cloud computing with "Utility Computing", a concept that has been around for a few years and that, from the point of view of functionality, is being revived under this new name.

The term "utility" is related to everything that revolves around consumption, with a concrete way to consume and pay for what is consumed. It's especially important when it comes to electricity, water, or oil.

When the word "computing" is appended, the aim is to associate computing (or IT services in general) with a form of consumption in the style of water or electricity. The same is happening with cloud computing, which is starting to look like a way of consuming IT services on a pay-per-use model.

And it's true. As mentioned in lectures and presentations and all around Internet, one of the most distinguishing characteristics of cloud computing is its “pay per use” model, i.e., lack of investment and exclusively operating costs: no CAPEX and linear OPEX (proportional to consumption, no steps).

So can we compare the use of IT services to electricity consumption? For me (and opinions here are divided) the answer is easy: NO, for several reasons:

  1. An IT service is not "power"; it is not consumed in watts. Why did local power plants begin to merge in the 60s and 70s, giving rise to the global energy giants we live with today? It may seem obvious: because they could, because all of them sell EXACTLY the same product. Watts are all equal, regardless of vendor, which, as we shall see, is not the case with technology.
  2. One of the advantages of utilities is that we can easily switch providers. For example, if I buy my power from EDF, I can easily switch to Enel, RWE or whichever supplier I choose, and this ease is due to the simplicity of the product sold by vendors: the watt. With IT services, each vendor sells a different service offering. Although there are cloud products that seem equivalent (Oracle in the cloud, SQL Server in the cloud, EnterpriseDB, Amazon RDS, etc.), the reality is that they are only compatible in terms of functionality, i.e. they all do the same things; the problem lies in the implementation, since they are technically different and one must work differently with each of them.


In any case, although the utility model does not apply to computing, that does not mean the model proposed by cloud technology is not usable, obviously.

Let’s go back to the electricity example. From early last century to mid-century, it was not unusual to find companies that generated their own electricity, especially industries using motors in their manufacturing processes (anecdotally, cogeneration has returned to the scene a century later). All that disappeared because electricity suppliers took that problem off heavy industry's hands.

But imagine for a moment that we have a factory running 220V AC motors, 83V DC motors, or 4-phase motors. If we ask an electricity supplier to provide us power we will run into a small problem: the power supply is standardized, and vendors can only offer two-phase or three-phase current at 110V, 220V or 380V. What do I do with my 83V motor?

This problem might seem trivial, since "standardization" is something that has been worked on globally throughout the second half of the twentieth century (ISO, for example, was established in 1947). Yet standardization, or rather the lack of it, is one of the inhibitors of cloud adoption at medium and large global companies.

So far we have referred to standardization as a rule to be applied to the base architecture (hardware, operating system, etc.). But standardization is also key to operations. That is, customizing a server is a resource- and time-consuming process that should disappear from datacenters. The aim is for provisioning, deprovisioning and operation processes to be automated (orchestrated, as discussed later), and for all these processes in turn to be standardized. Remember that applying economies of scale is mandatory in order to turn cloud technologies into an attractive offering.

That is, there is little point in standardizing operating systems if the process of deploying applications and services is different for each server in my installation.
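A tiny sketch of that idea, with invented template contents and service names: one standard machine template and one standard deployment routine applied to every server, instead of a hand-crafted procedure per machine.

```python
# Hypothetical sketch: one standard server template and one standard
# deployment routine for every server, instead of per-server customization.
# Template contents and package names are invented for illustration.

STANDARD_TEMPLATE = {
    "architecture": "x86_64",
    "os": "linux",
    "runtime": "java-8",
    "monitoring_agent": True,
}

def deploy(server_name: str, application_package: str) -> dict:
    """Provision a server from the standard template and deploy a package onto it.
    The same steps run for every server, so the process can be orchestrated."""
    server = dict(STANDARD_TEMPLATE, name=server_name)
    server["deployed_package"] = application_package
    print(f"{server_name}: standard image applied, {application_package} deployed")
    return server

# Every server in the farm is produced by exactly the same procedure.
farm = [deploy(f"app-{i:02d}", "billing-service-1.4.zip") for i in range(1, 4)]
```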

BOTTOM LINE
The supply of cloud services, as far as infrastructure is concerned (which, as we shall see, is the most requested service today), is essentially limited to Windows and Linux. That is, entities like BSCH or BBVA, with a financial core hosted on several zSeries machines, can only take advantage of the cloud way of working for part of their technological base, which moreover will not include the core.

Monday, December 8, 2014

Cloud Computing: The 3 layer model

Ph. D. Julio Fernandez Vilas
LinkedIn
Any comments? jfvilas@gmail.com



So far, everything has related to the architecture of the cloud-based technology model. With regard to the implementation or working model, we distinguish at least 3 entities, although each of them may be subdivided depending on business needs:

a) Infrastructure Provider: the entity that provides services in the cloud.
b) Service Provider: the entity that consumes those services in order to offer them to a third party.
c) Service Client: the final consumer.

With a real example, this would look something like the following:


According to the Gartner report on the current state of cloud technologies (http://www.gartner.com/document/2515316, access rights may be required), today we can distinguish 4 modes according to "how" the customer manages the infrastructure used in the cloud:

a) IaaS + Middleware. In this mode the client (subscriber) manages the infrastructure and is responsible for deploying its own middleware (or for using the pre-built middleware models that the service provider makes available).
b) IaaS Cloud-Enabled. For the customer this is still an IaaS mode, but the provider handles the scaling part (platform elasticity).
c) PaaS Cloud-Based. This is where the great change in management arrives. In PaaS mode the client handles only the functionality of the platform, and it is the provider who manages the hardware and software infrastructure.
d) PaaS Cloud-Native. The operating mode is the same as in the previous model, with the addition that the middleware layer is designed and configured to work in cloud mode.

In the figure below we can see the different degree of management that the subscriber of cloud services must perform depending on the contracted mode.
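Restated as a small sketch (my own reading of the four modes described above, not Gartner's wording), the split of responsibilities looks roughly like this:

```python
# Who manages what in each of the four modes, as described above.
# "subscriber" = the client; "provider" = the cloud vendor.

MANAGEMENT_SPLIT = {
    "IaaS + Middleware": {
        "subscriber": ["infrastructure", "own middleware"],
        "provider":   ["physical hardware"],
    },
    "IaaS Cloud-Enabled": {
        "subscriber": ["infrastructure", "middleware"],
        "provider":   ["physical hardware", "scaling (elasticity)"],
    },
    "PaaS Cloud-Based": {
        "subscriber": ["platform functionality (applications)"],
        "provider":   ["hardware and software infrastructure"],
    },
    "PaaS Cloud-Native": {
        "subscriber": ["platform functionality (applications)"],
        "provider":   ["hardware and software infrastructure",
                       "middleware designed for the cloud"],
    },
}

for mode, split in MANAGEMENT_SPLIT.items():
    print(f"{mode}: subscriber manages {', '.join(split['subscriber'])}")
```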



Saturday, November 8, 2014

Cloud Computing: Strata

Ph. D. Julio Fernandez Vilas
Any comments? jfvilas@gmail.com

There is no standard definition of what "Cloud Computing" is, and two reasons are responsible for this lack:
  1. There are different implementation models (infrastructure, application, software, ...), so there is no single precise definition.
  2. Provider offerings are very diverse, so a strict definition would not cover all of them.

But if I had to define what Cloud Computing is, I would choose the definition included in Wikipedia (read the interesting Wikipedia entry here):
"Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network. At the foundation of cloud computing is the broader concept of converged infrastructure and shared services." The figure below is a good graphic illustration of the generic architecture of cloud computing, where we can observe 3 clearly defined levels:


Here we can see the three main "flavors" of cloud computing offerings: infrastructure as a service (IaaS), platform as a service (PaaS) and application software as a service (SaaS). In the market and the specialized press we can find other types of cloud offerings focused on specific areas of technology, such as Security as a Service (SECaaS), Storage as a Service (STaaS), etc., but ultimately any cloud offering can be framed within one of the three categories we have mentioned:

  1. Infrastructure (IaaS) refers to the possibility of consuming infrastructure services in the cloud, such as computing power, storage, connectivity, etc. At this level, the management done by the customer (subscriber) is performed at the virtual machine or storage unit (LUN) level, for example, while the infrastructure services provider is responsible for "only" managing the hardware (plus all the cloud-related concerns, such as scalability, high availability, etc.).
  2. Platform (PaaS). When a client wants to get rid of the costs and tasks associated with managing the infrastructure (machine maintenance, procurement of space, bandwidth allocation, etc.), it has the option of consuming services in a "platformed" mode, where the level of management the subscriber must perform is much lower than what the provider handles. In this model, the client focuses on developing applications, which are in fact the real solutions to its business problems. As an example, on the Azure platform it is possible to develop a .NET web application and deploy a packaged version to a cloud server without installing anything. Here it is Microsoft itself that is responsible for configuring and managing the machines and application servers; clients only have to worry about generating deployable software packages for the platform. This level of the cloud computing stack is where you would place the old ASPs (application service providers).
  3. Software / Application (SaaS). At the highest level of service provision in the cloud we find Software as a Service. It is the easiest to understand, since it is the stratum closest to the business needs, and therefore farthest from the technical details. It can be easily understood with examples such as Office 365 or Gmail. (A short sketch contrasting what the subscriber touches at each stratum follows this list.)
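To make the difference between the three strata tangible, here is a tiny sketch of what the subscriber actually touches at each level; every operation is hypothetical and merely stands for the kind of task involved, not for any real provider API:

```python
# Hypothetical pseudo-operations showing what the subscriber manages at each
# stratum; they only contrast the level of abstraction.

def use_iaas():
    # IaaS: the subscriber works at VM / LUN level.
    print("create VM (2 vCPU, 4 GB), attach 100 GB LUN, install OS, patch, monitor")

def use_paas():
    # PaaS: the subscriber only pushes a deployable package; the provider
    # configures and runs the machines and application servers.
    print("upload package billing-web-1.0.zip and let the platform run it")

def use_saas():
    # SaaS: the subscriber just uses the application (e.g. webmail).
    print("open the mailbox in a browser; nothing to deploy or manage")

for layer in (use_iaas, use_paas, use_saas):
    layer()
```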

Friday, October 31, 2014

Cloud Computing: Origins

Ph. D. Julio Fernandez Vilas
Any comments? jfvilas@gmail.com



The arrival of cloud computing technologies in the IT market has not happened in big-bang mode.
Unlike other technologies more directly tied to the research and innovation activities of IT manufacturers (such as the use of flash technology in storage systems, higher recording densities on tape, or FCoE), cloud computing is not a technology in itself, but rather a "way of doing things".

In my view, it all started in the mid-90s, with the emergence of ISPs (Internet Service Providers) and ASPs (Application Service Providers). Those were the years of the birth of the Internet age. The need for an Internet connection to consume content, and the need for a "presence" on the Internet to become a content generator, were quickly covered by ISPs (those that provided access) and ASPs (those that provided the infrastructure needed to publish content).

Thereafter, as businesses acquired knowledge of the new Internet technologies, service providers began to lose relevance, and it was the companies themselves, seeing the potential of the Internet as a new channel to open new lines of business (or expand existing ones), that took over managing their own online presence. In fact, web applications that had been mere information portals became sites where e-commerce could be done, both B2C (the most successful business of all time) and B2B.
 
Obviously, these execution capabilities may be beyond the reach of many businesses, which left room for the ASPs to survive (ISPs were gradually absorbed by the telcos). But in the second era of the Internet, where "being on the Internet" was no longer just about disseminating information (it was mainly about doing business), the ASPs had to evolve: static websites were not here to stay, and this eventually generated a new way of working, and therefore a new way of doing business online. Then we came to the "dot-com" era.

Everyone wanted to do business very quickly, so there was no time to create IT departments or build a datacenter; time-to-market was critical. How was this problem of getting quicker deployments resolved? A new type of service provider appeared: the hosters (continuing the story, we might call them HSPs, Hosting Service Providers).

With HSPs, companies no longer have to worry about the more purely technological side. They just need to focus on defining a good business strategy. Once that is defined, if a company has some development capacity, it can implement the solution and it is ready to go. If not, it just has to find a company to do the development and deploy it to the chosen HSP.

Alongside all the evolution we have seen in recent years in everything related to "the web" and mobile, the infrastructure world has also undergone major changes. From the IBM mainframe virtualizer of the 60s (now renamed zVM) to today's KVM, Xen, vSphere, Hyper-V, etc., the world of virtualization has evolved at a dizzying pace, providing a new vision and a new way of managing infrastructure.

Workload consolidation, cost reduction, datacenter simplification, high availability, and proper utilization of installed resources are some of the benefits we have been migrating towards, almost without realizing it, simply by virtualizing our hardware.

But virtualization is not only a valuable asset for companies with their own datacenter. Those taking the greatest advantage of developments in virtualization technologies are undoubtedly the HSPs.

HSPs have finally been able to get rid of a big problem: power and cooling. At the time of the birth of the HSPs, one operating system instance was installed on one physical machine (those were the years when "pizza-box" servers became popular: 1 server = 1U, i.e., 42 servers per standard rack), and powering and cooling hosters’ datacenters was a real headache.

With virtualization, HSPs consolidate operating system instances onto far fewer physical machines, allowing them to lower their costs dramatically and improve their service offerings.

And we are not just talking about physical space. Power consumption decreases dramatically, which in turn decreases the need for cooling. That is, virtualization has a double impact on cost savings: besides the savings inherent in consolidating services, it also helps reduce datacenter PUE.

HSPs such as Rackspace, SoftLayer, Terremark, etc., can offer hosting services at a much lower price, and with a much quicker provisioning process.

Infrastructure virtualization has changed the way we manage our hardware infrastructure, and the next step is already knocking on the datacenter's door: network virtualization.

We see it ourselves in our datacenters: provisioning a virtual machine takes minutes, but configuring the network part is a series of manual tasks that must be added to the total provisioning time.

New trends in what is known as "software-defined" are set to finish simplifying all provisioning processes and operations, and at the same time will represent the next step in the design and management of high availability (HA) and disaster recovery (DR) systems.

Software-Defined Networking (SDN), the Software-Defined Data Center (SDDC), and Software-Defined Storage (SDS) are some of the initiatives (best described as pure technology strategies) starting to land on the desks of CIOs and datacenter architects. They are where the next battle for control of the datacenter between computer hardware manufacturers and communications hardware manufacturers will be fought, as the software-defined trends will be responsible for completing the virtualization process and leading us to what I call the Virtual Datacenter.

Thursday, October 23, 2014

Cloud Application Layer Part III - Up & Down

Ph. D. Julio Fernandez Vilas
Any comments? jfvilas@gmail.com

Let's now recall the concepts of public cloud and private cloud, which are easy to understand if we understand the relationship between the Internet (public) and the intranet (private). When a company uses the public cloud and then publishes services internally in the organization using the same technologies, i.e., in a private cloud, the company is acting both as a client consuming cloud services and as a provider offering services to its own internal IT consumers.

This is important when assessing costs, as it allows consumption to be billed individually, charging each business unit the costs associated with the services it uses.
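A minimal sketch of that internal chargeback idea, with invented usage records and unit prices:

```python
# Hypothetical internal chargeback: usage is metered per business unit and
# billed back at pay-per-use rates. Records and prices are invented.

PRICES = {"vm_hour": 0.20, "gb_stored": 0.03}   # assumed unit prices

usage = [
    {"business_unit": "retail",    "metric": "vm_hour",   "amount": 1200},
    {"business_unit": "retail",    "metric": "gb_stored", "amount": 500},
    {"business_unit": "insurance", "metric": "vm_hour",   "amount": 300},
]

def chargeback(records, prices):
    """Aggregate metered consumption into a cost per business unit."""
    bill = {}
    for record in records:
        cost = record["amount"] * prices[record["metric"]]
        bill[record["business_unit"]] = bill.get(record["business_unit"], 0.0) + cost
    return bill

print(chargeback(usage, PRICES))   # {'retail': 255.0, 'insurance': 60.0}
```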

Moreover, we must consider that a company is a living entity, constantly changing, and offering services from a cloud, whether private or public, is a key enabler when making changes to those services. That is, we give the company a considerable amount of flexibility, since internal service consumers do not know whether those services are being offered from inside or outside the organization.

For the last step in the cloud adoption process, it is very important to have taken the trouble to design well the services consumed in the form of a private cloud. If the design is good, migration to a public cloud service can be done easily. It is important to focus on one key concept for this migration to be easy: STANDARDIZATION.

We need to design our private cloud services so that they are compatible with the public cloud. When we talk about compatibility or standardization, it is important to have a clear understanding of which layer of cloud technology we are focusing on. For example, if we are thinking about IaaS, we have to make sure we use only x86 architectures, restrict ourselves to Windows and Linux, and use standard virtualization technologies, iSCSI, and so on.
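As an illustration of what designing for compatibility can mean at the IaaS layer, here is a small sketch that checks a workload description against the constraints just listed (the constraint values follow the text above; the workload format itself is invented):

```python
# Hypothetical compatibility check for an IaaS migration: a private-cloud
# workload is only "cloud-ready" if it sticks to the standardized choices
# mentioned above (x86, Windows/Linux, standard virtualization, iSCSI).

PUBLIC_CLOUD_CONSTRAINTS = {
    "architecture": {"x86", "x86_64"},
    "os": {"windows", "linux"},
    "storage_protocol": {"iscsi"},
}

def incompatibilities(workload: dict) -> list:
    """Return the attributes of a workload that would block migration."""
    return [key for key, allowed in PUBLIC_CLOUD_CONSTRAINTS.items()
            if workload.get(key, "").lower() not in allowed]

legacy_core = {"architecture": "z/Architecture", "os": "z/OS", "storage_protocol": "ficon"}
web_farm    = {"architecture": "x86_64", "os": "linux", "storage_protocol": "iscsi"}

print(incompatibilities(legacy_core))  # ['architecture', 'os', 'storage_protocol']
print(incompatibilities(web_farm))     # []
```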

If you are thinking about SaaS (cloud email, for example), what is standardized is functionality. That is, we can migrate our on-premises platform (Exchange, Lotus, etc.) to an email platform in the cloud (Gmail, Outlook.com, etc.), provided we are assured that the functionality we will get on the new platform is at least the same as what we have on the current one. We should note that in SaaS projects we will always find a CAPEX amount corresponding to the migration itself.

And all this concerns going up to the cloud. What about coming down from the cloud? If we have taken into account everything said so far, the process of coming down (or even transferring to another provider) is exactly the same. Remember that we are talking about going up to the cloud from a starting point that assumes we are already working with a private cloud.

Friday, December 9, 2011

Cloud Application Layer Part II.

Ph. D. Julio Fernandez Vilas
Any comments? jfvilas@gmail.com

One of the strongest arguments in defense of cloud computing is its cost model, as we have seen in previous articles. Although keeping control of costs is important when moving from a model based on CAPEX and OPEX to a pay-per-use model (CAPEX = 0 and linear OPEX), it is especially important not to end up tied to a provider.

It is vital to be able to change provider at no cost, or at least at the minimum possible cost, since the prices offered by vendors will probably be subject to revision, which could undermine the competitive advantage of the pay-per-use model.

A change of provider should be achievable simply by modifying configuration files or, if needed, some kind of helper class.

Again, when we talk about integrating cloud services into business applications, it is necessary to create an intermediate layer that isolates applications from the cloud services they use.

We are talking again about the Cloud Access Layer, which in this case is embodied in a software layer, typically a connector or adapter. This piece of software is what (inside our cloud computing stack) we call the cloud access point.

Developing or adapting applications so that they access the cloud via access points ensures that changing providers in the future will be easy.
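A minimal sketch of such a cloud access point, assuming the applications only need a simple "store and retrieve an object" service; the adapter classes and the configuration key are invented, and a real adapter would wrap the corresponding vendor SDK:

```python
# Hypothetical Cloud Access Layer: applications depend only on the abstract
# access point; switching provider is a configuration change, not a code change.

from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """The cloud access point the business applications are written against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Local adapter, useful for the private cloud or for testing."""
    def __init__(self):
        self._objects = {}
    def put(self, key, data):
        self._objects[key] = data
    def get(self, key):
        return self._objects[key]

class ProviderAStore(ObjectStore):
    """Adapter for a hypothetical external provider; the real vendor SDK calls
    would live here, hidden from the applications."""
    def put(self, key, data):
        raise NotImplementedError("wrap provider A's storage API here")
    def get(self, key):
        raise NotImplementedError("wrap provider A's storage API here")

ADAPTERS = {"memory": InMemoryStore, "provider_a": ProviderAStore}

def open_store(config: dict) -> ObjectStore:
    """Pick the adapter from configuration; changing provider means editing config only."""
    return ADAPTERS[config["storage_provider"]]()

store = open_store({"storage_provider": "memory"})
store.put("invoice-001", b"...")
print(store.get("invoice-001"))
```

The business code above never names a concrete vendor, and that is precisely the property that keeps the cost of switching providers low.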

This ease of switching lets us take advantage of competitive rates for the same service from different providers, making the cost of changing providers so small that we could exploit even the slightest variation in the price of the cloud services we are using.

Now that we have introduced the concept of provider as an important element within the cloud computing stack, we will redesign the stack. This is CCS2.0 (Cloud Computing Stack 2.0).