
Wednesday, March 18, 2015

Cloud Computing: The key is standardization

Ph. D. Julio Fernandez Vilas
Any comments? jfvilas@gmail.com

And suddenly there came a cloud, and many thought: why not move all my workload to the cloud and get rid of a management problem and a systems infrastructure that has nothing to do with the core of my business?

This is the question that many CIOs and CEOs have surely asked themselves over the last 4-5 years. And of course, technology is often not perceived as a competitive advantage, but as a necessary step to "run the business", one that carries significant costs and requires expertise that contributes very little to the lines of business.

But what if technology is key to my business? And I am not talking about an IT company, but about a company that uses the Internet as a sales channel. Think of Zara: apart from managing 100% of its e-commerce infrastructure, it will shortly open a store on Alibaba to reach more of its Asian customers. Or think of Marks & Spencer (we will come back to this later), the largest UK retailer.

There is a clear trend in the market, especially in the specialized press, to compare cloud computing with "utility computing", a concept that has been around for some years and that, from the point of view of functionality, is being revived under this new name.

The term "utility" is related to everything that revolves around consumption, with a concrete way to consume and pay for what is consumed. It's especially important when it comes to electricity, water, or oil.

When the word "computing" is appended, the aim is to associate computing (or IT services in general) with a form of consumption in the style of water or electricity. The same is happening with cloud computing, which is starting to look like a way of consuming IT services on a pay-per-use model.

And it is true. As mentioned in lectures, presentations and all around the Internet, one of the most distinguishing characteristics of cloud computing is its pay-per-use model, i.e., no investment and exclusively operating costs: no CAPEX and linear OPEX (proportional to consumption, with no steps).
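
To make the "no CAPEX, linear OPEX" idea concrete, here is a minimal sketch comparing the shape of the two cost curves. Every price and capacity figure in it is invented purely for illustration, not taken from any real vendor.

# Illustrative only: all prices and capacities below are assumptions chosen
# to show the shape of the two cost curves, not real vendor figures.

def on_premise_cost(peak_requests_per_s: int) -> float:
    """Stepped CAPEX: you buy whole servers sized for the peak load."""
    requests_per_server = 500          # assumed capacity of one server
    server_price = 8_000.0             # assumed purchase price per server
    servers_needed = -(-peak_requests_per_s // requests_per_server)  # ceiling division
    return servers_needed * server_price

def cloud_cost(requests_served: int) -> float:
    """Linear OPEX: you pay strictly in proportion to what you consume."""
    price_per_million = 3.5            # assumed pay-per-use rate
    return requests_served / 1_000_000 * price_per_million

# The on-premise curve jumps in steps each time one more server is needed;
# the cloud curve grows smoothly with actual consumption.
for load in (100, 400, 501, 1_200):
    print(load, on_premise_cost(load), cloud_cost(load * 86_400 * 30))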

But can we compare the use of IT services to the consumption of electricity? For me (and opinion here is divided) the answer is easy: NO, for several reasons:

  1. An IT service is not "power"; it is not consumed in watts. Why did local power plants begin to merge in the 60s and 70s, giving rise to the huge global power companies we live with today? It may seem obvious: because they could, because all of them sell EXACTLY the same product. Watts are all equal, regardless of vendor, which, as we shall see, is not the case with technology.
  2. One of the advantages of utilities is that we can easily switch providers. For example, if I buy my power from EDF, I can easily switch to Enel, RWE or the relevant local distributor, and this ease is due to the simplicity of the product the vendors sell: the watt. For IT services, each vendor sells a different service offering. Although there are cloud products that seem equivalent (Oracle in the cloud, SQL Server in the cloud, EnterpriseDB, Amazon RDS, etc.), the reality is that they are only compatible in terms of functionality, i.e. they all do the same things; the problem is in the implementation, since they are technically different and one must work differently with each of them (see the sketch below this list).
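
As an illustration of that second point, the sketch below provisions "the same" managed PostgreSQL database on two providers. The first call follows AWS's documented boto3 RDS API; the second provider, its othercloud_sdk module and every parameter name in it are invented placeholders, because the point is precisely that there is no common vocabulary to fall back on.

# Sketch of why "the same database in two clouds" is not interchangeable.
import boto3
# import othercloud_sdk   # hypothetical second provider, shown only for contrast

def provision_postgres_on_aws():
    rds = boto3.client("rds", region_name="eu-west-1")
    return rds.create_db_instance(
        DBInstanceIdentifier="shop-db",
        Engine="postgres",
        DBInstanceClass="db.t3.medium",
        AllocatedStorage=100,
        MasterUsername="admin",
        MasterUserPassword="change-me",
    )

def provision_postgres_on_othercloud():
    # Functionally the same request (a managed PostgreSQL instance), but the
    # vocabulary, sizing units and authentication model are all different,
    # so none of the code above can be reused as-is. (Hypothetical SDK.)
    client = othercloud_sdk.DatabaseClient(project="shop", zone="europe-1")
    return client.instances.create(
        name="shop-db",
        engine="POSTGRES_15",
        tier="standard-2cpu-8gb",
        storage_gb=100,
    )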


In any case, although the utility model does not apply to computing, that does not mean the model proposed by cloud technology is unusable, obviously.

Let's go back to the example of electricity. From early last century to mid-century, it was not unusual to find companies that generated their own electricity, especially industries using motors in their manufacturing processes (anecdotally, cogeneration has returned to the scene a century later). All that disappeared because electricity suppliers took that problem off heavy industry's hands.

But imagine for a moment that we have a factory that runs 220V AC motors, 83V DC motors, or four-phase motors. If we call an electricity supplier to provide us with power, we will run into a small problem: the power supply is standardized, and suppliers can only offer two-phase or three-phase current at 110V, 220V or 380V. What do I do with my 83V-powered motor?

This problem may seem trivial, since standardization is something that has been worked on globally throughout the second half of the twentieth century (ISO, for example, was established in 1947). Yet the lack of standardization is one of the inhibitors of the adoption of cloud technologies at medium and large global companies.

So far we have referred to standardization as a rule to be applied to the base architecture (hardware, operating system, etc.). But standardization is also key to operations. That is, customizing a server is a resource- and time-consuming process that should disappear from datacenters. The aim is that provisioning, deprovisioning and operation processes can be automated (orchestrated, as discussed later), and all these processes may in turn also be standardized. Remember that applying economies of scale is mandatory in order to turn cloud technologies into an attractive offering.

In other words, standardizing operating systems is of little use if the process of deploying applications and services is different for each server in my installation. A sketch of what the alternative looks like follows.
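
A minimal sketch of that standardized operation: a single, data-driven provisioning recipe applied identically to every server, so the whole lifecycle can be orchestrated end to end. The step names, package lists and the run_step() executor are illustrative assumptions, not a real tool.

# One standard recipe; only data may vary between servers, never the steps.
STANDARD_RECIPE = [
    ("install_base_packages", {"packages": ["openssh-server", "monitoring-agent"]}),
    ("configure_ntp",         {"servers": ["ntp1.example.com", "ntp2.example.com"]}),
    ("deploy_application",    {"artifact": "webshop-1.4.2.tar.gz"}),
    ("register_in_inventory", {}),
]

def run_step(host: str, step: str, params: dict) -> None:
    # Stand-in for the real automation backend (SSH, agent, orchestration API, ...).
    print(f"[{host}] {step} {params}")

def provision(host: str, overrides: dict | None = None) -> None:
    """Run the same ordered steps on every host, with per-host data overrides only."""
    for step, params in STANDARD_RECIPE:
        effective = {**params, **(overrides or {}).get(step, {})}
        run_step(host, step, effective)

# Deprovisioning would simply be another standard recipe, so the whole
# lifecycle becomes orchestrable.
for server in ("web-01", "web-02", "web-03"):
    provision(server)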

BOTTOM LINE
The supply of cloud services, as far as infrastructure is concerned (which, as we shall see, is the most requested service today), is limited to Windows and Linux. That is, entities like BSCH or BBVA, with a financial core hosted on several zSeries machines, can only take advantage of the cloud way of working for part of their technology base, and that part, in addition, will not be the core.

Friday, October 31, 2014

Cloud Computing: Origins

Ph. D. Julio Fernandez Vilas
Any comments? jfvilas@gmail.com



The arrival of cloud computing technologies to the IT market has not happened in big-bang mode. Unlike other technologies more directly tied to the research and innovation activities of IT manufacturers (such as the use of flash technology in storage systems, higher recording densities on tape, or FCoE), cloud computing is not a technology in itself, but rather a "way of doing".

In my view, it all started in the mid-90s, with the emergence of ISPs (Internet Service Providers) and ASPs (Application Service Providers). These were the years of the birth of the Internet age. The need for an Internet connection to be a consumer of content, and the need to have a "presence" on the Internet to become a content generator, was quickly covered by ISPs (those that provided access) and ASPs (those that provided the infrastructure needed to publish content).

Thereafter, as businesses acquired knowledge of the new Internet technologies, service providers began to lose prominence, and it was the companies themselves, seeing the potential of the Internet as a new channel to open new lines of business (or expand existing ones), which took care of managing their online presence personally. In fact, web applications that had been mere information portals became sites where e-commerce activities could be carried out, both B2C (the most successful business of all time) and B2B.
 
Obviously, these business execution capabilities may exceed what many companies can handle themselves, which left room for the ASPs to exist (ISPs were gradually absorbed by the telcos). But in the second era of the Internet, where "being on the Internet" was no longer just about disseminating information (it was mainly about doing business), the ASPs had to evolve: static websites had not come to stay, and this eventually generated a new way of working, and therefore a new way of doing business online. Then came the "dot-com" era.

Everyone wanted to start doing business very quickly, so there was no time to create departments or build a datacenter; time-to-market was critical. How was this problem of getting quicker deployments resolved? A new type of service provider appeared: the hosters (continuing the story, we might call them HSPs, Hosting Service Providers).

With HSPs, companies no longer have to worry about the more purely technological side. They just need to focus on defining a good business strategy. Once it is defined, if a company has some development capacity, it can implement the solution and it is ready. If not, it just has to find a company that will do the development and deploy it to the chosen HSP.

Alongside all the evolution we have been living through in recent years in everything related to "the web" and mobile, the infrastructure world has also undergone major changes. From the IBM mainframe virtualizer of the 60s (now renamed z/VM) to the current KVM, Xen, vSphere, Hyper-V, etc., the world of virtualization has evolved at a dizzying pace, providing a new vision and a new way of managing infrastructure.

Workload consolidation, cost reduction, simplification of the datacenter, high availability, and proper utilization of the installed resources are some of the benefits we have been moving towards, almost without realizing it, simply by virtualizing our hardware.

But virtualization is not only a valuable asset for companies with their own datacenter. Those taking the greatest advantage of developments in virtualization technologies are undoubtedly the HSPs.

HSPs have finally been able to get rid of a big problem: "power and cooling". At the time of the birth of the HSPs, one operating system instance was installed on one physical machine (those were the years when "pizza box" servers became popular: 1 server = 1U, i.e., 42 servers per standard rack), and powering and cooling the hosters' datacenters was a real headache.

With virtualization, HSPs consolidate operating system instances onto far fewer physical machines, allowing them to lower their costs dramatically and improve their service offerings.

And we are not just talking about physical space. Power consumption decreases dramatically, which in turn decreases the need for cooling. That is, virtualization has a double impact on cost savings: besides the saving inherent in consolidating services, it also helps reduce the datacenter's PUE overhead.
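
A back-of-envelope illustration of that double saving. Every figure (server counts, wattages, consolidation ratio, PUE) is assumed purely for the sake of the arithmetic.

# Illustrative consolidation arithmetic; all numbers are assumptions.
physical_servers_before = 420        # e.g. 10 racks of 1U "pizza" servers
watts_per_server        = 350
consolidation_ratio     = 15         # VMs hosted per virtualization host
watts_per_host          = 700
pue                     = 1.8        # total facility power / IT power

hosts_after = -(-physical_servers_before // consolidation_ratio)    # ceiling -> 28 hosts

it_power_before = physical_servers_before * watts_per_server        # 147,000 W
it_power_after  = hosts_after * watts_per_host                      #  19,600 W

# The facility overhead (cooling, distribution) multiplies the saving:
# every IT watt removed also removes (PUE - 1) watts of overhead.
facility_before = it_power_before * pue
facility_after  = it_power_after * pue

print(f"IT power:       {it_power_before/1000:.1f} kW -> {it_power_after/1000:.1f} kW")
print(f"Facility power: {facility_before/1000:.1f} kW -> {facility_after/1000:.1f} kW")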

HSPs such as Rackspace, SoftLayer, Terremark, etc., can offer hosting services at a much lower price, and with a much quicker provisioning process.

Infrastructure virtualization has changed the way we manage our hardware, and the next step is already knocking on the doors of datacenters: network virtualization.

We see it ourselves in our datacenters: provisioning a virtual machine takes minutes, but configuring the network part is a series of manual tasks that must be added to the total provisioning time.

New trends in what is known as "software-defined" are called to finish simplifying all provisioning processes and operations, and at the same time they will mean the next step in the design and management of high availability (HA) and disaster recovery (DR) systems.

Software-Defined Networking (SDN), the Software-Defined Data Center (SDDC), and Software-Defined Storage (SDS) are some of the initiatives (better described as technology strategies than as pure technologies) that are beginning to land on the desks of CIOs and datacenter architects, and where the next battle for control of the datacenter between computer hardware manufacturers and communications hardware manufacturers will be fought, since the software-defined trends will be responsible for completing the virtualization process and leading us to what I call the Virtual Datacenter.

Friday, December 9, 2011

Cloud Application Layer Part II.

Ph. D. Julio Fernandez Vilas
Any comments? jfvilas@gmail.com

One of the strongest arguments in defense of cloud computing is its cost model, as we have seen in previous articles. Although keeping control of costs is important when moving from a model based on CAPEX and OPEX to a pay-per-use model (CAPEX = 0, there is no CAPEX and OPEX is linear), it remains especially important not to be tied to a provider.

It is vital to be able to change provider at no cost, or at least at the minimum possible cost, since the prices offered by vendors will probably be subject to revision, which could undermine the competitive advantages of the pay-per-use model.

A change of provider should be achievable simply by modifying configuration files or, at most, some kind of helper class if needed.

Again, when we talk about integrating cloud services into business applications, it is necessary to create an intermediate layer that isolates applications from the cloud services they use.

We are talking again about the Cloud Access Layer, which in this case is embodied in a software layer, typically a connector or adapter. This piece of software is what (within our cloud computing stack) we call the cloud access point.
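
A minimal sketch of what such an access point could look like in code, assuming a simple object-storage interface. The provider names, methods and configuration key are illustrative, not any real SDK: the point is only that applications depend on the abstract interface, and the concrete adapter is chosen from configuration.

from abc import ABC, abstractmethod

class ObjectStorage(ABC):
    """The only thing the application is allowed to depend on."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class ProviderAStorage(ObjectStorage):
    def put(self, key, data): ...   # provider A's SDK calls would go here
    def get(self, key): ...

class ProviderBStorage(ObjectStorage):
    def put(self, key, data): ...   # provider B's SDK calls would go here
    def get(self, key): ...

_ADAPTERS = {"provider_a": ProviderAStorage, "provider_b": ProviderBStorage}

def storage_from_config(config: dict) -> ObjectStorage:
    # Switching provider is reduced to changing this one configuration value.
    return _ADAPTERS[config["cloud.storage.provider"]]()

# Usage: the application never names a provider directly.
backend = storage_from_config({"cloud.storage.provider": "provider_a"})
backend.put("invoice-001.pdf", b"%PDF-...")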

Developing or adapting applications so that they access the cloud via access points ensures the ease of changing provider in the future.

This ease of switching providers makes it possible to take advantage of competitive rates for the same service (from different providers), so that the cost of changing provider is small enough to exploit the slightest variation in the cost of the cloud services we are using.

Now that we have introduced the concept of provider as an important element within the cloud computing stack, we will redesign the stack. This is CCS2.0 (Cloud Computing Stack 2.0).