Defining cloud computing services: benefits and caveats


Michael Kennedy, ACG Research


Cloud computing service delivery has a strong business case that includes cost reduction, service acceleration, and improved service delivery quality and reliability. Before developing the business case, however, it is necessary to define cloud services because the term cloud has been overused.

One simple definition is that cloud computing provides computation, software, data access, and storage resources without requiring cloud users to know the location and other details of the computing infrastructure. Unfortunately, this definition is inadequate, as it applies equally well to the time-sharing services of the 1970s. In my view, a cloud services definition must also include services accessed via the Internet and a Web browser, minimal IT skills required for implementation, use of underlying virtualization technologies, and Web services APIs. Even with these additional qualifications, an argument can be made that the definition encompasses many legacy managed service and hosting offerings.

My final cloud computing service differentiator is that the service literally leverages one or more network clouds that provide dynamic access to multiple data centers and resources such as storage and software services. Deployment models include public, private and hybrid clouds. Community clouds involve sharing of infrastructure among several organizations from a specific community with common concerns.

Cloud computing services by definition enable resource sharing across multiple data centers and other resources. This creates the opportunity to reduce each data center's peak capacity (measured as the number of simultaneously active virtual machines) down toward the average capacity of all data centers, by using the network (cloud) to shift load from highly utilized data centers to those with slack capacity. Because data centers are sized to meet peak capacity requirements, reducing peak capacity directly lowers capex (capital expense). In a recent study I found that virtual machine capacity could be reduced by 40 percent, with a 37 percent capex reduction, by pooling resources across multiple data centers.
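The arithmetic behind this argument can be sketched in a few lines: when data centers peak at different times, the peak of the combined demand is lower than the sum of the individual peaks. The per-hour demand figures below are hypothetical, chosen only to illustrate the mechanism (they happen to produce a reduction near the 40 percent figure from the study, but they are not the study's data).

```python
# Illustrative sketch (hypothetical data, not the study's model): how
# pooling shifts the required build-out from the sum of per-data-center
# peaks toward the peak of the combined demand.

def required_capacity(demand_by_dc, pooled):
    """Return total VM capacity that must be built.

    demand_by_dc: one demand series (VMs per hour) per data center.
    pooled=False sizes each data center for its own peak; pooled=True
    sizes the combined footprint for the peak of the summed demand.
    """
    if pooled:
        combined = [sum(hour) for hour in zip(*demand_by_dc)]
        return max(combined)
    return sum(max(series) for series in demand_by_dc)

# Three data centers whose demand peaks in different hours (hypothetical).
demand = [
    [900, 400, 300, 200],   # DC A peaks in hour 0
    [200, 950, 300, 250],   # DC B peaks in hour 1
    [250, 300, 880, 300],   # DC C peaks in hour 2
]

standalone = required_capacity(demand, pooled=False)  # each DC sized alone
shared = required_capacity(demand, pooled=True)       # network shifts load
print(standalone, shared, round(1 - shared / standalone, 2))
```

The savings grow as the data centers' peaks become less correlated; if all sites peaked in the same hour, pooling would save nothing.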

Resource pooling under the cloud computing model can also improve service availability at lower cost by eliminating the need to build redundant system elements into every data center. Pooling likewise enhances network performance, improving accessibility and agility while reducing latency and packet loss.

Innovations within the data center, including service provisioning automation, virtualization of servers, storage, and networks, and Ethernet fabrics, are all considered part of the cloud computing services model. Automation of the service provisioning lifecycle produces dramatic reductions in opex (operating expense) and service implementation time as compared to traditional semi-automation and homegrown script methods. Service catalogs and process orchestration are used to create self-serve approaches to service provisioning, dramatically reducing the manual work required to provision new services. In one case study, service provisioning automation reduced opex by 88 percent and cut service provisioning time from eight months to two.
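The catalog-plus-orchestration pattern can be sketched in miniature: the catalog entry is the single source of truth for what a service offering contains, and the orchestrator runs those steps in order instead of an operator hand-scripting each request. All offering names and step names below are hypothetical, not drawn from any vendor's product.

```python
# Minimal sketch of catalog-driven provisioning (hypothetical names).
# A service catalog maps each offering to an ordered workflow.
CATALOG = {
    "web-vm-small": ["allocate_vm", "attach_storage", "configure_network"],
    "db-cluster":   ["allocate_vm", "attach_storage", "configure_network",
                     "install_database"],
}

def provision(offering):
    """Run the orchestration steps for a catalog offering in order.

    Replaces per-request manual scripting with a repeatable workflow
    driven entirely by the catalog entry.
    """
    steps = CATALOG.get(offering)
    if steps is None:
        raise KeyError(f"{offering!r} is not in the service catalog")
    completed = []
    for step in steps:
        # A real orchestrator would call an infrastructure API here;
        # this sketch only records that the step ran.
        completed.append(step)
    return completed

print(provision("web-vm-small"))
```

The self-serve quality comes from the fact that a user picks an offering and the same workflow runs every time; the opex savings come from removing the manual work inside the loop.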

Data center design is evolving as applications have moved from client-server architectures to service-oriented architecture (SOA). An application is implemented as a series of connected components distributed across multiple servers with shared workloads. Server virtualization maximizes server utilization and enables resiliency and agility, while storage convergence and virtualization yield similar benefits.

Data center traffic scale and volatility are growing in response to these changes, and traffic flow patterns are changing as well. In addition to the traditional flow of traffic from servers to the data center core (north-south, or N-S), traffic in the modern data center also flows from server to server and from server to storage (east-west, or E-W). Consequently, data center network infrastructures must support ever increasing scale and any-to-any connectivity. Ethernet switching designs referred to as network fabrics are being introduced to meet this any-to-any connectivity requirement.

The fabric architecture's any-to-any connectivity reduces the data center switching design from three tiers of Ethernet switches to two. In a recent study I found that, when deployed in very large data centers, the fabric design achieves 58 percent to 76 percent lower total cost of ownership (TCO) than the traditional three-tier Ethernet switching design, and scales capital and operating expense more linearly.
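One concrete consequence of collapsing a tier is a shorter worst-case east-west path. Assuming a simple up-and-over model (traffic climbs to the top tier and back down, which is the standard worst case and not a claim from the study), a path through t tiers traverses 2t - 1 switches:

```python
# Sketch of worst-case server-to-server switch hops under a simple
# up-and-over model: climb through every tier, then descend again.

def worst_case_path(tier_names):
    """Return the sequence of switch tiers a worst-case flow traverses:
    up through every tier, then back down (2*t - 1 hops for t tiers)."""
    up = list(tier_names)                    # e.g. access -> agg -> core
    down = list(reversed(tier_names))[1:]    # back down, skipping the top
    return up + down

three_tier = worst_case_path(["access", "aggregation", "core"])
two_tier = worst_case_path(["leaf", "spine"])
print(len(three_tier), len(two_tier))
```

Fewer switch hops per flow means less cumulative latency and fewer oversubscribed inter-tier links for east-west traffic, which is where the fabric design's advantage concentrates.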

Claims for new technologies and concepts such as cloud computing services are often exaggerated, so some caveats are in order. First, while there are attractive benefits to pooling multiple data centers using a cloud computing model, this is still an emerging concept with few full implementations. Second, while service delivery automation is very compelling, it is becoming a table-stakes offering for systems vendors; the challenge is for service providers and large enterprises to re-engineer their business processes to incorporate these methods. Re-engineering business processes is gut-wrenching and typically not done until external events force it. Finally, while fabric-based technology is promising, its deployment will be gradual and tied to the technology refresh cycle.

Michael Kennedy is a FierceTelecom columnist and is Principal Analyst at ACG Research. He can be reached at mkennedy@acgresearch.net.