Cloud Computing


Let's start with a thought experiment that will require you to go back to the dark ages of technology — a decade ago, before Salesforce.com, Gmail, or the iTunes app store. Consider some of the major technology implementation efforts your campus IT organization has undertaken during the past decade. Has your institution substantially modified the way IT is organized or how technical solutions are delivered? Many of us likely have followed the time-honored, structured-project approach to implementing campus-wide technology solutions: identify the functional owner, develop a detailed requirements document, prepare an exhaustive RFI or RFP, assemble a review team, select a solution, negotiate a contract, develop the solution to work with your campus, build a test environment, create training materials and classes, go through a somewhat disruptive deployment process, and then finally — ta-da! A mere nine (or 12 or 24) months later, assuming the scope hasn't changed, your IT team delivers a solution. Unfortunately, more often than not, these large-scale solutions have failed to meet the needs of not only the primary sponsor but also the end-user community expected to be the daily consumers of the system. Yet for the past 20 years, this type of expensive enterprise resource planning (ERP) implementation has been the primary recipient of campus technology investment dollars.

Now consider what the most popular, highly used applications on your campus are today. Hint: they are probably not delivered from your data center. Bus schedules, campus maps (via Google Maps mashups), mobile chat (Twitter), quick reference guides (lynda.com), social networking tools (Facebook), local restaurant menus (Menuism), course peer evaluations (RateMyProfessor), laptop backups (Mozy), and even movie ordering online (Netflix) are all applications now being provided to your community directly from web-based or cloud architectures. They are easy to find, quick to understand and use, and, if you have selected carefully, relatively painless to abandon when a better solution comes along.

This is the world of cloud computing, a perpetual beta environment where solutions are deployed, modified, upgraded, replaced, or retired all in the time it previously took for our internal teams to develop a single campus application. I am not suggesting that an iPhone application and a campus financial system should have the same development profile. However, it is important to understand some of the underlying changes in computing that have enabled the current agility of solutions development and deployment. Given this new world, we should all be reconsidering the role of enterprise IT — how it is organized, how it is architected — to take advantage of the emerging opportunities enabled by cloud computing. It is time for campus IT organizations to adopt the practices now prevalent in the private sector and move away from monolithic, dedicated environments to private infrastructure and application clouds.

There are many possible roads to cloud computing adoption; however, I see four primary approaches: organic growth (everyone picks the cloud services they want to use), coordinated adoption (preferred providers are selected, promoted, and supported internally), integrated adoption (some applications are specifically written to use public cloud services), or some hybrid of those three. A fifth option, full control, in which every cloud service is centrally contracted, is unlikely to succeed in decentralized institutions like those in higher education.

While it might not be clear which approach (or blend of approaches) to a public cloud strategy is right for your campus, it is critical that you make some very conscious decisions about how to proceed. You could opt to move straight into the public cloud, but you would have to decide what to move and to which provider. Even given the compelling price points and service options, your institution should be cautious and gain some experience with the model before moving core systems to public cloud options. So, how do you gain that experience? I believe the best way to position your campus for future use of cloud environments is to start by developing an internal private cloud. Starting there will help you understand what changes are needed to position your campus IT organization for rational use of both public and private cloud services.

In my last column, I discussed clarifying the roles of IT professionals around the demand (planning) or supply (delivery) of technology. Once that structure is established, with clear providers of services identified, you will be well-positioned to both learn from and gain the benefits of a private cloud while minimizing redundancies that can be costly to maintain. There are many possible places to start your private cloud, but I recommend beginning with an infrastructure foundation: architecting an internal private computing and storage service to provide the backbone for campus-wide cloud adoption.

The Private Infrastructure Cloud

Developing your own private cloud architecture might seem like a daunting task, but in fact many organizations likely already have the beginnings established in some layers of their technology stack. A Berkeley research team defined public cloud computing with several key attributes, including "The ability to pay for use of computing resources on a short-term basis as needed (e.g., processors by the hour and storage by the day) and release them as needed, thereby rewarding conservation by letting machines and storage go when they are no longer useful."1 If you are going to build your own cloud, consider starting at the data center/infrastructure layer. Evaluate what changes are necessary to get a private cloud going by first considering your current server architecture. Are you still purchasing individual servers (large or small) for distinct projects? Do you have server sprawl within your data centers and across the campus? It is not uncommon for campuses to attempt to gain some economy of scale and efficiencies by standardizing on a few hardware platforms as a means to reduce this problem.

In many cases, unfortunately, we have designed large, monolithic servers and storage environments around closed architectures. For example, can members of your community spin up a shared virtual server on your mainframe or larger Unix servers? Probably not, as they are carefully, and often exclusively, architected for your high-availability applications. So even where you have scale, it might actually have contributed to the server-sprawl problem, as these expensive older servers are maintained and upgraded but not open to other uses. How do cloud providers solve this problem? Lots and lots of commodity hardware, with all servers virtualized on top of the hardware layer. If you do not already have an aggressive server virtualization program under way, implement one immediately. Not only will it give you a standard base operating environment to work from, but it should also lower your costs, reduce your carbon footprint, and dramatically improve the delivery speed of your server infrastructure. New project sprung on you at the last minute? No problem: fire up a few virtual machines (VMs) and provide them to the project. An older environment not as heavily used? Combine that VM with other low-utilization VMs and run them all on the same hardware. Your internal private cloud must be flexible (or, as Amazon states, "elastic") in order to grow and shrink based on real-time consumption needs, and virtualization is one of the first steps toward that elasticity.
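
To make the consolidation idea concrete, here is a minimal sketch (not a production tool) of packing low-utilization VMs onto shared hosts with a first-fit-decreasing heuristic. The VM names, utilization figures, and the 0.80 load ceiling are all hypothetical.

```python
# Sketch: consolidate low-utilization VMs onto fewer hosts using
# first-fit-decreasing bin packing. All names and numbers are invented.

vms = {"legacy-app": 0.10, "dept-web": 0.15, "pilot-db": 0.30,
       "course-tools": 0.20, "archive-svc": 0.05}
HOST_CEILING = 0.80  # leave headroom for demand spikes ("elasticity")

hosts = []  # each host is a list of (vm, load) pairs it carries
for vm, load in sorted(vms.items(), key=lambda kv: kv[1], reverse=True):
    # place each VM on the first host with room, else start a new host
    for host in hosts:
        if sum(l for _, l in host) + load <= HOST_CEILING:
            host.append((vm, load))
            break
    else:
        hosts.append([(vm, load)])

print(f"{len(vms)} VMs consolidated onto {len(hosts)} host(s):")
for i, host in enumerate(hosts, 1):
    names = [v for v, _ in host]
    print(f"  host {i}: {names} (load {sum(l for _, l in host):.2f})")
```

With these example numbers, five lightly used servers collapse onto a single host while staying under the headroom ceiling, which is exactly the consolidation-plus-elasticity trade the paragraph above describes.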

Once your virtual machine server farms are up and running, turn your attention to another widely distributed commodity service: storage. Storage virtualization has been around for many years, and many campuses have likely implemented large-scale storage area network (SAN) or network-attached storage (NAS) environments. However, storage frequently is designed as part of a project, where it is tightly coupled with the server environment. Cost is often cited as a factor when projects choose to buy individual low-cost disks rather than use the central SAN or NAS solution. The result is a fragmented portfolio of institutional and individual campus data, stored in environments that are likely all operating at very low utilization. What is needed is internal storage as a service, where application designers and server administrators work with a common storage architecture that keeps capacity closely aligned with actual use. The storage environment should be able to scale up quickly, but it should do so in modular increments so that you provide just enough capacity to stay ahead of predicted demand. That way you pay today's prices only for today's usage, rather than paying today's prices for tomorrow's usage, which will be far cheaper to provision when the time comes.
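
As a rough illustration of "just enough capacity," the sketch below decides when to order the next storage module from a simple linear growth forecast. The capacity, growth-rate, module-size, and lead-time numbers are invented for the example.

```python
# Sketch: buy the next storage module only when forecast demand would
# exhaust installed capacity within the procurement lead time.
# All figures below are hypothetical.

installed_tb = 100.0       # capacity already on the floor
used_tb = 70.0             # current consumption
growth_tb_per_month = 5.0  # trend taken from your metering data
module_tb = 25.0           # smallest increment you can buy
lead_time_months = 2       # order-to-production delay

months_until_full = (installed_tb - used_tb) / growth_tb_per_month
if months_until_full <= lead_time_months:
    print(f"Order a {module_tb:.0f} TB module now; capacity is "
          f"exhausted in {months_until_full:.1f} months.")
else:
    print(f"Hold: {months_until_full:.1f} months of headroom remain; "
          f"revisit in {months_until_full - lead_time_months:.1f} months.")
```

The point of the modular approach is visible in the arithmetic: with six months of headroom and a two-month lead time, you can defer the purchase four months and buy the same capacity at a lower future price.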

Beyond Technology

In developing your private cloud offering, architecture and technology selection will probably be the least of the barriers to making your cloud service a success. More complex challenges are likely to emerge in the financial and political arenas. The traditional approach gives each project- or solution-based environment a functional or funding champion; under a private cloud model, you will instead have to shift investment to base infrastructure, which is always a hard sell. Given the particularly difficult financial situations at many of our institutions, it might be challenging to justify the large-scale capital investment needed to establish the VM and storage cloud layers. The political pressure against such expenditures will be particularly intense if the offerings are designed as monolithic solutions that assume a single consumption model across customers and projects.

To accommodate these differing needs, design your solutions as tiered services. For the VM server farm, you can use the same basic architecture to support a wide variety of specific offerings built on that common platform. The same applies to storage: if you design your storage cloud from the beginning to expect tiered usage, you can provide different levels of service, from basic backups and archives to file storage and even high-volume transactional storage.
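
One way to picture a tiered catalog built on a single platform is as a simple data structure. The tier names, replication counts, snapshot windows, IOPS targets, and internal rates below are hypothetical placeholders, not recommendations.

```python
# Sketch: a tiered storage catalog where every tier shares the same
# underlying architecture but varies the service attributes.
from dataclasses import dataclass

@dataclass
class StorageTier:
    name: str
    replication: int           # copies kept
    snapshot_days: int         # recovery window
    iops_target: int           # performance expectation
    price_per_gb_month: float  # illustrative internal rate

CATALOG = [
    StorageTier("archive/backup", replication=2, snapshot_days=0,
                iops_target=50, price_per_gb_month=0.05),
    StorageTier("file storage", replication=2, snapshot_days=30,
                iops_target=500, price_per_gb_month=0.15),
    StorageTier("transactional", replication=3, snapshot_days=30,
                iops_target=5000, price_per_gb_month=0.40),
]

for tier in CATALOG:
    print(f"{tier.name}: {tier.iops_target} IOPS, "
          f"${tier.price_per_gb_month:.2f}/GB/month")
```

Because every tier rides on one platform, the differences are catalog entries rather than separate hardware silos, which is what keeps the political and financial footprint small.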

Even where some technical aspects need to be differentiated, common management platforms, standard hardware and software configurations, fewer vendor contracts, and even shared spare parts can contribute to lower operating costs, in some cases dramatically so. Industry averages put SAN-based storage supported by a single storage administrator at roughly $2.20 per gigabyte per month. At scale, that same fully loaded cost can drop to under $0.30 per gigabyte per month, and it is decreasing every month. For servers, the gains are even more dramatic: a common metric for an efficiently run organization is 140 servers per administrator, while many cloud providers at scale are now reaching ratios of more than 1,000 servers per administrator, a seven-fold improvement. The key in both cases is scale: scale of design, scale of use, and, with them, a cost per unit that falls as scale grows.
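
To see what those ratios mean in dollars and staff, here is the arithmetic worked through for a hypothetical campus fleet. The 500 TB and 1,400-server figures are invented for illustration; the per-gigabyte and per-administrator rates are the ones cited above.

```python
# Worked version of the figures above for a hypothetical campus.

fleet_gb = 500 * 1024  # hypothetical 500 TB campus storage fleet
small_scale = 2.20     # $/GB/month, single-admin SAN average
at_scale = 0.30        # $/GB/month, fully loaded at scale
print(f"Storage: ${fleet_gb * small_scale:,.0f}/month vs "
      f"${fleet_gb * at_scale:,.0f}/month at scale "
      f"({small_scale / at_scale:.1f}x reduction)")

servers = 1400                        # hypothetical server count
trad_ratio, cloud_ratio = 140, 1000   # servers per administrator
print(f"Servers: {servers // trad_ratio} admins at traditional ratios "
      f"vs {servers // cloud_ratio} at cloud-provider ratios "
      f"({cloud_ratio / trad_ratio:.1f}x improvement)")
```

Even at this modest hypothetical scale, the storage line item drops from roughly $1.1 million to about $154,000 per month, and ten administrator positions become one or two, which is the magnitude of difference that makes the investment case.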

Getting to Scale

To realize the kinds of savings that cloud environments promise, you need sufficient scale to see the cost-per-unit reductions and to support dynamic allocation without running into bottlenecks and provisioning delays. As mentioned, the up-front infrastructure investment can be challenging, with strategic investments like private cloud infrastructure competing against immediate operational needs. In universities, with their many fund sources, including restricted gift and grant funds, directing resources toward a shared benefit is particularly difficult: past practices have almost always required capital purchases, and any partnering on common environments is often perceived as a loss of autonomy or control.

A critical step in addressing these challenges is the development of a campus-wide "common good" model for shared technology resources. That model should have two components: a base investment commitment and a scalable consumption metric. For the base investment, rather than requesting additional funding, you can redirect existing resources by migrating all the major centrally funded systems to be anchor tenants of the new scalable cloud environments. That process can be accelerated by identifying all pending new projects, advancing the server and storage investments that you would be making over the course of the year, and directing them toward the private cloud infrastructure. The next step would be the migration of legacy systems in need of environment refresh. This approach should give you enough of a base that you can then develop a consumption plan (that might or might not include cost recovery from units) for the entire environment. Once in place, your internal private cloud should allow you to begin designing applications and technical solutions to use well-defined layers of shared services.
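
A back-of-the-envelope version of the two-component model might look like the sketch below: anchor tenants cover a base investment, and a consumption rate is derived for usage above it. All dollar figures and the choice of VM-hours as the consumption metric are hypothetical.

```python
# Sketch: deriving a consumption ("common good") rate once anchor
# tenants fund the base. Every number here is invented.

annual_operating_cost = 1_200_000  # total cost to run the cloud, $/yr
base_central_funding = 900_000     # redirected anchor-tenant budgets
projected_vm_hours = 600_000       # forecast campus-wide consumption

recovery_rate = ((annual_operating_cost - base_central_funding)
                 / projected_vm_hours)
print(f"Cost-recovery rate: ${recovery_rate:.2f} per VM-hour "
      f"on top of the centrally funded base")
```

Whether or not you actually bill units at that rate, computing it keeps the consumption metric honest and makes the shared-benefit argument concrete.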

As you embark on your private cloud journey, keep in mind some of the critical success factors for your cloud:

  1. Flexible and tiered. Make sure the solution you design can provide multiple tiers of service.
  2. Dynamic. The design should allow for ease of provisioning (quick to production), elasticity for high-demand periods, and quick release of capacity back to the pool when it is not needed.
  3. Funding model. Develop a "common good" funding model that lets projects large and small leverage the common framework.
  4. Metered. Implement measurement tools and processes to give both the technical teams and the campus community transparency into what resources are being used (see the sketch after this list).
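
As a sketch of the metering point above, the snippet below aggregates per-unit usage records into a simple transparency report. The unit names, resource types, and quantities are all invented.

```python
# Sketch: aggregate raw usage records into a per-unit report that both
# technical teams and the campus community can read. Data is invented.
from collections import defaultdict

usage_records = [  # (campus unit, resource, quantity consumed)
    ("library", "vm_hours", 720), ("library", "storage_gb", 2048),
    ("registrar", "vm_hours", 1440), ("registrar", "storage_gb", 512),
    ("physics", "vm_hours", 4320), ("physics", "storage_gb", 8192),
]

totals = defaultdict(lambda: defaultdict(float))
for unit, resource, qty in usage_records:
    totals[unit][resource] += qty

for unit, resources in sorted(totals.items()):
    line = ", ".join(f"{r}={q:,.0f}" for r, q in sorted(resources.items()))
    print(f"{unit}: {line}")
```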

If your organization is ready to move toward offering private cloud services, the next challenge will be integrating the use of public clouds. In my next column, I will discuss public cloud sourcing and selection strategies that go beyond simple price and performance, extending the discussion to policy and data management considerations.

Endnote
  1. Michael Armbrust, Armando Fox, Rean Griffith, Anthony D. Joseph, Randy H. Katz, Andrew Konwinski, Gunho Lee, David A. Patterson, Ariel Rabkin, Ion Stoica, and Matei Zaharia, "Above the Clouds: A Berkeley View of Cloud Computing," Technical Report No. UCB/EECS-2009-28, Electrical Engineering and Computer Sciences, University of California, Berkeley (February 10, 2009).