|::Operational Service Deployment::|
|Contributed by William R. Welty|
|Sep 21, 2007 at 10:17 AM|
A Discussion on Systems Management Deployment.
Information Technology changes over the last few years have promised to change how we think about technology. We still tend to deploy and think of systems individually, each supporting an application. Even in an application cluster situation, we think from the perspective that the cluster is a series or set of systems providing a set of operating systems for that application.
While technology has held the promise of providing services instead of systems, the shift in perspective has only gone as far as thinking of a set of systems that provide those services. Grid computing was the term that took us toward thinking of service deployment, but it has not shown us how to combine cluster services with those grid services. Largely this is a mental shift, requiring a foundation from which we tend not to want to leap.
The rift between application and systems deployment is vast and ripe with obstacles, chief among them change: change in thought, change in process, and change in action. However, the tools to assist in this shift are starting to appear, in conjunction with the real, functional deployment of clusterable applications.
I spent a week working with a team from Qlusters, which provides enterprise-level support for the OpenQRM product. OpenQRM is a data center management tool, in some ways similar to VMware's lab tool: you can deploy system templates quickly and put them back on the shelf. While each tool has its own quirks and functions, most can be used in simple systems management terms, and each implementation has its own benefits that can be very useful.
Both systems can deploy systems on the fly for testing, again from a systems management perspective. OpenQRM can additionally deploy to virtual or physical hardware equally well. Moving a system from hardware to virtual and back again is a great feature, but it still only addresses the systems management view.
The real benefits of tools like this lie at the next level, where we define systems by templates, in classes of services that can be deployed in a clustered services environment. OpenQRM can, for instance, reconfigure the environment within the OS: IP addresses can be drawn from dynamic ranges per service to support things such as load balancers, and files containing Java configurations can be rewritten for specific servers.
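The idea of rewriting per-server configuration at deployment time can be sketched in a few lines. This is not OpenQRM's actual mechanism or API; the template keys and file contents here are hypothetical stand-ins that illustrate how one shared template can yield instance-specific configurations:

```python
from string import Template

# Hypothetical Java-style config template. A management tool substitutes
# instance-specific values at deployment time instead of baking them into
# the system image.
JAVA_CONFIG_TEMPLATE = Template(
    "server.host=$host\n"
    "server.port=$port\n"
    "balancer.pool=$pool\n"
)

def render_config(host: str, port: int, pool: str) -> str:
    """Render an instance-specific configuration from the shared template."""
    return JAVA_CONFIG_TEMPLATE.substitute(host=host, port=port, pool=pool)

# Two instances deployed from the same template receive different identities.
print(render_config("10.0.5.21", 7001, "portal"))
print(render_config("10.0.5.22", 7001, "portal"))
```

The point of the design is that the image stays generic; identity is injected only at deployment, which is what makes an instance disposable.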
In this scenario, we can envision a cluster of BEA portal or service bus systems managed by a tool such as OpenQRM, which watches the load on those systems, determines that we need a new instance of the portal or service bus, and deploys one dynamically without manual intervention. Then, once the load event has passed, it decommissions the instance.
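The deploy-on-load, decommission-after logic described above amounts to a small control loop. The following is a minimal sketch, not the OpenQRM API; the thresholds and the `deploy`/`decommission` callables are hypothetical hooks for whatever the management tool actually exposes:

```python
# Illustrative thresholds -- real values would come from capacity planning.
SCALE_UP_LOAD = 0.80    # add an instance when average load exceeds this
SCALE_DOWN_LOAD = 0.30  # retire an instance when load falls below this
MIN_INSTANCES = 2       # never shrink the cluster below its floor

def rebalance(instances, avg_load, deploy, decommission):
    """Grow or shrink the cluster based on average load.

    `deploy` returns a handle for a newly provisioned instance;
    `decommission` tears one down. Both are supplied by the tool.
    """
    if avg_load > SCALE_UP_LOAD:
        instances.append(deploy())            # load event: bring one online
    elif avg_load < SCALE_DOWN_LOAD and len(instances) > MIN_INSTANCES:
        decommission(instances.pop())         # load has passed: retire one
    return instances
```

For example, `rebalance(["portal-1", "portal-2"], 0.9, deploy, decommission)` would grow the cluster to three instances, while the same call with a load of 0.1 would leave the two-instance floor untouched.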
The deployment of services such as this on commodity hardware and operating systems requires not only a planned and structured infrastructure, but also the operational mentality extended to application deployment. Operationalizing applications is no trivial task, and it requires an operational discipline that is rarely enforced in application development: simple processes that do not allow the hard-coding of system names or IP addresses in application configurations, but instead use standards such as "localhost" or dynamically assigned service names.
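That discipline can be made concrete with a small sketch. The environment variable names below (`DB_SERVICE`, `DB_PORT`) are hypothetical; the point is that the application asks its environment for a service name at runtime rather than hard-coding a host or IP, so the same build runs unchanged on any instance the management tool deploys:

```python
import os

def database_endpoint() -> tuple:
    """Resolve the database endpoint from dynamically assigned service names.

    Hypothetical variables: DB_SERVICE and DB_PORT would be set by the
    deployment tool per instance; "localhost" is the portable fallback.
    """
    host = os.environ.get("DB_SERVICE", "localhost")
    port = int(os.environ.get("DB_PORT", "5432"))
    return (host, port)
```

With nothing set, the function falls back to `("localhost", 5432)`; a deployed instance simply inherits whatever names the tool assigned, and no configuration file needs editing.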
This is the process of commoditizing application and system deployment. Deployment on top of commodity or virtualized hardware then becomes a serious extension, maintaining not only application availability but application performance criteria as well. This is often termed utility computing.
This, then, is the next large step in Information Technology. Nicholas Carr, former executive editor of the Harvard Business Review, has termed this the "Third Age" of IT from a strategic and economic standpoint. "Carr believes that the client-server model will be replaced, and already is, with a utility model or grid model, just as the water wheel was replaced with a utility electricity model, a centralized shared infrastructure." http://www.xtalks.com/gridcomputing0704.ashx
"Carr explains that IT is a general-purpose technology, not a tool itself, but a platform for them." This, then, is the mental shift that has to take place to truly leverage the IT infrastructure into supporting services rather than applications.
This is the real advantage of virtualization: the ability to enable utility or grid computing, not consolidation. Consolidation is the short-term justification for this infrastructure, and it will realize real dollar savings in the corporate operational infrastructure. However, we should not lose sight of the fact that the proper implementation of virtualization technologies allows the transition to real services deployment instead of systems deployment.
William R. Welty holds degrees from Indiana University's Kelley School of Business in Business Management and Administration as well as Accounting. He has been an IT professional for over 20 years, consulting for companies such as Sun Professional Services at clients such as Oracle Corporate and Bell Laboratories. He currently serves as Infrastructure Architect at Corporate Express NA.
|Last Updated ( Oct 01, 2007 at 02:39 PM )|