Future of deployment: Part 1 - Monuments vs Cheap housing

Robert van Loghem

I'm going to start a series on the future of deployment: how and what will we deploy in, say, five years? Of course this is my opinion, so please add your own ideas in the comments below.

[Image: Monument vs. Cheap housing]

To start this series off, I'm going to talk about the current state of things, or at least what I see at a lot of enterprise customers. Most of the enterprises I've been at have physical servers that are used by numerous applications from different development teams. Some of these servers are old and have been maintained by operations for years (4+ years ;)). That means the server has changed: lots of deltas, a.k.a. patches, deployments, etc., have been applied, and as my colleague Vincent has stated, applying deltas has its cons 😉 Of course I'm talking about servers and not applications, so the same rules do not apply... or do they?

Deltas on servers are bad, period.

I think the same rules do apply. Applying deltas might be faster, but in the end it becomes increasingly harder to map out the path you have taken from four years ago up till now, and this is oh-so obvious on the servers themselves. Try to rebuild a four-year-old server where every week at least five deployments have been executed, every month a patch or two to the OS or middleware has been applied, and every six months some change to the filesystem has taken place. It is just plain hard.
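To make the contrast concrete, here is a minimal sketch in Python (with made-up package names, not any real server's history) of why a server built by replaying years of deltas is harder to reproduce than one rebuilt from a single declarative description:

```python
# Delta approach: the final state is the product of every change ever
# applied, in order. Reproducing the server means replaying (and still
# possessing!) the full history.
deltas = [
    {"install": "jdk-1.4"},
    {"patch": "os-security-fix"},
    {"install": "appserver-5.1"},
    {"remove": "jdk-1.4"},
    {"install": "jdk-1.5"},
    # ... hundreds more over four years, some never recorded
]

def replay(deltas):
    state = set()
    for d in deltas:
        if "install" in d:
            state.add(d["install"])
        elif "remove" in d:
            state.discard(d["remove"])
        # patches mutate existing components in ways a simple
        # package list does not even capture
    return state

# Declarative approach: the desired end state is the single source of
# truth, and every rebuild starts from a clean machine.
desired_state = {"jdk-1.5", "appserver-5.1"}

def rebuild(desired):
    return set(desired)  # fresh install on a clean server

# Both can end up "the same", but only the declarative description can be
# re-applied at a new location without dragging the full history along.
assert rebuild(desired_state) == replay(deltas)
```

The point of the sketch: `replay` only works if the history is complete and correct, which after four years it rarely is; `rebuild` needs nothing but the one-line description.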

And here is the prime example

A couple of years ago I witnessed a project that was trying to move its entire server and application environment from one location to another and, in the meantime, trying to get rid of some out-of-date standards which were lingering on those servers. They had automated deployment scripts for all their applications, so the only thing they needed to do was make sure they had a clean environment in the new location where they could install the latest and greatest versions of their applications. They tried for six months to get it working, but failed because they could not properly reproduce the servers at the remote location; so much old, out-of-date stuff on those servers was needed by the applications! In the end they gave up and moved everything by restoring server backups at the remote site. The lesson this company learned was to spread their applications across more servers. This allowed them to keep their servers and applications more up to date and to get rid of out-of-date standards more visibly.

Introducing the new is easy; getting rid of the old... just let it be.

The company created new servers which were going to be used only by new applications, so they could install them almost any way they wanted. New application deployments could then use the new features of those servers, and almost everything was good. Whenever an old application wanted to make use of functionality only available on the new servers, it had to adjust its deployment and sometimes its code. It was accepted that using new functionality meant moving to a new server with an updated JDK, new log file paths, more memory, a new version of the application server or portal, and so on. Applications had a natural upgrade path: old applications run on old servers, but those servers do not require much maintenance beyond the odd patch and cleaning up log files. New applications run on new servers with better middleware, tools, etc., making maintenance life somewhat easier on a different level.
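The "natural upgrade path" above can be sketched as a simple placement rule: an application lands on the oldest server generation that satisfies its requirements, so wanting new functionality automatically means moving to a new server. This is a hypothetical illustration (the generation names and feature sets are invented), not the company's actual tooling:

```python
# Server generations, oldest first. Each offers a fixed feature set;
# old generations are frozen except for the odd patch.
SERVER_GENERATIONS = [
    {"name": "old-2005", "features": {"jdk-1.4", "appserver-5"}},
    {"name": "new-2009", "features": {"jdk-1.6", "appserver-7", "portal"}},
]

def place(app_requirements):
    """Return the oldest generation whose features cover the app's needs."""
    for gen in SERVER_GENERATIONS:
        if app_requirements <= gen["features"]:
            return gen["name"]
    raise ValueError("no server generation satisfies: %r" % app_requirements)

# An old app stays put; an app that wants the portal must move.
assert place({"jdk-1.4"}) == "old-2005"
assert place({"jdk-1.6", "portal"}) == "new-2009"
```

The design choice this encodes is exactly the one in the text: nobody upgrades an old server in place; using a new feature is the trigger for moving house.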

Different levels of maintenance - Monuments vs. Cheap housing

How can more servers result in lower maintenance? Isn't that just weird? Yes it is! But the difference is this: if I have one server for all my applications, it becomes hard to make changes to the server and to those applications. It's just like 40 people (applications) living in a monumental building (server): for every change you have to figure out the impact on the building itself and on everyone living there. In my own experience, every time I wanted to make some change to the server I had to go through a committee for approval! The committee consisted of not only the hardware/OS/middleware people but also all the application people. All 40 of them ;( You might feel my frustration as I requested my third change to that particular server that year. When we moved to smaller servers (cheap housing) with fewer applications (small families), it got a whole lot easier to make changes 😉 OK, OK, so the amount of maintenance wasn't the problem; getting consent from 50 people and then finding out whether the change worked in the monument was the problem. Changing something in, or building, a brand-new cheap house felt like a breeze!

So what about the future then?

In my next post I'll explore what is, in my opinion, the next big thing after the "Monument" and "Cheap housing". Of course it has something to do with cloud/virtualization technologies. It will be all about moving appliances! And it is something that Deployit will provide support for.

Comments (2)

  1. Sai Venkat

    December 21, 2009 at 9:03 pm

    Nice post. In the current age of test-driven infrastructure, automation using tools like Puppet and Chef, and the cloud (virtualization included), I think it makes more sense to maintain individual servers for each app. In my current project I use Linode instances for each app we have, and managing them has been pretty simple, especially with the automated build and deployment infrastructure we have.

    --Sai

  2. mike prendergast

    January 10, 2010 at 2:43 pm

    Interesting post, and I can see both sides of the argument. I'm tasked with supporting a large enterprise with a large and growing set of J2EE apps.
    We are in the middle of moving our old, creaky environment onto a new one. The new environment is heavily scripted and virtualized (on AIX), but still shared. We took the approach that today's industrial-strength infrastructure is designed to grow and be shared.
    However, our biggest issue was not the ability to deploy infrastructure fixes, but our ability to convince the app owners that they should allow any change at all in the infrastructure. In our new infrastructure we have it as a condition of service that there are defined infrastructure maintenance windows, and that the infrastructure will be updated as a condition of moving apps onto it.

    Technology is often constrained by fear, lol
