Massimo Re Ferre’ from IBM wrote a very appealing article, useful for quick, rough sizing estimates — the kind of project analysis where a cost deviation of up to 50% is acceptable.
Hardware virtualization is a hot topic these days, and we all know that. Many customers are looking into it for the first time, and one of the problems they face right now is how to size their new virtual infrastructure. Lately I have received lots of requests from people asking me to help them project the hardware investment (in terms of physical servers) they need to jump onto the virtualization bandwagon. In this post I’d like to provide you with a very quick and dirty method to do that.
Consider that there are several alternatives for getting to a "decent and professional" technical result: you can either hire a consultant to perform a performance analysis of your current physical infrastructure and have him/her come out with the hardware infrastructure required to support your workload, or you can do that on your own with professional tools available on the market (consultants can also leverage these tools and still provide additional value). These are the best alternatives if you want a "professional" output that could help you better present your internal hardware purchase request; please keep this in mind throughout the document. These approaches, however, have a few drawbacks:
- They are time consuming. No matter what, it takes time to gather the data and analyze it to come out with a proper sizing (professional tools can help a lot here)
- They are expensive. If you want to use these professional tools and/or consultants to do this, it will cost you some dollars/euros to come out with that magic number.
There is no free lunch. The more professional you want to go… the more expensive it gets.
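To make the "quick and dirty" idea concrete, here is a minimal back-of-envelope sketch of the kind of estimate being discussed. None of the numbers or the formula come from the article; they are hypothetical placeholders you would replace with data from your own server inventory, and the 50% headroom simply mirrors the rough tolerance mentioned above.

```python
import math

def hosts_needed(num_vms, avg_vm_cpu_ghz, avg_vm_ram_gb,
                 host_cpu_ghz, host_ram_gb, headroom=0.5):
    """Rough estimate of physical hosts required for a set of VMs.

    `headroom` is the fraction of each host's capacity left unused
    as a safety margin (hypothetical choice, not from the article).
    """
    usable_cpu = host_cpu_ghz * (1 - headroom)
    usable_ram = host_ram_gb * (1 - headroom)
    # A host count is driven by whichever resource runs out first.
    by_cpu = math.ceil(num_vms * avg_vm_cpu_ghz / usable_cpu)
    by_ram = math.ceil(num_vms * avg_vm_ram_gb / usable_ram)
    return max(by_cpu, by_ram)

# Example with made-up figures: 100 VMs averaging 0.5 GHz and 2 GB each,
# consolidated onto hosts with 40 GHz aggregate CPU and 128 GB RAM.
print(hosts_needed(100, 0.5, 2, 40, 128))
```

This is exactly the level of precision the article is talking about: good enough for a first budget conversation, not a substitute for a real capacity analysis.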
Also take a look at his (somewhat outdated but still useful) paper about scale-up versus scale-out.