I've spoken a few times about the need to benchmark value in technology management. This is an entirely different concept from benchmarking the nuts and bolts of a deployment, such as a software application. The latter can be done to a large extent numerically: does the application do what it's supposed to do, and if not, to what extent does it fall short of the requirement? What are its metrics in terms of response speed? And so on. All of these can be identified and measured.
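To show what 'identified and measured' can mean in practice, here is a minimal sketch in Python. The 200 ms requirement and the benchmark_response helper are illustrative assumptions, not taken from any particular deployment:

    import statistics
    import time

    def benchmark_response(call, samples=100):
        """Time repeated invocations of `call` and summarise the timings."""
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            call()
            timings.append(time.perf_counter() - start)
        timings.sort()
        return {
            "mean_s": statistics.mean(timings),
            "p95_s": timings[int(samples * 0.95) - 1],
            "max_s": timings[-1],
        }

    # Purely hypothetical requirement: 95% of calls must finish within 200 ms.
    results = benchmark_response(lambda: time.sleep(0.01))
    verdict = "requirement met" if results["p95_s"] <= 0.200 else "requirement missed"
    print(verdict, results)

A measurement like this gives you a number to hold against a stated requirement, which is exactly what cannot be done so directly for the quality of management itself.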
I have to say, however, that even in this area, practice is polarised. One European bank has a six-month test cycle for new software that costs €150,000 each time an application enters the pipeline. New vendors must submit to this test routine even before their software reaches a user acceptance testing (UAT) phase. Any change made or fault identified is logged, and any fixes required must be built into a new version and resubmitted into the six-month test cycle. This gives you a very clear picture of the level of conservatism in financial services. Another company I came across, a software vendor, had no quality control at all for the first ten years of its existence. Bugs found by clients were reported, fixed by the programmer who wrote the original product, tested by the same programmer, documented by the same programmer and released to the client by the same programmer.
As a result, because different clients found different bugs, the company was essentially writing code 'on the fly' and maintaining over twenty different variations of the same product, since no programmer could fix a problem in someone else's code. So my comments regarding quality control of deployments themselves must not be taken to imply that there is nothing to be said about this subject.
From a technology acquirer's perspective it is essential to have a deep grasp of the degree to which any solution, whether bought, built or outsourced, fits the need, and of how any element that is not directly within your control is managed.
In this article we will be focusing on the benchmarking of technology management, not the benchmarking of the technology itself. There are four phases to benchmarking the performance of management:
1. Project underspends and overspends
2. Budget control
4. Ongoing maintenance
Now, part of the problem is that a benchmark, as opposed to a target, makes a presumption about the efficiency of a process in relative terms, usually relative to market practice. So, without knowing the particular kind of technology deployment (communications level, systems level or application level), let alone its expected size, which could range from a simple application to the implementation of a global communications network, it is going to be impossible to give absolutes. We should think of this a bit like the relationship between budgeting and planning. If I set a budget 'x' within my overall plan of action and I ultimately come in under budget by 50%, then in most businesses, looking just at the expenditure, this would be regarded as a good thing. We would of course need to check that the underspend did not materially affect any of the other variables, particularly quality of product. But this highlights the point I am trying to make from a management perspective. An underspend of that size suggests the budget was padded, perhaps out of fear of failure, and dealing with the surplus creates a completely new task that takes time and energy. So, from a management perspective the underspend is a bad thing, and, as the sketch after this list illustrates, it points to one of three possible management failures:
1. the planning wasn't good enough;
2. the budgeting wasn't good enough; or
3. both weren't good enough.
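The point can be made almost mechanical. Here is a minimal sketch, assuming a purely illustrative 10% tolerance, of a check that treats a large underspend as seriously as a large overspend:

    def assess_budget(planned, actual, tolerance=0.10):
        """Flag any variance beyond `tolerance` in either direction.

        The 10% tolerance is a purely illustrative assumption.
        """
        variance = (actual - planned) / planned
        if variance > tolerance:
            return f"overspend of {variance:.0%}: planning and/or budgeting failure"
        if variance < -tolerance:
            return f"underspend of {-variance:.0%}: planning and/or budgeting failure"
        return f"variance of {variance:+.0%}: within tolerance"

    # The 50% underspend from the example above is flagged, not celebrated.
    print(assess_budget(planned=100_000, actual=50_000))

The design choice worth noticing is the symmetry: most reporting only alarms on overspend, whereas from a management perspective a large variance in either direction signals that the plan, the budget, or both were wrong.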