There are numerous scorecard solutions on the market today, including PerformancePoint from the ubiquitous Microsoft.

If you take a look at any of these systems, the sheer prettiness of the display is usually breathtaking.  PerformancePoint, for example, provides a central point to create formulas and displays for Key Performance Indicators (KPIs), which are then made into a variety of context-sensitive scorecards or visible displays.  A demonstration of the system showed how data from a variety of sources, including financial, sales and even geographic data, was woven into a single display.

The cream topping of such a demo is perfectly positioned: data from project management tools like Microsoft Project Server is woven right into the Business Scorecard display.  The whole structure sits within Microsoft’s SharePoint Portal Server technology which, if you haven’t seen it, is probably worth a look.

Now, Microsoft gets plenty of plugs and certainly doesn’t need mine, but this display can be impressive.  I’ve been looking at scorecard systems since I wrote about them in this column over a year ago.  This new Microsoft product is certainly not the first scorecard product on the market.  It’s been preceded by many others, but I find it interesting that the movement has picked up enough momentum that Microsoft felt compelled to enter the industry with its own offering.

After a recent demonstration by Microsoft, I was approached by several attendees who discussed the relevance of using scorecarding with Enterprise Project Management, and of course I reiterated my enthusiasm from a couple of years ago.  One comment in particular, however, brought me up short.

“If the underlying data has questionable quality,” said one of my colleagues, “wouldn’t that make the entire system suspect?”

It got me thinking and, the more I think about it, the more it disturbs me.  The answer is yes, of course, but I realize that the enterprise project systems we design and deploy rest on huge assumptions.

We depend in great part on the natural human filtering that goes on when data is examined and entered.  In systems with an extensive legacy culture, such as core financial systems, we accept the basis of how data is entered because it is common to all organizations.

It is when we apply the same notion to other enterprise systems that we often see how challenging it is to establish stability in this type of data.  CRM, Enterprise Project Management and HR-type systems all turn out to require a remarkable amount of effort to make their data as trustworthy as core financials.

It’s not the technology, of course; it’s the standards for using that technology, which must be adopted with sufficient uniformity to make sense.  In my own firm, we’re currently struggling with CRM terms for what makes a lead vs. a contact vs. an opportunity… well, you get the idea.

When we create an executive display system that rolls all that data up into easy-to-read, one-page displays on which core business decisions are made, we’ve got to pause.

I’m sure executive dashboards of scorecard-type information will be hot sellers, but before we deliver a system that eliminates the need for human filtering of data, we’ve got to make sure that the underlying source data for those displays delivers what we need.  Not doing our homework on this kind of system could pay dividends that won’t be too welcome.