We have an unnatural confidence in dashboards. When we see dashboard indicators we make all kinds of assumptions about them that may or may not be true, but we depend on them all the same. This isn’t restricted to project management dashboards, though those are the ones I know best; it applies to dashboard indicators of all kinds.
Think about this: when you are driving and you see a green light, don’t you go through that intersection at full speed, completely confident that it is safe to do so? Yet only a couple of weeks ago I watched someone do exactly that and miss being hit by a fire truck by only a few feet. He proceeded with complete confidence even though emergency vehicles do go through red lights from time to time.
While we’re talking about green lights, what else do we assume? We assume that the lights are working perfectly and haven’t chosen this moment to show green in all directions. We assume that no one will run the red light and hit us from the side. We assume that we are seeing all the indicators.
If that’s so for driving to work, what about once we get there and we look at our real-time dashboard indicators of our projects or businesses? The same logic applies.
We look at a dashboard on our mobile device and make all kinds of assumptions. We see green and we know everything is fine. We assume that the indicator is up to date, in real time in fact. We assume that it is complete, that all data is accounted for. We assume that someone, somehow, has approved the data, that it has been authorized for publication. We assume that the algorithm generating the indicator is not only accurate but has rendered the result correctly on the display.
Yet, I’ve done a number of dashboard audits and found that indicators are often changed manually by the very people being measured. What is the incentive for these people to put a yellow or red indicator? There is none. “Might as well leave it green until I can get the project fixed,” they might think.
Even if the dashboard is working properly, would you make a resource-management decision if you didn’t know whether you were looking at 100% of the data? Most dashboards don’t display the completeness or quality of their data. What if you were only looking at 80% of your projects and it didn’t look like you were overloaded? What might the incentive be for the project managers of the hard-to-resource projects to keep their data up to date? There is none. “Better not to show how bad it is and just ask for more people,” they might think.
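To see how little missing data it takes to hide an overload, here is a minimal sketch with invented project names and figures (the 80% scenario above, reduced to arithmetic):

```python
# Hypothetical portfolio: each project's demand on a shared team, in person-days.
# All names and numbers here are invented for illustration.
projects = {"A": 40, "B": 35, "C": 50, "D": 30, "E": 45}
capacity = 160  # person-days available this period

# One project's data is stale or missing, so the dashboard only sees 4 of 5.
reported = {name: days for name, days in projects.items() if name != "E"}

apparent_load = sum(reported.values()) / capacity  # what the dashboard shows
actual_load = sum(projects.values()) / capacity    # what is really happening

print(f"apparent: {apparent_load:.0%}, actual: {actual_load:.0%}")
# With only 4 of 5 projects reporting, the load looks like ~97% of capacity;
# the true figure is 125% -- an overload the dashboard never shows.
```

The point is not the specific numbers but that a completeness gap and an overload signal are roughly the same size, so one can silently cancel the other.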
We also typically make another dashboard assumption that can cause real trouble. We assume that if an indicator is negative, someone must be doing something about it. But I’ve seen many organizations where that’s just not the case, or where there is no structured response to a negative indicator. At McGill, where I’ve taught Advanced Project Management in the past, we have used a real-world IT case study from one of the world’s leading aerospace manufacturers. Part of the data in this study shows the history of dashboard indicators from a large multi-month IT project. Several of the indicators stay red month after month, and one question that is always asked is, “What was done when these turned red?” The answer: nothing. “We thought it would get better,” they said. That reaction isn’t unique. It could be true of almost anyone.
If you have a project or portfolio dashboard, here are a few tips on avoiding a dashboard disaster of your own:
Don’t measure too much
It’s easy to create a dashboard indicator, but make sure each one measures something that empowers the viewer to take action of some kind.
Indicate the dashboard’s quality
Is the data complete? Is it timely? Does all the data displayed come from the same time period?
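One way to answer those three questions on the dashboard itself is to compute a small quality badge alongside each indicator. A sketch, assuming a hypothetical feed of per-project timestamps (all field names and the 7-day freshness cutoff are illustrative assumptions, not a standard):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical feed records: (project_id, last_updated-or-None).
now = datetime(2024, 3, 1, tzinfo=timezone.utc)
feed = [
    ("proj-1", now - timedelta(hours=2)),  # fresh
    ("proj-2", now - timedelta(days=9)),   # stale
    ("proj-3", None),                      # never reported
]

MAX_AGE = timedelta(days=7)  # assumed freshness cutoff

fresh = [pid for pid, ts in feed if ts is not None and now - ts <= MAX_AGE]
completeness = len(fresh) / len(feed)

badge = f"data: {completeness:.0%} fresh (cutoff {MAX_AGE.days}d)"
print(badge)  # data: 33% fresh (cutoff 7d)
```

A badge like this doesn’t fix stale data, but it stops the viewer from assuming 100% completeness when the reality is far lower.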
Have an action for each indicator
This may be the most important tip of all. For each dashboard indicator, have a structured response of what must be done and by whom if the indicator is green, yellow, red, or whatever type of indicator you use.
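A structured response can be as simple as a lookup table consulted whenever an indicator changes state. A minimal sketch; the statuses, owners, and deadlines here are invented examples, not a prescribed standard:

```python
# Hypothetical playbook: every indicator state maps to a documented action
# and a named owner, so "red" can never mean "we thought it would get better."
PLAYBOOK = {
    "green":  ("no action; continue normal reporting", "project manager"),
    "yellow": ("submit a recovery plan within 5 business days", "project manager"),
    "red":    ("escalate to the steering committee this week", "portfolio manager"),
}

def respond(indicator: str) -> str:
    """Return the documented response for an indicator state."""
    if indicator not in PLAYBOOK:
        raise ValueError(f"no documented response for indicator {indicator!r}")
    action, owner = PLAYBOOK[indicator]
    return f"{owner}: {action}"

print(respond("red"))  # portfolio manager: escalate to the steering committee this week
```

The useful property is the `ValueError`: an indicator state with no documented response is itself treated as a defect, which is exactly the audit point in the next paragraph.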
If you have an existing dashboard, a regular audit is healthy: what data drive the indicators, is that data still valid, and is the dashboard still relevant? If you don’t see documented standards for how to act on each indicator, putting those in place can be a powerful improvement for very little effort.
Simplest can be best. A dashboard isn’t just a pacifier. It should be something that enables decision making and, just like when the light turns red at the intersection, makes you hit the brakes to stay safe.