
The Project Communications Plan

At one time the notion of a communications plan in project management consisted of whatever the project manager was willing to share with you. Back in the days when project management was synonymous with project scheduling and the primary industries that used it were construction, defense and heavy industry, the project manager’s word was law; whatever he (or she) decided to report, that was it. Output from the scheduling department might be a simple list of key target dates or perhaps a summary barchart drawn by a draftsman and annotated all over the page.

Of course those were also the days before cell phones, email, faxes and the Internet.

Project communication in today’s world is expected to be extensive, and technology can go a long way towards making the project manager’s life a whole lot easier.

New project managers might not even have thought of doing a communications plan, but some preparation before your project gets going can save you enormous suffering later.

Start by identifying who will need to be communicated with. “Project stakeholders” is an easy way of saying everyone who has some interest in the project. These might include: the executive sponsor, the client, the project management team, the resource leaders of resources included in the project, sub-contractors, prime contractors, the end users and, of course, the project team.

Next, think about what you might need to communicate with these people about. I tend to divide communications into three types: per period, per milestone and per incident. For example, you might need to talk to your team at your weekly project meeting, or inform the executive sponsor about project progress at a summary level on a monthly basis. Those are per-period communications. You might commit to updating the client and a steering committee on status at a stage-gate point such as the end of a phase, or do a team project review at the end of each major deliverable. Those are per-milestone communications. Finally, you might commit to communicating with the executive sponsor or the client if the project exceeds threshold levels such as more than 10% late or more than 15% over or under budget. Those are per-incident communications.

This can be done on a simple grid on a spreadsheet. There are plenty of examples on the Internet.
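The grid itself needs nothing fancier than a spreadsheet. As a sketch, here is how the per-period, per-milestone and per-incident examples above might be laid out as rows of a CSV file; every communication, audience and trigger named here is an illustrative placeholder, not a prescription:

```python
import csv
import io

# A minimal communications-plan grid; every entry is a made-up example.
plan = [
    # (communication, audience, type, trigger)
    ("Weekly team meeting notes", "Project team", "per period", "Every Friday"),
    ("Summary progress report", "Executive sponsor", "per period", "Monthly"),
    ("Stage-gate status review", "Client & steering committee", "per milestone", "End of each phase"),
    ("Schedule alert", "Executive sponsor", "per incident", "More than 10% late"),
    ("Budget alert", "Client", "per incident", "15% over or under budget"),
]

def plan_to_csv(rows):
    """Render the grid as CSV text, ready to open in any spreadsheet."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Communication", "Audience", "Type", "Trigger"])
    writer.writerows(rows)
    return buf.getvalue()

print(plan_to_csv(plan))
```

Saving that text with a `.csv` extension gives you a working first draft of the grid that any stakeholder can open and review.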

If you’re thinking that that’s already a lot of communicating, fear not. Technology can now lend a hand and make a huge difference in executing that communication.

Email, of course, is a great way to get information from one person to another and also provides some audit trail of communications delivered. These days many business people read their email from the smartphone on their hip, so if the information is urgent or requires a rapid response, email is the obvious choice.

A lot of reporting, however, can be done through one-to-many communication tools. Setting up a Google Group takes a couple of minutes and provides a place to store documents, make short announcements and even let people update you with comments, such as a review of a draft design document. Google is hardly the only service to provide such group setups, but it’s free, comes with several gigabytes of storage and can be kept private by defining the group as invitation-only.

If you’re keen to go a little further, both Google and Microsoft offer application areas that are either free or available at very low cost. They typically include a place to make lists of things, store documents and display calendars to which the participants can subscribe. Users can usually also set up alerts for new information posted to either a group or an application area, which will then appear in their email.

In some environments, one tool that has become more and more popular is a blog from a key team member such as the project manager or a lead developer. The blog is typically closed, but it’s a great way for team members who might not be located nearby to keep up with short news about how the project is progressing from that person’s perspective, and even to comment on developments as they occur. Blogs can be set up in dozens of places, many of them common and free destinations. Blogs can also be set up internally with almost no technical effort at all.

When there is a lot of technical information that must be documented and the documenters will be a varied group, creating a wiki is an excellent choice. Most of us have stopped by Wikipedia to look up information at one time or another. Wikis, however, aren’t restricted to that one site. The technology can be installed or used as a free service from dozens of locations. What makes a wiki interesting is that a shell of subjects can be created by a central authority and then anyone with the appropriate rights can contribute to the actual content. Imagine, for example, that you’d like to create a document of best practices for the new application you’re writing. You create a private wiki and put as headings the functional menu choices of the entire application. Now beta testers, documenters, developers and ultimately the end users and clients can contribute their thoughts to the wiki and enrich the entire user community with practices, tips and techniques that have been learned. We’re doing something like this ourselves for our own product TimeControl right now.
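The wiki pattern described above (a central authority builds the shell of headings, and contributors with the right permissions fill in the content) can be sketched in a few lines. This is a toy model for illustration only; the class, its names and its checks are invented, not any real wiki engine’s API:

```python
# A toy model of a permissioned wiki shell; everything here is invented
# for illustration, not a real wiki product's interface.

class ProjectWiki:
    def __init__(self, headings, contributors):
        # The central authority creates the shell of subjects up front.
        self.pages = {heading: [] for heading in headings}
        self.contributors = set(contributors)

    def contribute(self, user, heading, text):
        # Anyone with rights may add content, but only under existing headings.
        if user not in self.contributors:
            raise PermissionError(f"{user} has no rights on this wiki")
        if heading not in self.pages:
            raise KeyError(f"no such heading: {heading}")
        self.pages[heading].append((user, text))

wiki = ProjectWiki(
    headings=["File menu", "Reports menu", "Settings menu"],
    contributors={"beta tester", "developer", "documenter"},
)
wiki.contribute("beta tester", "Reports menu", "Filter before exporting; it's faster.")
```

The design choice worth noticing is that the headings are fixed while the content is open: that is exactly the split between the central authority and the contributing community.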

When it comes to meetings, most of us enjoy using video conferencing services like GoToMeeting or WebEx and sharing our PowerPoints while doing so. There are, however, some great technologies for making meetings more effective. Instead of a static one-to-many PowerPoint that everyone speaks to from their speakerphone, how about a more dynamic online screen that includes the agenda, the commitments made at the last meeting along with who made them, the decisions taken at this meeting, and documents that everyone can see at the same time? I know that the free version of Windows SharePoint Services that comes with Windows Server has a template exactly like this, which can be activated when you attach a “workspace” to a recurring calendar item. People’s jaws usually drop when I show off this free feature that they’ve had available to them before I even arrived.

Having chosen whatever technology is appropriate for your organization, your communications plan can be rounded out by creating some templates for regular communications in advance, so you don’t need to invent them on the fly under the pressure of a project that’s already underway.

Whatever your communications plan and the tools you’re going to use to deliver it, set expectations of the stakeholders before you start. If people know and agree on a frequency of a certain kind of communication long in advance, it can save a tremendous amount of grief and misunderstanding later.

Workflow wonders

One of the big challenges with project management and project management systems has always been the entry of critical information. Project management information is so dependent on context that getting it wrong is very easy. Think about it for a moment. We ask for a task description, but relatively few people have the skill and experience required to define a task description properly. We ask for a duration, but asking several people results in as many different durations as we have people. In the project management industry we’ve always responded to this by saying ‘more training’ and, to be sure, training makes a difference, but we’re about to see a focus on project management software that will try to do away with this challenge.

It all has to do with the term workflow. Workflow refers to the concept of taking a process and organizing the sequencing of that process, including conditional branches. This is very familiar to project management people, as the thinking is just like the flow of a project. Workflow software, however, has become all the rage, and it has the potential to make a profound difference in project management systems when linked with databases, lists and, in particular, forms.

Think of this scenario. A new project manager is assigned to create a new project. He goes to the corporate Intranet where he clicks on “Create a new project”. A form appears. He fills in the blanks. Depending on some entries he makes in the form, he might be asked for more information. He completes the form and submits it. Depending on other entries in the form, the approval process might be directed to one person or another.

This is the result of Workflow.

But wait, we’re not done. The workflow engine also sends an email to the person who approves this kind of project. The pending project is put on a list of potential projects in the portfolio management system. A portfolio manager is notified by the workflow engine that a pending project has been added to their portfolio and is asked to intervene to give it a priority.

The manager approves the project. The portfolio manager sets the priority and as a result, resource scheduling determines the schedule. The Workflow engine wakes up to these responses and sends an email back to our newbie project manager letting him know that his project is approved and telling him when he’ll be getting staff to get it started.
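The whole scenario can be sketched as a single pass through a toy workflow engine. Everything here is invented for illustration (the role names, the `notify()` stub, the simulated responses); a real engine would suspend at each step and wait for the human approvals to arrive:

```python
# A toy, in-memory version of the workflow just described. All role names,
# states and messages are invented examples, not any real product's API.

def notify(recipient, message):
    # Stand-in for the workflow engine's "send email" step.
    print(f"EMAIL -> {recipient}: {message}")

def run_new_project_workflow(project):
    """Walk one project request through submit -> approve -> prioritize -> schedule."""
    project["state"] = "pending"
    notify("approver", f"Project '{project['name']}' submitted for your approval")
    notify("portfolio manager", f"Pending project '{project['name']}' added to the portfolio list")

    # A real engine would now suspend and wait for human responses;
    # here we simply simulate the approvals arriving.
    project["state"] = "approved"
    project["priority"] = 2            # assigned by the portfolio manager
    project["start"] = "next month"    # produced by resource scheduling

    notify(project["manager"], f"'{project['name']}' is approved; staffing begins {project['start']}")
    return project

result = run_new_project_workflow({"name": "Intranet refresh", "manager": "new PM"})
```

The point of the sketch is the shape, not the code: a workflow is just state transitions plus notifications, with humans supplying the transitions.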

Sound like science fiction? It’s not. The tools for doing this exist right now. Microsoft has been promoting the Windows Workflow Foundation (WWF and yes, I know that’s also the acronym for the World Wrestling Federation) and how it links to the InfoPath forms that are part of Microsoft Office SharePoint Server. Demonstrations of Microsoft Project Server that are now viewable lean heavily on this technology as a method of collecting and organizing data and as a way of communicating with users in an unattended fashion. It doesn’t take a rocket scientist to see that this kind of workflow is a future marquee feature for Microsoft and that it is likely to figure strongly in future versions of Project Server.

Are we stuck with Microsoft, though, to do this kind of thing? Absolutely not. If you’ve already got SharePoint Server in house then you’ve bought the technology already, so you might not want to look elsewhere, but there are numerous workflow solutions in the open-source community. You’ll find a range of possible solutions on SourceForge. For information on the WWF, do a search on any search engine for “SharePoint Workflow”.

We’ve not begun to scratch the surface of what’s possible with this kind of approach to collecting data. The beauty of it is the level of intelligence that can now be woven behind the scenes into the forms that are being filled out. Let’s take our new project example.

The form could ask our new project manager to complete a “Risk Assessment” form if the value of his project is over $10,000, if the duration is more than 26 weeks, or if the work is being done for the government. The form could insist that this Risk Assessment form be completed before he even tries to submit his request for project approval.

The form could decide to send the request to a supervisor for approval if its value is less than $10,000 and to a senior manager if it’s worth more.

The form could decide to immediately email the portfolio manager if the project is marked as “urgent” or if the executive sponsor is the CEO.

And so on and so on.

Notice anything here? I keep saying “the form could decide” but of course, forms don’t decide anything, and therein lies the rub. Any of the workflow we’re discussing here has to be coded or programmed or defined in some way for the workflow engine to understand, and that can be a challenge. In many of the project organizations I visit, the manual workflow process is ill-defined, not honoured or not defined at all. This kind of solution will demonstrate wonderfully and people will Ooooo and Awwww when they see it, but delivering on the demo will take a lot more work. A lot more. Just because the technology enables this kind of thing doesn’t mean that organizations are ready to do the homework required to adopt it.

I’ve been involved over the years in numerous workflow design meetings. In order to establish just one workflow process successfully, you need to bring together everyone who intervenes in that process now, as well as the clients of the process (those who receive the product that comes out of it) and the suppliers of the process (those who provide the input that results in the product). You have to look at all the possible permutations. You have to look at where the data will come from and all the possible actions you want to come out of the form(s) and process based on possible answers to questions, and then you need to document, train, test and document some more. Yes, the potential efficiency improvements might be huge, but you shouldn’t trivialize the work involved in automating the workflow of a process, even with great technology at your fingertips.

One of the other hidden pitfalls of workflow automation is the maintenance of this in-the-background-intelligence. What happens when the team that defined the workflow moves on to other things? Who will maintain the rules in the form and make sure they’re adapted to changing business conditions? How were these rules documented? Who maintains that documentation? And, so on.

No matter what, you’re going to hear a lot more about workflow automation in the next year or two because it’s just too compelling a technology to ignore, and for larger organizations, automating any process that is done many times can result in huge efficiency savings. Like all of these tools, though, be sure you know not just the benefits but also the total costs before you start your own implementation.

Integrated real-time project management – Myth or reality?

I suppose it shouldn’t surprise me. After all, it’s been 25 years and the request is almost identical each time. Again this week a very senior project manager asked for it. “It” is a real-time, integrated project management system. The request came in a round-about way while I was demonstrating software for enterprise-wide use in a project environment, so I had to double-check. “Let me just be sure that I understand what you’re asking for,” I said. “You’d like an enterprise-wide, web-based project management system that would manage its data in real time. In such a system the CEO, for example, could click on a button and see a view of where all projects stand at this moment. If he were to return in a few hours, the view would now represent all the changes that had occurred in the intervening time.”

The project manager’s eyes lit up. “You understand completely,” he said.

It fell to me to give him the bad news. “Such a system is demonstrable,” I said, “but could never be implemented.”

He was crushed. Who can blame him, frankly? I remember standing in the exhibit hall of the annual Project Management Institute Symposium with the senior executives of two of the largest and longest-standing project management software vendors. We were shaking our heads as we looked over the exhibit floor. Every booth, it seemed, was offering the same thing: “Enterprise-wide web-based project management”.

With all the promises from all these vendors of project management tools that offer this type of solution, you would think that surely at least one of them has figured out how to actually deliver it.

The problem is that it’s not a technical problem. You see, as desirable as such a real-time report seems at first blush, it’s virtually impossible to deliver. The first issue is the context of the data you’re looking at. If I present a report of the status of a project, there are some basic assumptions that people make about it. You assume, for example, that the data is accurate to the best of my abilities. To be accurate, project management data needs to go through at least some kind of review. You also assume that you’re comparing apples with apples, not a mix of different kinds of data. For example, if I show a report with 10 projects summarized on it, you would reasonably assume that the projects had all been statused on or around the same time. A report showing the status of some projects as of this week and others as of last month would be at best nonsensical and at worst could show a misleading view of the distribution and loads of our resources.

This means that to create any such report prior to displaying it to management, I really need to update data on or around the same time and then validate that data through someone responsible for the project. This is the definition of batch processing. It is the very antithesis of real-time.

As soon as I point these kinds of issues out, it becomes obvious that there is going to be no easy solution to this request, regardless of what the different booth displays say on the exhibit floor. But there is another element of the issue, one that is almost never raised with the requestor.

Such requests typically originate from someone in the executive suite. A manager or director or senior executive of some kind asks why, with all the products on the market that promise such features, can’t he or she get the real-time analysis they’re asking for.

My first question of such people is “What will you do with it?” It seems trite, I suppose, but it’s really not. Imagine for a moment such a system in place. Let’s imagine that through a petition to Harry Potter and his magic wand, we’ve managed to resolve all the issues with collecting, analyzing and validating our project data. Now it’s 9:00 in the morning and our CEO is checking on the projects. He (or she) pops up the “corporate-wide real-time budget-vs-actual project report”. Bad news: it shows the actual cost curve slightly higher than the budget curve. Our combined projects are over budget. But wait! It’s 9:05 and the curve has just changed. Our budget curve has moved up; now the variance is alright. The CEO breathes a sigh of relief. Wait again! The curve has changed. It’s 9:15 and Bob, down in projects, has just updated a half-dozen tasks with their new projected finish dates; now the curve shows us over budget again and running late besides! But wait! The curve is undulating again. Even if this hapless manager were to take a snapshot and run down to the project department with it to ask how we’re going to get on track, the response is likely to be “Oh, you’re looking at the 10:23 report; you really need to look at 10:45. The picture’s all different.”

It’s obvious that such an environment would be completely unproductive, yet the desire for it continues year after year unabated. This brings us back to the now-dejected project manager I met this week, who wanted to know what he should do now. My recommendation was to change his question. Instead of searching for the real-time project management system, why not start asking what changes to his project management environment he could implement that would make it more effective? Not surprisingly, when he sets his sights a little lower, there is lots he can do.

It’s perhaps a good question for all of us.

Enterprise Project Systems – The Holy Grail?

I hear it all the time, at least a couple of times a week: “We’d like an Enterprise Project Management System.” Given that I spend a lot of my time speaking with project personnel in Fortune 1000 companies, it’s pretty clear that this desire for a global perspective on project management is almost universal. It’s easy to see why any executive would be keen for such a system. The new economy has dictated a movement away from large mega-projects, with their attendant project management offices, where project management was easy to manage from the top down. What today’s business world favors is a large number of smaller, more maneuverable projects, each controlled by a small number of employees and often sharing resources with many other projects. It’s highly effective from a communications, motivation and flexibility point of view (and that’s what is critical in today’s fast-paced economy), but it’s a killer to manage from an executive perspective.

Well, with all this desire for an enterprise system, it’s no surprise to find that virtually every project management system on the market claims to be suited for “enterprise-wide” use. At least that’s what it says on the box. The funny thing is, this demand for centralized access to project information was not created last week; it’s been with us since the early ’90s at least. There are a number of project management products which really can lay claim to enterprise-wide functionality, yet, week after week, companies ask me if there’s any hope of actually finding one they can implement successfully. The disconnect between the number of happy enterprises and the number of products which claim to be able to make them happy has to make you wonder.

The problem certainly isn’t that all software publishers are liars (although I’m not promising that we’re not), or that project management products really don’t have enterprise-oriented functionality, or that there are no differences between these products. I review project management software all the time, and there really are some differentiators which make certain products more suited for enterprise use than others.

Most project management systems can now be implemented with a client/server database as their repository. I think this ability is critical to having any chance of bringing enterprise project data together. This was once the purview of only the largest, most complex systems, but it’s now almost universally available. Systems which had this enterprise architecture used to be difficult to use and thus difficult to deploy across the enterprise, but Microsoft’s standards for ease of use have changed that for everyone. Desktop planning systems like Microsoft Project are the most widely distributed project management tools in the marketplace. It is now almost a given that every company will have Microsoft Project somewhere in the office.

So what’s the problem? We’ve got database connectivity and ease of use; that’s it then, isn’t it? Well no, unfortunately it’s not. The problem with bringing these users together is more fundamental than database selection or ease of use. A visit I paid this week to one of the world’s largest insurance companies exemplifies the point. This company has over 2,000 project managers spread across the enterprise. Some of them work within business units which have chosen a particular project management tool; some work in a more independent fashion. There are thousands of copies of MS Project in the office (it is the most widely held project management tool), and yet virtually every day someone in the company is lobbying for why the company should accept the product they’re using as a corporate standard. With all the users to be wooed, project management vendors swarm the reception desk on a regular basis with their latest wares, upgrades and offers. The company has now determined that, despite having some very skilled project managers, the company executive is handicapped by not being able to report on all project work at once or to influence the project process by injecting management-perspective data such as project priorities. Management has now allocated a significant budget to resolving the problem, and possible alternatives are being considered.

Proposed solutions come in a variety of formats. There are still the venerable project management firms, now with newly updated products. These companies have the most years of experience with implementing systems across the enterprise, from the early project management software days when an enterprise often referred to a mega-project. There are tools which claim they will automatically deliver an enterprise-wide system because they are so easy to use that everyone will adopt them; the problem is that while ease of use may guarantee the widest deployment, being widely deployed does not make you integrated. Finally, there are add-on or layered products trying to bridge functionality. Their pitch goes like this: “Keep your Microsoft Projects. We’ll operate as a layer underneath them, automatically imposing the centralized standards that are essential to delivering an enterprise system.” It’s this last group that has the most insidious approach, and it’s that group of products that I was asked about this week at the insurance company.

I had some bad news for my friends, though. There is no easy path between an independent desktop planning system and centralized control, I told them, no matter how important management decides that is. The problem isn’t technical. Technical solutions for bridging these kinds of products have been around for a long time. The problem is cultural.

When you ask management what it is they mean by an enterprise system, the answers are almost always the same. They would like to bring project data together into one source in order to do reporting and analysis across the enterprise. When you ask individual project managers why they insist on sticking with a desktop planning system, their answers are also universal. They want ease-of-use and an ability to manipulate the data quickly on their desks to reflect what they want to track today.

These goals are almost certain to be in conflict. In order to be able to gather data together and analyze and report on it, we need to impose some standards. For example, I can’t use the code ME to refer to Mike Edwards, Mechanical Engineer and Maintenance Engineer (i.e. Janitor) in three different projects if that data will one day be brought together. If I do, Mike, the engineers and the janitors will all make up the same histogram bar in a resource analysis – it won’t make sense. If I want to bring this data together, I need to impose controls across the company.
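That “ME” collision is easy to demonstrate. In this toy example (the assignments are made up, reusing the article’s ambiguous code), a rollup that sums hours by resource code quietly merges three unrelated meanings of “ME” into one histogram bar:

```python
from collections import defaultdict

# Made-up assignments reusing the article's ambiguous "ME" resource code.
assignments = [
    # (project, resource code, intended meaning, hours)
    ("Project A", "ME", "Mike Edwards", 40),
    ("Project B", "ME", "Mechanical Engineer", 80),
    ("Project C", "ME", "Maintenance Engineer", 20),
]

# A naive enterprise rollup: sum hours by resource code.
by_code = defaultdict(int)
for project, code, meaning, hours in assignments:
    by_code[code] += hours

# One 140-hour bar that mixes a person, a trade and a janitor; the rollup
# is meaningless until codes are standardized across all projects first.
print(by_code["ME"])  # 140
```

The fix is not in the rollup code at all; it is a coding-standard decision that has to be made, and enforced, before the data is ever combined.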

The bad news is that the degree to which I need to impose those controls is the degree to which end users will wish to abandon use of the product. It goes directly against the maneuverability that attracted them to their favorite tools in the first place.

So, is there hope? Of course. I’m often caught quoting Napoleon Bonaparte with my favorite project management quote, “A battle plan lasts until contact with the enemy.” I’m sure however, that Bonaparte didn’t mean that we shouldn’t plan.

My advice for the insurance company was to reset its goals. “Think process, not product” is my suggestion. If you need to impose standards, impose the very fewest you can think of. Don’t stress over getting every minute change in a resource plan reflected in a top-level project resource analysis. Work instead on the broad strokes: project management training, project reporting standards and good planning practices, and let the managers manage.

Core systems are OK? – Time to look at secondaries

ERP deployments seek to be the core business system for all business functions, but they virtually always start at the center of the finance department. The ERP business, a tiny niche in 1995, became a behemoth of an industry by the end of the ’90s as some firms grew desperate to abandon old non-integrated systems in favor of a complete solution that would be not only Y2K compliant but more effective at the same time.

The wave of major ERP deployments has continued unabated for the last decade. What those vendors have been doing to continue their growth has been fascinating and in many ways affects the project management world.

Core systems are, of course, defined differently by each firm but can generally be thought of this way: production systems come first (if we can’t make the plant work, it doesn’t matter what paperwork we can’t do); next come the core financial systems of general ledger, accounts receivable (and billing), accounts payable and payroll; and finally, anything that is absolutely required to make the first two categories work.

When we talk about ERP deployments, however, we start with a common mantra for all vendors: “All systems – one place.”  The idea is that everything will be integrated together, and the first systems to be deployed by finance are always the four pillars of the financial environment: GL, AP, AR and payroll. Production systems are typically left unmolested and unintegrated while these core modules are stabilized.

This leaves a lot of room for systems not to be included. Worse, with failure in the core systems carrying the ultimate penalty, the only sensible course of action is to concentrate a huge degree of testing on those core systems, abandoning much of the effort that might otherwise have been spent on secondary systems.

As we all know, compliance of the applications alone is only half the battle. Virtually all hardware, operating systems, communications protocols and so on have to be tested and validated as well, adding even more incentive to put aside, for the moment, the secondary systems not essential to the functionality of the core.

Once the core is stabilized, it’s time to extend.  Secondary systems which might be working fine now become the targets for change in order to feed the mantra of everything in one place.

Just like the core systems, upgrades, changes and replacements will have to be prioritized. Systems may have been declared “secondary” yet, once missing, can carry enormous impact on primary systems and on operations in general. After all, all of these systems, primary and secondary, were presumably performing useful functions before the new ERP was deployed. You will have to look at which systems should be brought online before others, which resources you have available to complete the work, and whether it might be better for your organization to put some of these secondary systems out of their misery altogether.

While I am sure that some of these secondary systems have enormous impact on your organization, it is equally certain that some of them no longer fulfill a useful business purpose. It’s obviously silly to spend time, money and resources updating a system that makes no difference just because “we’ve always had it”. It’s often amazing to me to review systems that have been in place for many years only to find that no one uses their output.

The systems we can consider secondary fall into different categories for different organizations, but inventory, human resources, electronic timesheets, fleet management, analysis, project management and CRM (customer relationship management) are some of those which are often targeted.

All the skills that IT managers and project managers have been able to put to use in effectively updating the core systems now come in handy as the attention turns to the other systems.

The news for IT managers is not, however, all good. The chief problem to work on immediately is maintaining the management support that was so essential in getting the core systems in place. Deploying an ERP can be such an exhausting experience for senior management that keeping their enthusiasm alive for the secondary systems may be an issue. The success of the ERP deployment in the core systems may also be a factor: if that project went over budget and schedule, it may be difficult to find the funds required to update secondary systems in a timely fashion. Even where these secondary systems have already been purchased, say as part of a global ERP implementation, the extensive resources required to deploy the secondary functionality may be hard to fund.

One thing you can’t do is ignore the problem. If the intent was to take a working environment and gradually upgrade it as part of a centralized system then you’ve either got to make sure those secondary systems become part of the central core or that you adopt a ‘best of breed’ approach to building bridges from the central core to those systems.

Go for the little victory

It continually amazes me how an otherwise perfectly sane human being can design a project that is unimplementable. A colleague of mine gave it the perfect label this week: she calls it the “Ending World Hunger” syndrome. Now don’t get me wrong, I’m all for ending world hunger, but some of us are so intractable that it’s world hunger or nothing. When people like these become project managers or, worse, the implementers of project management systems, the consequences can be dire.

People who are out to ‘end world hunger or bust’ are unwilling to settle in the interim for feeding the nation, or our county, or our town. It’s all or nothing.

Imagine the following two scenarios and you can see the problem quite clearly. In both scenarios, the final goal is to deploy an enterprise-wide mission-critical I/S system.

In scenario 1, the project manager has determined that the only way to successfully deploy the entire corporate system is to do it as a single, corporate sponsored initiative. First an exhaustive design phase must occur where every possible requirement has been elicited from the staff and has been scoped out in the requirements document. In this situation, no work in soliciting vendors or in coding the application will be considered until the entire organization has approved the design plan. Once the design has been approved, the new system will be either purchased or constructed with exacting precision so that it exactly matches the requirements document. Then the system will be deployed to all users simultaneously. It will be used in parallel for a short period with the existing system then all legacy data will be transported to the new system and the old system(s) will be turned off simultaneously across the entire organization.

Now, let’s establish what kind of organization we’re talking about. We are not referring here to a 10-person high-tech consulting firm. If we were, it wouldn’t matter what our strategy was; any time there was a conflict, a yell from the president down the hallway to all staff members would fix it. No, what we’re referring to here is an organization large enough that implementing a corporate-wide system is a challenge. It has perhaps hundreds or even thousands of members. It may have multiple sites, even (gasp!) in multiple countries.

There are a couple of problems with scenario 1 in such an organization, and if you’ve ever been part of such a project you’re probably cringing at the memory right now.

The first problem is the first step: getting the whole organization to agree to a design specification. In any decent-sized organization, some managers feel an obligation to withhold blanket support from corporate initiatives just on principle. Also, in any decent-sized organization, the chances that someone in management changes during the decision-making process are almost 100%. Many managers feel compelled to question outstanding decisions by their predecessors lest they become responsible for them. If unanimous consent is required before work on the global system can begin, you’ve got a big problem.

As if that’s not enough, while some people argue over whether a particular feature should be included, another issue is getting everyone to agree to do the same thing in the same way. When the requirements document is actually assembled, it’s entirely likely that there will be requirements that conflict. (It’s got to be easy, but it’s got to have many complicated functions. It’s got to be driven by finance and it’s got to be driven by project management, etc.)

But wait, there’s more. Should a miracle occur and consensus over the design actually be reached, there’s the entire programming or acquisition phase to endure. If the system is being written in-house (always a treat), then there are all the issues that come with a high-pressure, high-tech programming project: technology, skill availability, design mismatch with programming results, risk and the dreaded scope creep. But wait! Didn’t we say the design was corporate-approved – locked down? Sure, but what are the chances that during the programming phase someone on the management team changes? The larger the system, the more likely it will happen. Just think about how often a high-tech project actually ends with the scope it started with. It’s not too common.

If the system is being acquired, the chances that it is not a 100% match for the extensive requirements specification is almost certain. Now, there starts another whole debate over what level of compromise is acceptable to match a commercial system to the requirements.

The kicker to all of this is the worst part. During this entire period of specification, acquisition or programming and, of course, negotiation with everyone who’s got to agree over one long, contiguous stretch, no one gets any value. No one gets the first screen, the first report, the first anything out of the all-seeing, all-knowing corporate system because it’s still ‘in design’.

Now, in scenario 2, the project manager has a similar goal – to deploy a corporate-wide, mission-critical I/S system. This project manager, however, has decided to go for the easy wins, the little victories. In this scenario, the project manager starts with smaller departments or divisions, beginning with the most likely to succeed. With each deployment, he or she looks to see what works and what doesn’t and refines the process for the next group. Before they know it, there’s a small band of users in different departments who are all reasonably satisfied with their new system. The project manager uses the successes to leverage still more. Suddenly the company newsletter is doing a story, and now even more departments are calling the project manager asking when they can be put on the new system. The project manager sticks to their guns, working across the company slowly, never exceeding (at least by much) the availability of the implementation skills on hand. The most difficult department, the one least likely to go for the new system, is saved for last.

In this scenario there is a risk that not 100% of departments will adopt the new system in the end. However, it is much more likely that those that do will get a system that works. With both the specification and the implementation process being refined through repeated use, the system will improve. Moreover, as anyone who has ever rolled out a brand-new system knows, once it’s in actual use, the users’ perception of what is required will change. This kind of process lets the system be tried without having to wait until the entire company is on board.

The best part about scenario 2 is that the organization starts to get some benefit right away. As each department adopts the new system, the benefits of switching become available to it and by extension to the organization. It’s true that the benefits of the entire organization being on the same system can’t be realized until the last department comes on board but it’s easy to imagine that in scenario 1, that may never happen anyway and, in the meantime, with scenario 2 at least you can have the organization realize some benefit.

Finally, for those of you who are career-minded, scenario 2 doesn’t make your star shine quite as brightly as scenario 1, but be warned – ‘ending world hunger’ type projects have been known to produce casualties. More than one career has been hung out to dry with a failed corporate-wide initiative.

If you’re considering a corporate-wide system, I recommend taking a long look not just at the features you’ll require but also at how you plan to implement them. A phased-in implementation isn’t quite so dramatic or quite so spectacular, but it’s also a lot more likely to deliver value than the big-bang approach. If you’re out to end world hunger, why not start by feeding your local town first?

Project and Project Server August 2009 Cumulative Update

Microsoft has released a new cumulative update for Project 2007 and Project Server 2007, dated August 25, 2009. Information on the key elements is linked below. See Microsoft’s guidance on deploying cumulative updates for more information on how to deploy these updates if they’re applicable to you. The links below include both descriptions and download links so you can see if fixes you require are included in this cumulative update.

For more information on all updates for Project and Project Server 2007, visit our Project Server updates page.

Are we headed back to the IBM 360?!

Ahh, the good old days. It’s easy to reminisce. Back when CIOs were known as DP managers and their word was law. Keepers of the keys to the old centralized mainframe systems were gods. Their will was done by all. Want a report? Sorry – that’s got to be done by our centralized staff. What? You need access rights to some data? Sorry – our database manager will get to you soon. You need a new interface to some simple data written for your department? Sorry – our programmers are a little backed up right now.

Sound a little too familiar? With all the movement towards web-enabled technology and n-tier architecture and Java front ends, it makes me wonder if the pendulum hasn’t swung completely back to a centralized mainframe type of approach to IT systems.

For those of you who don’t remember the pendulum swinging at all, it started like this: First came huge, unwieldy mainframe computers that were difficult to program and even more difficult to use but which could process an enormous amount of data from a huge number of sources terribly efficiently. If you wanted an application, you had to have it written and/or installed on the mainframe by the centralized IT staff. The application would have to be given rights to a database, and then individual users could log into that application from a dumb terminal. If you wanted a change, such as a new sort on that report you got regularly, then you had to petition the IT Manager (then usually called the Data Processing Manager). Petitions were not always considered. Sometimes a ritual sacrifice was required, or other offerings as might please the DP God.

Then came the PC. End users were ecstatic. For the first time, they had a system that allowed them to do their own analysis, their own reporting, their own management of the data within their control. Spreadsheet programs like Visicalc (and later Excel) and departmental database programs like dBase (and later Access) ruled. It is a testament to the changing times that about 9 years ago, it was estimated that more lines of dBase code were in existence worldwide than lines of Cobol code on mainframes. In 10 years, individual PC users had written as much code as had been written in the previous 20 years on mainframes.

Advances in networking computers together in the late 80’s helped individual users bring departmental applications into being.

The advent of Windows started a drive towards the ultimate ease of use for end users, but paradoxically GUI programming was more difficult for developers. Additional levels of complexity – client/server databases, object-oriented programming, internetworking first internally and now through the Internet – have raised the minimum accepted developer skill level, and now we find ourselves in a position not much different from 20 years ago.

Client/server databases are bringing data back to a more centralized point than it’s been since the great Diaspora of the early 80’s. The ERP movement of the last 5 years has driven data even closer together. ERP implementations have been complex enough to re-empower the IT manager in a way he or she has not enjoyed since they were 20 years younger.

The Web sounds like a wonderful thing, but are you starting to wonder how much personal control you’re left with if the only thing you need on your PC is a web browser?

The jury may be out on Application Service Providers, who make all manner of applications available as centralized network applications (Oh, where is that Decwriter!), but if they catch on, the movement back to a centralized world may be complete.

There has been a movement by some vendors, such as Oracle and Sun, to make “Network” PCs that would need only a browser and a connection to the Internet. Indeed, my daughter’s Sega Dreamcast video game console comes with a browser CD and the potential to run an entire business on my living room TV between bouts of Sonic and NFL 2K.

There are, of course, some key differences from the 70’s, when all applications were centralized. First of all, the end-user experience is much different. Where once the language needed by end users to operate their systems was not recognizable by mortal man, today’s interfaces are designed to be understood by 5-year-olds. They’re graphical, intuitive and even fun. Second, the whole notion of internetworking has changed the face of computing forever. Where once you could only access applications and data from the mainframe controlled by your own IT department, now data and applications can be made available from any network connected to the Internet. That makes your own structure much less oppressive. Finally, the 20 years of personal control that people have enjoyed from their PCs and laptops are certainly not going to give way overnight. While there is a certain amount of restriction already in centralized Java-type applications, user expectations of being able to customize and tailor these systems personally are driving them to be more open than any mainframe system of 20 years ago.

Given these expectations, it’s almost certain that we’re never going completely back to the types of applications and tyrannical environments that some of us are still trying to get over since we were just out of college.

Still, you might want to keep a small gift handy for your IT manager and the makings for a quick shrine should never be too far away. They may not be quite a god yet, but the chances that they become one are getting better every day.

Tough times? Look to Project Management

Written originally in the tech crash that followed Y2K, this article describes many of the challenges and opportunities facing the project management industry in today’s economy. The article is reposted here from its original publication in Computing Canada Magazine.

If you’re in the high-tech industry you know we’re in the middle of the largest reorganization of the industry since its inception. For years, we’ve seen waves of growth in the computing field but this is the first time ever that the pace of growth has slowed. You can argue that we’re in a temporary “correction” or a long term readjustment of growth but either way the playing field has changed.

Last year, while the high-tech stocks were flying, we saw a newfound interest in making IT operations more effective. No surprise: many of us in the industry were finding resources difficult to come by while the market demanded ever more completed projects. According to Admiral Ken Mattingly (you may remember him from Apollo 13 fame), “A project is an undertaking to produce a result with limited resources within a specific time”. Sound familiar?

Yes, while much of the economy is struggling to avoid a recession, project management is on a tear. It was the same in the early 80’s when our firm got started. We were a couple of young upstart consultants who specialized in automating project management environments. It would not be until years later that we would realize how timely our entry to the industry would be. The recession of the early 80’s was very difficult for many firms but not for ours. The difficult business environment was tailor-made for us to be successful. Whenever there are difficult economic times, project management leaps to the fore. It makes sense. The science of managing projects exists, after all, in order to be more efficient at producing a result. The objective is to produce more results (e.g. more projects) with the same resources, or the same results with fewer resources (e.g. fewer people, less money, less time). Again, this sounds very familiar to today’s scenarios for people in the IT field.

If you’re in a struggling start-up, then revenue growth at any cost is no longer your goal. You’re now being pressed to deliver similar results but with less cost. If you’re in a more established organization (did someone mention telcos?), then you’re in a squeeze too. You may have just arrived at work to find that you’re one of the lucky ones, left behind after a huge reduction in force. That means you’ll be asked to produce the same or similar results as the company has been producing, only with far fewer staff.

This is a perfect call for project management.

Happily, IT has already had a big introduction to project management through the frenzy of the Y2K phenomenon. Project managers were squeezed to produce results then too, but the limited resource in 1999 was time, not people or money. Still, the exercise is the same, and many IT firms that I meet have been clever enough to retain and even enhance their project management skills in the intervening time.

No matter which way you turn, if business is tight, then project management is a good choice. The benefits for firms in this situation are numerous.

  • First of all, if money and time and people are restricted, then managing by result (i.e. by project) is essential. The company will be successful or not based on the projects you can complete. In the IT industry, this may be directly tied to revenue that arrives from projects that are completed. To manage by department or to just manage by guesswork and good intentions is insufficient. Managing by project will allow ineffective projects to be quickly identified and allow management the oversight to make changes when required or to provide additional support when it will make a difference.
  • Once you’ve got your project management system going, you can get an almost immediate return by being able to simulate “What If” scenarios to assess the impact of different events. Without a centralized project system, you have no way of returning to management to ask for additional resources, to suggest that a project should be lowered in priority or to renegotiate deadlines.
  • Next, if you’ve got an established project management environment, you can get one of the biggest benefits of all; being able to assess which projects are best for your organization to take on. In the absence of a good project management system, management or marketing makes decisions about what projects should be undertaken and when they are required. If they had the analysis available from a project management system, they might find that some projects are unfeasible because they will take too long, cost too much or (most likely) will tie up key skilled resources that are already committed to other work.
  • Finally, if you are facing the worst-case scenario and some of your project people must be let go, then a resource-leveled project management system will enable you to determine the impact of any resources that must be cut. You can analyze resource loading from various perspectives. What would be the impact on projects of proposed cuts? Which cuts would result in the least impact, or which would affect only the least revenue-generating projects? The results of such analysis are simply not available to organizations that have not implemented project management.
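
The kind of cut-impact analysis described in that last point can be sketched in a few lines. This is a minimal illustration, not a real scheduling engine: the task list, skill pools and staff-day figures below are all invented, and a genuine resource-leveled tool would also account for time-phasing and task dependencies.

```python
# Hypothetical project data: each task requires one skill for some
# number of staff-days. Capacity maps each skill to available staff-days.
tasks = [
    {"name": "Design", "skill": "analyst",   "days": 20},
    {"name": "Build",  "skill": "developer", "days": 60},
    {"name": "Test",   "skill": "tester",    "days": 30},
]

def loading_by_skill(tasks):
    """Total work (staff-days) required per skill."""
    load = {}
    for t in tasks:
        load[t["skill"]] = load.get(t["skill"], 0) + t["days"]
    return load

def cut_impact(tasks, capacity, cut):
    """Shortfall (required minus available) per skill after a proposed cut.

    `cut` maps skill -> staff-days removed. Positive results are
    shortfalls; negative results mean spare capacity remains.
    """
    load = loading_by_skill(tasks)
    return {
        skill: load.get(skill, 0) - (capacity.get(skill, 0) - cut.get(skill, 0))
        for skill in set(load) | set(capacity)
    }

capacity = {"analyst": 25, "developer": 50, "tester": 40}
impact = cut_impact(tasks, capacity, cut={"tester": 15})
# impact shows the developer pool is already 10 days short even before
# the cut, while the tester cut creates a new 5-day shortfall.
```

Trying the same function with several candidate cuts is exactly the "What If" exercise: the cut with the smallest positive shortfalls is the one with the least impact.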

Ten years ago, project management was virtually unknown in the IT industry but that’s not the case anymore. If you’re finding yourself under pressure to perform at the moment, then look to your project management group. They can’t reverse the economy on their own but they can mitigate potential damage from cuts or reductions in resources and may well spell the difference between those IT companies that survive or fail in the new economy that will emerge from the heyday of the 90’s.

Pilot Projects – Do they need Air Traffic Control?

Ahh, the pilot project. It’s a term that software salespeople hate to hear, but almost every major software implementation project works its way through this phase. Even I have recommended using pilot projects as a safe way of determining the suitability of software for a particular purpose.

The notion of a pilot project isn’t new. The idea is to implement the entire software solution being proposed on a minor scale with real data to see if it can meet the needs of the organization. Often, just like in drug testing, there is a ‘control group’ and a ‘test group’. These groups are best organized as similar sized departments with a similar workload. The test group will use the new system, the control group will continue with normal procedures. At the end of the pilot, the performance of both groups will be evaluated and the relative efficiency of the new system can be measured.

It’s a great plan. After all, the only change between the two groups will be the introduction of the new software, so the effects should be easy to test. For people who have just finished their MBA and are now working in the middle-management ranks, this provides perhaps the only opportunity to determine a Return on Investment (ROI). An ROI is particularly difficult to come up with otherwise for certain systems, such as project management software, where the hard-cost benefits are difficult to determine and the soft-cost benefits, such as bringing a project in early, are tough to prove in advance.

The final result of the pilot is the identification of the right software to implement, and even some tips on how to implement it. All of this at little to no cost, right?

So, if it’s such a great plan, why aren’t more pilot projects successful? Often a pilot project is undertaken only to find that the results are never tabulated, or that the results are delivered but lead to no decision, or that the project is terminated prior to completion.

There are a number of hidden challenges in a pilot project which must be dealt with if you are to have any chance of success.

First of all, there is a fundamental underestimation that I find in almost 100% of the pilot projects I encounter. Often the software itself for such a project is offered for little or no cost, and the assumption is then made that the cost of the pilot project is therefore zero. When we’re talking about an enterprise system, nothing could be further from the truth.

In order to see the results of any enterprise software system, it must not only be installed, it must be implemented. As anyone who has done such an implementation can tell you, the costs of installation and purchase of the software are often dwarfed by the size of the rest of the implementation budget. There is legacy data to be configured and uploaded, the configuration of the system to be determined, reports and entry forms to be created or adjusted, training, more training, external systems to be linked, practices and procedures to be created then adopted and, (did I mention?) more training.

This is so whether you are looking to try the system out on one week of data or one out of fifty projects or one out of 20 departments.

Once a system has been implemented, expanding its use to the rest of the company is an incremental cost, and typically the least costly and least effortful part of the project overall.

An implementation of an enterprise system of almost any kind for the first department, first project or first core element of an organization may be scheduled to take weeks. Yet so many organizations that I speak to talk about a pilot project as if it will take a week of effort and cost the organization nothing.

Without an understanding of how much effort such a pilot will really take, management can quickly become frustrated and your sponsorship of the project will be in jeopardy.

Even if there is adequate management support, there are a few other pitfalls to overcome. First of all, it continually stuns me that almost no pilot projects I encounter actually have metrics for how the test will be measured. I see this over and over again. When pressed on how the pilot team will know that the test was a success (or even that it’s complete), the general feedback seems to be “We’ll know”, as though some feeling will be imparted to the team when the system is good enough.
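
One simple remedy is to write the metrics down before the pilot starts. The sketch below is purely illustrative – the metric names, thresholds and group results are all invented – but it shows the principle: decide in advance what “success” means for the test group versus the control group, then let the comparison answer mechanically rather than by feel.

```python
# Hypothetical success criteria, agreed before the pilot begins.
# "threshold" is the minimum fractional improvement over the control
# group that counts as a pass for that metric.
METRICS = {
    "cycle_time_days":    {"better": "lower",  "threshold": 0.10},
    "defects_per_rel":    {"better": "lower",  "threshold": 0.15},
    "on_time_deliveries": {"better": "higher", "threshold": 0.05},
}

def evaluate(control, test):
    """Return per-metric pass/fail for the test group vs. the control group."""
    results = {}
    for name, rule in METRICS.items():
        c, t = control[name], test[name]
        # Fractional change in the "better" direction, relative to control.
        change = (c - t) / c if rule["better"] == "lower" else (t - c) / c
        results[name] = change >= rule["threshold"]
    return results

# Invented end-of-pilot measurements for the two groups.
control = {"cycle_time_days": 40, "defects_per_rel": 20, "on_time_deliveries": 0.60}
test    = {"cycle_time_days": 34, "defects_per_rel": 18, "on_time_deliveries": 0.66}
verdict = evaluate(control, test)
```

The point is not the particular numbers but that the pilot team, management and the control group all see the same pass/fail criteria before any software is installed.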

Also, for the test to be fair, there should be no other influencing factors that would mask the effectiveness of the system in question. That’s fine in principle, but in today’s business environment everything is changing all the time, and actually isolating a team to avoid such influences would be an unnatural influence in itself.

It’s also worthwhile to mention that the implementation of any significant change in the systems of a group almost always results in a short-term drop in performance as the personnel adjust to a new way of doing business and as the analysis or data from the new system is integrated into the group’s decision making process. Very short term implementations such as a pilot program may show negative results where a longer term perspective might show a better example of the potential benefits.

So, is it hopeless? Are you now reduced to choosing software from the description on the side of the box? It’s not quite that bad. For some types of software, it is easier to implement a pilot than others. If we take project scheduling systems, for example, it is relatively easy to have one department work with one tool while the rest of the organization works with another. You can even have implementations done by staff previously familiar with the different tools to show how each can be put to its best advantage.

For other types of software, however, pilot projects may not be your most effective evaluation method. You might use other tools such as reference site visits or the use of an independent consultant to help shape the overall implementation plan. This cost and effort would be recoverable regardless of which system is ultimately selected, and is often very revealing of the internal cultural issues that will need to be overcome to take advantage of the enterprise system in question.

One way or the other, the most important thing to remember is this: pilot projects aren’t free. Make sure you’re getting a good return on the effort and money that the pilot itself will cost.