Snakes and Ladders

Risks come in two flavors: Negative and positive.

We all know about the negative ones (a.k.a. threats). They’re the ones that threaten to rain on our otherwise sunny parade: A critical project resource falls ill; the new version of a vendor’s module isn’t as compatible as advertised; a firewall refuses to let traffic through during a major infrastructure implementation.

Most of us are also aware of the standard responses to negative risks: Avoid (schedule critical work outside flu season); transfer (engage the vendor’s professional services team to integrate its module); mitigate (run a mock implementation to find surprises ahead of time and reduce the likelihood of the risk occurring); and/or accept (develop a contingency plan and brace for impact).

It’s the positive risks (opportunities), though, that we tend to forget about — which is like playing Snakes and Ladders (is it still called that?) without the ladders.

What if in-scope work finishes early? That could translate to a big advantage on a fixed-price engagement with an incentive fee for early completion. What if there’s a possibility of reducing development, integration and ongoing maintenance costs by simplifying the design? What if we could create a buffer in the infrastructure implementation schedule as contingency in case implementation doesn’t go as planned?

By investing time up front with the team to plan for positive risks, we can be prepared when opportunity knocks.

You could exploit an opportunity by making sure it happens (use senior resources to complete work earlier without sacrificing quality); enhance it by increasing the chances of it occurring and/or the resulting positive impact (just how much can we simplify the design through peer reviews and refactoring?); and/or share the opportunity with someone so both benefit (share an earlier implementation window with another project in exchange for sharing a key resource from our project). Of course, as with negative risks, we can simply accept the fact that Lady Luck may pass our way, and be prepared if she does by keeping scope/schedule/cost flexible.

The risk planning approaches above can be applied to everyday life: Picking up relatives at the airport? Building a deck? Planning a birthday party? What could go wrong? What could go right?

I applied these techniques when planning a motorcycle ride from Toronto to Alaska and back last summer: I planned for the very real possibility of running out of gas on the Alaska Highway with a combination of avoidance (topping off the fuel tank whenever possible), transference (CAA, where feasible) and mitigation (carrying extra fuel). On the positive side, I left my schedule loose enough to accept opportunities that might come up — such as a much-needed off-road riding course during a stopover in Calgary.

If we focus only on the negative risks, we address just one side of risk planning. Sure, there are snakes ahead; but let’s be prepared for the ladders as well.

Yes I Khan

Recently, I mentioned to a colleague that I wanted to learn to develop JavaScript programs. It’s not that I want to be a JavaScript developer; rather, there are a couple of simple projects I’d like to bring onto the front burner, and I wanted to try doing it myself rather than outsourcing the effort. The icing on the cake was that it would afford me an opportunity to take a walk in the shoes of the developers and subject matter experts (SMEs) I regularly collaborate with on my project teams.

My vision was not to become a JavaScript guru; and I wasn’t looking for certification. On hearing this, my colleague suggested the on-line training centre, Khan Academy. It would provide the basics I was looking for; and, best of all, the price was right—it’s free (although donations are encouraged). Perfect!

Those with children may be familiar with The Khan Academy—these are the same folks that provide on-line tutorials to help kids learn math. I hadn’t realized they had training in other subject matter areas. How much could I really learn from a kids’ study tool, though?

A lot, as it turns out. Over the course of the lessons I’ve gone from understanding the fundamentals of programming to creating some relatively complex object-oriented programs. The lessons are made up of demo-based tutorials with upbeat, light audio narration that makes sometimes complex or abstract concepts fun to learn. Each tutorial is followed by a coached challenge to reinforce the just-learned content; and, at the end of all the tutorials and challenges in each lesson, you’re assigned a project in which you must create a program using the concepts from the lesson.
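To give a sense of where the lessons end up, here’s a minimal sketch of the kind of object-oriented JavaScript the later lessons build toward (the names and scenario are my own illustration, not taken from the course):

    // A simple "critter" object built from a constructor function
    // with methods on its prototype.
    function Critter(name, x, y) {
        this.name = name;
        this.x = x; // horizontal position
        this.y = y; // vertical position
    }

    // Move the critter by the given offsets.
    Critter.prototype.move = function(dx, dy) {
        this.x += dx;
        this.y += dy;
    };

    // Report where the critter is now.
    Critter.prototype.describe = function() {
        return this.name + " is at (" + this.x + ", " + this.y + ")";
    };

    var pet = new Critter("Hopper", 0, 0);
    pet.move(3, 4);
    console.log(pet.describe()); // "Hopper is at (3, 4)"

Nothing fancy, but by the later lessons you’re combining objects like this into complete programs of your own.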

To be sure, the content is targeted at kids, with Pokemon-type characters popping up to provide words of encouragement and direction. But, so what? I’m still accomplishing my goal; and it’s a refreshing change from the often dreary and mundane adult training programs that are out there.

Because Khan is web-based, I can complete the lessons whenever and wherever I want. I usually log in from home for an hour in the morning before the workday begins; however, I have just as easily logged in from Starbucks. And because the tutorials are always available, in chronological order, I can go back and replay them whenever I get stuck.

Has it been worth the time invested? I think it has. I now have the basic programming skills I was after (the outcomes from my labors will ultimately become part of the Professional Services Plus website–stay tuned to PSBlog for more on this as we approach the release date). In addition, when development teams are walking me through technical designs and prototypes or discussing issues, risks and changes, I now have a better context within which to receive and frame this information.

You can find out more about The Khan Academy at khanacademy.org.

FEATURE: Think you’ve got your requirements defined? Think FURPS!

One of the best ways I’ve found to ensure all the bases are covered when defining or reviewing system requirements is to use the FURPS checklist.

Created by Robert Grady, FURPS is an acronym for:

Functionality: This is the one most of us jot down when defining requirements. It answers the question, “What do I want the end product to do?” In addition to considering product features and capabilities, remember to think about what level of security is required.

Usability: Who will use the product? How will they use it? What look-and-feel do you want? What about help screens and self-help “wizards”? One often-overlooked area is that of user documentation and training–often sub-projects unto themselves!

Reliability: What is your expectation in terms of system up-time? What do you consider an “acceptable” system failure? How quickly should the system be able to recover from a failure? What should the mean time between failures (MTBF) be?

Performance: Consider the functional requirements you have defined. What level of performance are you expecting? Think about speed, efficiency, availability, accuracy, response time, recovery time, and resource usage.

Supportability: How easy should it be to test the system, and how would this be done? What about maintenance–what’s your expectation in terms of system care and feeding? How configurable should the system be? What about installation–who should be able to install it?

Grady’s FURPS definition actually includes a “+” (FURPS+) lest we forget to consider:

+ Design Constraints: Anything the design team should be aware of in terms of how you would like the system designed?

+ Implementation Requirements: For example, do you expect the implementation team to adhere to a standard?

+ Interface Requirements: Any legacy or other external systems the product should interact with? How / When should this interaction occur?

+ Physical Requirements: Material? Shape? Size? Weight? (This one’s more geared toward hardware requirements)

I’ve just scratched the surface in this post–each of these areas easily warrants a dedicated article of its own. For a far more detailed explanation than I could ever provide here, check out Robert Grady’s book, Practical Software Metrics for Project Management and Process Improvement (Prentice Hall).

Organize your email with Outlook categories

I just love any technology that can take care of mundane, “admin-intensive” tasks on my behalf.

Take communications management, for example. Maintaining control over project communications is critical on any project—get it right and you’ll succeed; get it wrong and things will fall apart pretty quickly. Technology can help; but any communications management system still requires constant maintenance to be effective. Especially email. As we all know, email requires regular housekeeping–a task that is… well, it’s mundane.

While I don’t have the solution to the world’s email challenges, here’s a gem I find helpful:

For many of us, Microsoft Outlook is pretty much a staple for gaining control over communication. However, while many use Outlook’s email functionality as a mechanism to receive-open-read-reply/forward-file email, its powerful email organization capabilities are often unknown or forgotten.

Enter Outlook’s “categories” feature. Categories are colour-coded tags you can assign to significant emails to keep them visible and help with searching. You can assign categories to email by right-clicking on the “Categories” heading at the top of most mail folders: Select an email; choose a category; and you’re done.

Outlook comes with default descriptions for each category (Red is “red”; Yellow is “yellow”), and you can relabel each category to whatever works for you. Here’s the list of categories I have set:

  • Orange – “Change”
  • Light Green – “Decision”
  • Purple – “Issue”
  • Light Blue – “Reference”
  • Yellow – “Risk”
  • Red – “SOS”
  • Green – “Status”

For more information on how to configure colour categories in Outlook, check out this Office Online article.

MOTIVATE the people you work with

Here’s a great way to remember some key success factors to keep in mind when motivating others:

M – Manifest confidence when delegating (if you believe they can, so will they)

O – Open communication (Can you say transparency?)

T – Tolerance for failure (Success is often based on learning how NOT to do it)

I – Involve others (We’re in this together)

V – Value efforts and recognize good performance (Can you say “Thanks”?)

A – Align business objectives to individuals’ objectives (Are we on the same page?)

T – Trust your team and reciprocate by being trustworthy (Critical to motivation success)

E – Empower your team (Don’t micromanage!)


- based on The Human Aspects of Project Management by Vijay K. Verma

FEATURE: Dr. Winston Royce on Managing the Development of Large Software Systems

Father Guido Sarducci, in presenting his idea for a Five Minute University, explained Economics in simple terms: “Supply and Demand. That’s it.” (Five Minutes doesn’t leave much time to devote more to the subject.) Funny thing is, he was right. These are the cornerstones upon which everything else is built.

Which brings me to a white paper I’m reading: Managing the Development of Large Software Systems, by Dr. Winston Royce. It’s a fascinating read on what (and what not) to do to ensure success when developing incrementally. Dr. Royce describes all software development as having just two essential steps:

“There are two essential steps common to all computer program development, regardless of size or complexity. There is an analysis step, followed second by a coding step,” he says.

Of course, he goes on to explain that, while these two steps are sufficient for very small, low-risk projects, limiting larger projects to these two steps dooms them to failure. Go figure.

To reduce the risk of failure, larger projects require more than the basic steps—notably, requirements definition steps before the analysis step; a design step between the analysis and coding steps; and a testing step after coding. Also, as anyone familiar with iterative development will tell you, if we develop iteratively, with each iteration building on the deliverables of its predecessor, we reduce risk further because the development baseline moves forward with the completion of each iteration—that is, the more we build, the less risk there is to contend with.

There’s one hitch, though: Because our view of the overall solution is limited to the code delivered in the iterations completed so far, any testing is also limited. Thus, it is entirely possible that, when testing iteration 4, we discover a fundamental flaw related to work done in iteration 1; this, in turn, could require significant rework of components from iteration 1 onward.

Says Dr. Royce:

“The required design changes are likely to be so disruptive that the software requirements upon which the design is based and which provides the rationale for everything are violated. Either the requirements must be modified, or a substantial change in the design is required.”

To respond to this risk, Dr. Royce suggests five steps:

First, introduce a “preliminary program design” step between requirements generation and analysis to determine storage, timing and operational constraints (a.k.a. Performance and Supportability), and hone the design by collaborating with the analysts. In other words, establish a baselined architecture to minimize architecturally significant risks. Dr. Royce suggests three key factors to ensure success:

  1. Begin the process with the designers.
  2. Design, define and allocate the data processing modes (Architectural Candidates).
  3. Write an overview document (a.k.a. Vision) so that everyone has an elemental understanding of the system.

Second, document the design. A lot. “Management of software is simply impossible without a very high degree of documentation,” he says. Why so much documentation?

  1. Evidence of completion. As opposed to a verbal statement of completion status (“How far along are you?” “I am 90 percent done, same as last month”), design documentation forces designers to provide clear, tangible evidence of their progress on which management can base decisions.
  2. When documented, the design becomes real. Until that happens, the design is nothing more than people thinking and talking about it. (Reminds me of the astronomers’ adage that, if you didn’t write it down, it never happened.)
  3. Downstream processes (development, testing, operations, etc.) require strong design documentation in order to succeed. Take testing, for example: “Without good documentation, every mistake, large or small, is analyzed by one man who probably made the mistake in the first place because he is the only man who understands the program area,” Royce says.

Third, “Do it twice”—that is, build a functional prototype that simulates the high-risk elements of the system to be built; then, once you are satisfied that the high-risk items have been addressed, proceed to build the real thing—the version that will be delivered to the customer. Royce is careful to point out the unique background required of the project staff involved at this stage:

“They must have an intuitive feel for analysis, coding and program design. They must quickly sense the trouble spots in the design, model them, model their alternatives, forget the straightforward aspects of the design which aren’t worth studying at this early point, and finally arrive at an error-free program.”

Fourth, plan, monitor and control testing. Two of Royce’s suggestions for success are:

  1. Testing must be completed by independent testing specialists who have not contributed to the design, working from the documentation created in earlier stages.
  2. Use visual code scans to pick up errors. Again, this should be done by someone who’s not as close to the code as the developer. Can you say verification testing?

Finally, involve the customer—early and often. Says Royce, “To give the (contractor) free rein between requirement definition and operation is inviting trouble.”

Does any of this sound familiar? It should—it’s RUP 101. Here’s the kicker: Dr. Royce wrote this paper in 1970. His white paper can be found at: Managing the Development of Large Software Systems