Beyond Extreme Programming

C'mon all you questioners! I built this page for you to raise alternative process components for comparison with XP. Pitch in! Or if you don't like this page as a start, write the one you like!

A little help here. Often people question whether they could use XP because they, or their management, believe in or already use various other techniques. "We use Hypertensive Planning; could we use XP with that, or instead of it?"

Are there practices or processes that you'd like to compare and contrast with XP? If so, please list them here. Thanks! -- RonJeffries


Many development projects use even less "formal" process than XP, and the challenge is to get them to use any at all. However, some groups or companies are already using some pretty standard process components. Let's talk here about some famous process components that might be in use, and how they relate to XP. Please offer practices that you'd like to talk about, and comments on the ones that show up here. -- RonJeffries

Code Reviews:
A formal or semi-formal process whereby, at intervals, developers come together to go over some part of the program. Most often these reviews cover the work of one developer at a time. The practice may call for reviews of all code, or just critical code. The most common approach allows reviewers to raise issues, but not to propose changes, and allows the developer to answer questions but not defend the work. Code Reviews cover standards conformance, defects, what else?

Two ExtremeProgramming practices bear most directly on the practice of Code Reviews: PairProgramming and UnitTests.

PairProgramming can be seen as a kind of live, two-person review of the code as it comes into being. If I had developed my code with a partner, I would go into a review much more confident that the code met standards and did what it was supposed to do.

XP's UnitTests are written during development of each feature, ideally before the code itself. An XP-written object has UnitTests for all of its behavior that could possibly fail. If I had developed my code with UnitTests, I would go into a review much more confident that the code did what it was supposed to do.
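
To make that concrete, here is a minimal sketch of what such tests might look like, assuming JUnit is available; the little Stack class is hypothetical, standing in for whatever object is under development. In test-first style the tests would be written before the class was filled in.

 import org.junit.Test;
 import static org.junit.Assert.*;

 public class StackTest {
     // Hypothetical object under development; its entire visible
     // behavior is pinned down by the tests below.
     static class Stack {
         private final java.util.ArrayList<String> items = new java.util.ArrayList<String>();
         void push(String item) { items.add(item); }
         String pop() { return items.remove(items.size() - 1); }
         boolean isEmpty() { return items.isEmpty(); }
     }

     @Test public void newStackIsEmpty() {
         assertTrue(new Stack().isEmpty());
     }

     @Test public void popReturnsWhatWasPushed() {
         Stack stack = new Stack();
         stack.push("element");
         assertEquals("element", stack.pop());
         assertTrue(stack.isEmpty());
     }

     @Test public void lastPushedIsFirstPopped() {
         Stack stack = new Stack();
         stack.push("first");
         stack.push("second");
         assertEquals("second", stack.pop());
     }
 }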

Used well, a code review process also tracks its own effectiveness, in terms of standards violations found, defects found, and so on. Most review processes classify the findings by importance.

One of the biggest drawbacks to code reviews is that they take place after a relatively large amount of code has been produced. The changes the review proposes can be somewhat disruptive to the development process. While reviews surely save money compared to letting the problems go further downstream, even better would be process improvements upstream.

Suppose that a team instituted PairProgramming and UnitTests upstream from reviews. Might the problems found by reviews drop so far in number and importance that the reviews were no longer economical?

-- RonJeffries

BradAppleton raises IssuesOnReviews.

I have been pushing PairProgramming for some time at work, and pointed people to the research by LaurieWilliams. However, our Quality Manager was unconvinced and asked if there was any research comparing the effectiveness of PairProgramming against code reviews. I had to admit that there was none that I knew of. Has anyone done an experimental comparison? -- DaveKirby


PERT Charts:
PertCharts, Gantt, and other "program management" tools are often used as part of the planning process. In many cases, the schedule shown by these tools bears little relation to the actual schedule. Especially welcome here would be comments on where these tools have actually helped.

I (RonJeffries) have used, directly or with my project administrators, several generations of these planning tools. They had several drawbacks: they were time-consuming to keep up to date; they fell out of sync with reality very rapidly; printed so nicely, they seemed more true than they were; and often we couldn't make the program display what we really thought would happen, because of limitations in the software.

XP uses the PlanningGame, with CommitmentSchedule to show the overall look of things, and IterationPlan to show the next few weeks. It is trivial, literally, to change the plan: pick up a card and move it from one place to another on the table.

Could some way of recording what the plan is, and how progress is being made, either replace your favorite PERTGanttHarvard standard, or feed into it to generate company-standard reports? That way you could have the best of both worlds: you would know what was really happening, and you could print out the reports people have come to respect. It would be more expensive than just doing XP's planning, but most of the extra work could be done outside the mainstream planning and development, so it needn't slow things down.

-- RonJeffries
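
By way of illustration only (a sketch, not any particular team's tooling): the cards on the wall could be recorded as a trivial data structure and dumped into whatever format the company-standard reporting tool imports. The story names and estimates below are invented.

 import java.util.Arrays;
 import java.util.List;

 public class IterationReport {
     // One card on the wall: story name, estimate in ideal days, done or not.
     static class Story {
         final String name; final int days; final boolean done;
         Story(String name, int days, boolean done) {
             this.name = name; this.days = days; this.done = done;
         }
     }

     public static void main(String[] args) {
         List<Story> iteration = Arrays.asList(
             new Story("Customer search", 3, true),
             new Story("Invoice printing", 2, false),
             new Story("Audit log", 1, false));

         int total = 0, done = 0;
         for (Story s : iteration) {
             total += s.days;
             if (s.done) done += s.days;
         }

         // Emit whatever the company-standard tool can import; plain text
         // here, but a CSV row per story for a Gantt tool would be as easy.
         System.out.println("Iteration: " + done + " of " + total + " ideal days complete");
         for (Story s : iteration)
             System.out.println("  [" + (s.done ? "X" : " ") + "] " + s.name + " (" + s.days + " days)");
     }
 }

The cards on the table remain the real plan; the official-looking report is regenerated from them in seconds whenever someone needs it.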

One project GotaHandleOnStatus. -- BradAppleton

I have seen such charts used to good effect in the past. IMO, the difference between good and bad use of charts lies in recognizing that the future projected in the chart is an estimate. When I've seen charts used well, the chart was kept separate from the project's deadlines: deadlines were set first, and then the Gantt/PERT/etc. charts were used to see whether the project was on track to finish by the deadline. Management recognized that this was approximate. -- BrentNewhall


AlejandroGoyen's discussion of management-required Gantt/PERT/etc. charts on an XP project moved to ChartsOnExtremeProgrammingProject.


Alternatives to OnsiteCustomer

An alternative to the OnsiteCustomer that I found highly useful was "Concept Engineering." I haven't heard much about this approach since the early 1990s, and I haven't been able to locate my reference material (sorry!).

The approach was to create multiple two-person interview teams from the development staff and to identify your potential market segments. Within each market segment, the teams went out to interview 10-20 customers and potential customers. Each interview was transcribed. After each interview, key descriptive statements were identified. After all interviews were completed, a group evaluation method was used to consolidate the individual details into a hierarchical description.

I have yet to find any method I would consider its equal in getting the development team to understand the customers' environments. There is nothing like seeing firsthand what the customer is trying to do and the problems he may face.

Especially in the commercial world, it may not be feasible to identify a single truly knowledgeable individual and have him relocate to your development site. If you are the new kid on the block, you often have to develop the first release of your product before you have any end users.

There are other means of requirements gathering, but in the end, I think that the responsibility for generating UserStories, doing the PlanningGame, and creating AcceptanceTests usually falls to the development team. It might be nice to have an OnsiteCustomer to do these things, but I think the situations where an OnsiteCustomer is applicable are rare.

-- WayneMack


Please add more practices and experiences to this page.


CategoryExtremeProgrammingDiscussion