The term BigDesignUpFront is commonly used to describe methods of software development where a "big" design is created before coding and testing takes place. Several ExtremeProgramming (XP) advocates have said that such "big" designs are not necessary, and that most design should occur throughout the development process. While XP does have initial design (the SystemMetaphor), it is considered to be a relatively "small" design. Much of this page disputes the amount of up-front design required for software projects.
"every OOA/D begins with identifying the problem space entities that are relevant to solving the problem. Once the relevant entities are identified one identifies the relationships in the problem space between those entities that are relevant to solving the problem in hand. One records both in a Class Diagram. Only then does one move on to identifying detailed responsibilities, dynamics, and flow of control collaborations." - H.S.Lahman (in 2006, folks)
"In preparing for battle I have always found that plans are useless, but planning is indispensable." [Eisenhower quote taken from PlanningExtremeProgramming]
Or as JerryWeinberg put it "the documentation is nothing; the documenting is everything".
"A mistake in initial dispositions can seldom be put right." and "No plan survives its first contact with the enemy." [Field Marshal von Moltke] -- JoshStults?
Perhaps it is the social and intellectual activity of designing, rather than the designs themselves that are important. Is it the scope of design or the attitude towards design that needs to change? -- ChrisSteinbach
[This section originally followed the Smalltalk design/prototyping discussion.]
Even in Smalltalk, though, some design has to be done up front, especially on really big enterprise-scale systems. There has to be some sense of what goes on the desktop, how many services there are and how many machines are needed to support them, what has to connect to the legacy systems and how, and that sort of thing. I can't really stand up in front of a project review team for a $60M call center installation and suggest that they "read the code" in order to understand whether the resulting system will satisfy the very real 3-second response-time constraint.
How does writing something in a design document provide any understanding of "whether the resulting system will satisfy the very real 3-second response-time constraint"? Of course one would not suggest that the project review team "read the code"; one would run the code and show that it provides acceptable response time. Stepping further back, one realizes that the "3-second response-time constraint" is probably also a product of BigDesignUpFront, and that a 3.5-second response may be sufficient or a 2.9-second response time may not be acceptable. The numbers in Functional Requirements Documents are rarely justified and probably cannot be justified in paper experiments.
I prefer the DesignIsNotaDevelopmentPhase approach, where the idea is to capture the design, and therefore encourage it to evolve. Presumably the big seams begin to stabilize relatively early on, while some small ones stay in flux until deployment (or even afterwards...).
-- TomStambaugh
Even in Smalltalk, though, some design has to be done upfront [...]. [What goes on the desktop, number of services and support machines, legacy connections, etc.]
Which of these things can't be decided one at a time, when its turn comes to be implemented?
I can't really stand up in front of a project review team for a $60M call center installation and suggest that they "read the code" in order to understand whether the resulting system will satisfy the very real 3-second response-time constraint.
What makes reading the document more convincing? When our top customer challenged whether our system could be fast enough, we measured it, ran it in 10 parallel processes and measured it again. Did the math. That was convincing. -- RonJeffries
Did you document those tests? Make them repeatable? Did you prove that they would scale and the results would scale predictably?
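To make that concrete: below is a minimal sketch of such a repeatable check, in Python, assuming a hypothetical handle_call() stand-in for the real transaction and the 3-second budget from the example above. Nothing here is from the original discussion; it only illustrates the kind of script one could keep in revision control and rerun on demand.

 # test_response_time.py - a repeatable response-time check.
 # handle_call() is a hypothetical stand-in for the real call-center
 # transaction; the 3-second budget comes from the constraint above.
 import statistics
 import time

 def handle_call():
     time.sleep(0.05)  # placeholder for the real work

 def test_response_time_budget(samples=100, budget_seconds=3.0):
     durations = []
     for _ in range(samples):
         start = time.perf_counter()
         handle_call()
         durations.append(time.perf_counter() - start)
     print("median=%.3fs worst=%.3fs over %d calls"
           % (statistics.median(durations), max(durations), samples))
     assert max(durations) <= budget_seconds, "response-time budget exceeded"

 if __name__ == "__main__":
     test_response_time_budget()

A script like this is as reviewable as a document, and unlike a document it can be rerun whenever the system changes.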
The ones that have to be funded by the client before the subsystem is built. The ones where the client wants to know ahead of time how much the total solution is likely to cost.
What makes reading the document more convincing?
The fact that the document is written in English, and most of the review team (particularly the managers and recommenders whose approval is needed to close the phase and pay the bill) haven't read or written code in 30 years, if at all. Many system problems, such as call centers, don't succumb so readily to simple demonstrations of blazing speed. In my example, the response time of a call center is as likely to be affected by network load and the number of network transfers (because of network latency) as the performance of any one system. A data transfer that requires 100 hops at 100 milliseconds per hop is going to burn 10 seconds in transfer time, even if the participating systems are infinitely fast.
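The hop arithmetic above is easy to write down as a back-of-envelope budget. A tiny sketch (the numbers are taken from the paragraph above; nothing else is assumed):

 # Transfer time alone can blow the response budget, even if the
 # participating systems are infinitely fast.
 hops = 100
 latency_per_hop_ms = 100
 transfer_time_s = hops * latency_per_hop_ms / 1000.0  # 10.0 seconds
 print("transfer alone: %.1fs against a 3.0s budget" % transfer_time_s)

Whether that calculation lives in a design document or a script, the point stands: the budget is dominated by the interconnections, not by any one system.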
In the hardware world, the "design", especially of a complex system, is how you answer questions about how close to critical limits you are, what margins of safety you've built in, how you know that a particular beam is thick enough or cooling element big enough. The same sort of thing has to be done in software, especially in mission- or life-critical applications. And it is NOT all in the source code, it's as much in the interconnections among components. And those aren't in the source code. No code is an island.
The words BigDesignUpFront do bring to mind something of the aim of much architectural design in the building sense. Good design often follows where Big Ideas precede Detail Design. Conversely it is rare for small scale, pragmatic Detail Design alone to bring forth Great (or even worthy) Architecture. Of course, some Big Ideas are so far removed from the pragmatics as to be spurious as Design generators. As noted in NooHasNothingToDoWithSoftware, the key seems to be to correlate Big Ideas that have a genuine relationship with the item being designed.
BigDesignUpFront in architecture does not require that every aspect of the design is predetermined. However it does provide a powerful reference for evaluation of subsequent ideas in the iterative design process.
-- MartinNoutch
We are perhaps learning the need to be humbler at this stage of software development, Martin. But I guess the nearest parallel in ExtremeProgramming to what you're referring to as good and necessary BigDesignUpFront in physical architecture is the SystemMetaphor. After that greatness comes (if it ever does) through brilliant teamwork on the detail. But see also the unresolved questions in ExternalAndInternalDesign. -- RichardDrake
I'm not sure about the ExtremeProgramming people or the exact meaning of BigDesignUpFront, but a high-level up-front design is generally regarded by most as important to large-scale object-oriented software systems. There's a tremendous amount of quantitative evidence to support the position. However, in software, this high-level design is very dynamic and changes as the implementation is created. For a long time, one could get away with nothing more than CodeAndTest and Refactoring, but as systems become more complex, a need arises for some consideration of high-level designs and strategies. This may be why there is such a rift between SoftwareArchitects and ExtremeProgramming. Extreme Programmers don't agree with this position - and they may be right for the type of apps they develop. But some ExtremeProgramming enthusiasts are even hostile to the idea of SoftwareArchitecture. -- RobertDiFalco
I'm clearly the only programmer left who doesn't use Smalltalk at all.
I'm in an environment where I get pieces of a very large system to write. I start out by writing up a little analysis document that I use to ensure that I understand the requirements. I then write up a high-level design and check to make sure that it will take care of all the requirements in the analysis. Then I refine the high-level design into a low-level design. I often write code to do this; sometimes I decide that TheSourceCodeIsTheDesign, other times I UsePrototypesToChallengeAssumptions.

I don't, however, allow the design document to omit any information that is needed in order to understand the code. That means that I'm always twiddling the design doc while I'm writing the code. It's not really all that onerous either, because I use the design doc as a sort of checklist that tells me when I'm done with the code. If I change my mind about how to do something, I change the doc so that I can check off what I've really done. If I didn't do that, then I'd have stuff on the doc that never got checked off and I'd have code that got written for no apparent reason. It seems like the same kind of relationship the Extreme guys have with Unit Tests. One result is that the design is definitely not done until the code is. We keep our design docs in revision control and I find that I always check in my designs when I check in my code.
I guess my point is that I believe strongly in BigDesignUpFront - but only for a little part of the whole program at a time. Wouldn't that be LittleDesignAllAlong??
An ExtremeProgrammer might write tests to represent what the code had to do, and then code until they ran. What is the documentation cycle adding to the story? If you have the tests, why aren't they sufficient to know when you're done? And if you don't, how do you know when to check something off? -- RonJeffries
What output, in the end, does one have from the testing process? Is it correct that essentially one has, after the tests have run, simply a linear numbered list such as "test0001 - OK. test0002 - OK. test0003 - failed. test0004 - OK."? Such a list fails to relate the significance of the test to the architecture of the system, in terms the customer can understand. Therefore, it could be argued that what the documentation cycle "adds to the story" is an additional level of abstraction of describing what the system should do in an arbitrarily expressive form (i.e. natural or formal language). However, with additional tools, perhaps the output of the testing process could be raised to a higher level of abstraction - say, by putting tests within a hierarchy, modeling dependencies between tests, etc. However, perhaps exactly this sort of modeling is the type of "Design" that XP claims is unnecessary.
It helps if the tests are named after what they're testing - "testFunctionBlahReturnsTrueForGoodValues - OK, testGettingResultsFromDatabase - OK, testSomeOtherBusinessRule - OK"
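For instance, here is a minimal sketch using Python's unittest module - the is_valid() rule and the test names are hypothetical, just to show how named tests make a run read as a description of the system rather than a numbered list:

 # Tests named after the behavior they verify; verbosity=2 prints each
 # test name with its result. is_valid() is a made-up business rule.
 import unittest

 def is_valid(value):
     return value > 0

 class BusinessRuleTests(unittest.TestCase):
     def test_returns_true_for_good_values(self):
         self.assertTrue(is_valid(42))

     def test_rejects_negative_values(self):
         self.assertFalse(is_valid(-1))

 if __name__ == "__main__":
     unittest.main(verbosity=2)

Grouping tests into classes and suites also provides some of the hierarchy asked for above, without a separate design notation.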
I don't keep up to date on the methodology wars, and it seems everyone's more on a patterns bandwagon of late, but the last I recall (circa 1992) the prevailing sensible view (i.e., the one matching mine) was some variation on cyclical development (recursive-parallel, iterated "analyze a little, design a little, code a little, test a little", whatever).
Haven't looked at TheDeadline, but the write-up here makes it sound as though a lot of experience is being discarded, especially as indicated in the "don't allow implementation until the very last minute" notion.
I'm big on design, compared to many here (to me the work is in design; coding is the process of converting that into a specific computer language and typing it in, and occurs quickly and semi-automatically) but always within that context of a larger cycle. This BigDesignUpFront thing as described seems to be trouble waiting to happen. -- JimPerry
Maybe BigDesignUpFront is the correct solution when you already know everything you need to know and are ever going to know about a system before you start it. Why experiment and iterate if you aren't going to learn something?
Sounds impossible as stated, boring if it were. If I knew everything, wouldn't I know the design and not need BigDesignUpFront? -- r
Maybe BigDesignUpFront is the correct solution when the medium you are working in is incomprehensible once it is built/written (e.g. machine code).
No one ever built a machine code program without debugging. We used to do lots of thinking and desk checking not because it was better than debugging, but because it was hard to get time on the machine. -- r
Maybe BigDesignUpFront is the correct solution when the medium you are working in is unchangeable once it is built/written (e.g. Hubble Telescope).
[actually, wasn't there a big problem (focus) when Hubble (the telescope) first went up in 1990? Oh, and since it was only corrected 3 1/2 years later, this is where BDUF comes in...]
Actually the Hubble was built on the ground, and changed in space. I think, though, that the learning point is most important. You will learn: what's the best way in your circumstances? -- RonJeffries
Interestingly, the Hubble's big problem came about because a test was incorrectly set up, and the telescope was built to pass that test. -- DaveVanBuren
-- StanSilver
Off-topic clarification: The Hubble's main mirror was ground based on a faulty reference lens. The design was perfect, but an implementation flaw crept in. The flaw was an aberration of some sort, I think the result of an incorrectly calibrated instrument. In relation to that lens, though, the main Hubble mirror was the most precisely built instrument ever at the time it was created. The application of the aluminum coating, used for ultraviolet reflection, was a big worry (if the aluminum oxidized, it would have been about as reflective as black felt to ultraviolet light), but it went off without a hitch because the process was so well designed and understood in advance. The reflectivity exceeded their best hopes! They called the fix "glasses" for the Hubble, but I forget whether it was optical or electronic. There are many web sites devoted to the instrument. I'm not sure whether this anecdote has any application to this discussion. Beware of metaphors! -- Brent
Additional notes to the above clarification, at the risk of drifting further off-topic: The corrective fix was optical, and called COSTAR. It was actually two lenses on a single device, placed between the big mirror and space. They refracted the light in such a way that it nullified the imperfection in the main mirror (light from the center and light from the sides weren't being focused on the same place, resulting in blur). The problem occurred because a tiny fleck of paint had flaked off a metal tube, creating an imperfection on one of the calibration lenses, with the results described above. To bring it back to the main point of this page: I don't think any BDUF designer could have anticipated that something as minuscule as a small fleck of paint would cause this problem, as it's not in the scope of the Big Design. Had the engineers followed a continuous testing-implementing-testing cycle, they would have been more likely to spot such a problem, I think. (Sorry if this is 'Wrong'-ish on this Wiki, first time posting after lots of reading with great interest.) -- SanderBol?
Maybe the primary purpose (primary use?) of design notations is for tinkering, not for specifying? Maybe the better they are for specifying, the worse they are for tinkering, so the complicated ones don't get used?
If I understand correctly, XPers seem to have come to the conclusion that stories, CRC, unit tests, Smalltalk (and Wiki?) are the best phases/mediums for doing their tinkering and rearranging. Others do major tinkering in some design phase/design notation during iterations; still others do major tinkering in a big design phase/design notation at the beginning of a project. -- StanSilver
I can't speak for XPers, but I prefer to do my tinkering and rearranging in a medium that can tell me whether my current tinkering is working. I just don't trust my ability to think through everything - I know from experience that I always overlook something. I do most of my tinkering in code because it lets me know right away when my thinking is flawed. -- JasonArhart
Group 1 concerns itself with specifying and capturing the complete and correct output of each transformation phase (since the assumption is that it is correct).
When some of group 1's code isn't accepted by the user, it is a "mistake" - they didn't do one of their transformations correctly. When some of group 0's code isn't accepted by the user, they fix it. -- StanSilver
I think the tendency to do BigDesignUpFront may come from the confusion that non-programmers (I suppose) have: They see coding as manufacturing, not as design. So, quite logically, they want the product to be designed before it's manufactured, like in any other industry. Except that, after the supposed design is done, another design (plus refactorings) must be done by the coders. I suggest that, most of the time, seeing coding as manufacturing is a NotionSmell?. -- AnonymousDonor
'Design' has to be one of the most over-used and mis-used words around at present. Consider whether other words are really meant, e.g. 'make', 'write', etc. For me, 'Design' encompasses a wide-ranging process of investigation, evaluation, problem-solving and creative decision-making. Thoughtful analysis is the key to this.
The outrage of so much of modern (building/environmental) 'design' is that there is clearly no commitment to this process of thoughtful analysis, just a quick, commercially led and superficial decision-making process that only just precedes the actual making of the building by the Contractors.
-- MartinNoutch
A good example IMHO of ConceptualInertia is the Tunes Project: http://www.tunes.org
I think BigDesignUpFront (a.k.a. BDUF) is a waterfall-style attempt to do lots and lots of one phase (specifically design) before starting the next, and thus, to forsake any chance to get feedback.
The alternative probably ought to be called DesignAsYouGo (later: or better still, ContinuousDesign). -- PaulChisholm
Actually, I think it should be called NotEnoughDesign? - I don't have a complete enough design that I can formally test against formal requirements (which in XP are the code and unit tests), but I have enough of a design to send me down a wrong path with costly rewrites.
I'd argue that everyone does some level of up-front design (at some level of granularity), simply by knowing the context. Of course, it would be a very interesting experiment to try to build a system without the people who build it knowing the context. Say, just give a programmer a story (or part of one) and the existing code, with nothing else. And then have a control team that works with full knowledge of the context.
I see this fairly often: programmers build "infrastructure" without knowing how those systems will be used. The result is typically bloated and difficult to use, because YouMightNeedIt. A clear idea of requirements and/or the YAGNI principle is needed to cure this ailment.
[Don't know whether the following still belongs here]
I now prefer to speak of discovering a system rather than building one. Then you think about your system as a tree, with the "whole" represented by the root and definite details as the leaves. With traditional methodologies, you do breadth-first search/discovery. With agile ones, you do depth-first. As anyone who has implemented tree search algorithms knows, in depth-first you need to hold less state, so your tree can actually change outside the branch you're currently exploring without affecting your search.
-- VladEnder
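The state-holding point can be made concrete. A small sketch (the tree and its labels are illustrative only, not from any real system) compares how much each discovery order must remember at its widest point:

 # Breadth-first must remember every open branch at the current depth;
 # depth-first holds roughly just the current path plus pending siblings.
 from collections import deque

 tree = {"whole": ["ui", "domain"], "ui": ["form", "report"],
         "domain": ["rules", "storage"], "form": [], "report": [],
         "rules": [], "storage": []}

 def bfs_max_frontier(root):
     queue, widest = deque([root]), 0
     while queue:
         widest = max(widest, len(queue))
         queue.extend(tree[queue.popleft()])
     return widest

 def dfs_max_stack(root):
     stack, deepest = [root], 0
     while stack:
         deepest = max(deepest, len(stack))
         stack.extend(tree[stack.pop()])
     return deepest

 print(bfs_max_frontier("whole"), dfs_max_stack("whole"))  # prints: 4 3

The deeper and wider the tree, the bigger the gap - which is the claim about how much design state each approach has to carry at once.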
I think XP is more about going from the leaves up to the root, and BigDesignUpFront is more about going from the root down to the leaves. This is why XP tends to discover the tree. It starts from the leaves (the real needs of the customers) and refactors as it discovers more leaves. Then it designs some branches, and so on, up to the root. For me, that's the meaning of DesignAsYouGo or ContinuousDesign.
-- CyrilleGachot?
I found this statement interesting because it is 180 degrees opposed to my own viewpoint. I feel the customer (specifically the front-line users) knows exactly what the project should become. It is the developers who need to learn what the project should become. The question is which means of communication from the users to the developers are the most effective and efficient.
The problems I see with "Big Design Up Front" are that the two parties most interested in the communication (users and developers) are omitted; and the context of the user environment is lost when a large document listing precisely defined requirements is created.
I have become firmly convinced that there is no substitute for having individual developers spend a day with individual users and understanding the environment of the user's job. The developers invariably come back from the experience with drastically changed views of what they should be doing. Developers have to intuitively make hundreds of unconscious decisions while creating software; direct user experience provides a context for those decisions.
Again, the point is how best to get information and knowledge communicated from users to developers.
-- WayneMack
I think we should preserve the wording of "Big Design" linked with "Up Front". I don't think we can limit the size of the design, just the amount we do up front. Design should be an ongoing activity.
I'm not sure that you can prefer spatial thinking to logical. Both are heavily ingrained in the brain, as evidenced by a few sentences taken from any conversation. Things that have nothing to do with space or force are expressed in those terms.
I think both types of thinking are needed, the question is the order in which they are applied. In BigDesignUpFront, one tries to apply only spatial thinking to create a design followed by only logical thinking to implement the design. In the XP iterative approach, one first applies logical thinking to DoTheSimplestThingThatCouldPossiblyWork, then one applies spatial thinking to ReFactor, then one repeats the entire cycle. -- WayneMack
Sorry. Such is the nature of ThreadMode. This page is not a design - it is an ongoing discussion waiting to be trimmed down to DialecticMode some fine day. Have at it.
I missed the irony. Is it that the BigDesignUpFront page was not designed and developed in one pass, but was done iteratively?
My long years of experience in systems (over 35 years) have taught me that up-front design is always good. The only problem is in how much knowledge is available to do the design. BDUF is only possible in some situations. Consider these differing types of projects:
But in many other cases, up-front design will save overall time and money, and contribute to a better system. For example, in a large project (over 100 staff), how do you even know it is a large system unless some up-front work is done? Can a project manager set 100 programmers loose without some idea of what is to be built? (maybe in the case where the team has done the system many times already...).
An experienced project manager knows that a plan (or design) is not perfect, and will evolve as more information becomes available. Good development methodologies allow for multiple iterative passes from the high level to detail, and call for prototyping of critical components early in the project.
I hope this contributes to the debate. -- NeilCarscadden?
First off, the BDUF crowd needs to acknowledge that even in their world architecture and design are not completely specified up front but are in fact emergent. In nearly all projects, a certain amount of experimental work has to be done to answer any significant questions or mitigate any serious risks that could impact the success of the effort. There nearly always is a set of unknowns that can be resolved only through actual construction. This process is generally called R&D (research and development), and the result is a prototype that defines the basic nature of the end product. It establishes a framework upon which everything else can be built. The BDUF crowd tends to delineate this as a separate task (or sometimes even a separate project), and in many cases it's performed by a different team. XP, on the other hand, just considers it to be part of the overall process.
Second, the XP crowd needs to acknowledge that even they commit to a design early on. One has to in order to get anything done. One cannot rework the whole system for each new feature. There comes a point where a particular approach must be selected and adhered to. Again, the BDUF crowd tends to explicitly demarcate when this happens, whereas XP does not.
Really, the big difference between the two approaches is in how they treat R&D. BDUF tends to separate it from actual production work, with one leading into the other. XP, on the other hand, combines the two, thus enforcing the idea of emergence.
-- Milo Hyson
CategoryDesignIssues, CategoryPlanning, CategoryAnalysis
This page mirrored in ExtremeProgrammingRoadmap as of April 29, 2006