Friday, March 9, 2007

Code Read 7 - David Parnas on Star Wars Software

In 1985, David Parnas resigned from his position on a panel convened by the Strategic Defense Initiative Organization, which was overseeing the "Star Wars", or SDI, anti-ballistic missile defense program. Along with his resignation he submitted several short essays explaining why he thought the software required for Star Wars could not be built, at that time or in the foreseeable future. Those essays were later collected and published, and are the subject of Code Read 7.

Some of the essays deal with issues such as using SDI to fund basic research (an idea in which he did not believe), or why AI would not solve the problems, but his core arguments focus around two main themes.

1) Software can not be reliable without extensive high-quality testing, and such testing could not be done for SDI.
2) Our ability to build software is insufficient to build SDI.

Scott Rosenberg, the author of Code Reads, seems to be asking what, if anything, has changed since 1985. Sadly, the answer is "Not much". Indeed, Parnas's paper is the most current of all the Code Reads sources, in its view of the software industry. It could have been written in 2007 just as easily as in 1985.

Testing is important

Of his two main themes, the first is the easiest to discuss. Basically, software is built broken. It needs to be tested before it works smoothly enough to be considered functional. Nobody has ever done it otherwise, despite their best efforts. Some have come close, but many have failed completely. All software needs to be refined, in situations very similar to its real usage, before it can be considered reliable. This is not news to anyone. Parnas makes it very clear how difficult it will be to do this with SDI.

Even if you exhaustively work to prove each component correct, or test each component extensively as you build it, the resulting system is still not trustworthy until it has been tested.

"If we wrote a formal specification for the software, we would have no way of proving that a program that satisfied the specification would actually do what we expected it to do. The specification itself might be wrong or incomplete." - David Parnas


A classic, and tragic, example of this problem is the Mars Climate Orbiter. Despite a rigorous testing process, a software error - one module produced thruster data in imperial units while another expected metric - still caused the loss of the probe.

Software is hard

His arguments about our ability to build software proceed in three basic steps:

- Software is harder than other things we build.
- The way we build programs ensures there will be bugs.
- There does not seem to be hope for a much better way to build software.

Dijkstra, Brooks, and Knuth (as quoted in previous posts) have explained many reasons why software is hard. Parnas provides another reason - the discontinuity of software. He compares the structures of analog hardware, digital computer hardware, and software, and argues that since software is discontinuous, and has a large number of discrete states, it is much less amenable to mathematical analysis. This analysis is the main reason why non-software engineering projects are reliable.

For example, a structural member in a bridge has two states, "intact" and "failed". The behavior of the "intact" state is well understood: we have good mathematical models for the part's deformation under load, response to temperature, resistance to wind or water, degradation over time, and so on. The transition between "intact" and "failed" happens under fairly well understood circumstances. And we mostly just hope the "failed" state never happens. The same pattern of logic can be applied to almost all parts of the bridge.

Software systems, on the other hand, have many components, each with generally poorly understood behavior (compared to physical engineering), and many states. Indeed, most software approaches what we now call "chaotic" behavior. Although software does not usually fulfill all three formal requirements of chaos, most software comes quite close. So on top of the layers of complexity, and depth of scale, most software is also, for practical purposes, chaotic.

How we build software with bugs

We try to manage this complexity by creating a logical model which we can use to break out smaller components, which themselves are broken into smaller components, and so on, until we are writing step-by-step instructions.

But this process is hard to do well. While we can write precise formal specs, "it is hard to make the decisions that must be made to write such a document. We often do not know how to make those decisions until we can play with the system... The result will be a structure that does not fully separate concerns and minimize complexity."

And "even in highly structured systems, surprises and unreliability occur because the human mind is not able to fully comprehend the many conditions that can arise because of the interaction of these components. Moreover, finding the right structure has proved to be very difficult. Well-structured real software systems are rare."

Additionally, we have the difficulty of translating those structures into code. Generally, we write programs as step-by-step algorithms, "thinking like a computer". We can sometimes do this in a top-down fashion, as Dijkstra proposed in his "Notes on Structured Programming", but even that uses a "do-this-then-do-that" approach. Various attempts have been made to find other ways, but none has found wide success.

"In recent years many programmers have tried to improve their working methods using a variety of software design approaches. However, when they get down to writing executable programs, they revert to the conventional way of thinking. I have yet to find a substantial program in practical use whose structure was not based on the expected execution sequence."

This provoked a heated discussion at Code Reads, but I think the fundamental point is that while other techniques exist and do provide real benefit in many cases, they are all ways of working with a larger problem, of structuring the overall approach. At its finest level, software is algorithmic, and algorithms are specified in sequential steps.

There are generally two main non-algorithmic ways to program. The first is to relieve ourselves of some of the work of creating complex sequential algorithms by specifying rules and having some system implement those rules. The second is to formally isolate unrelated portions of an algorithm so they can be run in parallel.

The first case, using rules, is simple enough in restricted settings, but such systems become as complex as general programming languages in more general cases, and one eventually finds oneself writing rules to describe an underlying algorithm.
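As a toy illustration of the first case (everything here - the rules, the state, the helper names - is invented for the example): a miniature rule system where rules fire whenever their conditions hold. Note that underneath, the "system" that applies the rules is still an ordinary sequential loop.

```python
# Toy rule system: each rule is a (condition, action) pair over a state dict.
rules = [
    (lambda s: s["x"] < 0,  lambda s: s.update(x=0)),           # clamp negatives
    (lambda s: s["x"] % 2,  lambda s: s.update(x=s["x"] + 1)),  # make even
]

def run(state):
    # Apply rules until none fires. The rule engine itself is just a
    # plain sequential loop -- the step-by-step execution hasn't gone away,
    # it has only been hidden inside the "system".
    changed = True
    while changed:
        changed = False
        for cond, action in rules:
            if cond(state):
                action(state)
                changed = True
    return state

print(run({"x": -3}))  # {'x': 0}
print(run({"x": 5}))   # {'x': 6}
```

In restricted settings like this, the rules read declaratively; as the rule set grows, ordering and interaction between rules start to matter, and one is back to reasoning about an underlying algorithm.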

The second case works quite well, but each of the many parallel computations ends up being done using the same old step-by-step sequential executions.

And in both cases, we continue to make the same fundamental mistakes about the overall structure of our system because we don't fully understand its behavior.

Better tools and techniques

Lastly, Parnas addresses the hope that improvements in methodology or tools will alleviate these problems. At the time he was writing, he saw four main threads in tool and methodology improvements (I'm combining two of his essays):

1) Structured programming
2) Formal abstraction
3) Cooperating sequential processes
4) Better housekeeping tools

Back in the 1970s, according to Parnas, this was academic "motherhood" - nobody could object. Today, in my experience, this view is industry-wide. A few people will argue that we've all been brainwashed and are now blind to alternatives, but even in the portion of our community which is most open to new ideas, these four are still the dominant paradigms.

Parnas argues that we are now in the days of incremental improvements in software, rather than rapid and dramatic advances, saying "Programming languages are now sufficiently flexible that we can use almost any of them for almost any task." And even things like non-algorithmic specifications still suffer from the same problems as writing code: "...our experience in writing nonalgorithmic specifications has shown that people make mistakes in writing them just as they do in writing algorithms."

Conclusion

One could argue that Parnas is a dead-end thinker, saying nothing more than the status quo is bad, and it is all we will ever get. Instead, we must remember Parnas is talking about the most ambitious and complex software project ever conceived, and saying that the particular project is beyond our capabilities, not that software in general is beyond our capabilities.

However, I do think he misses a possible way out. I say "possible" because I do not know if it is a real solution, or just a fantasy. I think we need to improve the way we think about our own solutions. Then we can build systems that are less prone to the kinds of complexities which befuddle us.

To use an analogy, after the Tacoma Narrows Bridge disaster, civil engineers added some new factors to the way they think about bridges: "wind resistance" and "harmonic effects". Software engineers are still looking for what those factors are. I do not believe that we have a mature set of factors to consider when we design software, and I do believe that we can discover what those factors are.

In fact, I hope this blog will help me discover them, and I'd love to hear any suggestions.

Read more...

Monday, February 19, 2007

Code Read 6 - Mitch Kapor's Design Manifesto

This installment of Code Reads takes a huge leap from the world of academia and the historic foundations of our discipline to something more recent: Mitch Kapor's Software Design Manifesto. In this passionate essay, Mitch Kapor extols the virtues of designing software with the user experience in mind, and advocates developing a profession of software design.

Today the phrase "software design" (like "architecture") has come to have so many definitions that Humpty Dumpty would be grinning from ear to ear. So before we can comment on what Mitch Kapor had to say, we need to pay some attention to what he actually did say, and not what we read with our modern vocabulary.

First of all, he says "Software design is not the same as user interface design." However, he does go on to say that user interface design is an important part of software design. In his effort to wrench software design away from the pure engineers he repeatedly comes back to the importance of the user interface, and his prime motivation is a better "user experience", so it is easy to forget that he is talking about something larger than the UI.

He fervently opposes taking a purely engineering view of a program. "One of the main reasons most computer software is so abysmal is that it’s not designed at all, but merely engineered. Another reason is that implementors often place more emphasis on a program’s internal construction than on its external design..."

When he talks about software design, he is talking about the "metaphor" of the software. When describing his experience with good design, he says "It is the metaphor of the spreadsheet itself, its tableau of rows and columns with their precisely interrelated labels, numbers, and formulas ... for which [Dan Bricklin] will be remembered".

A modern example of a successful metaphor is the GUI windowing system that almost all of us love and know. It was not any individual UI that gave this approach its staying power - it was the simplicity and utility of the metaphor.

Unfortunately, it is an almost universal experience that finding such good metaphors is hard work. Finding non-programmers who have the skills required to do this is hard: "Many people who think of themselves as working on the design of software simply lack the technical grounding to be an effective participant in the overall process. Naturally, programmers quickly lose respect for people who fail to understand fundamental technical issues. The answer to this is not to exclude designers from the process, but to make sure that they have a sound mastery of technical fundamentals, so that genuine communication with programmers is possible." I have had quite a few arguments about software design with people who lacked even an accurate technical vocabulary, let alone a solid grounding.

However, the converse is also often true - many good programmers lack the leadership and aesthetic skills to make good designers: they often veer towards the stereotypically dictatorial auteur, or allow the engineering of the software to usurp the design.

People who try to play this role must have a very broad background, and depth of insight in a variety of fields, as well as strong leadership skills. Mitch Kapor suggests, "technology courses for the student designer should deal with the principles and methods of computer program construction. Topics would include computer systems architecture, microprocessor architectures, operating systems, network communications, data structures and algorithms, databases, distributed computing, programming environments, and object-oriented development methodologies." To which I would add "graphic design, industrial design, ergonomics, the study of perception, and psychology". Then I would also add some management training, and small-group dynamics, because good designers must also be good leaders - able to convince people of their point of view, and inspire collective effort.

We also clearly need some technical development in the field. "In both architecture and software design it is necessary to provide the professional practitioner with a way to model the final result with far less effort than is required to build the final product. In each case specialized tools and techniques are used. In software design, unfortunately, design tools aren’t sufficiently developed to be maximally useful." In fact, I would say they are only just beginning to be developed to be minimally useful - there is nothing very good right now for modeling the metaphor, or design of a program, without actually building the program.

Despite these obstacles, I am a firm believer in the ability of good design (especially a good metaphor) to make the difference between a Chrysler Building and a Robert Taylor Homes. This is, in fact, what I am hoping to learn about as I continue my education, and I will share what I find here on this blog.

Read more...

Saturday, February 17, 2007

Code Read 5 - Knuth's "Structured Programming with Go Tos"

Code Reads 5 takes us from Dijkstra to Knuth, and his humorously titled "Structured Programming with Go To Statements". In it, Knuth addresses how the field seems to have missed the real point of Dijkstra's structured programming, and instead focused on mindlessly eliminating "go to" statements. Knuth quotes Hoare when saying the most important point is "the systematic use of abstraction to control a mass of detail", not eliminating a particular programming tool.

The main topic of Knuth's article is not abstraction, but rather examining the uses of "go to" that improve a program, and how newer (in 1974) language constructs (like break statements and case statements) can almost always be used to express the meaning of those "go to" statements. In fact, he ends by saying that although he would personally like to keep "go to" available in higher level languages, he would probably never need to use it.

In general, this piece feels a bit dated, which is a measure of its own success - Knuth said that his aim was to put the "go to" controversy to rest by showing it to be moot, which, except for a few lifelong partisans, is where it is now. And many of the structures he discusses (using break to exit from loops, early versions of case statements) are common in almost all languages. So it makes a fitting end to Code Read's exploration of the roots of language semantics for abstraction.

What I found most interesting was his study of optimization, which is the process of changing a program so that it does its work more efficiently - almost all of his examples of useful "go to" statements are optimizations. Here he makes some very relevant points, but I think one of his observations is particularly dated.

His most important point is something which many beginning programmers still do not understand: "Premature optimization is the root of all evil". We should write our programs in the clearest way, he says, so they will almost certainly be correct. Then, if there are performance problems, we should analyze the code to find where those problems are, and only optimize those sections. Optimizing first is a surefire way to render a program difficult to understand and likely to contain bugs.

He goes on to give some good examples of fine-grained optimization techniques, things that all programmers should be aware of but will only rarely use.

Where I feel his view has become outdated is in how he treats small optimizations. "In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal". I think this is no longer true for computer science.

People often ask why building software is not like building bridges. This is one reason. Nobody expects a footbridge to carry a truck. Yet we routinely use the same software to handle hundreds of items that we use to handle hundreds of millions of items. So while a 12% improvement in bridge strength is something meaningful, a 12% improvement in software performance is not nearly so impressive. (In very specific cases it might be meaningful, but generally it's not.)

Suppose you're writing part of a program which is interactive - that is, a person will be waiting for the result. And suppose that it takes 60 seconds. A 12 percent improvement is just over 7 seconds, which means the optimized code still takes almost 53 seconds. Still too long to wait, and not long enough to get a cup of coffee. In fact, I doubt most people would notice the difference, since we are much more susceptible to psychological factors when waiting for computers. (Just one example of this.)

In Knuth's day, people had a different relationship with computers, so this kind of performance gain was more relevant. Not so today. In fact, these days algorithm performance is typically described using "big O notation", which loosely quantifies how an algorithm's running time grows as the size of its input changes. And it provides what is, by and large, a more meaningful estimate of the performance of an algorithm.

To understand this, it is helpful to use an example. Suppose we want to compare three ways of sorting a list containing 1000n names (so for 1000 names n = 1, for 2000 names n = 2, etc.). We analyze the algorithms as Knuth did in his paper, and come up with the following formulas: the first method takes 6(2^n) milliseconds, the second takes 3(2^n) + 15n milliseconds, and the third takes 25n^2 milliseconds. From a simple test of 1000 names, with n=1, we get:

Method 1: 12 milliseconds.
Method 2: 21 milliseconds.
Method 3: 25 milliseconds.


We might decide that method 1 is the best way to go. But we'd be wrong, if we wanted to sort bigger lists. Using big O notation makes that clear.

In big O notation, we ignore any constant multipliers and keep only the most rapidly growing term. So we say method 1 is O(2^n), method 2 is also O(2^n), and method 3 is O(n^2). This is the essential information about which is faster: since 2^n is much larger than n^2 for large n, we know that the third method is actually the fastest in general. For example, when n=10:

Method 1: 6.1 seconds.
Method 2: 3.2 seconds.
Method 3: 2.5 seconds.

This is exactly reversed from what we had before - method 3 is the fastest and method 1 is the slowest. And things get much worse for methods 1 and 2 when n=100:

Method 1: 240,000,000,000,000,000,000 years
Method 2: 120,000,000,000,000,000,000 years
Method 3: 250 seconds

At this scale methods 1 and 2 are completely useless, leaving method 3 as the only option.
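The timing formulas above are simple enough to check in a few lines of code (the functions below are just the hypothetical cost models from the example, not real sorting algorithms):

```python
# Hypothetical cost models from the sorting example, in milliseconds.
def method1(n):
    return 6 * 2**n            # O(2^n)

def method2(n):
    return 3 * 2**n + 15 * n   # also O(2^n): the 15n term stops mattering

def method3(n):
    return 25 * n**2           # O(n^2)

for n in (1, 10):
    print(n, method1(n), method2(n), method3(n))
# n=1:  12, 21, 25        -- method 1 looks best
# n=10: 6144, 3222, 2500  -- method 3 is already winning
```

Running the models for larger n shows exactly the reversal described above: the constant factors that made method 1 win at n=1 are swamped by the growth of the 2^n term.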

To tie all of this back to Knuth's paper, almost all of his cases for the use of "go to" statements involve optimizations which do not change the big O performance of the program at all. So while there are programs out there where this matters (certain key inner loops in operating systems or scientific software), most of us will always be able to rely on library calls where someone else has already done this kind of super-tight optimization, and we should focus instead on selecting the appropriate abstractions for our algorithm. That is, we need to select abstractions that give us the best big O performance. And using "go to" or not has nothing to do with that kind of optimization.

Read more...

Tuesday, February 13, 2007

What is new under the silicon sun?

For a student of software design (like myself), a recent post by Peter Van Roy (perhaps this Peter Van Roy) at Lambda the Ultimate was quite interesting. He posited that there was a Golden Age of computer science from 1964 to 1974, gave a list of 11 major developments from that era which seem to have set our direction and haven't been replaced by anything better, and then asked what people thought. Although I waded in with a question myself, what I think is much more interesting are the things that everyone else suggested he missed. When I run out of meaningful things to say, probably fairly soon, this post will be quite helpful for my further studies.
Read more...

Sunday, February 11, 2007

Code Read 4 - Dijkstra's Notes Part Two

Edsger W. Dijkstra's "Notes on Structured Programming" is definitely a meal. It's quite a lot for one Code Read, so I broke it into two parts. First we had the appetizers; now here's the main course.

Program families and evolving software

Dijkstra points out that programs exist in large families of similar programs; when we make a change, we are transforming a program from one member of the family to another. Thinking of programs this way makes a program's structure even more important, because we would like family members to share not just code, but also correctness. If the structure is confusing, changing the program is difficult: lots of code changes, and all of that code needs to have its correctness reexamined. A simply structured program tends to require changes in areas that are already well isolated, so less code changes, and we need to think less about the correctness of the new code and the code that depends on it. (Similarly, although Dijkstra does not make this point explicitly, a layered structure can make it clearer which areas rely on the changed code, so they can also be verified easily.)

Moreover, Dijkstra points out, from the structure of a program we can understand not just how it works, but also what kinds of changes will be easy to make - which other members of its family are close. This struck me as another important point, since a great deal of time and effort is spent managing and responding to change.

Specifying clarity

Starting to get to the meat of his subject, Dijkstra spends a chapter discussing subroutines, or what has come to be called "structured programming". He sets out to avoid "motherhood" statements that are unobjectionable but hopelessly vague, and to be specific about what makes programs clear, easy to understand, and easy to modify.

Leaving aside his discussions of the fine points of programming semantics, he makes several key arguments about subroutines:

  • subroutines should not be used to simply shorten code, but rather to create a reliable abstraction
  • properly used, they become helpful in limiting the scope of changes, by allowing us to replace the implementation of a subroutine with a different implementation
  • they help clarify at a particular moment in time which associations are still valid between the state of the machine and the meaning we assign to that state
  • they allow work to be divided into small units without having to re-invent the wheel for each unit
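The second point above is easy to sketch in code (the functions and names here are my own invention, not Dijkstra's): a subroutine acts as an abstraction, so its implementation can be swapped without touching any caller.

```python
def mean(xs):
    """The abstraction: callers rely only on 'returns the average of xs'."""
    return sum(xs) / len(xs)

def mean_running(xs):
    """An alternative implementation (incremental running mean).
    Because callers depend only on the subroutine's contract, we could
    substitute this version without changing any calling code."""
    m = 0.0
    for i, x in enumerate(xs, 1):
        m += (x - m) / i
    return m

data = [1.0, 2.0, 3.0, 4.0]
print(mean(data), mean_running(data))  # both print 2.5
```

The scope-limiting benefit is exactly this: a change confined to the body of the subroutine never forces us to re-examine the correctness of its callers.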

What is most relevant about this for us is not necessarily the application of these ideas to subroutines, but to any software design construct - objects, aspects, models, etc. Many novice programmers get so excited about a particular technique, like object-oriented programming, that they miss the fundamental reasons for using it in the first place - which Dijkstra has helpfully laid bare for us.

Also, it's important to note that Dijkstra is not describing the "lego-like" building blocks we'd all like to have, and which we may never get. He never describes having a large library of such objects and simply selecting and combining them to build programs. When he does refer to such "lego-like" tools, he only mentions the most basic of abstractions, things like "integer". And he seems to confine these largely to the realm of predefined hardware and language constructs:

"although these facilities have to be provided in some form or another - providing these facilities falls outside the scope of the programmer's responsibility and also that the programmer will accept any reasonable implementation of them."

In other words, we should not be too picky about our tools, since these tools will probably be equally suited to our task. Although there are certainly cases where this is untrue, the message seems to be that there are general libraries available for the basic things we do as programmers, and that barring specific needs, one will work as well as another. But Dijkstra says nothing about more complex libraries of building blocks - instead he introduces, in his next section, the image of a string of pearls.

The string of pearls

Having built his argument carefully, Dijkstra comes to his grandest image - the layers of abstraction in a program as a string of pearls.

In this metaphor, designing a program becomes a process of creating (at least intellectually) a somewhat larger set of pearls than we need, and then selecting the ones that we will finally use to build our program. Other programs in the same family would use a different subset of pearls, in a different order.

When we need to modify a program, we can replace a pearl, and some of those below it, with a new set of pearls. Again, Dijkstra is not so naive as to suggest that one could simply replace one pearl with another and everything would work out. Instead, he saw that along the string there are various concepts used by the pearls, many of which will be shared among different pearls. Changing those concepts requires replacing all of the pearls that use them.

By looking at how many concepts span which pearls, one can get a sense of the complexity of a particular program (or design) - a more complex program will have "thicker" and "longer" weaves of concepts. Such thick programs will be more difficult to create, understand, and debug, because they require more mental work to understand all of the pieces woven together. Thinner weaves are easier to understand, since there is less to hold in our heads at one time.

Dijkstra's image is not the image of massively reusable bits of code, but rather a way to think of the complexity of a program - almost to measure it - and a way to compare alternatives. And the central concept of this, as in almost all of Dijkstra's writings that we've read in Code Reads is this - simplicity is brought about by isolation of details through abstraction.

It's quite an important idea. Almost every development in software design techniques since then has been an attempt to provide this isolation and abstraction. And it has been very successful, in that we now regularly write working programs tens or hundreds of times as long and complex as what Dijkstra was talking about. Of course, sometimes they do not work, so let's see what comes next in Code Reads.
Read more...

Saturday, February 10, 2007

Code Read 4 - Dijkstra's Notes on Structured Programming

Edsger W. Dijkstra's "Notes on Structured Programming", which is Code Read 4, struck me as shockingly prescient - or perhaps it is just that we creators of software are very slow to learn these lessons. Surely something written almost 40 years ago should feel more dated than this, and we should have learned or discarded all the lessons in it. But a careful reading shows that, as we have seen before, Dijkstra was one smart cookie.

Overall, he starts by making the point that techniques which work for small programs do not scale to large programs, and therefore we need to be careful how we approach large programs. He then gives some techniques and examples of how to actually build large programs. Finally he describes a way to think about programs that makes it easy to understand a program's degree of complexity and the scope of required changes. In all of these sections I found useful insights for my own work.

Misunderstanding Dijkstra

My experience when reading the notes was to continually mistake what Dijkstra was saying for something that has become a hot-button contemporary issue. (From the comments to Scott Rosenberg's post, I think this is a common experience.) This is similar to Dijkstra's own experience with his famous "Go To Statement Considered Harmful" paper (see my earlier post), which he later described as being misunderstood by people who took one relatively superficial idea to be its entire point. But a careful and pensive read reveals surprising depth to what Dijkstra was saying, and what are to me some new (or rather forgotten) insights.


Differences between small, medium and large programs

Dijkstra's central point is that what works for small programs does not work for large ones, and that only through great care can we successfully work with large software. Given that he described "page", "chapter", and possibly "book" length programs as large, whereas now even a mid-sized project might have several hundred thousand lines, and thus thousands of pages of code, we should recognize that we are dealing with yet another layer of scaling.

What we will find, though, is that the issues Dijkstra was dealing with are precisely the issues we now face in our day to day work of designing and coding software. So even though some of his techniques might not work for us, and new techniques have been invented, his insights are at the core of what has driven our field for the last 40 years, and they are still quite valuable.


Demonstrating correctness and proofs

Although a lot of what he says early in the paper is a restatement of things we've covered in the first Code Reads, some points stand out. Ensuring that a program is correct is one of the most important ones. Dijkstra gives a complete and mathematically rigorous proof of a very simple program. It takes several pages, is largely an exercise in logical reduction, and is mind-numbingly boring. Dijkstra goes on to say that such proofs are not the point - that the great difficulty and length for even a simple program is in fact the point he was making. He also shows how it is impossible to completely test a program. So we must be able to convincingly argue the correctness of a program without being able to rigorously prove it, or test every possible case. To do this, he says, we must structure our programs so that their correctness is clear. This is a repeated theme of his - keep it simple, make it clear. It is something that takes hard intellectual work on our part, but it pays off.

He also makes a very valuable point about abstractions - when using a tool such as a library, or sophisticated programming language, it is very important to know the operational limits of that abstraction. For instance, you can not always add two integers and assume the result will be correct, because the sum could be larger than the integer representation will allow. This is one of the key points about using such layers of abstraction - although you do not need to know how it works, you do need to know exactly what it does and what its limits are. This is such an important point it might be worth its own post in the future.
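The integer example is easy to demonstrate. Python's own integers have arbitrary precision and never overflow, so the sketch below simulates 32-bit two's-complement addition (the helper name is my invention) to show the operational limit Dijkstra is warning about:

```python
def add_int32(a, b):
    """Simulate addition of two 32-bit signed integers.
    Python ints don't overflow, so we mask to 32 bits and reinterpret
    the result as a two's-complement signed value."""
    s = (a + b) & 0xFFFFFFFF                      # keep only the low 32 bits
    return s - 0x100000000 if s >= 0x80000000 else s

# Within the abstraction's limits, addition behaves as expected:
print(add_int32(1, 2))                            # 3
# Past the limit, the "integer" abstraction silently breaks down:
print(add_int32(2_000_000_000, 2_000_000_000))    # -294967296, not 4000000000
```

This is exactly the kind of boundary a programmer using an abstraction must know about, even without knowing how the abstraction is implemented underneath.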

Understanding and comparing programs

Dijkstra starts leading to his goal of program design by talking about how we understand programs. I think we all have experienced that it is much harder to understand someone else's code than it is to write our own. One touchstone of a really good software designer is that their code and design are easy to understand. Dijkstra looks at exactly what coding techniques make code easier to follow, and also proposes some techniques for figuring out what was happening after a critical failure (a "core dump"). Remembering that this was written almost 40 years ago, I am frankly awed. Even though I doubt he invented it all, almost everything he supported is now so commonly accepted as best-practice that most of us have forgotten there ever was even a question. The basics of programming, "if-then-else", "case" statements, "while-do" and "repeat-until" loops, and stack-based subroutines are all there. The only thing he proposed which has not caught on widely is the idea of keeping an absolute count of the number of times each loop has been executed. And his reasoning is very important - he supports all of these because they make the structure and flow of the program easier to understand.
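Dijkstra's loop-count suggestion can be sketched in a few lines of Python (a hypothetical illustration in modern notation, not his own): the loop carries a counter that a post-mortem examination can inspect to see how far the computation progressed.

```python
# Illustrative sketch: keep an absolute count of loop executions so that,
# after a failure, we can reconstruct how far the computation got.
def sum_until(values, limit):
    loop_count = 0
    total = 0
    for v in values:
        loop_count += 1
        total += v
        if total > limit:
            break
    return total, loop_count   # the counter travels with the result

total, iterations = sum_until([5, 10, 20, 40], limit=30)
print(total, iterations)  # 35 3
```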

Dijkstra also points out that it is hard to compare programs that are anything but superficially different, and that anything other than such superficial differences constitutes a design decision. He's foreshadowing the rest of the paper, so this will have to wait a little bit.

It is in this section, and the next, that the limits of what Dijkstra was talking about begin to be felt. Many of the more modern techniques for coping with large systems evolved as approaches to the issues Dijkstra addresses in this paper, applied to much larger systems. The value of Dijkstra's paper is that it identifies the core issue, not wrapped in the theory of a particular solution.

Actually writing code

A lot of the paper is devoted to how one actually writes code - an examination of the thought processes involved in going from the English instruction "print a list of the first 1000 primes" to the software instructions that actually do just that. (He does this twice, the second time with an equally pedagogical problem.) Although I tend to prefer a more intuitive approach, his process of "step-wise" construction of code is a very good technique to use if one gets stuck. Moreover, it is easily generalized and applied to designing larger software systems.
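The flavor of that step-wise construction can be sketched in Python (my own decomposition, not Dijkstra's actual refinement): the English instruction is split into smaller instructions, each of which is refined independently.

```python
# Step 1: "print the first N primes" becomes "repeatedly find the next prime".
# Step 2: "find the next prime" becomes "test successive candidates".

def is_prime(n):
    """Refinement of 'is this number prime?' by trial division."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def first_primes(count):
    """Refinement of 'the first `count` primes'."""
    primes = []
    candidate = 2
    while len(primes) < count:
        if is_prime(candidate):
            primes.append(candidate)
        candidate += 1
    return primes

print(first_primes(10))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Each function answers one small question, so the correctness of the whole can be argued piece by piece - which is the point of the technique.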

Near the end of the paper, once he has laid the foundation for it, he makes a very interesting point which comes from his step-wise technique: "Programming (or problem solving in general?) is the judicious postponement of decisions and commitments!" By working with unimplemented abstractions, we are free to understand the problem better by the time we need to actually decide on an implementation. This is a valuable idea, which can be applied almost every day. Before you decide that everything looks like a nail, see if maybe you have more than a hammer.
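One way to read "judicious postponement" in modern terms (a hedged sketch; the class and method names here are illustrative, not from the original essay) is programming against an unimplemented abstraction now and committing to a concrete implementation later:

```python
from abc import ABC, abstractmethod

class Store(ABC):
    """An abstraction we can program against before deciding how to implement it."""
    @abstractmethod
    def save(self, key, value): ...
    @abstractmethod
    def load(self, key): ...

class InMemoryStore(Store):
    """One possible commitment - chosen only once the problem is understood."""
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

store = InMemoryStore()
store.save("x", 42)
print(store.load("x"))  # 42
```

Code written against `Store` keeps working whether we later commit to memory, a file, or a database - the decision has been judiciously postponed.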

There's a lot more to say about this, but this post is already quite long, so I'm going to put this one out as is, and say more in my next post.


Monday, February 5, 2007

Code Read 3 - The Humble Dijkstra

The third Code Read that Scott Rosenberg chose was another Edsger Dijkstra essay - this one called "The Humble Programmer". Vastly oversimplifying, Dijkstra is making this very important point: despite all of our achievements, we are limited creatures, and our intellect can easily be overwhelmed by our own creations. Particularly as access to computing power increases, and our expectations of its ability increase, our current approach to software will lead us into an inescapable swamp of unmaintainable and horrendously expensive computer systems.

More than thirty years have passed since he said this, yet we are still wandering around the edge of that swamp. Dijkstra does give us a way out. His argument: an appropriate understanding of the system we are building will make it easy to build and maintain - and we already have the tools to achieve this understanding.

The most important point, he says, is to "...confine ourselves to the design and implementation of intellectually manageable programs".

After reading the comments at Code Read 3, I think this point created some misunderstanding, which can easily lead people to miss the value of this essay. One might take this to mean that we should avoid hard problems, but we must believe that Dijkstra was not so simple as to suggest this. The point is that when building programs, we must choose an intellectually manageable approach, and reject the apparently easier approach of just starting to write code and seeing what happens. In other words, Dijkstra is telling us to make our software well organized, or not at all.

Big deal, one might say. But in my experience this is the single most common root of failure and near failure. It sounds simple - if you don't understand it, don't build it. Yet all too often, we start to build things with a superficial understanding of the problem, instead of taking the time to think through our solution a bit more.

Furthermore, Dijkstra provides some practical steps we can use (which correspond to one or more of his six arguments) in order to make sure what we're doing is intellectually manageable.

The first (arguments one, two, and three) is perhaps the most confusing and most powerful. Dijkstra urges us, when thinking of a high level design of a program, to start by thinking of how we would prove the program correct, and base our design on the structure of the proof. The confusion here is that Dijkstra was not referring to a "proof" in the way academic computer scientists understand "proof of correctness", nor to the way a high-school student understands a geometry proof, nor to something like test driven design (although all can be valuable, in the right context). Rather he was referring to "proof" the way a mathematician understands proof - the first step of which is a description of the problem in a way that clarifies the most relevant points. The most beautiful proofs in modern mathematics are treasured by mathematicians not because of their clever application of obscure logic, but because they provide a method of looking at a problem that makes the solution obvious.

This is what Dijkstra is promoting - finding an organizational structure for your software that makes it obvious what the code needs to do.

The discipline lies in not embarking on large projects until we have found this way of looking at things. It is admittedly very difficult, but also very important. In fact, I would argue that without this view of the problem, a project is doomed to failure or near failure. Discovering that we don't really understand the problem in the middle of a project can get very expensive very quickly, whereas spending the time in the beginning is much more cost effective. It's like sailing across the ocean - better to make your plans in port than discover you forgot something halfway between San Francisco and Honolulu.


The second practical step Dijkstra proposes is one that we can use to help achieve this simplifying view (argument four). It is to use abstraction:

We all know that the only mental tool by means of which a very finite piece of reasoning can cover a myriad cases is called "abstraction"; as a result the effective exploitation of his powers of abstraction must be regarded as one of the most vital activities of a competent programmer. In this connection it might be worth-while to point out that the purpose of abstracting is not to be vague, but to create a new semantic level in which one can be absolutely precise. - EWD

When I first started programming, I worked on a project that had its own implementation of a hash table. We had to worry about hit rates, hash collisions, and so on, and it could only work with strings! Now, whether you're writing in Java and using a HashMap, or in Perl and using associative arrays (aka hashes), or in any other language, a perfectly reliable hash table is available for free - it works simply, and with most data types. Likewise we think nothing of writing code which adds an integer value to a floating point value - yet this too was once a headache.

And the key point of an abstraction here is to find ways that we can use A and B in the same way, thus freeing us from the intellectual work of keeping them distinct - like adding a float to an int. In a shopping cart we might define a group of things that are "line items", all of which can have a price, a discount, a tax charge, and so on. Whether the thing is shipping or a widget, if we can treat it as a "line item", we will have made calculating and re-calculating the total much easier.
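The "line item" idea might look like this in Python (a sketch only - the class and field names are mine, not from an actual shopping-cart implementation): shipping and widgets are treated uniformly, so totaling the cart no longer needs to care which is which.

```python
class LineItem:
    """Anything with a price, a discount, and a tax charge - widget or shipping."""
    def __init__(self, price, discount=0.0, tax_rate=0.0):
        self.price = price
        self.discount = discount
        self.tax_rate = tax_rate

    def total(self):
        subtotal = self.price - self.discount
        return subtotal * (1 + self.tax_rate)

cart = [
    LineItem(price=20.00, tax_rate=0.05),  # a widget, taxed
    LineItem(price=5.00),                  # shipping: same interface
]
print(sum(item.total() for item in cart))  # 26.0
```

Recalculating the total after adding a discount or a new item is now one line, because the abstraction freed us from keeping the cases distinct.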


Argument five is our third practical step - using a good programming language. This is a very loaded subject, and many reasonable people feel it has been crushed beneath the weight of the ranting terabytes already written about it. But Dijkstra, as usual, has something quite profound to say about this:

Finally, in one respect one hopes that tomorrow's programming languages will differ greatly from what we are used to now: to a much greater extent than hitherto they should invite us to reflect in the structure of what we write down all abstractions needed to cope conceptually with the complexity of what we are designing. - EWD

This is, when it comes down to it, the strongest argument for or against a language - does it clearly reflect the structure of the idea behind the program, or does it obscure that structure? Almost all modern languages (C and its descendants, Java, Perl, Python, Ruby, and so on) can be used to clearly reflect the structure of the problem, and all are much superior to now-outdated languages such as BASIC or COBOL. However, unskillful use of these same languages can also create great obscurity. Dijkstra describes at length how our choice of language affects our thinking, which is quite true - but I believe the languages we use today are much more similar to each other than the choices he faced 30 years ago, and we are now struggling less with languages and more with design paradigms (and that's definitely another post).

Finally, the fourth step - make your structure hierarchical (his sixth argument).

I do not know of any other technology covering a ratio of 10^10 or more: the computer, by virtue of its fantastic speed, seems to be the first to provide us with an environment where highly hierarchical artefacts are both possible and necessary. - EWD

This basically means to layer abstraction on abstraction. Actually, this is often described as a problem, and it can be a serious one. It's like the architecture of a building - if the layers are complementary and harmonious, the building is successful. If the layers are put together willy-nilly, the result is a rickety structure prone to collapse. This is really where craft, or skill, comes in, which is the theme of this blog. But for now we're just exploring the foundations. And Dijkstra's instruction is clear: carefully use layers to make our creation intellectually manageable.
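A tiny sketch of such layering (illustrative names, assuming a toy message protocol of my own invention): each level is written purely in terms of the one beneath it, so each level stays intellectually manageable on its own.

```python
def to_bytes(text: str) -> bytes:
    """Layer 1: text encoding, built on raw bytes."""
    return text.encode("utf-8")

def frame(payload: bytes) -> bytes:
    """Layer 2: framing, built on layer 1's output - a 2-byte length prefix."""
    return len(payload).to_bytes(2, "big") + payload

def greeting_packet(name: str) -> bytes:
    """Layer 3: domain logic, written entirely in terms of the layers below."""
    return frame(to_bytes(f"Hello, {name}!"))

print(greeting_packet("Ada"))  # b'\x00\x0bHello, Ada!'
```

When the layers compose cleanly like this, each one can be understood - and argued correct - in isolation, which is exactly the intellectual manageability Dijkstra is after.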

This post has been longer than I hope will be usual - and it has taken longer, too. But this essay of Dijkstra's made quite an impression on me, and it has quite a lot of meat on it.


© 2007 Andrew Sacamano. All rights reserved.