Tuesday, May 22, 2007

Skillful Software has a new URL

The time has come to move off of Blogger. No hard feelings, it's been great. Anyone out there considering Blogger, go for it.

I've just developed a serious case of WordPress envy: I want pages, widgets, and Technorati pings that work (knock on wood). And since I tend to go on and on, I really need better "Read More" functionality.

So, I hope to see you all at upayasoftware.com...
Read more...

Tuesday, May 15, 2007

Software Design 101

Scott Rosenberg's book Dreaming in Code, and the Code Reads section of his blog, have really inspired me to think and read more about my job. The "assigned reading" for Code Reads has been really great, so recently I started on a tangent - a book mentioned in Code Read 6, titled Bringing Design to Software, by Terry Winograd. Prof. Winograd teaches software design at Stanford, and the book is a collection of essays that came out of a 1992 workshop on software design. So far, I've only read the introduction and the first chapter (which was the text for Code Read 6). The ideas have been very interesting, but a word of warning if you're thinking of purchasing this book: reading it is a chore because of the awful printing. Addison-Wesley, the ACM, and Prof. Winograd could have done so much better than reproducing these low-res, weirdly half-toned pages. It looks like the master pages were printed on an old 16-pin dot matrix printer, and designed using "creative" shades of gray. But I'll forgive them the bad printing if the rest of the book is as interesting as what I've read so far.

Before I get too into that book, though, I want to tell a quick story.

Integration Can be Hard

Recently I was meeting with a prospective client (aka interviewing for a job). This company was staffing a team to do a large project for a government agency. It was the first such job their contact at that agency had ever handled, and the company's first fixed-cost contract. After seeing the initial time and cost estimates, the government client said "This seems kind of high to me; after all, it's just tying different pieces of existing software together." My client asked me how I would respond to that.

I smiled and said "Integration is the hard part. In one of my previous jobs I built an e-commerce solution. Building the shopping cart was the easy part. The hard part was figuring out what to do when the computers in the warehouse said that one item had shipped to a customer, but two items had been returned. Making different systems work together is much harder and takes much longer than building brand-new software." I think they liked the answer.

The bulk of the time I spent fixing bugs on that project was spent unmangling the communication between the e-commerce system and the back-end systems. Customers returned more items than were shipped, sometimes because more items were shipped than were reported, and sometimes because of data entry errors. It took us weeks to realize that dozens of orders of a particular type were being shipped, but the shipped status was never being sent back to the e-commerce system, so we were never collecting on the credit cards. Needless to say, that caused a mess in the accounting system. As did the people who went right into the back-end systems and changed orders, instead of using the e-commerce interfaces. We finally just decided that if the warehouse said they shipped 3 items instead of 1, we would assume a manual change had been made. This worked fine until there was a table with unexpected duplicate rows, which led to a join-related triplication of all shipping status information, which led to all kinds of problems. And so on...
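The reconciliation rule we settled on can be sketched in a few lines. To be clear, this is a hypothetical reconstruction in Python, not the original system's code - the function and status names are invented for illustration:

```python
def reconcile(ordered_qty, shipped_qty):
    """Interpret a warehouse shipping report that disagrees with
    the quantity the e-commerce system thinks was ordered."""
    if shipped_qty == ordered_qty:
        return "ok"              # the systems agree
    if shipped_qty > ordered_qty:
        # Our eventual rule: assume someone changed the order directly
        # in the back-end system, and trust the warehouse's number.
        return "manual-change"
    # Fewer shipped than ordered: a partial shipment, or a lost
    # status message - a human needs to look at it.
    return "investigate"
```

The rule looks trivial, and that's the point: the hard part wasn't writing it, it was the weeks of debugging needed to discover that this was the rule the two systems' combined behavior actually required.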

This is more than just an example of a difficult integration. This is the kind of thing that happens all the time when software designed for a particular environment is exposed to a new one. The back-end systems ran in a wholesale-centric, batch-job, run-your-business-on-a-mainframe environment, where data entry was done via dumb terminals and green screens, and everything shut down for the end-of-day and end-of-month jobs. Suddenly we needed it to "play nice" in a real-time, retail, 24/7 e-commerce environment. No wonder there were issues. The mainframe system worked great, was highly reliable, and had run the business smoothly for years. Unfortunately, bringing the system out of its natural environment exposed all kinds of hidden assumptions in our project.

Inhabiting Design

What does a book on software design have to say about this? First, we should ask: what is software design? Prof. Winograd's book provides no single answer, but rather allows each contributor to take their own approach. The one common theme is that software design is everything left over after you address the purely engineering aspects: correctness, performance, scalability, reliability, and maintainability. It is the process by which one person tries to determine what the user of a system will want out of it, and how the system will provide that. Prof. Winograd says software design is "a user-oriented field, and as such will always have the human openness of disciplines such as architecture and graphic design, rather than the hard-edged formulaic certainty of engineering design."

The introduction hits the high points that will be covered in the rest of the book, and also describes important general points of design. Some of it seems obvious now, but was less commonly appreciated ten or fifteen years ago, such as recognizing that the aesthetic aspect of an interface matters. But some of it has a timeless relevance, and yet is so often forgotten.

I was particularly struck by the idea that design is a conversation between the designer and the thing being designed, not a unidirectional act. All too often we approach software as though there is a single optimal solution which we must find. Only extremely well funded projects get to create and experiment with multiple prototypes. This is not so much a question of choosing several page layouts on a web site, or arrangements of dialog boxes in a desktop application, but rather a question of exploring the underlying metaphor. For example, there was a time when many applications made half-use of the document metaphor. I remember many small development tools that dutifully had a "File" menu with "Open", "Save", "Close", etc., but which only allowed one window to be open at a time. These applications would have been better served by an entirely different metaphor, something more like iTunes, which is not at all document based. But the designers were focused on a particular metaphor, newly in vogue.

Another key idea from the introduction is that designing software is the process of creating "virtualities - the world in which a user of the software perceives, acts, and responds to experiences." The concept of "virtuality" is a generalization of the idea of a "software metaphor", encompassing ideas as disparate as the windowed GUI desktop, Tetris, and the internet itself, along with the underlying assumptions about how these virtual worlds work. In this light, software design becomes the process in which "the patterns of life for [a virtuality's] inhabitants are being shaped", much like an architect shapes the patterns of life for the inhabitants of a building. As I've said before, finding good virtualities, or metaphors, is the true mark of skillfully designed software. With the right metaphor, many of a system's most difficult problems suddenly become clear.

So What About My Database Issues?

Even though Prof. Winograd uses virtualities to describe the virtual worlds created for human interaction, I think the idea is also applicable to the equally complex world that non-human-facing software systems inhabit. Many of the problems between the e-commerce system and the back-end system happened because the two systems had fundamentally different virtualities. When is data meaningful? What are the consequences of errors, and the processes for correcting those errors? When will services be available? These are all implicit aspects of a software virtuality, and determine much of its behavior, just like the layout of a high-rise with an attached parking lot determines much of the behavior of the people in the building, no matter what their job. When we designed the e-commerce system, we were operating under very different assumptions about these questions than the designers of the back-end systems. The systems were both good inhabitants of their own virtualities, but were poorly suited to operating in the other's virtuality. The difficulties we encountered could probably have been avoided had we looked beyond the simple metaphor of orders and items, and the simple operational issues of latency, throughput, semaphores, and so on, and really thought about the worlds these software systems inhabit.

Read more...

Wednesday, May 9, 2007

Worth Reading

While I'm working on my next big post, here are a few things worth reading.

Scott Rosenberg's post on ambiguity was right on. Ambiguity is a double-edged sword - it can make things elegant, or intractable. Scott's insight is very sharp, as usual.

Last year, Basil Vandegriend put out a concise and helpful post on writing good unit tests. Most people agree tests are important, but many do not know precisely how to make them work. Basil addresses real issues, and gives good advice. I wish I had read this ten years ago.

Basil's latest post on the top five essential practices for writing software is also bang on. It is a quick must-read for programmers trying to make the leap from just coding to professional software development.
Read more...

Sunday, April 29, 2007

Code Read 9 - John Backus and Functional Programming

The 1977 Turing Award went to John Backus, and in his Turing Lecture, "Can Programming Be Liberated from the von Neumann Style?", he made a vigorous case against traditional "von Neumann" programming (VNP), and for functional programming (FP). Unfortunately, that combative rhetorical style has infused the discussion of FP ever since, resulting in countless flame wars over the last 30 years. The overly broad and weakly supported statements that make up the bulk of the discussion often boil down to these:

"Traditional programming languages are clumsy. FP will solve all your problems, if only you are smart enough to use it."

versus

"FP is a toy. Nobody has done any real work with it, so it must be useless."

Both sides are throwing out the baby with the bathwater, and the good ideas of FP are tainted by their association with their more problematic brethren.

Backus's Paper

Let's start with Backus's paper. From the beginning, he clearly has an ax to grind - he describes the foundations of conventional programming languages as "complex, bulky, not useful", and conventional programs, while "moderately clear, are not very useful conceptually". FP programs, he says later in the paper, "can accommodate a great range of changeable parts, parts whose power and flexibility exceed that of any von Neumann language so far." Yet he makes very few points to support these statements.

Expressive Power and Simplicity

One of his recurrent themes is the superior "expressive power" of FP. Yet when he does define "expressive power", it is not clear that even under his definition FP is any more expressive than VNP. His definition revolves around having functions that are expressible in one language and not in another, yet he offers no supporting example.

He also asserts that VNP languages are necessarily complex, requiring new features to be added to the language itself rather than implemented using existing language features. As one of the creators of many VNP languages, he's more than entitled to make such statements. But newer languages (like C, and many OO languages) have reversed this trend by simplifying the underlying language and making it easier to define new functions. They do this without using FP, so this argument also falls apart.

An Algebra of Programs

The most coherent argument in the paper, and also the longest, is that FP allows a relatively straightforward mathematical theory of programming, which is much more difficult for VNP. This is true, but I'm not convinced it is that important. He proves the equivalence of two algorithms for matrix multiplication, and also discusses using this algebra of programs to optimize algorithms safely.

However, formal proofs of programs are only marginally useful, usually taking far longer to do well than writing the algorithm itself, and they are just as prone to misunderstandings of the problem to be addressed. So while it may be easier to prove that an FP algorithm does what it tries to do, it is just as hard to prove that what it tries to do correctly solves the underlying problem.

Optimization is important, but as I've argued before, the most effective optimizations involve switching to a different algorithm - one that is equivalent not in a formal way, but only by virtue of a deeper understanding of the problem. For example, I once had to optimize an algorithm which went through a list doing pairwise operations on each element and all previous elements (e.g. f(1,2), f(1,3), f(2,3), f(1,4), f(2,4), f(3,4), ...). This approach takes O(n^2) operations, so the implementation ran in seconds for n = 100, but took hours for n = 1000, and days for n = 10000. No amount of code optimization was going to improve that. But when I realized that there was a way to process each element against only a few previous elements, the program went from needing O(n^2) operations to O(n) operations, which meant a run time of a few minutes even for n = 100000. No formal manipulation of the algorithm would ever have gotten me there - only the insight that most of the operations were unnecessary by virtue of the problem itself, not by virtue of the comparison algorithm.
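The shape of that change can be illustrated with a generic Python sketch. The original algorithm's details aren't given here, so a placeholder pairwise operation f stands in, and the window size k is an invented parameter:

```python
def all_pairs(xs, f):
    # O(n^2): every element against all previous elements
    return [f(xs[i], xs[j]) for j in range(len(xs)) for i in range(j)]

def windowed(xs, f, k=3):
    # O(n*k), effectively O(n) for fixed k: every element against only
    # the k previous elements. This is only valid when insight into the
    # problem itself shows the skipped pairs can never matter.
    return [f(xs[i], xs[j])
            for j in range(len(xs))
            for i in range(max(0, j - k), j)]
```

Note that no formal transformation turns all_pairs into windowed; the two are equivalent only under an assumption about the data that lives outside the code.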

Oddly enough, when talking about this algebra of programs, Backus himself says that "severely restricted FP systems" are easier to analyze, "as compared with the much more powerful classical systems." He seems to be implying that the ease of analyzing FP systems justifies their restrictions, but I think the consensus view today is the other way around. Outside of academic circles, formal analysis has never caught on.

Rethinking Computers from the Ground Up

One of the most visionary themes in his paper is the proposal that we change the way we relate to state (aka history, or storage). VNP relies on making many small, incremental changes to the stored data in a computer. Thus the overall state of that data is consistent and meaningful only rarely - usually it represents a job half done. Moreover, subsequent or concurrent execution runs the risk of operating on that inconsistent data - the problem of "side-effects". Instead of operating piecewise on small bits of data, leaving the overall state inconsistent during the calculation, Backus proposed not just a programming style but a change in hardware that prevents these side-effects. Not only does this make it easier to analyze the code, it becomes trivial to split larger programs into smaller pieces of work, and run those pieces in parallel. Indeed, this idea seems to be the root of many of the "FP" successes - for instance its use in telecommunications software, and in Google's MapReduce. However, I put FP in quotes because this idea is a fringe benefit of FP, and can easily be implemented in traditional languages (in the right context) - even Ericsson's Erlang projects make extensive use of C libraries. Ironically, the most successful idea of FP - a formalized way to eliminate side-effects - seems to be a side effect itself.
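Here is a minimal sketch of the map/reduce idea in Python - not Google's implementation, just an illustration of the principle. Because the mapped function is pure (no shared state, no side-effects), the chunks can be processed in any order, or handed to separate processes or machines, without any coordination:

```python
from functools import reduce

def square(x):
    # Pure function: result depends only on the input, touches no shared state
    return x * x

def map_reduce(data, chunks=4):
    # Split the input into chunks; each chunk could run in parallel.
    size = (len(data) + chunks - 1) // chunks
    pieces = [data[i:i + size] for i in range(0, len(data), size)]
    # Map phase: apply the pure function to each chunk independently.
    partials = [sum(map(square, piece)) for piece in pieces]
    # Reduce phase: combine the partial results.
    return reduce(lambda a, b: a + b, partials, 0)
```

The answer is the same no matter how the work is split - which is exactly the property that vanishes the moment square starts modifying shared data.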

Conclusions

Since Turing and Church proved that "Turing Machine computable" and "lambda calculus computable" are the same thing, and thus that FP and VNP can perform exactly the same computations (with suitable extensions to FP), there are two basic questions. Is either one inherently better than the other in:
1) simplicity/ease of writing code (fewer bugs, easier to understand, better at expressing underlying problem space, easier to fix bugs)
2) speed of execution?

I do not find FP to be any more "expressive" or easier to understand than VNP (and my first programming language was LISP). If anything, the restrictions of FP make it harder to model some problems in code, and thus make it harder to understand that code. It is easier to optimize FP in some cases, and thus optimization bugs are less likely. But these bugs are a small fraction of all bugs, and among the easiest to fix once identified.

As for speed of execution, on existing hardware FP programs tend to be slower, whereas they may be faster on purpose-built hardware. This is because of the elimination of side-effects. On conventional hardware, eliminating side-effects often requires duplicating data. On purpose-built hardware, eliminating side-effects allows automatic parallelization. So FP programs may be faster in cases where both the problem and the hardware support simple parallelization. But parallelization is notoriously difficult, and automatic parallelization techniques are only modestly successful on general problems. The marginal payoff of adding more processors quickly diminishes. Instead, special parallel algorithms need to be built which carefully manage data movement in the system, minimize duplication, and so on. These techniques are equally suited to FP and VNP.

So in the end, I think FP will fade away, except for leaving its name attached to extensions to VNP languages which encourage side-effect free programming for easier application to distributed systems. These extensions will prove to be very useful, especially as programming for the web, and programming for multicore processors becomes more and more common.

Hopefully the dogmatic proponents and detractors of FP will both be satisfied by this, so we can all move on to more productive discussions.

Read more...

Saturday, April 7, 2007

Code Read 8 - Eric S Raymond's Cathedral and Bazaar

The ever-evolving "The Cathedral and the Bazaar" by Eric S. Raymond (aka CatB) has become something of a lengthy read. Scott Rosenberg's Code Read 8 dives on in, rightly describing it as a "classic essay" that "has proved its importance" in the literature of software development. I first read it several years ago; it was considerably pithier then. However, it is still full of important ideas.

Cathedrals and Bazaars

The most important idea is its title track - two different ways to build software. In the "cathedral" style, a master architect and a small group of hand-picked, skilled craftsmen work toward a grand vision with lots of direction and coordination. This is the traditional model of software development. The "bazaar" model, common to many open-source projects, particularly Linux, is one in which there are no hand-selected craftsmen and no master plan. Instead, the people in power select the best contributions from whoever is interested and able enough to contribute something worthwhile. The direction in which the project evolves is determined by the availability of volunteers willing to push it in that direction, as well as by an entity, often an elected committee, which acts as an editor.

CatB presents a strong case that high-quality software can be produced quickly and efficiently using the bazaar approach. Actually, to many people who came into the software world in the last five years or so, this seems self-evident - Linux, Apache, MySQL, PHP, and Mozilla are just a few of the high-quality, feature-rich, and reliable projects built in the bazaar mode. They are part of the landscape of the Internet which many take for granted. Yet less than 10 years ago there were many thoughtful, intelligent people who had serious concerns about the utility of any of these. Today, of course, the only people who seriously argue that open source development cannot produce good software are those who have a stake in commercial alternatives. So while CatB's argument is convincing, it has already been won.

Much of CatB is also devoted to describing how successful open source projects work. If you are planning on running an open source project, these are must-read sections. But even if you are not, the idea of project leader as editor instead of architect is a very interesting one, and reflects a gem of an idea in leadership theory that is subtle and often misunderstood: some of the best leaders do not lead by inspiring others to follow the leader's ideas, but rather by finding and supporting the best ideas of the people they are leading. What makes Linus Torvalds (the creator and benevolent-dictator-for-life of Linux) such a genius is not his ability to write code or convince others of the correctness of his ideas (both of which are probably impressive), but rather his ability to pick and choose the best from among the many contributions to Linux. There is a great deal that goes along with that style of leadership that is difficult for a type-A ego, and CatB delves into all of it with great insight.

The Twainian Passing of Closed Source Software

The second main thrust of CatB is that traditional management styles and closed-source software will ultimately be washed away under the coming wave of high-quality open source software and its management processes. This is a bit more controversial - and in some cases I think it is just plain wrong. Certainly it is possible that Apache may become the only web server anybody uses. It is also possible, although a bit more of a stretch, that open-source databases will replace commercial databases, or that Linux will become the dominant operating system, or that OpenOffice will become the only office productivity suite. But it is unlikely that anybody but eBay will ever see the software that runs eBay, or that Google will ever open-source their search software. Sometimes, software is so tied to the fundamental service a business provides that there is just not enough interest in an open-source equivalent. We would all like to be making money like eBay, but how many of us are actually trying to write on-line auction software to match eBay's? There are simply too few developers to support it - especially since any eBay competitor needs to distinguish itself, which will probably require substantially different software. So there are some markets in which there is simply not enough demand for open-source software to make it viable.

Another example is corporate web sites, which will always be paid for in the traditional sense, even if they are built using entirely open-source software, simply because nobody but Acme Widgets needs an Acme Widgets web site.

This is one of the things I think CatB misses in its open-source evangelism. Open-source projects work well when many programmers need the same thing to support the businesses they work for - so we see web servers, operating systems, programming languages and tools, web-site management software, a shopping cart or two, image and photo editing software, an office productivity suite, and so on. But the success of an open-source project depends on having a legion of programmers who need it. Yet CatB argues that commercial software is going away, along with all of the management processes that come with it.

I just do not buy it. Instead, I see the commercial software market becoming smaller and more individualized. Nobody buys compilers anymore. GCC, gmake, Ant, Eclipse, and dozens of other free products all work fine. Moreover, support for open source products is often better than support for commercial equivalents. (I speak from personal experience.) A lot of software is going open source. But there will always be a market for software to support the specific business processes of individual organizations, which will require one-off construction - even if it does use off-the-shelf open-source components.

The Corporate Embrace of Open-Source

Indeed, many open-source projects are now significantly supported by companies (like IBM, RedHat, and others) that make money building exactly that kind of custom software. The underlying components are no longer something that distinguishes one competitor from another in the marketplace, so companies that compete against each other are joining forces to make everyone's job easier. Companies like IBM, RedHat, Oracle, and countless others are paying programmers to write code which the company will then give to its competitors.

This is one of the most interesting facets of the open-source revolution - the evolution of business strategy and corporate intellectual property policy in the face of the commodification of software infrastructure. In some ways, the software market has become an interesting experiment in altruism. More and more technology companies are realizing that despite superficial appearances, their software is not the core value they provide. And in that case, it makes financial sense to cooperate with their competitors (and anyone else who is interested) to build and maintain that software, while they focus on the what real value that they do provide.

P.S.

One final note - people can be slow to accept change. Some managers and programmers are still protective of "their" code against people in their own organization. This seems to me to be doubly backward, and I hope that as more and more open-source software succeeds, IT departments will learn to let more open-source practices through their cathedral doors. It can only make our lives easier.

Read more...

Friday, March 9, 2007

Code Read 7 - David Parnas on Star Wars Software

In 1985, David Parnas resigned from his position on a panel convened by the Strategic Defense Initiative Organization, which was overseeing the "Star Wars", or SDI anti-ballistic missile defense program. Along with his resignation he submitted several short essays explaining why he thought the software required for Star Wars could not be built, at that time or in the foreseeable future. Those essays were later collected and published, and are the subject of Code Read 7.

Some of the essays deal with issues such as using SDI to fund basic research (an idea in which he did not believe), or why AI would not solve the problems, but his core arguments focus around two main themes.

1) Software can not be reliable without extensive high-quality testing, and such testing could not be done for SDI.
2) Our ability to build software is insufficient to build SDI.

Scott Rosenberg, the author of Code Reads, seems to be asking what, if anything, has changed since 1985. Sadly, the answer is "Not much". Indeed, Parnas's paper is the most current of all the Code Reads sources in its view of the software industry. It could have been written in 2007 just as easily as in 1985.

Testing is important

Of his two main themes, the first is the easiest to discuss. Basically, software is built broken. It needs to be tested before it works smoothly enough to be considered functional. Nobody has ever done it otherwise, despite their best efforts. Some have come close, but many have failed completely. All software needs to be refined, in situations very similar to its real usage, before it can be considered reliable. This is not news to anyone. Parnas makes it very clear how difficult it will be to do this with SDI.

Even if you exhaustively work to prove each component correct, or test each component extensively as you build it, the resulting system is still not trustworthy until it has been tested.

"If we wrote a formal specification for the software, we would have no way of proving that a program that satisfied the specification would actually do what we expected it to do. The specification itself might be wrong or incomplete." - David Parnas


A classic, and tragic, example of this problem is the Mars Climate Orbiter. Despite a rigorous testing process, a software error - a mismatch between the imperial units one component produced and the metric units another expected - still caused the loss of the probe.
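The Orbiter's failure mode can be illustrated with a toy Python sketch (the functions here are invented for illustration, not the actual spacecraft code): each component is correct against its own specification, and would pass its own unit tests, but the specifications disagree about units.

```python
LBF_TO_N = 4.448222  # pound-force to newtons

def thruster_impulse_lbf_s(force_lbf, seconds):
    # Correct per its own spec: returns impulse in pound-force seconds
    return force_lbf * seconds

def trajectory_delta_v(impulse_n_s, mass_kg):
    # Correct per its own spec: expects impulse in newton-seconds
    return impulse_n_s / mass_kg

# The integration bug: impulse in lbf*s fed where N*s is expected.
# Every component "works", yet the combined answer is off by ~4.45x.
wrong = trajectory_delta_v(thruster_impulse_lbf_s(10.0, 5.0), 100.0)
right = trajectory_delta_v(thruster_impulse_lbf_s(10.0, 5.0) * LBF_TO_N, 100.0)
```

No amount of testing either function in isolation reveals the problem; only testing the assembled system, against realistic usage, can.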

Software is hard

His arguments about our ability to build software go along three basic steps:

- Software is harder than other things we build.
- The way we build programs ensures there will be bugs.
- There does not seem to be hope for a much better way to build software.

Dijkstra, Brooks, and Knuth (as quoted in previous posts) have explained many reasons why software is hard. Parnas provides another reason - the discontinuity of software. He compares the structures of analog hardware, digital computer hardware, and software, and argues that since software is discontinuous, and has a large number of discrete states, it is much less amenable to mathematical analysis. This analysis is the main reason why non-software engineering projects are reliable.

For example, a structural member in a bridge has two states, "intact" and "failed". The behavior of the "intact" state is well understood: we have good mathematical models for the part's deformation under load, response to temperature, resistance to wind or water, degradation over time, and so on. The transition between "intact" and "failed" happens under fairly well understood circumstances. And we mostly just hope the "failed" state never happens. The same pattern of logic can be applied to almost all parts of the bridge.

Software systems, on the other hand, have many components, each with generally poorly understood behavior (compared to physical engineering), and many states. Indeed, most software approaches what we now call "chaotic" behavior. Although it does not always fulfill all three formal requirements of chaos, most software comes quite close. So on top of the layers of complexity, and depth of scale, most software is also, for practical purposes, chaotic.

How we build software with bugs

We try to manage this complexity by creating a logical model which we can use to break out smaller components, which themselves are broken into smaller components, and so on, until we are writing step-by-step instructions.

But this process is hard to do well. While we can write precise formal specs, "it is hard to make the decisions that must be made to write such a document. We often do not know how to make those decisions until we can play with the system... The result will be a structure that does not fully separate concerns and minimize complexity."

And "even in highly structured systems, surprises and unreliability occur because the human mind is not able to fully comprehend the many conditions that can arise because of the interaction of these components. Moreover, finding the right structure has proved to be very difficult. Well-structured real software systems are rare."

Additionally, we have the difficulty of translating those structures into code. Generally, we write programs as step-by-step algorithms, "thinking like a computer". We can sometimes do this in a top-down fashion, as Dijkstra proposed in his "Notes on Structured Programming", but even that uses a "do-this-then-do-that" approach. Various attempts have been made to find other ways, but none has found wide success.

"In recent years many programmers have tried to improve their working methods using a variety of software design approaches. However, when they get down to writing executable programs, they revert to the conventional way of thinking. I have yet to find a substantial program in practical use whose structure was not based on the expected execution sequence."

This provoked a heated discussion at Code Reads, but I think the fundamental point is that while other techniques exist and do provide real benefit in many cases, they are all ways of working with a larger problem, of structuring the overall approach. At its finest level, software is algorithmic, and algorithms are specified in sequential steps.

There are generally two main non-algorithmic ways to program. The first is to relieve ourselves of some of the work of creating complex sequential algorithms by specifying rules and having some system to implement those rules. And the second is formally isolating non-related portions of an algorithm so they can be run in parallel.

The first case, using rules, is simple enough in restricted domains, but such systems become as complex as general-purpose programming languages in more general cases, and one eventually finds oneself writing rules to describe an underlying algorithm.
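To make that concrete, here is a toy sketch of my own (not any particular rule system): rules are condition/action pairs, and an engine applies them to a working state until none fire. Note that the engine itself is still an ordinary step-by-step sequential algorithm.

```python
# A minimal rule-based "program": each rule is a (condition, action) pair.
# The engine applies rules to a working state until no rule changes it,
# but the engine itself is a plain sequential algorithm.

def run_rules(rules, state):
    """Apply rules repeatedly until no rule changes the state."""
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            if condition(state):
                action(state)
                changed = True
    return state

# Example: compute an order total with discount rules.
rules = [
    # Large orders get a 10% discount (applied once).
    (lambda s: s["subtotal"] > 100 and "discount" not in s,
     lambda s: s.update(discount=round(s["subtotal"] * 0.10, 2))),
    # Total = subtotal minus any discount (applied once).
    (lambda s: "discount" in s and "total" not in s,
     lambda s: s.update(total=s["subtotal"] - s["discount"])),
]

state = run_rules(rules, {"subtotal": 120.0})
print(state["total"])  # 108.0
```

Even in this tiny example, the rules are really just a disguised description of a two-step algorithm: compute the discount, then compute the total.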

The second case works quite well, but each of the many parallel computations still ends up being done with the same old step-by-step sequential execution.

And in both cases, we continue to make the same fundamental mistakes about the overall structure of our system because we don't fully understand its behavior.

Better tools and techniques

Lastly, Parnas addresses the hope that improvements in methodology or tools will alleviate these problems. At the time he was writing, he saw four main threads in tool and methodology improvements (I'm combining two of his essays):

1) Structured programming
2) Formal abstraction
3) Cooperating sequential processes
4) Better housekeeping tools

Back in the 1970s, according to Parnas, this was academic "motherhood" - nobody could object. Today, in my experience, this view is industry-wide. A few people will argue that we've all been brainwashed and are now blind to alternatives, but even in the portion of our community most open to new ideas, these four are still the dominant paradigms.

Parnas argues that we are now in the days of incremental improvements in software, rather than rapid and dramatic advances, saying "Programming languages are now sufficiently flexible that we can use almost any of them for almost any task." And even things like non-algorithmic specifications still suffer from the same problems as writing code: "...our experience in writing nonalgorithmic specifications has shown that people make mistakes in writing them just as they do in writing algorithms."

Conclusion

One could argue that Parnas is a dead-end thinker, saying nothing more than that the status quo is bad and is all we will ever get. But we must remember that Parnas is talking about the most ambitious and complex software project ever conceived, and saying that that particular project is beyond our capabilities, not that software in general is beyond our capabilities.

However, I do think he misses a possible way out. I say "possible" because I do not know if it is a real solution, or just a fantasy. I think we need to improve the way we think about our own solutions. Then we can build systems that are less prone to the kinds of complexities which befuddle us.

To use an analogy, after the Tacoma Narrows Bridge disaster, civil engineers added some new factors to the way they think about bridges: "wind resistance" and "harmonic effects". Software engineers are still looking for what those factors are. I do not believe that we have a mature set of factors to consider when we design software, and I do believe that we can discover what those factors are.

In fact, I hope this blog will help me discover them, and I'd love to hear any suggestions.

Read more...

Monday, February 19, 2007

Code Read 6 - Mitch Kapor's Design Manifesto

This installment of Code Reads takes a huge leap from the world of academia and the historic foundations of our discipline to something more recent: Mitch Kapor's Software Design Manifesto. In this passionate essay, Mitch Kapor extols the virtues of designing software with the user experience in mind, and advocates developing a profession of software design.

Today the phrase "software design" (like "architecture") has come to have so many definitions that Humpty Dumpty would be grinning from ear to ear. So before we can comment on what Mitch Kapor had to say, we need to pay some attention to what he actually did say, and not what we read with our modern vocabulary.

First of all, he says "Software design is not the same as user interface design." However, he does go on to say that user interface design is an important part of software design. In his effort to wrench software design away from the pure engineers he repeatedly comes back to the importance of the user interface, and his prime motivation is a better "user experience", so it is easy to forget that he is talking about something larger than the UI.

He fervently opposes taking a purely engineering view of a program. "One of the main reasons most computer software is so abysmal is that it’s not designed at all, but merely engineered. Another reason is that implementors often place more emphasis on a program’s internal construction than on its external design..."

When he talks about software design, he is talking about the "metaphor" of the software. When describing his experience with good design, he says "It is the metaphor of the spreadsheet itself, its tableau of rows and columns with their precisely interrelated labels, numbers, and formulas ... for which [Dan Bricklin] will be remembered".

A modern example of a successful metaphor is the GUI windowing system that almost all of us know and love. It was not any individual UI that gave this approach its staying power - it was the simplicity and utility of the metaphor.

Unfortunately, it is an almost universal experience that finding such good metaphors is hard work. Finding non-programmers who have the skills required to do this is hard: "Many people who think of themselves as working on the design of software simply lack the technical grounding to be an effective participant in the overall process. Naturally, programmers quickly lose respect for people who fail to understand fundamental technical issues. The answer to this is not to exclude designers from the process, but to make sure that they have a sound mastery of technical fundamentals, so that genuine communication with programmers is possible." I have had quite a few arguments about software design with people who lacked even an accurate technical vocabulary, let alone a solid grounding.

However, the converse is also often true - many good programmers lack the leadership and aesthetic skills to make good designers: they often veer towards the stereotypically dictatorial auteur, or allow the engineering of the software to usurp the design.

People who try to play this role must have a very broad background, and depth of insight in a variety of fields, as well as strong leadership skills. Mitch Kapor suggests, "technology courses for the student designer should deal with the principles and methods of computer program construction. Topics would include computer systems architecture, microprocessor architectures, operating systems, network communications, data structures and algorithms, databases, distributed computing, programming environments, and object-oriented development methodologies." To which I would add "graphic design, industrial design, ergonomics, the study of perception, and psychology". Then I would also add some management training, and small-group dynamics, because good designers must also be good leaders - able to convince people of their point of view, and inspire collective effort.

We also clearly need some technical development in the field. "In both architecture and software design it is necessary to provide the professional practitioner with a way to model the final result with far less effort than is required to build the final product. In each case specialized tools and techniques are used. In software design, unfortunately, design tools aren’t sufficiently developed to be maximally useful." In fact, I would say they are only just beginning to be developed to be minimally useful - there is nothing very good right now for modeling the metaphor, or design of a program, without actually building the program.

Despite these obstacles, I am a firm believer in the ability of good design (especially a good metaphor) to make the difference between a Chrysler Building and a Robert Taylor Homes. This is, in fact, what I am hoping to learn about as I continue my education, and I will share what I find here on this blog.

Read more...

Saturday, February 17, 2007

Code Read 5 - Knuth's "Structured Programming with Go Tos"

Code Reads 5 takes us from Dijkstra to Knuth, and his humorously titled "Structured Programming with Go To Statements". In it, Knuth addresses how the field seems to have missed the real point of Dijkstra's structured programming, and instead focused on mindlessly eliminating "go to" statements. Knuth quotes Hoare when saying the most important point is "the systematic use of abstraction to control a mass of detail", not eliminating a particular programming tool.

The main topic of Knuth's article is not abstraction, but rather examining the uses of "go to" that improve a program, and how newer (in 1974) language constructs (like break statements and case statements) can almost always be used to express the meaning of those "go to" statements. In fact, he ends by saying that although he would personally like to keep "go to" available in higher level languages, he would probably never need to use it.

In general, this piece feels a bit dated, which is a measure of its own success - Knuth said that his aim was to put the "go to" controversy to rest by showing it to be moot, which, except for a few lifelong partisans, is where it is now. And many of the structures he discusses (using break to exit from loops, early versions of case statements) are common in almost all languages. So it makes a fitting end to Code Read's exploration of the roots of language semantics for abstraction.

What I found most interesting was his study of optimization, which is the process of changing a program so that it does its work more efficiently - almost all of his "go to" examples are optimizations. Here he makes some very relevant points, but I think one of his observations is particularly dated.

His most important point is something which many beginning programmers still do not understand: "Premature optimization is the root of all evil". We should write our programs in the clearest way, he says, so they will almost certainly be correct. Then, if there are performance problems, we should analyze the code to find where those problems are, and only optimize those sections. Optimizing first is a surefire way to render a program difficult to understand and likely to contain bugs.

He goes on to give some good examples of fine-grained optimization techniques - things all programmers should be aware of, but will only rarely use.

Where I feel his view has become outdated is how he relates to small optimizations. "In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal". I think this is no longer true for computer science.

People often ask why building software is not like building bridges. This is one reason. Nobody expects a footbridge to carry a truck. Yet we routinely use the same software to handle hundreds of items that we use to handle hundreds of millions of items. So while a 12% improvement in bridge strength is something meaningful, a 12% improvement in software performance is not nearly so impressive. (In very specific cases, it might be meaningful, but generally it's not.)

Suppose you're writing part of a program which is interactive - that is, a person will be waiting for the result. And suppose that it takes 60 seconds. A 12 percent improvement is just over 7 seconds, which means the optimized code still takes almost 53 seconds. Still too long to wait, and not long enough to get a cup of coffee. In fact, I doubt most people would notice the difference, since we are much more susceptible to psychological factors when waiting for computers. (Just one example of this.)

In Knuth's day, people had a different relationship with computers, so this kind of performance gain was more relevant. Not so today. In fact, these days algorithm performance is typically described using "big O notation", which loosely quantifies how an algorithm's running time grows as the size of its input changes. And it provides what is by and large a more meaningful estimate of the performance of an algorithm.

To understand this, it is helpful to use an example. Suppose we want to compare three ways of sorting a list containing 1000n names (so for 1000 names n = 1, for 2000 names n = 2, etc). We analyze the algorithms as Knuth did in his paper, and come up with the following formulas. The first method takes 6(2^n) milliseconds, the second takes 3(2^n) + 15n milliseconds, and the third takes 25n² milliseconds. From a simple test of 1000 names, with n=1, we get:

Method 1: 12 milliseconds.
Method 2: 21 milliseconds.
Method 3: 25 milliseconds.


We might decide that method 1 is the best way to go. But we'd be wrong, if we wanted to sort bigger lists. Using big O notation makes that clear.

In big O notation, we ignore any constant multiplication, and only take the most rapidly increasing term. So we say method 1 is O(2^n), method 2 is also O(2^n), and method 3 is O(n²). This is the essential information about which is faster, and since 2^n is much larger than n² for large n, we know that the third technique is actually the fastest in general. For example, when n=10:

Method 1: 6.1 seconds.
Method 2: 3.2 seconds.
Method 3: 2.5 seconds.

This is exactly reversed from what we had before - method 3 is the fastest and method 1 is the slowest. And things get much worse for methods 1 and 2 when n=100:

Method 1: 240,000,000,000,000,000,000 years
Method 2: 120,000,000,000,000,000,000 years
Method 3: 250 seconds

At this scale methods 1 and 2 are completely useless, leaving method 3 as the only option.
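The cost formulas above are easy to check in code (these are the formulas from this made-up example, not anything from Knuth's paper):

```python
# Cost models for the three hypothetical sorting methods, in milliseconds,
# where the list holds 1000*n names.
def cost1(n): return 6 * 2**n           # O(2^n)
def cost2(n): return 3 * 2**n + 15 * n  # O(2^n)
def cost3(n): return 25 * n**2          # O(n^2)

for n in (1, 10, 100):
    print(n, cost1(n), cost2(n), cost3(n))

# At n=1 method 1 wins (12 vs 21 vs 25 ms); at n=10 method 3 wins
# (6144 vs 3222 vs 2500 ms); and at n=100 the exponential methods need
# on the order of 10^31 ms -- hundreds of quintillions of years.
```

The constants (6, 3, 15, 25) dominate only for tiny inputs; once n grows, the shape of the curve is all that matters, which is exactly what big O notation captures.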

To tie all of this back to Knuth's paper, almost all of his cases for the use of "go to" statements involve optimizations which do not change the big O performance of the program at all. So while there are programs out there where this is important (certain key inner loops in operating systems or scientific software), most of us will always be able to rely on library calls where someone else has already done this kind of super-tight optimization, and we should focus solely on selecting the appropriate abstractions for our algorithm. That is, we need to select abstractions that give us the best big O performance. And using "go to" or not has nothing to do with this kind of optimization.

Read more...

Tuesday, February 13, 2007

What is new under the silicon sun?

For a student of software design (like myself), a recent post by Peter Van Roy (perhaps this Peter Van Roy) at Lambda the Ultimate was quite interesting. He posited that there was a Golden Age of computer science from 1964 to 1974, gave a list of 11 major developments from that era which seem to have set our direction and haven't been replaced by anything better, and then asked what people thought. Although I waded in with a question myself, what I think is much more interesting are the things everyone else suggested he missed. When I run out of meaningful things to say, probably fairly soon, this post will be quite helpful for my further studies.
Read more...

Sunday, February 11, 2007

Code Read 4 - Dijkstra's Notes Part Two

Edsger W. Dijkstra's "Notes on Structured Programming" is definitely a meal. It's quite a lot for one Code Read. So I broke it into two parts. First, we had the appetizers; now here's the main course.

Program families and evolving software

Dijkstra points out that programs exist in large families of similar programs. When we make changes, we are transforming a program from one member of the family to another. Thinking of programs in this way makes a program's structure even more important - because we would like the families to share not just code, but also correctness. If the structure is confusing, changing the program is difficult because lots of code changes, and all that code needs to have its correctness reexamined. A simply structured program tends to require changes in areas that are already well-isolated, so less code changes, and we need to think less about the correctness of the new code and the code that depends on it. (Similarly, although Dijkstra does not make this point explicitly, a layered structure can make it clearer which areas rely on the changed code, so they can also be verified easily.)

Moreover, Dijkstra points out, from the structure of a program we can understand not just how it works, but also what kind of changes will be easy to make - what other members of its family are close. This struck me as another important point, since a great deal of time and effort is spent managing and responding to change.

Specifying clarity

Starting to get to the meat of his subject, Dijkstra spends a chapter discussing subroutines, or what has come to be called "structured programming". He sets out to avoid "motherhood" statements that are unobjectionable but hopelessly vague, and to be specific about what makes programs clear, easy to understand, and easy to modify.

Leaving aside his discussions of the fine points of programming semantics, he makes several key arguments about subroutines:

  • subroutines should not be used to simply shorten code, but rather to create a reliable abstraction
  • properly used, they become helpful in limiting the scope of changes, by allowing us to replace the implementation of a subroutine with a different implementation
  • they help clarify at a particular moment in time which associations are still valid between the state of the machine and the meaning we assign to that state
  • they allow work to be divided into small units without having to re-invent the wheel for each unit
What is most relevant about this for us is not necessarily the application of these ideas to subroutines, but to any software design construct - objects, aspects, models, etc. Many novice programmers get so excited about a particular technique like Object Oriented programming that they miss the fundamental reasons for using it in the first place - which Dijkstra has helpfully laid bare for us.
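To sketch the second of those bullets in modern terms (a toy example of my own, not Dijkstra's): if callers depend only on a subroutine's contract, its implementation can be replaced without touching them.

```python
# Callers depend only on the contract "return the median of a non-empty
# list of numbers", not on how it is computed.

def median(values):
    """Median by full sort: simple and obviously correct."""
    s = sorted(values)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 == 1 else (s[mid - 1] + s[mid]) / 2

def report(samples):
    # This caller never needs to change if median() is reimplemented
    # (say, with a faster selection algorithm) -- only the abstraction's
    # contract must continue to hold.
    return f"median = {median(samples)}"

print(report([3, 1, 4, 1, 5]))  # median = 3
```

The scope of a change is limited to the one subroutine body, which is exactly the benefit Dijkstra is describing.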

Also, it's important to note that Dijkstra is not describing the "lego-like" building blocks we'd all like to have, and which we may never get. He never describes having a large library of such objects and simply selecting and combining them to build programs. When he does refer to such "lego-like" tools, he only mentions the most basic of abstractions, things like "integer". And he seems to confine these largely to the realm of predefined hardware and language constructs:

"although these facilities have to be provided in some form or another - providing these facilities falls outside the scope of the programmer's responsibility and also that the programmer will accept any reasonable implementation of them."

In other words, we should not be too picky about our tools, since these tools will probably be equally suited to our task. Although certainly there are cases where this is untrue, the message seems to be that there are general libraries available for the basic things we do as programmers, and that barring specific needs, one will work as well as another. But Dijkstra says nothing about more complex libraries of building blocks - instead he introduces, in his next section, the image of a string of pearls.

The string of pearls

Having built his argument carefully, Dijkstra comes to his grandest image - the layers of abstraction in a program as a string of pearls.

In this metaphor, designing a program becomes a process of creating (at least intellectually) a somewhat larger set of pearls than we need, and then selecting the ones that we will finally use to build our program. Other programs in the same family would use a different subset of pearls, in a different order.

When we need to modify a program, we can replace a pearl, and some of those below it, with a new set of pearls. Again, Dijkstra is not so naive as to suggest that one could simply replace one pearl with another and everything would work out. Instead, he saw that along the string there are various concepts used by the pearls, many of which will be shared among different pearls. Changing those concepts will require replacing all of the pearls that use them.

By looking at how many concepts span which pearls, one can get a sense of the complexity of a particular program (or design) - more complex programs will have "thicker" and "longer" weaves of concepts. Such thick programs will be more difficult to create, understand, and debug, because they require more mental work to understand all of the pieces woven together. Thinner weaves are easier to understand, since there is less to hold in our heads at one time.

Dijkstra's image is not the image of massively reusable bits of code, but rather a way to think of the complexity of a program - almost to measure it - and a way to compare alternatives. And the central concept of this, as in almost all of Dijkstra's writings that we've read in Code Reads is this - simplicity is brought about by isolation of details through abstraction.

It's quite an important idea. Almost every development in software design techniques since then has been an attempt to provide this isolation and abstraction. And it has been very successful, in that we now regularly write working programs tens or hundreds of times as long and complex as what Dijkstra was talking about. Of course, sometimes they do not work, so let's see what comes next in Code Reads.
Read more...

Saturday, February 10, 2007

Code Read 4 - Dijkstra's Notes on Structured Programming

Edsger W. Dijkstra's "Notes on Structured Programming", which is Code Read 4, struck me as shockingly prescient - or perhaps it is just that we creators of software are very slow to learn these lessons. Surely something written almost 40 years ago should feel more dated than this, and we should have learned or discarded all the lessons in it. But a careful reading shows that, as we have seen before, Dijkstra was one smart cookie.

Overall, he starts by making the point that techniques which work for small programs do not scale to large programs, and that therefore we need to be careful how we approach large programs. He then gives some techniques and examples of how to actually build large programs. Finally, he describes a way to think about programs that makes it easy to understand their degree of complexity and the scope of required changes. In all of these sections I found useful insights for my own work.

Misunderstanding Dijkstra

My experience when reading the notes was to continually mistake what Dijkstra was saying for something that has become a hot-button contemporary issue. (From the comments to Scott Rosenberg's post, I think this is a common experience.) This is similar to Dijkstra's own experience with his famous "Go To Statement Considered Harmful" paper (see my earlier post), which he later described as being misunderstood by people who took one relatively superficial idea to be its entire point. But a careful and pensive read reveals surprising depth to what Dijkstra was saying, and what are to me some new (or rather forgotten) insights.


Differences between small, medium and large programs

Dijkstra's central point is that what works for small programs does not work for large ones, and that only through great care can we successfully work with large software. Given that he described "page", "chapter", and possibly "book" length programs as large, whereas now even a mid-sized project might have several hundred thousand lines, and thus thousands of pages of code, we should recognize that we are dealing with yet another layer of scaling.

What we will find, though, is that the issues Dijkstra was dealing with are precisely the issues we now face in our day-to-day work of designing and coding software. So even though some of his techniques might not work for us, and new techniques have been invented, his insights are at the core of what has driven our field for the last 40 years, and are still quite valuable.


Demonstrating correctness and proofs

Although a lot of what he says early in the paper is a restatement of things we've covered in the first Code Reads, some points stand out. Ensuring that a program is correct is one of the most important ones. Dijkstra gives a complete and mathematically rigorous proof of a very simple program. It takes several pages, is largely an exercise in logical reduction, and is mind-numbingly boring. Dijkstra goes on to say that such proofs are not the point - that the great difficulty and length for even a simple program is in fact the point he was making. He also shows how it is impossible to completely test a program. So we must be able to convincingly argue the correctness of a program without being able to rigorously prove it, or test every possible case. To do this, he says, we must structure our programs so that their correctness is clear. This is a repeated theme of his - keep it simple, make it clear. It is something that takes hard intellectual work on our part, but it pays off.

He also makes a very valuable point about abstractions - when using a tool such as a library or a sophisticated programming language, it is very important to know the operational limits of that abstraction. For instance, you cannot always add two integers and assume the result will be correct, because the sum could be larger than the integer representation will allow. This is one of the key points about using such layers of abstraction - although you do not need to know how it works, you do need to know exactly what it does and what its limits are. This is such an important point it might be worth its own post in the future.
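For example, here is a sketch of the integer limit in action. Python's own integers are arbitrary-precision and never overflow, so this simulates the fixed-width 32-bit arithmetic Dijkstra's point applies to:

```python
# Simulate 32-bit two's-complement addition, the kind of fixed-width
# arithmetic where "add two integers" has an operational limit.
def add_int32(a, b):
    s = (a + b) & 0xFFFFFFFF            # keep only the low 32 bits
    return s - 0x100000000 if s >= 0x80000000 else s

INT32_MAX = 2**31 - 1                   # 2147483647

print(add_int32(1, 2))                  # 3: fine, within the limits
print(add_int32(INT32_MAX, 1))          # -2147483648: silent wraparound!
```

The abstraction "integer addition" works perfectly - right up until you exceed its stated limits, which is exactly why those limits must be part of what you know about it.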

Understanding and comparing programs

Dijkstra starts leading to his goal of program design by talking about how we understand programs. I think we all have experienced that it is much harder to understand someone else's code than it is to write our own. One touchstone of a really good software designer is that their code and design are easy to understand. Dijkstra looks at exactly what coding techniques make code easier to follow, and also proposes some techniques for figuring out what was happening after a critical failure (a "core dump"). Remembering that this was written almost 40 years ago, I am frankly awed. Even though I doubt he invented it all, almost everything he supported is now so commonly accepted as best-practice that most of us have forgotten there ever was even a question. The basics of programming, "if-then-else", "case" statements, "while-do" and "repeat-until" loops, and stack-based subroutines are all there. The only thing he proposed which has not caught on widely is the idea of keeping an absolute count of the number of times each loop has been executed. And his reasoning is very important - he supports all of these because they make the structure and flow of the program easier to understand.

Dijkstra also points out that it is hard to compare programs that are anything but superficially different, and that anything other than such superficial differences constitutes a design decision. He's foreshadowing the rest of the paper, so this will have to wait a little bit.

It is in this section, and the next, that the limits of what Dijkstra was talking about begin to be felt. Many of the more modern techniques for coping with large systems evolved as approaches for coping with the issues Dijkstra addresses in this paper, and for much larger systems. The value of Dijkstra's paper is that it identifies the core issues, not wrapped in the theory of a particular solution.

Actually writing code

A lot of the paper is devoted to how one actually writes code - an examination of the thought processes involved in going from the English instruction "print a list of the first 1000 primes" to the software instructions that actually do just that. (He does this twice, the second time with an equally pedagogical problem.) Although I tend to prefer a more intuitive approach, his process of "step-wise" construction of code is a very good technique to use if one gets stuck. Moreover, it is easily generalized and applied to designing larger software systems.
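A sketch of the step-wise idea applied to Dijkstra's own example (my code, not his): first write the program in terms of an abstraction you have not yet implemented, then refine that abstraction separately.

```python
# Step-wise refinement of "print the first 1000 primes".
# Step 1: express the program in terms of an abstraction we haven't
# implemented yet (is_prime), postponing that decision.
def first_primes(count):
    primes = []
    candidate = 2
    while len(primes) < count:
        if is_prime(candidate):
            primes.append(candidate)
        candidate += 1
    return primes

# Step 2: refine the postponed abstraction. Trial division is enough
# here; a faster implementation could later replace it without
# touching step 1 at all.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(first_primes(1000)[-1])  # 7919, the 1000th prime
```

Each refinement step can be checked on its own terms, which is what makes the overall construction manageable.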

Near the end of the paper, once he has laid the foundation for it, he makes a very interesting point which comes from his step-wise technique: "Programming (or problem solving in general?) is the judicious postponement of decisions and commitments!" By working with unimplemented abstractions, we are free to understand the problem better by the time we need to actually decide on an implementation. This is a valuable idea, which can be applied almost every day. Before you decide that everything looks like a nail, see if maybe you have more than a hammer.

There's a lot more to say about this, but this post is already quite long, so I'm going to put this one out as is, and say more in my next post.

Read more...

Monday, February 5, 2007

Code Read 3 - The Humble Dijkstra

The third Code Read that Scott Rosenberg chose was another Edsger Dijkstra essay - this one called "The Humble Programmer". Vastly oversimplifying, Dijkstra is making this very important point: despite all of our achievements, we are limited creatures, and our intellect can easily be overwhelmed by our own creations. Particularly as access to computing power increases, and our expectations of its ability increase, our current approach to software will lead us into an inescapable swamp of unmaintainable and horrendously expensive computer systems.

More than thirty years have passed since he said this, yet we are still wandering around the edge of that swamp. Dijkstra does give us a way out - his argument: an appropriate understanding of the system we are building will make it easy to build and maintain. Furthermore, we have the tools to achieve this understanding.

The most important point, he says, is to "...confine ourselves to the design and implementation of intellectually manageable programs".

After reading the comments at Code Read 3, I think this point created some misunderstanding, which can easily lead people to miss the value of this essay. One might take this to mean that we should avoid hard problems, but we must believe that Dijkstra was not so simple as to suggest this. The point is that when building programs, we must choose an intellectually manageable approach, and reject the apparently easier approach of just starting to write code and seeing what happens. In other words, Dijkstra is telling us to make our software well organized, or not at all.

Big deal, one might say. But in my experience this is the single most common root of failure and near failure. It sounds simple - if you don't understand it, don't build it. Yet all too often, we start to build things with a superficial understanding of the problem, instead of taking the time to think through our solution a bit more.

Furthermore, Dijkstra provides some practical steps we can use (which correspond to one or more of his six arguments) in order to make sure what we're doing is intellectually manageable.

The first (arguments one, two, and three) is perhaps the most confusing and most powerful. Dijkstra urges us, when thinking of a high level design of a program, to start by thinking of how we would prove the program correct, and base our design on the structure of the proof. The confusion here is that Dijkstra was not referring to a "proof" in the way academic computer scientists understand "proof of correctness", nor to the way a high-school student understands a geometry proof, nor to something like test driven design (although all can be valuable, in the right context). Rather he was referring to "proof" the way a mathematician understands proof - the first step of which is a description of the problem in a way that clarifies the most relevant points. The most beautiful proofs in modern mathematics are treasured by mathematicians not because of their clever application of obscure logic, but because they provide a method of looking at a problem that makes the solution obvious.

This is what Dijkstra is promoting - finding an organizational structure for your software that makes it obvious what the code needs to do.

The discipline lies in not embarking on large projects until we have found this way of looking at things. It is admittedly very difficult, but also very important. In fact, I would argue that without this view of the problem, a project is doomed to failure or near failure. Discovering that we don't really understand the problem in the middle of a project can get very expensive very quickly, whereas spending the time at the beginning is much more cost effective. It's like sailing across the ocean - better to make your plans in port than to discover you forgot something halfway between San Francisco and Honolulu.


The second practical step Dijkstra proposes is one that we can use to help achieve this simplifying view (argument four). It is to use abstraction:

We all know that the only mental tool by means of which a very finite piece of reasoning can cover a myriad cases is called "abstraction"; as a result the effective exploitation of his powers of abstraction must be regarded as one of the most vital activities of a competent programmer. In this connection it might be worth-while to point out that the purpose of abstracting is not to be vague, but to create a new semantic level in which one can be absolutely precise. - EWD

When I first started programming, I worked on a project that had its own implementation of a hash table. We had to worry about hit rates, hash collisions, and so on - and it could only handle strings! Now, whether you're writing in Java and using a HashMap, or in Perl and using associative arrays (aka hashes), or in almost any other language, a hash table is available for free: it works simply and reliably, and for most data types. Likewise we think nothing of writing code which adds an integer value to a floating point value - yet this too was once a headache.

The key point of an abstraction here is to find ways that we can use A and B in the same way, freeing us from the intellectual work of keeping them distinct - like adding a float to an int. In a shopping cart, we might define a group of things that are "line items", all of which can have a price, a discount, a tax charge, and so on. Whether the thing is shipping or a widget, if we can treat it as a "line item", we will have made calculating and re-calculating the total much easier.


Argument five is our third practical step - using a good programming language. This is a very loaded subject, and many reasonable people feel it has been crushed beneath the weight of the ranting terabytes already written about it. But Dijkstra, as usual, has something quite profound to say about this:

Finally, in one respect one hopes that tomorrow's programming languages will differ greatly from what we are used to now: to a much greater extent than hitherto they should invite us to reflect in the structure of what we write down all abstractions needed to cope conceptually with the complexity of what we are designing. - EWD

This is, when it comes down to it, the strongest argument for or against a language - does it clearly reflect the structure of the idea behind the program, or does it obscure that structure? Almost all modern languages (C and its descendants, Java, Perl, Python, Ruby, and so on) can be used to clearly reflect the structure of the problem, and all are much superior to now-outdated languages such as BASIC or COBOL. However, unskillful use of these same languages can also create great obscurity. Dijkstra describes at length how our choice of language affects our thinking, which is quite true - but I believe the languages we use today are much more similar to each other than the choices he faced 30 years ago, and we are now struggling less with languages and more with design paradigms (and that's definitely another post).

Finally, the fourth step - make your structure hierarchical (his sixth argument).

I do not know of any other technology covering a ratio of 10^10 or more: the computer, by virtue of its fantastic speed, seems to be the first to provide us with an environment where highly hierarchical artefacts are both possible and necessary. - EWD

This basically means to layer abstraction on abstraction. Actually, this is often described as a problem, and it can be a serious one. It's like the architecture of a building - if the layers are complementary and harmonious, the building is successful. If the layers are put together willy-nilly, the result is a rickety structure prone to collapse. This is really where craft, or skill, comes in, which is the theme of this blog. But for now we're just exploring the foundations. And Dijkstra's instruction is clear: carefully use layers to make our creation intellectually manageable.

This post has been longer than I hope will be usual - and it has taken longer too. But this essay of Dijkstra's made quite an impression on me, and it has quite a lot of meat on it.


Tuesday, January 30, 2007

Code Read 2 - Dijkstra on Goto

Edsger W. Dijkstra, a giant of computer science, wrote an article long ago arguing that the "goto" statement was bad for programmers and the programs they wrote. Week 2 of Code Reads covers this article.

The statement "goto is bad" is exactly the kind of attention-getting statement that provokes internecine fights between partisans of various languages. Unfortunately, the flame wars usually miss the most relevant points. I'm definitely in the "no silver bullet" school - in fact, I'm in a sub-sect of that school that says "your choice of language, in and of itself, is almost irrelevant to the success of the project". Obviously, Logo would be an inappropriate choice of language for building a web site, and for parsing log files Perl is a lot easier than Java. But the chief benefits of one language over another are using the skills of the people available, fitting in with a larger organization, and the availability of tools and libraries suitable for the job - not the language constructs.

So if language constructs are less than relevant, what is the point of Dijkstra's article? I found Joel Neely's comments in the Code Reads discussion section particularly insightful: the problem with "goto" is that it breaks the metaphors we develop to organize our code.

Dijkstra himself makes this fairly clear:

"My first remark is that, although the programmer's activity ends when he has constructed a correct program, the process taking place under control of his program is the true subject matter of his activity, for it is this process that has to accomplish the desired effect; it is this process that in its dynamic behavior has to satisfy the desired specifications." - EWD

I take this to mean that code is not the end goal here - correct execution is the goal. Or to use modern language - the business logic must solve the business problem. That should be the focus of our activities, not the code itself.

"My second remark is that our intellectual powers are rather geared to master static relations and that our powers to visualize processes evolving in time are relatively poorly developed. For that reason we should do (as wise programmers aware of our limitations) our utmost to shorten the conceptual gap between the static program and the dynamic process, to make the correspondence between the program (spread out in text space) and the process (spread out in time) as trivial as possible." - EWD

Although Dijkstra is saying a lot more than just this, the key point here is that we should make sure that the way we build a program bears a simple correspondence to the way it works, and more specifically, to a way we can think about it.

Which brings us back to the chief problem with "goto" statements - they tend to obscure the underlying logic of the program, instead of making it more apparent. They may make writing code marginally easier, but they make building software much more difficult.

Code Read 1 - Frederick Brooks and the Mythical Man Month

Since Scott Rosenberg seemed to have so much worth reading in his book, Dreaming in Code, I've decided to spend a few days catching up on the Code Reads section of his blog. He provides weekly links to various original sources of the field, which are followed by a discussion.

Week 1, discussing The Mythical Man-Month by Frederick Brooks, starts the series out with what is undoubtedly one of the most important books in the field. Most famous for Brooks's Law - "Adding manpower to a late software project makes it later" - the book actually has quite a lot more to say about how to build software without getting caught in the "tar pit" (his image) of perpetually slipping schedules.

I'm not going to go into too much detail now, because I'm still re-reading it, but I am particularly struck by "conceptual integrity", something he argues is very important to a successful software project. I think this is a key point in something I plan to explore a lot more in this blog - we usually build software using a set of basic abstractions, or a model. Whether the model is appropriate and flexible can make a huge difference in how the project proceeds. Succinctly: what makes a good object model?

More soon...

Monday, January 29, 2007

Looking for Skillfulness - Dreaming in Code

Just finished Dreaming in Code by Scott Rosenberg. Fascinating book - well written, with some very interesting things to say.

For those of you who haven't read it - read it. It's a great introduction both to the very difficult problems of building large software systems, and to many of the ideas that try to alleviate those difficulties. Moreover, Scott doesn't have an axe to grind, or a vision he's trying to proselytize - so it provides a reliable, unbiased survey of the best thinking out there.

Frankly, I was very inspired reading it. Not about any particular solution, because I too think that there is no silver bullet, but about the idea that we can assemble a toolbox of approaches for dealing with these difficulties, and perhaps identify some common core concerns, that will help guide us through any software project. What it comes down to, I think, is not developing better methodologies or programming languages (although these help) but rather developing better skills - finding out what those skills are, and honing them.

Skillful Software is an exploration of that - and a record of my experiences as I try to test these ideas in the field, so to speak.


© 2007 Andrew Sacamano. All rights reserved.