One CPU per project is not enough

Posted May 5, 2005

The latest issue of Embedded Systems Programming (which is a decent rag, especially because it's free) has a nice article that makes an interesting argument about the classic productivity problem in software development.

Briefly, big projects suck, and the bigger they are the harder they suck, due to the communications overhead of coordination among people getting in the way of actual programming — interruptions wreak havoc with quality coding, as we all know. Ideally, you want to mitigate this overhead by partitioning the project into well-defined segments ("You work on the UI, and I'll handle the database back-end", etc.) but this is often fairly hard to arrange ahead of time.
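The coordination cost behind this is the classic quadratic blow-up: if everyone has to talk to everyone, adding people adds communication channels much faster than it adds programmers. A quick sketch of the arithmetic (my own illustration, not from the article):

```python
# Toy illustration of why big teams drown in coordination:
# with n people all coordinating directly, the number of
# pairwise communication channels is n*(n-1)/2 -- it grows
# quadratically while headcount grows linearly.

def channels(n: int) -> int:
    """Pairwise communication channels in a team of n people."""
    return n * (n - 1) // 2

for n in (2, 5, 10, 20):
    print(f"{n:2d} people -> {channels(n):3d} channels")
```

Partitioning into small subteams with well-defined interfaces replaces most of those channels with a handful of interface agreements between teams.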

The article posits that one natural way to enforce clean boundaries between project subteams in the embedded world is to give each of them a completely separate small CPU to work with, running on the order of 10,000 lines of code, which is right in the sweet spot of what a small team can put together in a reasonable amount of time. This CPU might even be a separate core inside a larger ASIC or FPGA, of course, so the silicon cost can be minimal. You don't have to worry nearly as much about process contention, resource allocation, and real-time response as you do when everything is time-sharing on one monolithic big CPU. And individual components are easier to optimize and upgrade as needed.

I really like this idea, because it appeals to my sense of ownership as a programmer: I think people work better when they have a piece of the big picture that they can point to as their own responsibility, and their own accomplishment, too.

Anyway, it's worth a read, both for the central thesis and for a good introduction to the classic "mythical man-month" paradox in project productivity, if you haven't run across that before.