...

  • Define errors out of existence. Error cases are one of the primary sources of complexity in software; they create complexity both for the code that generates the error and for the code that must handle it, and they often result in subtle bugs where programmers forget to check for errors, resulting in runtime crashes when the errors occur. Instead, whenever possible, design code so that there is no error condition. For example, in the Tcl scripting language there is a command unset, which removes a variable. Unfortunately I made the mistake of implementing this command to throw an error if the variable doesn't exist ("why would anyone delete a variable that doesn't exist?"). However, it's fairly common for people to want to get rid of a variable that may or may not already exist; as a result, Tcl applications are littered with code that invokes unset inside a catch clause that ignores errors. In retrospect I should have defined unset to ignore nonexistent variables without complaining. This gives the command the behavior "make sure this variable no longer exists," which is simple and reasonable.

    Programmers often think that it's better to define as many error conditions as possible ("my code is really careful"), but this just creates a lot of complexity. I've even seen cases where methods require additional arguments that serve no purpose in the method except to allow for additional error checking; once the error checking is complete, the arguments are ignored! I would argue the opposite: design code with as few error conditions as possible. Wherever possible, design abstractions so that every possible combination of inputs is meaningful and reasonable: make your abstractions just "do the right thing". Or, said another way, handle as many errors as possible locally, but export as few errors as possible (this reminds me of the classic license plate sticker "think globally, act locally").
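    The unset mistake is easy to translate into any language. As a minimal sketch (in Python rather than Tcl; the dictionary and helper name here are illustrative, not part of any real API), the "error-free" design makes deletion a no-op when the entry is already gone, so callers never need a try/except around cleanup code:

    ```python
    def unset(table, name):
        """Ensure that `name` no longer exists in `table`.

        Removing a missing entry is a no-op rather than an error, so
        the method means "make sure this variable no longer exists"
        instead of "delete this variable, which must exist".
        """
        table.pop(name, None)  # pop with a default never raises KeyError

    variables = {"x": 1}
    unset(variables, "x")   # removes the entry
    unset(variables, "x")   # already gone: still fine, no error raised
    ```

    Compare this with `del table[name]`, which raises KeyError for a missing entry and forces every caller to wrap the call in exception handling, exactly like Tcl code wrapping unset in a catch clause.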

...

  • When multi-threading, use monitor-style locking whenever possible. The reason for this is that monitor-style locks are simple and predictable: every monitor method acquires a lock at its beginning and releases it at its end. The alternative is to acquire and release locks on a much more granular basis inside methods. This approach may be slightly more efficient, but its irregularity makes it much more error prone. Threading is really hard to get right, so it's best to handle locking using a very simple and consistent paradigm.
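  The monitor pattern can be sketched in a few lines (a hypothetical Counter class in Python, shown only to illustrate the discipline): one lock per object, acquired at the start of every public method and released at its end, with no lock manipulation anywhere else in the class.

  ```python
  import threading

  class Counter:
      """Monitor-style locking: every public method acquires the same
      lock on entry and releases it on exit. No method touches shared
      state outside the lock, and no method juggles finer-grained
      locks in its body."""

      def __init__(self):
          self._lock = threading.Lock()
          self._value = 0

      def increment(self):
          with self._lock:       # acquired at the start of the method
              self._value += 1   # released automatically on exit

      def value(self):
          with self._lock:
              return self._value
  ```

  The payoff is that correctness can be checked locally: to verify any one method, you only need to confirm that its body sits inside the `with self._lock:` block, rather than reasoning about every interleaving of hand-placed acquire/release calls across the class.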