Part 1: In Which We Begin at the Beginning

Preface

Someone recently posted to the Twisted mailing list asking for something like the “Twisted introduction for people on a deadline”. Full disclosure: this isn’t it. On the spectrum of introductions to Twisted and asynchronous programming in Python, it may be on the exact opposite end. So if you don’t have any time, or any patience, this isn’t the introduction you are looking for.

However, I also believe that if you are new to asynchronous programming, a quick introduction is simply not possible, at least if you are not a genius. I’ve used Twisted successfully for a number of years, and having thought about how I initially learned it (slowly) and what I found difficult, I’ve come to the conclusion that much of the challenge does not stem from Twisted per se, but rather from the acquisition of the “mental model” required to write and understand asynchronous code. Most of the Twisted source code is clear and well written, and the online documentation is good, at least by the standards of most free software. But without that mental model, reading the Twisted codebase, or code that uses Twisted, or even much of the documentation, will result in confusion and headache.

So the first parts of this introduction are designed to help you acquire that model and only later on will we introduce the features of Twisted. In fact, we will start without using Twisted at all, instead using simple Python programs to illustrate how an asynchronous system works. And once we get into Twisted, we will begin with very low-level aspects that you would not normally use in day-to-day programming. Twisted is a highly abstracted system and this gives you tremendous leverage when you use it to solve problems. But when you are learning Twisted, and particularly when you are trying to understand how Twisted actually works, the many levels of abstraction can cause troubles. So we will go from the inside-out, starting with the basics.

And once you have the mental model in place, I think you will find reading the Twisted documentation, or just browsing the source code, to be much easier. So let’s begin.

The Models

We will start by reviewing two (hopefully) familiar models in order to contrast them with the asynchronous model. By way of illustration we will imagine a program that consists of three conceptually distinct tasks which must be performed to complete the program. We will make these tasks more concrete later on, but for now we won’t say anything about them except the program must perform them. Note I am using “task” in the non-technical sense of “something that needs to be done”.

The first model we will look at is the single-threaded synchronous model, in Figure 1 below:

Figure 1: the synchronous model

This is the simplest style of programming. Each task is performed one at a time, with one finishing completely before another is started. And if the tasks are always performed in a definite order, the implementation of a later task can assume that all earlier tasks have finished without errors, with all their output available for use — a definite simplification in logic.

We can contrast the synchronous model with another one, the threaded model illustrated in Figure 2:

Figure 2: the threaded model

In this model, each task is performed in a separate thread of control. The threads are managed by the operating system and may, on a system with multiple processors or multiple cores, run truly concurrently, or may be interleaved together on a single processor. The point is, in the threaded model the details of execution are handled by the OS and the programmer simply thinks in terms of independent instruction streams which may run simultaneously. Although the diagram is simple, in practice threaded programs can be quite complex because of the need for threads to coordinate with one another. Thread communication and coordination is an advanced programming topic and can be difficult to get right.

Some programs implement parallelism using multiple processes instead of multiple threads. Although the programming details are different, for our purposes it is the same model as in Figure 2.
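The threaded model can be sketched in just a few lines of Python. Note the lock: even this trivial example (the task names and step counts are invented for illustration) needs explicit coordination to keep a shared list consistent, which is exactly the sort of complexity the diagram hides:

```python
import threading

# Two tasks in separate OS threads; the operating system decides
# when each thread runs, so the raw interleaving of A and B can vary.
results = []
results_lock = threading.Lock()  # the coordination the text warns about

def work(name):
    for i in range(3):
        with results_lock:       # protect the shared list
            results.append((name, i))

threads = [threading.Thread(target=work, args=(name,)) for name in "AB"]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))
```

Without the lock, two threads appending at once could corrupt the list (or, in Python specifically, merely interleave unpredictably); with it, the program is correct no matter how the OS schedules the threads.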

Now we can introduce the asynchronous model in Figure 3:

Figure 3: the asynchronous model

In this model, the tasks are interleaved with one another, but in a single thread of control. This is simpler than the threaded case because the programmer always knows that when one task is executing, another task is not. Although in a single-processor system a threaded program will also execute in an interleaved pattern, a programmer using threads should still think in terms of Figure 2, not Figure 3, lest the program work incorrectly when moved to a multi-processor system. But a single-threaded asynchronous system will always execute with interleaving, even on a multi-processor system.

There is another difference between the asynchronous and threaded models. In a threaded system the decision to suspend one thread and execute another is largely outside of the programmer’s control. Rather, it is under the control of the operating system, and the programmer must assume that a thread may be suspended and replaced with another at almost any time. In contrast, under the asynchronous model a task will continue to run until it explicitly relinquishes control to other tasks. This is a further simplification from the threaded case.
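This explicit hand-off can be sketched with ordinary Python generators, each `yield` marking a point where a task voluntarily relinquishes control (the task names and step counts here are invented for illustration):

```python
# A toy cooperative scheduler: each task is a generator, and each
# `yield` is the task explicitly giving up control to the others.
def task(name, steps, log):
    for i in range(steps):
        log.append(f"{name} step {i}")
        yield  # relinquish control; the scheduler picks the next task

def run(tasks):
    queue = list(tasks)
    while queue:
        current = queue.pop(0)
        try:
            next(current)          # run the task's next slice
            queue.append(current)  # not finished; back of the line
        except StopIteration:
            pass                   # task complete

log = []
run([task("A", 2, log), task("B", 3, log), task("C", 1, log)])
print(log)
```

Everything happens in one thread, and a task is never interrupted mid-step: it runs until its own `yield`, just as Figure 3 depicts.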

Note that it is possible to mix the asynchronous and threaded models and use both in the same system. But for most of this introduction, we will stick to “plain vanilla” asynchronous systems with one thread of control.

The Motivation

We’ve seen that the asynchronous model is simpler than the threaded one because there is a single instruction stream and tasks explicitly relinquish control instead of being suspended arbitrarily. But the asynchronous model is clearly more complex than the synchronous case. The programmer must organize each task as a sequence of smaller steps that execute intermittently. And if one task uses the output of another, the dependent task must be written to accept its input as a series of bits and pieces instead of all together. Since there is no actual parallelism, it appears from our diagrams that an asynchronous program will take just as long to execute as a synchronous one, perhaps longer as the asynchronous program might exhibit poorer locality of reference.

So why would you choose to use the asynchronous model? There are at least two reasons. First, if one or more of the tasks are responsible for implementing an interface for a human being, then by interleaving the tasks together the system can remain responsive to user input while still performing other work in the “background”. So while the background tasks may not execute any faster, the system will be more pleasant for the person using it.

However, there is a condition under which an asynchronous system will simply outperform a synchronous one, sometimes dramatically so, in the sense of performing all of its tasks in an overall shorter time. This condition holds when tasks are forced to wait, or block, as illustrated in Figure 4:

Figure 4: blocking in a synchronous program

In the figure, the gray sections represent periods of time when a particular task is waiting (blocking) and thus cannot make any progress. Why would a task be blocked? A frequent reason is that it is waiting to perform I/O, to transfer data to or from an external device. A typical CPU can handle data transfer rates that are orders of magnitude faster than a disk or a network link is capable of sustaining. Thus, a synchronous program that is doing lots of I/O will spend much of its time blocked while a disk or network catches up. Such a synchronous program is also called a blocking program for that reason.

Notice that Figure 4, a blocking program, looks a bit like Figure 3, an asynchronous program. This is not a coincidence. The fundamental idea behind the asynchronous model is that an asynchronous program, when faced with a task that would normally block in a synchronous program, will instead execute some other task that can still make progress. So an asynchronous program only “blocks” when no task can make progress and is thus called a non-blocking program. And each switch from one task to another corresponds to the first task either finishing, or coming to a point where it would have to block. With a large number of potentially blocking tasks, an asynchronous program can outperform a synchronous one by spending less overall time waiting, while devoting a roughly equal amount of time to real work on the individual tasks.
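In Python, the usual low-level tool for this “run whichever task is ready” behavior is the `select` module, the same mechanism a simple event loop can be built on. A minimal sketch, using in-process socket pairs to stand in for network clients:

```python
import select
import socket

# Wait on several sockets at once and service whichever has data,
# instead of blocking on each one in turn.
a_client, a_server = socket.socketpair()
b_client, b_server = socket.socketpair()
b_client.sendall(b"hello from b")  # only client "b" has sent anything

# select() blocks only when *no* socket is ready -- the non-blocking
# program's one legitimate reason to wait.
readable, _, _ = select.select([a_server, b_server], [], [])
ready = {sock: sock.recv(1024) for sock in readable}
print(ready)
```

A synchronous server that called `a_server.recv()` first would sit blocked even though client “b” has data waiting; `select` lets the program skip straight to the task that can make progress.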

Compared to the synchronous model, the asynchronous model performs best when:

  1. There are a large number of tasks so there is likely always at least one task that can make progress.
  2. The tasks perform lots of I/O, causing a synchronous program to waste lots of time blocking when other tasks could be running.
  3. The tasks are largely independent from one another so there is little need for inter-task communication (and thus for one task to wait upon another).

These conditions almost perfectly characterize a typical busy network server (like a web server) in a client-server environment. Each task represents one client request with I/O in the form of receiving the request and sending the reply. And client requests (being mostly reads) are largely independent. So a network server implementation is a prime candidate for the asynchronous model and this is why Twisted is first and foremost a networking library.

Onward and Upward

This is the end of Part 1. In Part 2, we will write some network programs, both blocking and non-blocking, as simply as possible (without using Twisted), to get a feel for how an asynchronous Python program actually works.

Text editors

I was browsing my list of free programmers’ editors and discovered some broken links. It seems a couple of editors have gone the way of all bits. Farewell, ManyaPad! Goodbye, lpe! We shall mourn your loss and taste your power no more.

In the plus column, I found a living editor: Gobby, a free real-time collaborative editor with support for programming language syntax highlighting.

Three cheers for the Python subprocess module

Up until recently, if I needed to launch a child process from a Python program, I would use the system() function in the os module, because it was the easiest one for me to remember how to use. This is despite the fact that system() is rarely what I actually wanted.

For starters, os.system() runs the command in a shell, so unless you are actually using the shell redirection operators or argument expansion, the function is just launching a shell in order to run the program you really wanted to start in the first place. Second, the shell is going to interpret your arguments the way shells do — splitting arguments on spaces, expanding wildcards, etc. If one of the arguments is a filename with spaces and shell special characters, you have to escape it very carefully (and in a system-specific way) to prevent the shell from messing with it. Naturally, I often just wouldn’t bother and thus my commands would fail whenever a strange filename showed up.

Before the introduction of the subprocess module in Python 2.4, the alternatives were either the popen-style calls, or the spawn* family of functions, or possibly the ‘commands’ module. There are two implementations of the popen calls, one in the os module and one in the popen2 module, with different calling conventions. The spawn functions are simple wrappers around the C library calls of the same name, and thus feel very different from a typical Python function. None of the alternatives are particularly easy to remember, a problem compounded by the sheer number of them and their different calling conventions.

The subprocess module has pretty much supplanted all of them with a straightforward and very pythonic interface. It’s easy to remember, the defaults are what you would expect, and all the arguments and methods have intuitive names. To everyone who contributed to it: well done.
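For example (using the modern `subprocess.run` interface; the original 2.4 module offered `call` and `Popen`), arguments are passed as a list, so an argument full of spaces and shell metacharacters needs no escaping at all because no shell is involved:

```python
import subprocess
import sys

# Each list element arrives as exactly one argument to the child,
# so spaces, '&', and '*' in arguments are harmless.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])",
     "a file with spaces & stars *.txt"],
    capture_output=True, text=True,
)
print(result.returncode)       # 0
print(result.stdout.strip())   # a file with spaces & stars *.txt
```

Compare that to getting the same argument safely through `os.system`, which would require quoting it for whatever shell the platform happens to use.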

Linux Audio, or Why I Love Open Source

I started learning to play the guitar a few years ago and I recently decided to try home recording. I’m going to post my recordings here as I get comfortable with the equipment and learn how to use the recording software.

I’m happy to say that all my recording software is open source, running on my Debian Linux system. My limited experience with audio on Linux several years ago led me to believe I would have to get myself a Macintosh and some commercial recording software like Pro Tools.

But I decided to try and stick with my open source roots; a few minutes of Google research revealed that Linux audio has been catching up fast.

First there is the ALSA project, which has drivers for a number of different audio devices, including some fairly high-end gear. I picked up an M-Audio Delta 1010LT card, which seems to be a popular choice among Linux audio people. The ALSA project includes a tool called envy24control (envy24 is the name of the chipset used on the card) which can control the settings on the Delta 1010:

envy24control

That brings us to the JACK system, a piece of software for routing streams of audio between software modules and input/output devices. JACK itself has no graphical interface, but you can use qjackctl to start and stop the JACK system and patchage to route audio between systems. Here’s a shot of patchage connecting one of my audio inputs to the Ardour package, which is connected in turn to my audio outputs.

Patchage at work

That brings us to Ardour, which seems to be the crown jewel of Linux audio. Ardour is a Digital Audio Workstation, or DAW, with features that are often on a par with commercial DAWs like Pro Tools. It’s also a great deal of fun to use. Here’s a shot of an Ardour session with a recording of my wife Beth’s voice.

Ardour DAW

Beth needed to record the instructions for a vocal relaxation exercise to put on her iPod. After recording it, she realized that the pauses between the instructions needed to be longer. I was able to use Ardour to chop them up and space them out. I was also able to remove a couple mistakes she made in vocalization. Messing around with audio is great fun.

This is just scratching the surface of the Linux audio world. The Linux Sound page has a ton of links to other audio projects including synthesizers, MIDI programs, drum machines, effects processors, and more. So much for getting a Mac!

Book: Python Standard Library

From the title, I expected this to be a thorough reference to the standard Python modules, but instead it is a collection of sample scripts which illustrate the basic usage of the standard modules. In that respect it is similar to the Python Cookbook, but the Cookbook has better recipes and more in-depth discussions of them. This book also only covers up to Python 2.0 and is thus somewhat out of date. For example, it covers the obsolete ‘rfc822’ module, but not the newer ‘email’ module. And the book has at least one glaring error: it claims that XML is an application of SGML, which is simply not true. XML is a full-blown meta-markup language like SGML.

Recommendation: skip this and get the Python Cookbook instead.