
On criticizing things but not people

Aurynn Shaw has a very nice piece about the problem of contempt culture in software developer communities and I recommend reading it if you haven’t already.

In a contempt culture everything else sucks. The programming languages, frameworks, and software tools other than the ones you and your worker hive have chosen are nothing more than great big festering piles of dung. And the misguided souls who have made the embarrassing mistake of using those disgraced technologies are fools and morons for having done so. You and yours, by contrast, offer a little circle of knowing enlightenment, a smugly shining beacon of hope and technological progress in an otherwise dark world.

Shaw exposes this kind of culture for what it is — a psychological stance protecting the egos of those adopting it, at the expense of everyone outside the blessed circle.

Shaw’s essay inspired a thoughtful response from David R. MacIver about the proper way to criticize technologies. There is much to admire in that essay and I recommend it as well. I do, however, have a somewhat different take on some points and there is one in particular where I think MacIver has deemphasized an important element of what Shaw is saying.

MacIver’s central point is that we do need to be able to criticize technologies. Making things better necessarily involves identifying and pointing out how and where existing things fall short or could be improved. And so, in light of Shaw’s thesis, MacIver recommends a thoughtful criticism of things but not people.

I take “thoughtful criticism” to mean not using language like “that sucks” and “this is crap”, which isn’t substantive criticism at all and thus not actually useful to the person or people to whom the so-called criticism is being offered. One should instead offer concrete suggestions backed by concrete justifications. Thoughtful criticism means articulating and justifying your stance, rather than tearing down someone else’s. And I agree completely with the need for criticism in technology communities to be thoughtful in just that way (and agree with Shaw that it regrettably often isn’t).

Another of MacIver’s prescriptions is to “criticize things, not people”, because “Languages don’t have feelings. People do.” This is a suggestion I have seen before and in a mild form it is surely correct. I think it indisputable that technical criticism should be focused on technologies and not on the people who made them or use them.

But a strong form of that assertion holds that you can tear into a language or software package as viciously as you want and, as long as you refrain from explicitly criticizing any people, no harm is done. To be fair, MacIver seems to recommend against using that sort of language even when criticizing things. But given his distinction between criticizing things and criticizing people, the only justification he can offer is a lack of utility: criticizing a thing in "that sucks" language neither helps nor harms, it merely wastes time.

But I find the notion that you can criticize things without criticizing people to be deeply flawed and completely at odds with natural human psychology. Consider the following conversation between two old friends, Rowan and Sage. Rowan, you see, has been secretly writing a novel for the last ten years. Rowan recently revealed this fact to Sage and asked Sage to read the latest draft. Sage has done so and now they are getting together for a drink.

Rowan: So, did you manage to finish it?
Sage: I did!

Rowan: Well, what did you think? Please be honest!
Sage: No problem. I thought it was complete crap.

Rowan: … Really?
Sage: Oh yes, it was total garbage.

Rowan: What didn’t you like about it?
Sage: Pretty much everything. The plot sucked, the characters sucked, the dialogue sucked, the writing sucked, and the ending really, really sucked. Absolutely putrid.

Rowan: I see. Well, thanks for reading it.
Sage: Not at all. Say, you look a little upset. Is something wrong?

Rowan: Err, well. I guess I’m a bit hurt about your opinion of the book.
Sage: Oh, but why?

Rowan: Well, I guess because I wrote it and thought it was better than that.
Sage: Gosh, dear friend, you didn’t think I was criticizing you did you? Goodness, no. You are a dear, dear soul, worth your weight in gold. Really, I cannot praise you highly enough. It’s the book I was criticizing not you, so you really shouldn’t feel bad at all.

Despite Sage’s admonitions I can’t help but think that Rowan must feel pretty awful. And Sage is being a jerk. But why should I think that? Sage really didn’t say anything explicitly about Rowan and in fact denied criticizing Rowan in any way. Sage criticized the book and books don’t have feelings. Are Rowan and I just confused?

Well imagine instead that Rowan and Sage are chatting in a cafe over coffee. Rowan has been absent-mindedly doodling on a paper napkin and has just noticed the various scratchings kind of look like a bird.

Rowan: Hey, look, I drew a bird.
Sage (staring at the doodle): That might be the worst picture of a bird I have ever seen.

Rowan (taking a closer look): Wow, I think you’re right.

This exchange between two old friends strikes me as completely benign. I don’t imagine Rowan has any hurt feelings nor that Sage intended any. But why should there be any difference between these two scenarios? The doodle is a thing and the novel is a thing and things don’t have feelings. By that simplistic syllogism, criticizing one is no different than criticizing the other.

Consider what Rowan the novelist was actually doing for ten years. With each sentence, paragraph, scene, and chapter Rowan made choices. Write it this way not that way, use these words, settings, characters, dialogue, action, metaphor, et cetera. Making a deliberate choice involves imagination, judgement, intelligence, taste, and experience. Anyone else writing that novel would have written a different novel. One hundred people writing novels with the same theme would create one hundred distinctly different works. If you were one of those authors you could effortlessly pick out the book that was your novel.

On the other hand Rowan the doodler was barely paying attention, much less making artful choices. Have one hundred people make doodles while talking. Now mix all those paper napkins together. Would anyone be able to find theirs again, or care if they couldn’t? A doodle is barely a “made” thing at all and reveals nothing much about the “artist”. Nobody sees themselves in a doodle.

A novel is not just a thing; it is a carefully made thing, distinct to its author. The cliché that someone "poured their heart and soul" into their work is not an empty phrase; it expresses exactly this point. To criticize a novel is to criticize the imagination, judgement, intelligence and taste which produced it. It is, in other words, to criticize the person who wrote it. There is simply no way around this.

You certainly can’t get around it by aiming your criticism at the work and not the worker. Shaw describes the same maneuver as magical intent: I didn’t mean to hurt you, therefore you should not be hurt. But the critic’s intent is irrelevant because people simply do see themselves in their work and given how those works reflect them, why shouldn’t they?

Now a piece of code you write may not rise to the level of a novel, but it’s far more than a doodle. As software engineers we like to say that we craft our code, meaning we create it with care and deliberation. That means using our intelligence, our judgment, and, yes, our imagination, taste, and experience to write the code this way not that way. Any significant body of software is the unique product of its author(s). Any other programmer or group of programmers would have written a different piece of code to solve the same problem. Criticizing the code is criticizing the coder.

Where does that leave us? Are we simply barred from criticizing anything at all? I do not think so. When we send a work out into the world we simply must expect criticism of both our work and, by extension, ourselves. We are not angels building crystalline palaces of Platonic perfection. We are finite and flawed creatures; perfection is not within our grasp. We know this and we should accept it as gracefully as we can.

Try as hard as we like, anything we build will contain mistakes, omissions, confusions, inelegancies, and errors. Building things collaboratively and publicly gives others the opportunity to spot our inevitable missteps and see the way clear when we have made a wrong turn. Properly done, criticism is a way of helping each other and part of the process of coming to a shared understanding of the problems we are solving.

Now reasonable people can and obviously do disagree on the right course to take. That there are multiple perspectives and alternative approaches to the same set of problems can be a strength. The many languages and toolsets in the world comprise the parallel exploration of a design space illuminating different tradeoffs available to practicing engineers. They allow us to learn from each other. And in the end there may be no fact of the matter that could definitively determine The One Right Way To Do It.

So we should offer criticism in a spirit of collaboration and respect, knowing that our own mistakes will one day be on full view. Before we are engineers we are human beings with the obligation to treat one another humanely. To do otherwise reduces collaborative engineering into a fraught and soulless drudgery. And that is why the vitriol we see in some technological communities is so harmful — not because it is a waste of time but because it is cruel.

Building things together can and should be a delight, but it is up to us to make it so. I’m very grateful for the thoughtful discussion by Shaw and MacIver which inspired this post.


Emacs flymake for HTML and Haskell modes

I’m using the default Emacs 23 on Ubuntu 11.10 and both the HTML and Haskell flymake modes seem to be broken. For the HTML mode the problem seems to be that the version of xmlstarlet installed on Ubuntu needs another command line switch to make it print out the information that flymake needs, like line numbers. Here is some elisp code to fix that:

(defun flymake-xml-init ()
  (list "xmlstarlet"
        (list "val" "-e" "-q"
              (flymake-init-create-temp-buffer-copy
               'flymake-create-temp-inplace))))

Strictly speaking I don’t think the -q argument is actually needed, but it suppresses a superfluous line in the output.

For the Haskell mode, as far as I can tell the shipped elisp code is simply broken. Maybe I'm just not setting it up right, but out of the box it does not work. The following makes it work using the hlint checker:

(defun haskell-flymake-init ()
  (list "hlint" (list (flymake-init-create-temp-buffer-copy
                       'flymake-create-temp-inplace))))

Happy hacking!


More Chinese Translation

Cheng Luo has translated an additional portion of the Twisted Introduction into Chinese. You can find it on github. Thank you, Cheng Luo!



Twisted Intro: PDF version

Piotr Dobrogost has converted the entire series into complete html and pdf versions. Thank you, Piotr!


Twisted Introduction: Chinese Translation

Yang Xiaowei has translated a large portion of the Twisted Introduction into Chinese. Thank you, Yang!


Twisted Introduction Russian Translation

Nina Evseenko has kindly translated the Twisted Introduction into Russian.
Thank you Nina!

The index page also has links to earlier translations into Japanese and Estonian by Shigeru Kitazaki and Tim Gluz respectively.


The End

Part 22: The End

This concludes the introduction started here. You can find an index to the entire series here.

All Done

Whew! Thank you for sticking with me. When I started this series I didn’t realize it was going to be this long, or take this much time to make. But I have enjoyed creating it and hope you have enjoyed reading it.

Now that I have finished, I will look into the possibility of generating a PDF format. No promises, though.

I would like to conclude with a few suggestions on how to continue your Twisted education.

Further Reading

First, I would recommend reading the Twisted online documentation. Although it is much-maligned, I think it’s better than it is often given credit for.

If you want to use Twisted for web programming, then Jean-Paul Calderone has a well-regarded series called "Twisted Web in 60 Seconds". I suspect it will take a little longer than that to read, though.

There is also a Twisted Book, which I can’t say much about as I haven’t read it.

But more important than any of those, I think, is to read the Twisted source code. Since that code is written by people who know Twisted very well, it is an excellent source of examples for how to do things the "Twisted Way".

Suggested Exercises

  1. Port a synchronous program you wrote to Twisted.
  2. Write a new Twisted program from scratch.
  3. Pick a bug from the Twisted bug database and fix it. Submit a patch to the Twisted developers. Don’t forget to read about the process for making your contribution.

The End, Really

Happy Hacking!

Figure 47: The End


Lazy is as Lazy Doesn’t: Twisted and Haskell

Part 21: Lazy is as Lazy Doesn’t: Twisted and Haskell

This continues the introduction started here. You can find an index to the entire series here.


In the last Part we compared Twisted with Erlang, giving most of our attention to some ideas they have in common. And that ended up being pretty simple, as asynchronous I/O and reactive programming are key components of the Erlang runtime and process model.

Today we are going to range further afield and look at Haskell, another functional language that is nevertheless quite different from Erlang (and, of course, Python). There won’t be as many parallels, but we will nevertheless find some asynchronous I/O hiding under the covers.

Functional with a Capital F

Although Erlang is also a functional language, its main focus is a reliable concurrency model. Haskell, on the other hand, is functional through and through, making unabashed use of concepts from category theory like functors and monads.

Don’t worry, we’re not going into any of that here (as if we could). Instead we’ll focus on one of Haskell’s more traditionally functional features: laziness. Like many functional languages (but unlike Erlang), Haskell supports lazy evaluation. In a lazily evaluated language the text of a program doesn’t so much describe how to compute something as what to compute. The details of actually performing the computation are generally left to the compiler and runtime system.

And, more to the point, as a lazily-evaluated computation proceeds the runtime may evaluate expressions only partially (lazily) instead of all at once. In general, the runtime will evaluate only as much of an expression as is needed to make progress on the current computation.

Here is a simple Haskell statement applying head, a function that retrieves the first element of a list, to the list [1,2,3] (Haskell and Python share some of their list syntax):

  head [1,2,3]

If you install the GHC Haskell runtime, you can try this out yourself:

[~] ghci
GHCi, version 6.12.1:  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude> head [1,2,3]
1

The result is the number 1, as expected.

The Haskell list syntax includes the handy ability to define a list from its first couple of elements. For example, the list [2,4 ..] is the sequence of even numbers starting with 2. Where does it end? Well, it doesn’t. The Haskell list [2,4 ..] and others like it represent (conceptually) infinite lists. You can see this if you try to evaluate one at the interactive Haskell prompt, which will attempt to print out the result of your expression:

Prelude> [2,4 ..]

You’ll have to press Ctrl-C to stop that computation as it will never actually terminate. But because of lazy evaluation, it is possible to use these infinite lists in Haskell with no trouble:

Prelude> head [2,4 ..]
2
Prelude> head (tail [2,4 ..])
4
Prelude> head (tail (tail [2,4 ..]))
6

Here we are accessing the first, second, and third elements of this infinite list respectively, with no infinite loop anywhere in sight. This is the essence of lazy evaluation. Instead of first evaluating the entire list (which would cause an infinite loop) and then giving that list to the head function, the Haskell runtime only constructs as much of the list as it needs for head to finish its work. The rest of the list is never constructed at all, because it is not needed to proceed with the computation.

When we bring the tail function into play, Haskell is forced to construct the list further, but again only as much as it needs to evaluate the next step of the computation. And once the computation is done, the (unfinished) list can be discarded.
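Python programmers can get a feel for this with generators, which are lazy in a similar (if far more limited) way. Here is a rough sketch of ours, not code from the series, using a generator to stand in for Haskell's infinite list of evens:

```python
import itertools

def evens():
    # Conceptually infinite, like Haskell's [2,4 ..]; a value is only
    # produced when some consumer asks for the next one.
    n = 2
    while True:
        yield n
        n += 2

# "head" of the infinite list: only the first element is ever computed.
print(next(evens()))  # 2

# The first three elements, akin to head/tail chains; the rest of the
# "list" is never constructed at all.
print(list(itertools.islice(evens(), 3)))  # [2, 4, 6]
```

The analogy is loose since Python generators are lazy only along one dimension, but the core idea of computing values on demand is the same.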

Here’s some Haskell code that partially consumes three different infinite lists:

Prelude> let x = [1..]
Prelude> let y = [2,4 ..]
Prelude> let z = [3,6 ..]
Prelude> head (tail (tail (zip3 x y z)))
(3,6,9)

Here we zip all the lists together, then grab the head of the tail of the tail. Once again, Haskell has no trouble with this and only constructs as much of each list as it needs to finish evaluating our code. We can visualize the Haskell runtime “consuming” these infinite lists in Figure 46:

Figure 46: Haskell consuming some infinite lists

Although we’ve drawn the Haskell runtime as a simple loop, it might be implemented with multiple threads (and probably is if you are using the GHC version of Haskell). But the main point to notice is how this figure looks like a reactor loop consuming bits of data as they come in on network sockets.

You can think of asynchronous I/O and the reactor pattern as a very limited form of lazy evaluation. The asynchronous I/O motto is: “Only process as much data as you have”. And the lazy evaluation motto is: “Only process as much data as you need”. Furthermore, a lazily-evaluated language applies that motto almost everywhere, not just in the limited scope of I/O.

But the point is that, for a lazily-evaluated language, making use of asynchronous I/O is no big deal. The compiler and runtime are already designed to process data structures bit by bit, so lazily processing the incoming chunks of an I/O stream is just par for the course. And thus the Haskell runtime, like the Erlang runtime, simply incorporates asynchronous I/O as part of its socket abstractions. And we can show that by implementing a poetry client in Haskell.

Haskell Poetry

Our first Haskell poetry client is located in haskell-client-1/get-poetry.hs. As with Erlang, we’re going to jump straight to a finished client, and then suggest further reading if you’d like to learn more.

Haskell also supports light-weight threads or processes, though they aren’t as central to Haskell as they are to Erlang, and our Haskell client creates one process for each poem we want to download. The key function there is runTask which connects to a socket and starts the getPoetry function in a light-weight thread.

You’ll notice a lot of type declarations in this code. Haskell, unlike Python or Erlang, is statically typed. We don’t declare types for each and every variable because Haskell will automatically infer types not explicitly declared (or report an error if it can’t). A number of the functions include the IO type (technically a monad) because Haskell requires us to cleanly separate code with side-effects (i.e., code that performs I/O) from pure functions.

The getPoetry function includes this line:

poem <- hGetContents h

which appears to be reading the entire poem from the handle (i.e., the TCP socket) at once. But Haskell, as usual, is lazy. And the Haskell runtime includes one or more actual threads which perform asynchronous I/O in a select loop, thus preserving the possibilities for lazy evaluation of I/O streams.
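The contrast with an eager read can be sketched in plain Python (our illustration, not code from the client): iterating a file-like object line by line pulls data only as the loop consumes it, a little like the stream hGetContents exposes, while read() slurps everything at once:

```python
import io

POEM = "line one\nline two\nline three\n"

# Eager: the whole poem is materialized immediately.
whole = io.StringIO(POEM).read()

# Lazy(ish): lines are consumed one at a time, only as needed.
stream = io.StringIO(POEM)
for line in stream:
    print(line.strip())  # line one
    break  # we only needed the first line; the rest stays unread
```

Again, this is only an analogy; in Haskell the laziness pervades the whole program rather than being confined to explicit iteration.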

Just to illustrate that asynchronous I/O is really going on, we have included a “callback” function, gotLine, that prints out some task information for each line in the poem. But it’s not really a callback function at all, and the program would use asynchronous I/O whether we included it or not. Even calling it “gotLine” reflects an imperative-language mindset that is out of place in a Haskell program. No matter, we’ll clean it up in a bit, but let’s take our first Haskell client out for a spin. Start up some slow poetry servers:

python blocking-server/ --port 10001 poetry/fascination.txt
python blocking-server/ --port 10002 poetry/science.txt
python blocking-server/ --port 10003 poetry/ecstasy.txt --num-bytes 30

Now compile the Haskell client:

cd haskell-client-1/
ghc --make get-poetry.hs

This will create a binary called get-poetry. Finally, run the client against our servers:

./get-poetry 10001 10002 10003

And you should see some output like this:

Task 3: got 12 bytes of poetry from localhost:10003
Task 3: got 1 bytes of poetry from localhost:10003
Task 3: got 30 bytes of poetry from localhost:10003
Task 2: got 20 bytes of poetry from localhost:10002
Task 3: got 44 bytes of poetry from localhost:10003
Task 2: got 1 bytes of poetry from localhost:10002
Task 3: got 29 bytes of poetry from localhost:10003
Task 1: got 36 bytes of poetry from localhost:10001
Task 1: got 1 bytes of poetry from localhost:10001

The output is slightly different than previous asynchronous clients because we are printing one line for each line of poetry instead of each arbitrary chunk of data. But, as you can see, the client is clearly processing data from all the servers together, instead of one after the other. You’ll also notice that the client prints out the first poem as soon as it’s finished, without waiting for the others, which continue on at their own pace.

Alright, let’s clean the remaining bits of imperative cruft from our client and present a version which just grabs the poetry without bothering with task numbers. You can find it in haskell-client-2/get-poetry.hs. Notice that it’s much shorter and, for each server, just connects to the socket, grabs all the data, and sends it back.

Ok, let’s compile a new client:

cd haskell-client-2/
ghc --make get-poetry.hs

And run it against the same set of poetry servers:

./get-poetry 10001 10002 10003

And you should see the text of each poem appear, eventually, on the screen.

You will notice from the server output that each server is sending data to the client simultaneously. What’s more, the client prints out each line of the first poem as soon as possible, without waiting for the rest of the poem, even while it’s working on the other two. And then it quickly prints out the second poem, which it has been accumulating all along.

And all of that happens without us having to do much of anything. There are no callbacks, no messages being passed back and forth, just a concise description of what we want the program to do, and very little in the way of how it should go about doing it. The rest is taken care of by the Haskell compiler and runtime. Nifty.

Discussion and Further Reading

In moving from Twisted to Erlang to Haskell we can see a parallel movement, from the foreground to the background, of the ideas behind asynchronous programming. In Twisted, asynchronous programming is the central motivating idea behind Twisted’s existence. And Twisted’s implementation as a framework separate from Python (and Python’s lack of core asynchronous abstractions like lightweight threads) keeps the asynchronous model front and center when you write programs using Twisted.

In Erlang, asynchronicity is still very visible to the programmer, but the details are now part of the fabric of the language and runtime system, enabling an abstraction in which asynchronous messages are exchanged between synchronous processes.

And finally, in Haskell, asynchronous I/O is just another technique inside the runtime, largely unseen by the programmer, for providing the lazy evaluation that is one of Haskell’s central ideas.

We don’t have any profound insight into this situation, we’re just pointing out the many and interesting places where the asynchronous model shows up, and the many different ways it can be expressed.

And if any of this has piqued your interest in Haskell, then we can recommend Real World Haskell to continue your studies. The book is a model of what a good language introduction should be. And while I haven’t read it, I’ve heard good things about Learn You a Haskell.

This brings us to the end of our tour of asynchronous systems outside of Twisted, and the penultimate part in our series. In Part 22 we will conclude, and suggest ways to learn more about Twisted.

Suggested Exercises for the Startlingly Motivated

  1. Compare the Twisted, Erlang, and Haskell clients with each other.
  2. Modify the Haskell clients to handle failures to connect to a poetry server so they download all the poetry they can and output reasonable error messages for the poems they can’t.
  3. Write Haskell versions of the poetry servers we made with Twisted.

Wheels within Wheels: Twisted and Erlang

Part 20: Wheels within Wheels: Twisted and Erlang

This continues the introduction started here. You can find an index to the entire series here.


One fact we’ve uncovered in this series is that mixing synchronous “plain Python” code with asynchronous Twisted code is not a straightforward task, since blocking for an indeterminate amount of time in a Twisted program will eliminate many of the benefits you are trying to achieve using the asynchronous model.

If this is your first introduction to asynchronous programming it may seem as if the knowledge you have gained is of somewhat limited applicability. You can use these new techniques inside of Twisted, but not in the much larger world of general Python code. And when working with Twisted, you are generally limited to libraries written specifically for use as part of a Twisted program, at least if you want to call them directly from the thread running the reactor.

But asynchronous programming techniques have been around for quite some time and are hardly confined to Twisted. There are in fact a startling number of asynchronous programming frameworks in Python alone. A bit of searching around will probably yield a couple dozen of them. They differ from Twisted in their details, but the basic ideas (asynchronous I/O, processing data in small chunks across multiple data streams) are the same. So if you need, or choose, to use an alternative framework you will already have a head start having learned Twisted.

And moving outside of Python, there are plenty of other languages and systems that are either based around, or make use of, the asynchronous programming model. Your knowledge of Twisted will continue to serve you as you explore the wider areas of this subject.

In this Part we’re going to take a very brief look at Erlang, a programming language and runtime system that makes extensive use of asynchronous programming concepts, but does so in a unique way. Please note this is not meant as a general introduction to Erlang. Rather, it is a short exploration of some of the ideas embedded in Erlang and how they connect with the ideas in Twisted. The basic theme is the knowledge you have gained learning Twisted can be applied when learning other technologies.

Callbacks Reimagined

Consider Figure 6, a graphical representation of a callback. The principal callback in Poetry Client 3.0, introduced in Part 6, and all subsequent poetry clients is the dataReceived method. That callback is invoked each time we get a bit more poetry from one of the poetry servers we have connected to.

Let’s say our client is downloading three poems from three different servers. Looking at things from the point of view of the reactor (and that’s the viewpoint we’ve emphasized the most in this series), we’ve got a single big loop which makes one or more callbacks each time it goes around. See Figure 40:

Figure 40: callbacks from the reactor viewpoint

This figure shows the reactor happily spinning around, calling dataReceived as the poetry comes in. Each invocation of dataReceived is applied to one particular instance of our PoetryProtocol class. And we know there are three instances because we are downloading three poems (and so there must be three connections).

Let’s think about this picture from the point of view of one of those Protocol instances. Remember each Protocol is only concerned with a single connection (and thus a single poem). That instance “sees” a stream of method calls, each one bearing the next piece of the poem, like this:

dataReceived(self, "When I have fears")
dataReceived(self, " that I may cease to be")
dataReceived(self, "Before my pen has glea")
dataReceived(self, "n'd my teeming brain")

While this isn’t strictly speaking an actual Python loop, we can conceptualize it as one:

for data in poetry_stream(): # pseudo-code
    dataReceived(data)

We can envision this “callback loop” in Figure 41:

Figure 41: A virtual callback loop

Again, this is not a for loop or a while loop. The only significant Python loop in our poetry clients is the reactor. But we can think of each Protocol as a virtual loop that ticks around once each time some poetry for that particular poem comes in. With that in mind we can re-imagine the entire client in Figure 42:

Figure 42: the reactor spinning some virtual loops

In this figure we have one big loop, the reactor, and three virtual loops, the individual poetry protocol instances. The big loop spins around and, in so doing, causes the virtual loops to tick over as well, like a set of interlocking gears.
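One way to make the "virtual loop" idea concrete in plain Python (a sketch of ours, not code from the actual client) is a generator-based coroutine: each protocol becomes a loop that ticks around once each time the outer loop sends it a chunk:

```python
def poetry_protocol(poem):
    # A "virtual loop": each pass around this while loop corresponds
    # to one dataReceived callback delivering the next chunk.
    while True:
        data = yield  # suspended here until the "reactor" sends data
        poem.append(data)

poem = []
proto = poetry_protocol(poem)
next(proto)  # prime the coroutine so it is waiting at the yield

# The outer loop plays the role of the reactor, ticking the virtual loop.
for chunk in ["When I have fears", " that I may cease to be"]:
    proto.send(chunk)

print("".join(poem))  # When I have fears that I may cease to be
```

The real client has no such coroutine, of course; the point is only that each Protocol instance behaves as if it had its own loop, driven by the reactor.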

Enter Erlang

Erlang, like Python, is a general purpose dynamically typed programming language originally created in the 1980s. Unlike Python, Erlang is functional rather than object-oriented, and has a syntax reminiscent of Prolog, the language in which Erlang was originally implemented. Erlang was designed for building highly reliable distributed telephony systems, and thus Erlang contains extensive networking support.

One of Erlang’s most distinctive features is a concurrency model involving lightweight processes. An Erlang process is neither an operating system process nor an operating system thread. Rather, it is an independently running function inside the Erlang runtime with its own stack. Erlang processes are not lightweight threads because Erlang processes cannot share state (and most data types are immutable anyway, Erlang being a functional programming language). An Erlang process can interact with other Erlang processes only by sending messages, and messages are always, at least conceptually, copied and never shared.

So an Erlang program might look like Figure 43:

Figure 43: An Erlang program with three processes

In this figure the individual processes have become “real”, since processes are first-class constructs in Erlang, just like objects are in Python. And the runtime has become “virtual”, not because it isn’t there, but because it’s not necessarily a simple loop. The Erlang runtime may be multi-threaded and, as it has to implement a full-blown programming language, it’s in charge of a lot more than handling asynchronous I/O. Furthermore, a language runtime is not so much an extra construct, like the reactor in Twisted, as the medium in which the Erlang processes and code execute.

So an even better picture of an Erlang program might be Figure 44:

Figure 44: An Erlang program with several processes

Of course, the Erlang runtime does have to use asynchronous I/O and one or more select loops, because Erlang allows you to create lots of processes. Large Erlang programs can start tens or hundreds of thousands of Erlang processes, so allocating an actual OS thread to each one is simply out of the question. If Erlang is going to allow multiple processes to perform I/O, and still allow other processes to run even if that I/O blocks, then asynchronous I/O will have to be involved.

Note that our picture of an Erlang program has each process running “under its own power”, rather than being spun around by callbacks. And that is very much the case. With the job of the reactor subsumed into the fabric of the Erlang runtime, the callback no longer has a central role to play. What would, in Twisted, be solved by using a callback would, in Erlang, be solved by sending an asynchronous message from one Erlang process to another.

An Erlang Poetry Client

Let’s look at an Erlang poetry client. We’re going to jump straight to a working version instead of building up slowly like we did with Twisted. Again, this isn’t meant as a complete Erlang introduction. But if it piques your interest, we suggest some more in-depth reading at the end of this Part.

The Erlang client is listed in erlang-client-1/get-poetry. In order to run it you will, of course, need Erlang installed. Here’s the code for the main function, which serves a similar purpose as the main functions in our Python clients:

main([]) ->
    usage();
main(Args) ->
    Addresses = parse_args(Args),
    Main = self(),
    [erlang:spawn_monitor(fun () -> get_poetry(TaskNum, Addr, Main) end)
     || {TaskNum, Addr} <- enumerate(Addresses)],
    collect_poems(length(Addresses), []).

If you’ve never seen Prolog or a similar language before then Erlang syntax is going to seem a little odd. But some people say that about Python, too. The main function is defined by two separate clauses, separated by a semicolon. Erlang chooses which clause to run by matching the arguments, so the first clause only runs if we execute the client without providing any command line arguments, and it just prints out a help message. The second clause is where all the action is.

Individual statements in an Erlang function are separated by commas, and all functions end with a period. Let’s take each line in the second clause one at a time. The first line is just parsing the command line arguments and binding them to a variable (all variables in Erlang must be capitalized). The second line is using the Erlang self function to get the process ID of the currently running Erlang process (not OS process). Since this is the main function you can kind of think of it as the equivalent of the __main__ module in Python. The third line is the most interesting:

[erlang:spawn_monitor(fun () -> get_poetry(TaskNum, Addr, Main) end)
     || {TaskNum, Addr} <- enumerate(Addresses)],

This statement is an Erlang list comprehension, with a syntax similar to that in Python. It is spawning new Erlang processes, one for each poetry server we need to contact. And each process will run the same function (get_poetry) but with different arguments specific to that server. We also pass the PID of the main process so the new processes can send the poetry back (you generally need the PID of a process to send a message to it).

The last statement in main calls the collect_poems function, which waits for the poetry to come back and for the get_poetry processes to finish. We’ll look at the other functions in a bit, but first you might compare this Erlang main function to the equivalent main in one of our Twisted clients.

Now let’s look at the Erlang get_poetry function. There are actually two functions in our script called get_poetry. In Erlang, a function is identified by both name and arity, so our script contains two separate functions, get_poetry/3 and get_poetry/4 which accept three and four arguments respectively. Here’s get_poetry/3, which is spawned by main:

get_poetry(Tasknum, Addr, Main) ->
    {Host, Port} = Addr,
    {ok, Socket} = gen_tcp:connect(Host, Port,
                                   [binary, {active, false}, {packet, 0}]),
    get_poetry(Tasknum, Socket, Main, []).

This function first makes a TCP connection, just like the Twisted client get_poetry. But then, instead of returning, it proceeds to use that TCP connection by calling get_poetry/4, listed below:

get_poetry(Tasknum, Socket, Main, Packets) ->
    case gen_tcp:recv(Socket, 0) of
        {ok, Packet} ->
            io:format("Task ~w: got ~w bytes of poetry from ~s\n",
                      [Tasknum, size(Packet), peername(Socket)]),
            get_poetry(Tasknum, Socket, Main, [Packet|Packets]);
        {error, _} ->
            Main ! {poem, list_to_binary(lists:reverse(Packets))}
    end.

This Erlang function is doing the work of the PoetryProtocol from our Twisted client, except it does so using blocking function calls. The gen_tcp:recv function waits until some data arrives on the socket (or the socket is closed), however long that might be. But a “blocking” function in Erlang only blocks the process running the function, not the entire Erlang runtime. That TCP socket isn’t really a blocking socket (you can’t make a true blocking socket in pure Erlang code). For each of those Erlang sockets there is, somewhere inside the Erlang runtime, a “real” TCP socket set to non-blocking mode and used as part of a select loop.

But the Erlang process doesn’t have to know about any of that. It just waits for some data to arrive and, if it blocks, some other Erlang process can run instead. And even if a process never blocks, the Erlang runtime is free to switch execution from that process to another at any time. In other words, Erlang has a preemptive (non-cooperative) concurrency model.

Notice that get_poetry/4, after receiving a bit of poem, proceeds by recursively calling itself. To an imperative language programmer this might seem like a recipe for running out of memory, but the Erlang compiler can optimize “tail” calls (function calls that are the last statement in a function) into loops. And this highlights another curious parallel between the Erlang and Twisted clients. In the Twisted client, the “virtual” loops are created by the reactor calling the same function (dataReceived) over and over again. And in the Erlang client, the “real” processes running (get_poetry/4) form loops by calling themselves over and over again via tail-call optimization. How about that.
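
Since Python lacks tail-call optimization, the idiomatic translation of get_poetry/4’s recursive receive loop is a while loop. Here is a rough sketch of the shape (not a port — the `recv` callable is a stand-in for gen_tcp:recv):

```python
# Erlang optimizes tail calls into jumps, so get_poetry/4 can "loop"
# by calling itself. Python does not, so the equivalent receive loop
# is written with `while`.
def get_poetry(recv):
    """recv() returns the next chunk of data, or None when closed."""
    packets = []
    while True:                 # the loop Erlang expresses via tail calls
        packet = recv()
        if packet is None:      # connection closed: the {error, _} clause
            return b"".join(packets)
        packets.append(packet)  # accumulate, like [Packet|Packets]

chunks = iter([b"Once upon ", b"a midnight ", b"dreary", None])
poem = get_poetry(lambda: next(chunks))
print(poem)  # b'Once upon a midnight dreary'
```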

If the connection is closed, the last thing get_poetry does is send the poem to the main process. That also ends the process that get_poetry is running, as there is nothing left for it to do.

The remaining key function in our Erlang client is collect_poems:

collect_poems(0, Poems) ->
    [io:format("~s\n", [P]) || P <- Poems];
collect_poems(N, Poems) ->
    receive
        {'DOWN', _, _, _, _} ->
            collect_poems(N-1, Poems);
        {poem, Poem} ->
            collect_poems(N, [Poem|Poems])
    end.

This function is run by the main process and, like get_poetry, it recursively loops on itself. It also blocks. The receive statement tells the process to wait for a message to arrive that matches one of the given patterns, and then extracts that message from the process’s “mailbox”.

The collect_poems function waits for two kinds of messages: poems and “DOWN” notifications. The latter is a message sent to the main process when one of the get_poetry processes dies for any reason (this is the monitor part of spawn_monitor). By counting DOWN messages, we know when all the poetry has finished. The former is a message from one of the get_poetry processes containing one complete poem.
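
The spawn-and-collect structure of main and collect_poems can be sketched in Python with a single queue standing in for the main process’s mailbox. The names and message shapes below are illustrative, not a port of the client:

```python
# A thread-and-queue sketch of main/collect_poems: each worker sends a
# ('poem', ...) message and then a ('DOWN', ...) message on exit, so the
# collector knows when every task has finished. Illustrative only.
import queue
import threading

def get_poetry(task_num, poem, main):
    main.put(("poem", poem))
    main.put(("DOWN", task_num))   # spawn_monitor sends this for us in Erlang

def collect_poems(n, mailbox):
    poems = []
    while n > 0:                   # the two receive clauses
        msg = mailbox.get()
        if msg[0] == "DOWN":
            n -= 1
        elif msg[0] == "poem":
            poems.append(msg[1])
    return poems

main = queue.Queue()
for i, poem in enumerate(["fascination", "science"]):
    threading.Thread(target=get_poetry, args=(i, poem, main)).start()

poems = collect_poems(2, main)
print(sorted(poems))  # ['fascination', 'science']
```

Because each worker enqueues its poem before its DOWN message, counting DOWN messages is enough to know that every poem has also arrived — the same reasoning the Erlang client relies on.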

Ok, let’s take the Erlang client out for a spin. First start up three slow poetry servers:

python blocking-server/ --port 10001 poetry/fascination.txt
python blocking-server/ --port 10002 poetry/science.txt
python blocking-server/ --port 10003 poetry/ecstasy.txt --num-bytes 30

Now we can run the Erlang client, which has a similar command-line syntax to the Python clients. If you are on Linux or another UNIX-like system, you should be able to run the client directly (assuming you have Erlang installed and available in your PATH). On Windows you will probably need to run the escript program, with the path to the Erlang client as the first argument (and the remaining arguments for the Erlang client itself).

./erlang-client-1/get-poetry 10001 10002 10003

After that you should see output like this:

Task 3: got 30 bytes of poetry from 127:0:0:1:10003
Task 2: got 10 bytes of poetry from 127:0:0:1:10002
Task 1: got 10 bytes of poetry from 127:0:0:1:10001

This is just like one of our earlier Python clients where we print a message for each little bit of poetry we get. When all the poems have finished the client should print out the complete text of each one. Notice the client is switching back and forth between all the servers depending on which one has some poetry to send.

Figure 45 shows the process structure of our Erlang client:

Figure 45: Erlang poetry client

This figure shows three get_poetry processes (one per server) and one main process. You can also see the messages that flow from the poetry processes to the main process.

So what happens if one of those servers is down? Let’s try it:

./erlang-client-1/get-poetry 10001 10005

The above command contains one active port (assuming you left all the earlier poetry servers running) and one inactive port (assuming you aren’t running any server on port 10005). And we get some output like this:

Task 1: got 10 bytes of poetry from 127:0:0:1:10001

=ERROR REPORT==== 25-Sep-2010::21:02:10 ===
Error in process <0.33.0> with exit value: {{badmatch,{error,econnrefused}},[{erl_eval,expr,3}]}

Task 1: got 10 bytes of poetry from 127:0:0:1:10001
Task 1: got 10 bytes of poetry from 127:0:0:1:10001

And eventually the client finishes downloading the poem from the active server, prints out the poem, and exits. So how did the main function know that both processes were done? That error message is the clue. The error happens when get_poetry tries to connect to the server and gets a connection refused error instead of the expected value ({ok, Socket}). The resulting exception is called badmatch because Erlang “assignment” statements are really pattern-matching operations.

An unhandled exception in an Erlang process causes the process to “crash”, which means the process stops running and all of its resources are garbage collected. But the main process, which is monitoring all of the get_poetry processes, will receive a DOWN message when any of those processes stops running for any reason. And thus our client exits when it should instead of running forever.


Let’s take stock of some of the parallels between the Twisted and Erlang clients:

  1. Both clients connect (or try to connect) to all the poetry servers at once.
  2. Both clients receive data from the servers as soon as it comes in, regardless of which server delivers the data.
  3. Both clients process the poetry in little bits, and thus have to save the portion of the poems received thus far.
  4. Both clients create an “object” (either a Python object or an Erlang process) to handle all the work for one particular server.
  5. Both clients have to carefully determine when all the poetry has finished, regardless of whether a particular download succeeded or failed.

And finally, the main functions in both clients asynchronously receive poems and “task done” notifications. In the Twisted client this information is delivered via a Deferred while the Erlang client receives inter-process messages.

Notice how similar both clients are, in both their overall strategy and the structure of their code. The mechanics are a bit different, with objects, deferreds, and callbacks on the one hand and processes and messages on the other. But the high-level mental models of both clients are quite similar, and it’s pretty easy to move from one to the other once you are familiar with both.

Even the reactor pattern reappears in the Erlang client in miniaturized form. Each Erlang process in our poetry client eventually turns into a recursive loop that:

  1. Waits for something to happen (a bit of poetry comes in, a poem is delivered, another process finishes), and
  2. Takes some appropriate action.

You can think of an Erlang program as a big collection of little reactors, each spinning around and occasionally sending a message to another little reactor (which will process that message as just another event).

And if you delve deeper into Erlang you will find callbacks making an appearance. The Erlang gen_server process is a generic reactor loop that you “instantiate” by providing a fixed set of callback functions, a pattern repeated elsewhere in the Erlang system.
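
The gen_server idea can be sketched in a few lines of Python: a generic loop owns the mailbox and the state, and the specific behavior is supplied as a callback. The names `gen_loop` and `counter` below are invented for illustration, not Erlang or Python APIs:

```python
# A gen_server in miniature: a generic reactor loop "instantiated"
# with a handle_call-style callback function. Illustrative only.
import queue
import threading

def gen_loop(handle, state, mailbox):
    while True:
        msg, reply_to = mailbox.get()
        if msg == "stop":
            return
        state, reply = handle(msg, state)   # user-supplied callback
        reply_to.put(reply)

def counter(msg, count):                    # the "callback module"
    if msg == "incr":
        return count + 1, count + 1

mailbox = queue.Queue()
threading.Thread(target=gen_loop, args=(counter, 0, mailbox)).start()

replies = queue.Queue()
mailbox.put(("incr", replies))
mailbox.put(("incr", replies))
first, second = replies.get(timeout=5), replies.get(timeout=5)
print(first, second)  # 1 2
mailbox.put(("stop", replies))
```

The division of labor is the point: the loop handles waiting and dispatch once, generically, while the callback supplies only the interesting logic — exactly the relationship between the Twisted reactor and your protocol's callbacks.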

So if, having learned Twisted, you ever decide to give Erlang a try I think you will find yourself in familiar mental territory.

Further Reading

In this Part we’ve focused on the similarities between Twisted and Erlang, but there are of course many differences. One particularly distinctive feature of Erlang is its approach to error handling. A large Erlang program is structured as a tree of processes, with “supervisors” in the higher branches and “workers” in the leaves. And if a worker process crashes, a supervisor process will notice and take some action (typically restarting the failed worker).

If you are interested in learning more Erlang then you are in luck. Several Erlang books have either been published recently, or will be published shortly:

  • Programming Erlang — written by one of Erlang’s inventors. A great introduction to the language.
  • Erlang Programming — this complements the Armstrong book and goes into more detail in several key areas.
  • Erlang and OTP in Action — this hasn’t been published yet, but I am eagerly awaiting my copy. Neither of the first two books really addresses OTP, the Erlang framework for building large apps. Full disclosure: two of the authors are friends of mine.

Well that’s it for Erlang. In the next Part we will look at Haskell, another functional language with a very different feel from either Python or Erlang. Nevertheless, we shall endeavor to find some common ground.

Suggested Exercises for the Highly Motivated

  1. Go through the Erlang and Python clients and identify where they are similar and where they differ. How do they each handle errors (like a failure to connect to a poetry server)?
  2. Simplify the Erlang client so it no longer prints out each bit of poetry that comes in (so you don’t need to keep track of task numbers either).
  3. Modify the Erlang client to measure the time it takes to download each poem.
  4. Modify the Erlang client to print out the poems in the same order as they were given on the command line.
  5. Modify the Erlang client to print out a more readable error message when we can’t connect to a poetry server.
  6. Write Erlang versions of the poetry servers we made with Twisted.

Part 19: I Thought I Wanted It But I Changed My Mind

This continues the introduction started here. You can find an index to the entire series here.


Twisted is an ongoing project and the Twisted developers regularly add new features and extend old ones. With the release of Twisted 10.1.0, the developers added a new capability — cancellation — to the Deferred class which we’re going to investigate today.

Asynchronous programming decouples requests from responses and thus raises a new possibility: between asking for the result and getting it back you might decide you don’t want it anymore. Consider the poetry proxy server from Part 14. Here’s how the proxy worked, at least for the first request of a poem:

  1. A request for a poem comes in.
  2. The proxy contacts the real server to get the poem.
  3. Once the poem is complete, send it to the original client.

Which is all well and good, but what if the client hangs up before getting the poem? Maybe they requested the complete text of Paradise Lost and then decided they really wanted a haiku by Kojo. Now our proxy is stuck with downloading the first one and that slow server is going to take a while. Better to close the connection and let the slow server go back to sleep.

Recall Figure 15, a diagram that shows the conceptual flow of control in a synchronous program. In that figure we see function calls going down, and exceptions going back up. If we wanted to cancel a synchronous function call (and this is just hypothetical) the flow control would go in the same direction as the function call, from high-level code to low-level code as in Figure 38:

Figure 38: synchronous program flow, with hypothetical cancellation
Figure 38: synchronous program flow, with hypothetical cancellation

Of course, in a synchronous program that isn’t possible because the high-level code doesn’t even resume running until the low-level operation is finished, at which point there is nothing to cancel. But in an asynchronous program the high-level code gets control of the program before the low-level code is done, which at least raises the possibility of canceling the low-level request before it finishes.

In a Twisted program, the lower-level request is embodied by a Deferred object, which you can think of as a “handle” on the outstanding asynchronous operation. The normal flow of information in a deferred is downward, from low-level code to high-level code, which matches the flow of return information in a synchronous program. Starting in Twisted 10.1.0, high-level code can send information back the other direction — it can tell the low-level code it doesn’t want the result anymore. See Figure 39:

Figure 39: Information flow in a deferred, including cancellation
Figure 39: Information flow in a deferred, including cancellation

Canceling Deferreds

Let’s take a look at a few sample programs to see how canceling deferreds actually works. Note, to run the examples and other code in this Part you will need a version of Twisted 10.1.0 or later. Consider deferred-cancel/

from twisted.internet import defer

def callback(res):
    print 'callback got:', res

d = defer.Deferred()
d.addCallback(callback)
d.cancel()
print 'done'

With the new cancellation feature, the Deferred class got a new method called cancel. The example code makes a new deferred, adds a callback, and then cancels the deferred without firing it. Here’s the output:

Unhandled error in Deferred:
Traceback (most recent call last):
Failure: twisted.internet.defer.CancelledError:

Ok, so canceling a deferred appears to cause the errback chain to run, and our regular callback is never called at all. Also notice the error is a twisted.internet.defer.CancelledError, a custom Exception that means the deferred was canceled (but keep reading!). Let’s try adding an errback in deferred-cancel/

from twisted.internet import defer

def callback(res):
    print 'callback got:', res

def errback(err):
    print 'errback got:', err

d = defer.Deferred()
d.addCallbacks(callback, errback)
d.cancel()
print 'done'

Now we get this output:

errback got: [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.defer.CancelledError'>: 

So we can ‘catch’ the errback from a cancel just like any other deferred failure.

Ok, let’s try firing the deferred and then canceling it, as in deferred-cancel/

from twisted.internet import defer

def callback(res):
    print 'callback got:', res

def errback(err):
    print 'errback got:', err

d = defer.Deferred()
d.addCallbacks(callback, errback)
d.callback('result')
d.cancel()
print 'done'

Here we fire the deferred normally with the callback method and then cancel it. Here’s the output:

callback got: result

Our callback was invoked (just as we would expect) and then the program finished normally, as if cancel was never called at all. So it seems canceling a deferred has no effect if it has already fired (but keep reading!).

What if we fire the deferred after we cancel it, as in deferred-cancel/

from twisted.internet import defer

def callback(res):
    print 'callback got:', res

def errback(err):
    print 'errback got:', err

d = defer.Deferred()
d.addCallbacks(callback, errback)
d.cancel()
d.callback('result')
print 'done'

In that case we get this output:

errback got: [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.defer.CancelledError'>: 

Interesting! That’s the same output as the second example, where we never fired the deferred at all. So if the deferred has been canceled, firing the deferred normally has no effect. But why doesn’t d.callback('result') raise an error, since you’re not supposed to be able to fire a deferred more than once, and the errback chain has clearly run?

Consider Figure 39 again. Firing a deferred with a result or failure is the job of lower-level code, while canceling a deferred is an action taken by higher-level code. Firing the deferred means “Here’s your result”, while canceling a deferred means “I don’t want it any more”. And remember that canceling is a new feature, so most existing Twisted code is not written to handle cancel operations. But the Twisted developers have made it possible for us to cancel any deferred we want to, even if the code we got the deferred from was written before Twisted 10.1.0.

To make that possible, the cancel method actually does two things:

  1. Tell the Deferred object itself that you don’t want the result if it hasn’t shown up yet (i.e., the deferred hasn’t been fired), and thus to ignore any subsequent invocation of callback or errback.
  2. And, optionally, tell the lower-level code that is producing the result to take whatever steps are required to cancel the operation.

Since older Twisted code is going to go ahead and fire that canceled deferred anyway, step #1 ensures our program won’t blow up if we cancel a deferred we got from an older library.

This means we are always free to cancel a deferred, and we’ll be sure not to get the result if it hasn’t arrived (even if it arrives later). But canceling the deferred might not actually cancel the asynchronous operation. Aborting an asynchronous operation requires a context-specific action. You might need to close a network connection, roll back a database transaction, kill a sub-process, et cetera. And since a deferred is just a general-purpose callback organizer, how is it supposed to know what specific action to take when you cancel it? Or, alternatively, how could it forward the cancel request to the lower-level code that created and returned the deferred in the first place? Say it with me now:

I know, with a callback!

Canceling Deferreds, Really

Alright, take a look at deferred-cancel/

from twisted.internet import defer

def canceller(d):
    print "I need to cancel this deferred:", d

def callback(res):
    print 'callback got:', res

def errback(err):
    print 'errback got:', err

d = defer.Deferred(canceller) # created by lower-level code
d.addCallbacks(callback, errback) # added by higher-level code
d.cancel() # done by higher-level code
print 'done'

This code is basically like the second example, except there is a third callback (canceller) that’s passed to the Deferred when we create it, rather than added afterwards. This callback is in charge of performing the context-specific actions required to abort the asynchronous operation (only if the deferred is actually canceled, of course). The canceller callback is necessarily part of the lower-level code that returns the deferred, not the higher-level code that receives the deferred and adds its own callbacks and errbacks.

Running the example produces this output:

I need to cancel this deferred: <Deferred at 0xb7669d2cL>
errback got: [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.defer.CancelledError'>: 

As you can see, the canceller callback is given the deferred whose result we no longer want. That’s where we would take whatever action we need to in order to abort the asynchronous operation. Notice that canceller is invoked before the errback chain fires. In fact, we may choose to fire the deferred ourselves at this point with any result or error of our choice (and thus preempting the CancelledError failure). Both possibilities are illustrated in deferred-cancel/ and deferred-cancel/

Let’s do one more simple test before we fire up the reactor. We’ll create a deferred with a canceller callback, fire it normally, and then cancel it. You can see the code in deferred-cancel/ By examining the output of that script, you can see that canceling a deferred after it has been fired does not invoke the canceller callback. And that’s as we would expect since there’s nothing to cancel.
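
The behaviors we have observed so far can be condensed into a toy model. This is NOT Twisted’s implementation — just the two-step cancellation logic in miniature, with invented names:

```python
# A toy model of deferred cancellation semantics: cancel before firing
# invokes the canceller and yields CancelledError; firing after cancel
# is ignored; canceling after firing is a no-op. Illustrative only.
class ToyDeferred(object):
    def __init__(self, canceller=None):
        self.canceller = canceller
        self.called = False        # has a result arrived?
        self.cancelled = False
        self.result = None

    def callback(self, result):
        if self.cancelled:         # step 1: ignore late results
            return
        self.called = True
        self.result = result

    def cancel(self):
        if self.called:
            return                 # already fired: nothing to do
        if self.canceller:
            self.canceller(self)   # step 2: tell the lower-level code
        self.cancelled = True
        self.result = "CancelledError"

aborted = []
d1 = ToyDeferred(canceller=lambda d: aborted.append("abort the operation"))
d1.cancel()
d1.callback("the poem")            # an older library fires it anyway
print(d1.result, aborted)          # CancelledError ['abort the operation']

d2 = ToyDeferred(canceller=lambda d: aborted.append("never called"))
d2.callback("the poem")
d2.cancel()                        # already fired: canceller not invoked
print(d2.result)                   # the poem
```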

The examples we’ve looked at so far haven’t had any actual asynchronous operations. Let’s make a simple program that invokes one asynchronous operation, then we’ll figure out how to make that operation cancellable. Consider the code in deferred-cancel/

from twisted.internet.defer import Deferred

def send_poem(d):
    print 'Sending poem'
    d.callback('Once upon a midnight dreary')

def get_poem():
    """Return a poem 5 seconds later."""
    from twisted.internet import reactor
    d = Deferred()
    reactor.callLater(5, send_poem, d)
    return d

def got_poem(poem):
    print 'I got a poem:', poem

def poem_error(err):
    print 'get_poem failed:', err

def main():
    from twisted.internet import reactor
    reactor.callLater(10, reactor.stop) # stop the reactor in 10 seconds

    d = get_poem()
    d.addCallbacks(got_poem, poem_error)

    reactor.run()

main()

This example includes a get_poem function that uses the reactor’s callLater method to asynchronously return a poem five seconds after get_poem is called. The main function calls get_poem, adds a callback/errback pair, and then starts up the reactor. We also arrange (again using callLater) to stop the reactor in ten seconds. Normally we would do this by attaching a callback to the deferred, but you’ll see why we do it this way shortly.

Running the example produces this output (after the appropriate delay):

Sending poem
I got a poem: Once upon a midnight dreary

And after ten seconds our little program comes to a stop. Now let’s try canceling that deferred before the poem is sent. We’ll just add this bit of code to cancel the deferred after two seconds (well before the five second delay on the poem itself):

    reactor.callLater(2, d.cancel) # cancel after 2 seconds

The complete program is in deferred-cancel/, which produces the following output:

get_poem failed: [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.defer.CancelledError'>: 
Sending poem

This example clearly illustrates that canceling a deferred does not necessarily cancel the underlying asynchronous request. After two seconds we see the output from our errback, printing out the CancelledError as we would expect. But then, after five seconds, we still see the output from send_poem (though the callback on the deferred doesn’t fire).

At this point we’re just in the same situation as deferred-cancel/ “Canceling” the deferred causes the eventual result to be ignored, but doesn’t abort the operation in any real sense. As we learned above, to make a truly cancelable deferred we must add a cancel callback when the deferred is created.

What does this new callback need to do? Take a look at the documentation for the callLater method. The return value of callLater is another object, implementing IDelayedCall, with a cancel method we can use to prevent the delayed call from being executed.
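
As an aside, the same handle-with-a-cancel-method pattern appears in Python’s standard library: asyncio’s call_later also returns a handle (a TimerHandle) with a cancel method. A minimal sketch using asyncio rather than Twisted:

```python
# The callLater / IDelayedCall pattern in stdlib asyncio: call_later
# returns a handle whose cancel() prevents the delayed call from running.
import asyncio

fired = []

async def main():
    loop = asyncio.get_running_loop()
    handle = loop.call_later(0.2, fired.append, "poem")  # like callLater
    handle.cancel()               # like IDelayedCall.cancel()
    await asyncio.sleep(0.3)      # give it a chance to (not) fire

asyncio.run(main())
print(fired)  # [] -- the delayed call never ran
```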

That’s pretty simple, and the updated code is in deferred-cancel/ The relevant changes are all in the get_poem function:

def get_poem():
    """Return a poem 5 seconds later."""

    def canceler(d):
        # They don't want the poem anymore, so cancel the delayed call
        delayed_call.cancel()

        # At this point we have three choices:
        #   1. Do nothing, and the deferred will fire the errback
        #      chain with CancelledError.
        #   2. Fire the errback chain with a different error.
        #   3. Fire the callback chain with an alternative result.

    d = Deferred(canceler)

    from twisted.internet import reactor
    delayed_call = reactor.callLater(5, send_poem, d)

    return d

In this new version, we save the return value from callLater so we can use it in our cancel callback. The only thing our callback needs to do is invoke delayed_call.cancel(). But as we discussed above, we could also choose to fire the deferred ourselves. The latest version of our example produces this output:

get_poem failed: [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.defer.CancelledError'>: 

As you can see, the deferred is canceled and the asynchronous operation has truly been aborted (i.e., we don’t see the print output from send_poem).

Poetry Proxy 3.0

As we discussed in the Introduction, the poetry proxy server is a good candidate for implementing cancellation, as it allows us to abort the poem download if it turns out that nobody wants it (i.e., the client closes the connection before we send the poem). Version 3.0 of the proxy, located in twisted-server-4/, implements deferred cancellation. The first change is in the PoetryProxyProtocol:

class PoetryProxyProtocol(Protocol):

    def connectionMade(self):
        self.deferred = self.factory.service.get_poem()
        self.deferred.addBoth(lambda r: self.transport.loseConnection())

    def connectionLost(self, reason):
        if self.deferred is not None:
            deferred, self.deferred = self.deferred, None
            deferred.cancel() # cancel the deferred if it hasn't fired

You might compare it to the older version. The two main changes are:

  1. Save the deferred we get from get_poem so we can cancel later if we need to.
  2. Cancel the deferred when the connection is closed. Note this also cancels the deferred after we actually get the poem, but as we discovered in the examples, canceling a deferred that has already fired has no effect.

Now we need to make sure that canceling the deferred actually aborts the poem download. For that we need to change the ProxyService:

class ProxyService(object):

    poem = None # the cached poem

    def __init__(self, host, port):
        self.host = host
        self.port = port

    def get_poem(self):
        if self.poem is not None:
            print 'Using cached poem.'
            # return an already-fired deferred
            return succeed(self.poem)

        def canceler(d):
            print 'Canceling poem download.'
            factory.deferred = None
            connector.disconnect()

        print 'Fetching poem from server.'
        deferred = Deferred(canceler)
        factory = PoetryClientFactory(deferred)
        from twisted.internet import reactor
        connector = reactor.connectTCP(self.host, self.port, factory)
        return factory.deferred

    def set_poem(self, poem):
        self.poem = poem
        return poem

Again, you may wish to compare this with the older version. This class has a few more changes:

  1. We save the return value from reactor.connectTCP, an IConnector object. We can use the disconnect method on that object to close the connection.
  2. We create the deferred with a canceler callback. That callback is a closure which uses the connector to close the connection. But first it sets the factory.deferred attribute to None. Otherwise, the factory might fire the deferred with a “connection closed” errback before the deferred itself fires with a CancelledError. Since this deferred was canceled, having the deferred fire with CancelledError seems more explicit.

You might also notice we now create the deferred in the ProxyService instead of the PoetryClientFactory. Since the canceler callback needs to access the IConnector object, the ProxyService ends up being the most convenient place to create the deferred.

And, as in one of our earlier examples, our canceler callback is implemented as a closure. Closures seem to be very useful when implementing cancel callbacks!
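Here is that closure pattern in miniature, outside Twisted. FakeFactory and FakeConnector are hypothetical stand-ins, but the canceler works just like the one in ProxyService: it closes over both objects, clears the factory's reference first, then disconnects.

```python
class FakeConnector:
    """Hypothetical stand-in for the IConnector from connectTCP."""
    def __init__(self):
        self.disconnected = False
    def disconnect(self):
        self.disconnected = True

class FakeFactory:
    """Hypothetical stand-in for PoetryClientFactory."""
    def __init__(self, deferred):
        self.deferred = deferred

def start_request():
    deferred = object()  # stand-in for a real Deferred
    factory = FakeFactory(deferred)
    connector = FakeConnector()

    def canceler(d):
        # a closure: 'factory' and 'connector' come from the
        # enclosing scope, not from arguments
        factory.deferred = None   # first, so the factory won't fire it
        connector.disconnect()    # then abort the connection

    return factory, connector, canceler

factory, connector, canceler = start_request()
canceler(None)  # simulate the deferred being canceled
print(factory.deferred)        # None
print(connector.disconnected)  # True
```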

Let’s try out our new proxy. First start up a slow server. It needs to be slow so we actually have time to cancel:

python blocking-server/ --port 10001 poetry/fascination.txt

Now we can start up our proxy (remember you need Twisted 10.1.0):

python twisted-server-4/ --port 10000 10001

Now we can start downloading a poem from the proxy using any client, or even just curl:

curl localhost:10000

After a few seconds, press Ctrl-C to stop the client or the curl process. In the terminal running the proxy you should see this output:

Fetching poem from server.
Canceling poem download.

And you should see that the slow server has stopped printing output for each bit of poem it sends, since our proxy hung up. You can start and stop the client multiple times to verify the download is canceled each time. But if you let the poem download run to completion, the proxy caches the poem and serves it immediately thereafter.

One More Wrinkle

We said several times above that canceling an already-fired deferred has no effect. Well, that’s not quite true. In Part 13 we learned that the callbacks and errbacks attached to a deferred may return deferreds themselves. And in that case, the original (outer) deferred pauses the execution of its callback chains and waits for the inner deferred to fire (see Figure 28).

Thus, even though a deferred has fired, the higher-level code that made the asynchronous request may not have received the result yet, because the callback chain is paused waiting for an inner deferred to finish. So what happens if the higher-level code cancels that outer deferred? In that case the outer deferred does not cancel itself (it has already fired, after all); instead, it cancels the inner deferred.

So when you cancel a deferred, you might not be canceling the main asynchronous operation, but rather some other asynchronous operation triggered as a result of the first. Whew!

We can illustrate this with one more example. Consider the code in deferred-cancel/

from twisted.internet import defer

def cancel_outer(d):
    print "outer cancel callback."

def cancel_inner(d):
    print "inner cancel callback."

def first_outer_callback(res):
    print 'first outer callback, returning inner deferred'
    return inner_d

def second_outer_callback(res):
    print 'second outer callback got:', res

def outer_errback(err):
    print 'outer errback got:', err

outer_d = defer.Deferred(cancel_outer)
inner_d = defer.Deferred(cancel_inner)

outer_d.addCallback(first_outer_callback)
outer_d.addCallbacks(second_outer_callback, outer_errback)

outer_d.callback('result')

# at this point the outer deferred has fired, but is paused
# on the inner deferred.

print 'canceling outer deferred.'

print 'done'

In this example we create two deferreds, the outer and the inner, and have one of the outer callbacks return the inner deferred. First we fire the outer deferred, and then we cancel it. The example produces this output:

first outer callback, returning inner deferred
canceling outer deferred.
inner cancel callback.
outer errback got: [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.defer.CancelledError'>: ]
done

As you can see, canceling the outer deferred does not cause the outer cancel callback to fire. Instead, it cancels the inner deferred, so the inner cancel callback fires, and then the outer errback receives the CancelledError (from the inner deferred).

You may wish to stare at that code a while, and try out variations to see how they affect the outcome.

Summary


Canceling a deferred can be a very useful operation, allowing our programs to avoid work they no longer need to do. And as we have seen, it can be a little bit tricky, too.

One very important fact to keep in mind is that canceling a deferred doesn’t necessarily cancel the underlying asynchronous operation. In fact, as of this writing, most deferreds won’t really “cancel”, since most Twisted code was written prior to Twisted 10.1.0 and hasn’t been updated. This includes many of the APIs in Twisted itself! Check the documentation and/or the source code to find out whether canceling the deferred will truly cancel the request, or simply ignore it.
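To see what "simply ignore it" looks like, here is a hypothetical plain-Python sketch (again using a drastically simplified MiniDeferred, not the real class). When the deferred has no cancel callback, cancel() can only errback the waiting code; nothing tells the underlying operation to stop.

```python
class CancelledError(Exception):
    """Stand-in for twisted.internet.defer.CancelledError."""

class MiniDeferred:
    """Hypothetical, simplified Deferred with errback delivery."""

    def __init__(self, canceller=None):
        self._canceller = canceller
        self._fired = False
        self._errbacks = []
        self.result = None

    def add_errback(self, eb):
        self._errbacks.append(eb)

    def cancel(self):
        if self._fired:
            return
        if self._canceller is not None:
            self._canceller(self)  # the only chance to abort real work
        self._fired = True
        self.result = CancelledError()
        for eb in self._errbacks:
            eb(self.result)

class Download:
    """Hypothetical operation that knows nothing about cancellation."""
    def __init__(self):
        self.running = True
        self.deferred = MiniDeferred()  # note: no canceller given

errors = []
dl = Download()
dl.deferred.add_errback(errors.append)
dl.deferred.cancel()
print(isinstance(errors[0], CancelledError))  # True: the waiter moved on
print(dl.running)   # True: but the download never stopped
```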

And the second important fact is that simply returning a deferred from your asynchronous APIs will not necessarily make them cancelable in the complete sense of the word. If you want to implement canceling in your own programs, you should study the Twisted source code to find more examples. Cancellation is a brand new feature so the patterns and best practices are still being worked out.
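As a sketch of what a truly cancelable API involves, here is a hypothetical plain-Python analogue. CancellableRequest and its names are invented for illustration, and threading.Timer stands in for a pending asynchronous operation; the point is that cancel() actually unschedules the work rather than just abandoning the result.

```python
import threading
import time

class CancellableRequest:
    """Hypothetical: a request whose cancel() stops the underlying work."""

    def __init__(self, on_result, delay=0.1):
        self.cancelled = False
        self._timer = threading.Timer(delay, on_result, args=('poem text',))
        self._timer.start()

    def cancel(self):
        self.cancelled = True
        self._timer.cancel()  # actually unschedules the pending work

results = []
req = CancellableRequest(results.append)
req.cancel()       # cancel before the work completes
time.sleep(0.2)    # long enough to show the timer really was unscheduled
print(results)     # []: the operation was truly aborted
```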

Looking Ahead

At this point we’ve learned just about everything about Deferreds and the core concepts behind Twisted, which means there’s not much more to introduce; the rest of Twisted consists mainly of specific applications, like web programming or asynchronous database access. So in the next couple of Parts we’re going to take a little detour and look at two other systems that use asynchronous I/O to see how some of their ideas relate to the ideas in Twisted. Then, in the final Part, we will wrap up and suggest ways to continue your Twisted education.

Suggested Exercises

  1. Did you know you can spell canceled with one or two els? It’s true. It all depends on what sort of mood you’re in.
  2. Peruse the source code of the Deferred class, paying special attention to the implementation of cancellation.
  3. Search the Twisted 10.1.0 source code for examples of deferreds with cancel callbacks. Study their implementation.
  4. Make the deferred returned by the get_poetry method of one of our poetry clients cancelable.
  5. Make a reactor-based example that illustrates canceling an outer deferred which is paused on an inner deferred. If you use callLater you will need to choose the delays carefully to ensure the outer deferred is canceled at the right moment.
  6. Find an asynchronous API in Twisted that doesn’t support a true cancel and implement cancellation for it. Submit a patch to the Twisted project. Don’t forget unit tests!