
Why Events Are A Bad Idea (for high-concurrency servers) by Rob von Behren, Jeremy Condit and Eric Brewer (2003, read in 2020)


Disclaimer: these are notes I took while reading this paper. They include citations I found interesting or enlightening or particularly well-written. In some cases, I’ve pointed out which of these applies to which citation; in others, I have not. Any benefit you gain from reading these notes is purely incidental to the purpose they serve of reminding me what I once read. Please see Wikipedia for a summary if I’ve failed to provide one sufficient for your purposes. If my notes serve to trigger an interest in this paper, then I’m happy for you.

This is an older paper (PDF) (University of California at Berkeley) that discusses event-based systems (essentially cooperative multi-tasking) vs. threaded implementations (pre-emptive multi-tasking). The conclusion of this paper from 2003 (nearly 20 years ago) is that threads are hands-down easier to work with and deliver the promised performance, provided that the threading implementation is robust.

That events vs. threads was still being debated at the time was largely due to very sub-standard threading implementations that invited deadlocks, race conditions, and terrible performance. Modern threading implementations are built on OS constructs that trade memory for performance by giving each thread its own stack; switching between threads/tasks then only requires swapping out the registers.

It’s interesting that one of the main arguments against threads was “restrictive control flow” because that issue has now been almost completely addressed by the Promise/Future pattern, encapsulated in an even easier-to-use syntax as async/await in languages like C#, JavaScript, TypeScript, and Rust. This paradigm abstracts subroutines without making any promises about how the code in those subroutines is executed.
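
As a rough sketch (the fetchUser, fetchOrders, and loadDashboard names below are invented for illustration, not taken from the paper or any particular library), a TypeScript async function reads like an ordinary sequential subroutine, while whether each await actually suspends, and where the continuation runs, is left to the runtime:

```typescript
// Hypothetical helpers for the sketch: one resolves immediately, the other
// simulates waiting on asynchronous I/O.
async function fetchUser(id: string): Promise<string> {
  return `user-${id}`; // already available; no real waiting involved
}

async function fetchOrders(user: string): Promise<string[]> {
  await new Promise<void>((resolve) => setTimeout(resolve, 10)); // simulated I/O
  return [`order for ${user}`];
}

// Reads like a plain subroutine; the awaits mark where the runtime *may*
// suspend, without dictating how or where the work is scheduled.
async function loadDashboard(id: string): Promise<string> {
  const user = await fetchUser(id);
  const orders = await fetchOrders(user);
  return `${user}: ${orders.length} order(s)`;
}

loadDashboard("42").then(console.log);
```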

Just from the keywords, it’s obvious that the idea is to execute the code asynchronously, but that isn’t a requirement in all cases. The .NET literature is full of discussions of optimizations that balance a cooperative approach for quick “bailout” scenarios (where, e.g., a value is available in constant time and doesn’t actually need a thread because it will never wait on any asynchronous I/O) against code paths that seamlessly grab a thread from a pool and schedule the work asynchronously (and preemptively) only when needed. Many of these code paths are even allocation-free in the .NET Core runtime, leading to massive performance gains over older implementations.
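
A minimal TypeScript analogue of that fast-path idea (the cache, loadFromDisk, and getConfig names below are made up for the sketch; unlike .NET’s allocation-free fast path, a resolved JavaScript promise still allocates and defers its continuation to a microtask, so this only illustrates the control flow): a cache hit returns an already-resolved promise and never touches asynchronous I/O, while only a miss takes the genuinely asynchronous path.

```typescript
const cache = new Map<string, string>();

// Hypothetical slow path standing in for real asynchronous I/O.
async function loadFromDisk(key: string): Promise<string> {
  await new Promise<void>((resolve) => setTimeout(resolve, 10));
  return `value-for-${key}`;
}

function getConfig(key: string): Promise<string> {
  const hit = cache.get(key);
  if (hit !== undefined) {
    // Fast path ("bailout"): the value is available in constant time,
    // so the caller never waits on any asynchronous I/O.
    return Promise.resolve(hit);
  }
  // Slow path: schedule the real asynchronous work and cache the result.
  return loadFromDisk(key).then((value) => {
    cache.set(key, value);
    return value;
  });
}

// The second call hits the cache and completes without any asynchronous wait.
getConfig("timeout").then(() => getConfig("timeout")).then(console.log);
```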

Recently, the developers of Rust have written several interesting papers (and white-paper–length blog posts) about the async/await implementation in Rust that go in the same direction: weighing the pros and cons of event-based vs. thread-based implementations.

The authors of this paper concluded with:

“Although event systems have been used to obtain good performance in high concurrency systems, we have shown that similar or even higher performance can be achieved with threads. Moreover, the simpler programming model and wealth of compiler analyses that threaded systems afford gives threads an important advantage over events when writing highly concurrent servers. In the future, we advocate tight integration between the compiler and the thread system, which will result in a programming model that offers a clean and simple interface to the programmer while achieving superior performance.”

And so it would come to be.

Citations

None.