The Avail Programming Language

Good Things Come to Those Who Wait

In the last newsletter, I made the optimistic claim that newsletters would be published weekly. (Ha!) That was back in October, and the long silence since then may have given the impression that Avail development has been standing still. I hope that the steady stream of blog posts has disabused you of this notion, even if you are disconsolate about the scarcity of newsletters.

For those who have been waiting patiently these past four months, I am pleased to announce the reason for the delay: Avail now fully supports concurrency!

The concurrency model focuses on the fiber, a new primitive data type that represents a lightweight schedulable "worker". Compared to native threads, fibers are cheap: they are inexpensive to create and schedule, and can usually represent their entire machine state with only a few hundred bytes of memory. In order to leverage today's multi-core hardware configurations, fibers are backed by a large, auto-scaling pool of native threads. Avail easily supports hundreds of thousands of fibers multiplexed onto significantly fewer threads.
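If it helps to picture the many-to-few scheduling arrangement, here is a rough analogy in ordinary Java rather than Avail: a very large number of tiny units of work multiplexed onto a small, fixed pool of native threads. The class name and counts below are invented for illustration and say nothing about how Avail's runtime actually schedules fibers.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A rough many-to-few analogy: far more units of work than native threads,
// all multiplexed onto a small fixed pool.
public class ManyWorkersFewThreads {
    public static void main(String[] args) throws InterruptedException {
        final int workerCount = 100_000;  // stand-ins for fibers
        final int threadCount = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(threadCount);
        CountDownLatch done = new CountDownLatch(workerCount);
        for (int i = 0; i < workerCount; i++) {
            final long id = i;
            pool.submit(() -> {
                // Each unit of work is tiny and cheap to schedule.
                long ignored = id * id;
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        System.out.println(workerCount + " workers ran on " + threadCount + " threads.");
    }
}
```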

Synchronization of the underlying native threads uses the novel mechanism of "global lock elision" whenever possible. If a value is visible only to a single fiber, then the native thread that is running the fiber may forego entering a critical section when operating on the value's mutable state. Only when a value becomes visible to a second fiber is synchronization required to ensure coherence. The implementation automatically detects when such visibility transitions occur, and ensures that sensitive operations are thenceforth protected (by low-level synchronization devices) against dangerous races.
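To make the fast-path idea concrete, here is a purely conceptual sketch in Java. The `shared` flag and `markShared` hook are hypothetical stand-ins for the visibility tracking that Avail's runtime performs automatically; they are not part of Avail.

```java
// Conceptual only: `shared` and `markShared` stand in for the runtime's
// automatic detection of a value becoming visible to a second fiber.
final class ElidableCounter {
    // Flipped (at most once) by the owning fiber at the moment it publishes
    // this object to another fiber, so the unsynchronized fast path below
    // cannot race with another fiber's access.
    private volatile boolean shared = false;
    private long count = 0;

    void markShared() {
        shared = true;
    }

    void increment() {
        if (!shared) {
            count++;                  // single-owner fast path: no lock
        } else {
            synchronized (this) {     // shared path: protect mutable state
                count++;
            }
        }
    }
}
```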

Synchronization of the fibers themselves, within Avail code, is made possible via several new primitives: atomic get-and-set, atomic compare-and-swap, atomic fetch-and-add, park, and unpark. These primitives provide the necessary support for building synchronization devices. The Avail library already supports non-reentrant mutexes, reentrant mutexes, and monitors.
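As a rough illustration of how such a device can be assembled from these primitives, here is a toy non-reentrant mutex sketched in Java using its compare-and-swap and park/unpark analogues. This is not the Avail library's mutex, only the general shape of one.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

// A toy non-reentrant mutex built from compare-and-swap plus park/unpark.
final class SimpleMutex {
    private final AtomicBoolean locked = new AtomicBoolean(false);
    private final ConcurrentLinkedQueue<Thread> waiters = new ConcurrentLinkedQueue<>();

    void lock() {
        // Fast path: atomically claim the mutex if it is free.
        while (!locked.compareAndSet(false, true)) {
            // Slow path: enqueue ourselves, then park until unparked.
            Thread current = Thread.currentThread();
            waiters.add(current);
            // Re-check after enqueueing so an unlock can't slip past us.
            if (locked.get()) {
                LockSupport.park(this);
            }
            waiters.remove(current);
        }
    }

    void unlock() {
        locked.set(false);
        // Wake one waiter (if any) so it can retry the compare-and-swap.
        Thread next = waiters.peek();
        if (next != null) {
            LockSupport.unpark(next);
        }
    }
}
```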

The Avail library also includes several parallel algorithms: "For each of⁇_in parallel do_" and "map_in parallel through_". These behave like "For each of⁇_do_" and "map_through_", their serial cousins, except that multiple fibers are used to process the elements. The current algorithms work only for tuples, but many more algorithms are coming soon.
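For readers unfamiliar with the idea, here is the spirit of "map_in parallel through_" sketched in Java: apply a function to every element, let the applications run on a pool of workers, and collect the results in the original order. The names below are invented for this example and do not reflect Avail's implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

// Apply `transform` to each element on a pool of workers, preserving order.
final class ParallelMap {
    static <A, B> List<B> mapInParallel(List<A> input, Function<A, B> transform)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(
            Runtime.getRuntime().availableProcessors());
        try {
            List<Future<B>> futures = new ArrayList<>();
            for (A element : input) {
                futures.add(pool.submit(() -> transform.apply(element)));
            }
            List<B> results = new ArrayList<>(futures.size());
            for (Future<B> future : futures) {
                results.add(future.get());  // results keep the input order
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Square each element of a small "tuple" using multiple workers.
        System.out.println(mapInParallel(List.of(1, 2, 3, 4, 5), x -> x * x));
    }
}
```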

The new module "Foundation Tests/Concurrency Tests" contains numerous examples of creating fibers, waiting for fibers to complete, retrieving the final values produced by fibers, using synchronization devices, and using parallel algorithms. A new page describing Avail's concurrency model in greater detail is underway for the site's documentation section. There is obviously much more to say about concurrency in Avail, and that page will be the official venue for saying it.

The next newsletter will celebrate (at least) the arrival of asynchronous file and socket I/O. Given the new architecture, these features should be trivial to build… so hopefully another four months won't pass! We are also hammering out a license agreement so that you, dear reader, can actually get your hands on Avail.
