Brief comparison of Rust iterators and D ranges

As I have mentioned before, Rust is the programming language that currently interests me most. Lately I have been experimenting with how one might approach higher-level library and application design in it.

In the D programming language, which I am quite experienced with, the main API design success story comes from the concept of ranges - abstractions over sequences of values that generate them lazily. Ranges ended up being the cornerstone of zero-cost yet incredibly convenient library designs, and I was naturally curious about extending this success story to Rust. And Rust iterators indeed look much more similar to D ranges than to C++ iterators!

The following notes come from experimenting with a similar design approach in Rust and researching the issues I encountered.


Let's start with something as simple as a range producing a sequence of integer elements up to a hard-coded limit. This is how one could write it in D:

struct Numbers {
    int current = 0;

    void popFront() {
        ++this.current;
    }

    bool empty() {
        return this.current >= 42;
    }

    int front() {
        return this.current;
    }
}
D ranges are based on structural typing, which means there is no special entity in the type system that defines them. Instead, any struct or class implementing the specific methods (popFront, front and empty) is recognized by both the standard library and the compiler as a range.

And this is the matching Rust snippet:

struct Numbers {
    current: i32,
}

impl Iterator for Numbers {
    type Item = i32;

    fn next(&mut self) -> Option<Self::Item> {
        if self.current >= 42 {
            None
        } else {
            let num = self.current;
            self.current += 1;
            Some(num)
        }
    }
}

In Rust, Iterator is one of the traits defined in the standard library. Traits are similar to Haskell typeclasses or the proposed C++ concepts and are enforced by the compiler - when you try to implement an iterator, failing to define the required methods results in an immediate compilation error.

Even such a basic synthetic snippet already shows some fundamental differences between D and Rust in terms of both philosophy and functionality:

  1. Thanks to traits, the Rust approach is less prone to subtle errors like a typo in a method name or wrong deduction of the element type. If it compiles, it is pretty much guaranteed to be a valid iterator. At the same time, the flexible structural nature of the D definition allows a much larger variety of features - for example, one can define empty as a constant equal to false, letting the compiler figure out at compile time that the range is infinite. Or make front return by reference in case the element type is non-copyable.
  2. Rust combines advancing an iterator and getting the actual element value in a single method, next, while D allows repeated access to front at any time. The former is more elegant in the sense that it doesn't require any specific convention about how the methods should be called, but it makes implementing some algorithms a bit awkward if they need to check the iterator state from different contexts without advancing it.

Regarding the next vs front distinction, it is important to note that Rust iterators do provide a peekable method which returns a wrapper struct with a peek method, allowing the same repeated access. But it is implemented by advancing the iterator and caching the last result, which means one can't pass the original iterator down the chain unmodified unless it was cloned. One can work around this by passing the actual Peekable<T> struct down the call chain instead, but it doesn't scale as well.


Ranges alone are not that interesting. What would be the point of lazy iteration if one could just write a foreach loop to do pretty much the same? Things become really exciting when you introduce the notion of range adaptors ("iterator adaptors" in the Rust case), which apply various algorithms - also lazily! A simple definition of an adaptor would be "something that takes an iterator/range as input and returns a new iterator/range which applies extra logic as it advances".

This is where D and Rust start to look notably more different. Consider an adaptor which multiplies all range elements by 2.

This is one possible D implementation:

import std.range;

struct Mul2 (R)
    if (isInputRange!R && is(ElementType!R == int))
{
    R input;

    bool empty () { return input.empty; }
    void popFront () { input.popFront(); }
    int front () { return input.front * 2; }
}

Mul2!R mul2 (R) (R range)
    if (isInputRange!R && is(ElementType!R == int))
{
    return Mul2!R(range);
}

unittest {
    import std.algorithm.comparison : equal;
    auto r = [ 1, 2, 3 ];
    assert(equal(r.mul2(), [ 2, 4, 6 ]));
}

For those who are not familiar with D, this can look a bit complicated compared to the earlier snippet, as it uses more distinct D features. Some explanations:

  • mul2 is just a regular templated function, similar to what you can find in C++. It is templated on a single generic type parameter, R, which is restricted by a template constraint to only work when R is some input range type producing elements of type int.
  • Struct Mul2 implements a new range which forwards all operations to the underlying one, with the only addition of multiplying front by 2 when it is called.
  • Despite the fact that mul2 is just a regular function and not a struct method, it is still possible to call it with method syntax, r.mul2(), in the test case thanks to UFCS - Uniform Function Call Syntax. This is a D feature allowing any function to be called by moving its first argument to the left-hand side of the dot, as if the function was a method of that argument.

Probably the most important thing about composing ranges and adaptors is that it is extremely efficient - no temporary storage needs to be allocated, and no computation happens until the whole adaptor chain gets advanced. With a sufficiently smart optimizing compiler, the generated machine code can end up not that different from a C-style loop-and-condition implementation.
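The same property holds on the Rust side; a minimal sketch comparing a lazy adaptor chain with the equivalent hand-written loop:

```rust
fn main() {
    // Lazy adaptor chain: nothing is computed until `collect` drives it.
    let chained: Vec<i32> = (0..10)
        .map(|x| x * 2)
        .filter(|x| x % 3 == 0)
        .collect();

    // Hand-written loop computing the same result eagerly:
    let mut looped = Vec::new();
    for x in 0..10 {
        let y = x * 2;
        if y % 3 == 0 {
            looped.push(y);
        }
    }

    // Both produce [0, 6, 12, 18].
    assert_eq!(chained, looped);
}
```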

And of course adaptor idea is crucial for Rust iterators too:

pub struct Mul2<T>
    where T: Iterator<Item = i32>
{
    input: T,
}

// Implement adaptor iterator:
impl<T> Iterator for Mul2<T>
    where T: Iterator<Item = i32>
{
    type Item = i32;

    fn next(&mut self) -> Option<Self::Item> {|x| x * 2)
    }
}

// Define new trait allowing to call new adaptor with a method syntax:
pub trait Mul2Adaptor : Iterator<Item = i32>
    where Self: Sized
{
    fn mul2(self) -> Mul2<Self> {
        Mul2 { input: self }
    }
}

// Implement new trait for all relevant types:
impl<T> Mul2Adaptor for T
    where T: Iterator<Item = i32>
{
}

#[test]
fn test_mul2() {
    let v: Vec<i32> = vec![1, 2, 3, 4].into_iter().mul2().collect();
    assert!(v == vec![2, 4, 6, 8]);
}

As with the D version, this snippet likely requires more explanation for those not particularly familiar with Rust:

  • Contrary to iterators, adaptors in Rust don't have a special trait or language support of their own. An adaptor is just another iterator which adds extra processing in its next function, pretty much the same as in D.
  • To avoid dealing with lifetime annotations, I define mul2 here to accept the self argument by move, exhausting the original iterator source, which is not what one normally wants to do. But the basic code structure is more visible this way.
  • In the absence of UFCS, the only way to add new adaptor methods to arbitrary iterators is to define a new trait providing the method and do a so-called "blanket implementation" of that trait for all relevant types.
  • As the Rust stdlib doesn't have a built-in utility to compare arbitrary iterators, I resort to collecting the result into another vector, which wouldn't otherwise be necessary. Such a utility is provided by the itertools crate if needed.

This was the first moment when I started to get slightly annoyed with Rust approach during the whole investigation. But let's start with the good stuff.

There is one huge advantage of the Rust version which I can't stress strongly enough - it ensures that iterator algorithm implementations match their API before they are actually used. Thanks to the trait system, you have to declare every single property required from a generic argument type upfront, even small things such as being copyable or having a defined size. This means an algorithm implementation won't have tiny accidental implicit requirements you don't even realize yourself until a user of your library hits an obscure compilation error. This was a recurring problem for me when working with generic algorithms and ranges in D.

But there are also numerous annoying issues which are not critical in any way but make writing actual code awkward enough that I keep wondering if anyone has put serious thought into its ergonomics. For example, this blanket implementation trick is not something you are told about when reading about iterators in the Rust book - I had to spend quite some time before finding it and figuring out why it is necessary.

Another inconvenience I have encountered is that adaptors implemented in the standard library (like map or filter) don't actually use this approach - instead they are all defined and implemented as part of the main Iterator trait. This means one can't reimplement one specific adaptor in one's own trait - there are ways to disambiguate traits in the language, but none of them seem compatible with chained calls AFAICS. If each adaptor was provided by its own trait, one could control which implementation is used with regular module system features.

And finally, having to separately define a trait and provide a blanket implementation feels needlessly cumbersome considering how common this code pattern is. It is probably possible to implement a macro for it, but at the same time I feel that supporting such idioms with language syntax sugar is the best way to ensure they actually get popular.

Forward Ranges That Work

In D there are more kinds of ranges than just the basic input range. Another simple but important concept is the idea of a forward range:

import std.range;

void foo (R) (R range)
    if (isForwardRange!R)
{
    // same as InputRange, but also defines 'save' method:
    auto copy =;
    assert(range.front == copy.front);
}

The rationale for having a separate kind of range that can save its current state into another variable comes from the fact that some things one may want to express as ranges can't be copied semantically. A typical example would be a range of lines coming from stdin - it is not possible to iterate it multiple times without buffering all the read lines somewhere.

Sadly, this is an example of a great idea that completely failed because it was not sufficiently enforced. It is the dark side of the expressive power of unchecked templates - the compiler will happily accept code that uses some functionality by accident. As a result, many algorithms that should only work with forward ranges will also accept input ranges and copy them with plain variable assignment. Each such case is a bug, but the mistake is so easy to make that the idea of distinguishing forward ranges has compromised itself.

Ironically, Rust manages to do much better here without actually putting any intentional design into it:

fn foo<T>(mut input: T)
    where T       : Clone + Iterator,
          T::Item : PartialEq, // needed only for assert
{
    let mut copy = input.clone();
    assert!( ==;
}

It again comes from the beauty of the pedantic strictness of traits - one can't accidentally copy an iterator in a generic algorithm, simply because Rust won't allow copying any type in a generic algorithm unless you explicitly require it to implement the Clone trait.

Thus the Clone + Iterator combo is an unintentional but perfect match for the D notion of a forward range. Sadly, it doesn't seem to be recognized as a useful concept in existing Rust libraries. One sad surprise was how fat the group_by definition in itertools is, resorting to something as crazy as allocating a vector when buffering is needed - hardly a justified price just so it can be implemented for all possible iterators. I have attempted to implement a similar utility limited to Clone + Iterator, and it ended up much simpler and more stack friendly, at the cost of fewer features and having to effectively iterate the original source twice. In my own typical coding tasks I'd probably take such a trade-off 9 times out of 10.
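The general shape of such Clone + Iterator algorithms is easy to sketch - clone the iterator for each pass instead of buffering elements (the function below is a hypothetical example, not the group_by replacement itself):

```rust
// Two-pass algorithm over a Clone + Iterator source: no buffering,
// the cloned iterator is simply consumed a second time.
fn sum_and_max<I>(iter: I) -> (i32, Option<i32>)
where
    I: Clone + Iterator<Item = i32>,
{
    let sum: i32 = iter.clone().sum(); // first pass consumes the clone
    let max = iter.max();              // second pass consumes the original
    (sum, max)
}

fn main() {
    let (sum, max) = sum_and_max(vec![3, 1, 4].into_iter());
    assert_eq!(sum, 8);
    assert_eq!(max, Some(4));
}
```

Cloning a vector's iterator is cheap (it clones a pair of pointers, not the data), which is exactly why this trade-off is attractive for forward-range-like sources.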

Sad Tale of Streaming Iterators

I have managed to mostly stay away from the topic of lifetimes so far, but this golden age of no worries ends now. Sorry :( If you don't have at least a basic grasp of the lifetime and ownership system in Rust, it may be a good idea to either read about it first or just skip this part of the article completely.

Consider this simple D program:

void main () {
    import std.stdio, std.algorithm;

        .filter!(line => line.length > 0)
        .each!writeln;
}

It reads lines of text from stdin, ignores empty ones and prints the others. All this happens while reusing the same memory buffer inside byLine, resulting in O(1) allocation complexity. Such an API is currently impossible to implement efficiently in Rust - there does exist a .lines() iterator, but it allocates a new string for each line.

"Impossible" may seem like a strong statement, and indeed everything becomes possible if one uses unsafe Rust. But that is not what the language was designed for, and an inherently unsafe API is certain to be rejected in any serious library - Rust's promise of compiler-guaranteed safety is too important to abandon.
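To be precise, safe Rust can reuse a single buffer if you drop down to an explicit read_line loop - it is only the lazy Iterator interface that forces the per-line allocation. A sketch (using a Cursor in place of stdin so the example is self-contained):

```rust
use std::io::{BufRead, Cursor};

fn main() {
    // Cursor stands in for stdin here.
    let mut reader = Cursor::new("one\n\ntwo\n");
    let mut buf = String::new();
    let mut non_empty = Vec::new();

    // One buffer, reused for every line - O(1) allocations for the
    // reading itself, but no composable adaptor chain.
    while reader.read_line(&mut buf).unwrap() > 0 {
        let line = buf.trim_end();
        if !line.is_empty() {
            non_empty.push(line.to_string()); // copy only the lines we keep
        }
        buf.clear();
    }

    assert_eq!(non_empty, vec!["one", "two"]);
}
```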

The problem with the safety of this D snippet becomes obvious if you try leaking a reference to the byLine internal buffer:

void main () {
    import std.stdio;

    auto range = stdin.byLine();
    auto s = range.front;;
    writeln(s); // will print next line, using length from the first one!
}

This problem has surprised a sufficient number of D newbies to justify adding the byLineCopy range, which allocates a new string each time, same as lines in Rust. But the important difference here is that D can afford to keep the old unsafe range implementation for those who know how to use it to improve performance - in D, safety is valued but not strictly promised. Rust can't make such compromises.

Someone more familiar with Rust may immediately object: "Wait, but isn't this exactly what lifetime annotations are for?". Indeed, the Rust language allows explaining such a relation to the compiler:

fn next<'a> (&'a mut self) -> Option<&'a mut str>;

Implementing an iterator next method with such an annotation would make it impossible to call next again (or any other method that requires mutably borrowing self) until the returned value is completely discarded. Being able to verify such relations at compile time with zero runtime overhead is a big selling point for Rust, and D can't provide anything close, not even with the recent scope enhancements.

The one problem that kills it all is that the Iterator trait in the standard library does not define such a lifetime and can't do so without breaking some legitimate use cases. One can define one's own NewIterator of course, but it won't be compatible with all the existing library code interfacing with iterators. Which is a rather annoying limitation, considering both traits are essentially the same and only differ in how lifetime attributes are applied.

In the meanwhile, one can of course resort to runtime checks and use a reference-counted buffer instead, allowing the iterator to advance only if the current buffer ref count is 1. But this is no longer zero-overhead, and having to compromise like that makes the main Rust selling point much less pointy.

Conclusions So Far

My main issue with iterators in Rust is that they don't feel like a sufficiently researched idiomatic design approach, and they don't seem to be promoted as such either. When you try out various libraries, all the cool zero-overhead lazy iteration examples work with relatively simple pipelines of 3-4 chained calls of map/filter/collect, and resorting to intermediate allocations for anything slightly more complicated is accepted practice. Not something I am eager to tolerate in a next-gen systems programming language.

At the same time, Rust has all the right concepts. The one big language feature missing right now seems to be higher-kinded type support to address the streaming iterator problem. Everything else is mostly a matter of coding style preferences and advocated idioms - and of whether the Rust community will eventually find these idioms worth pushing for.

Ranges in D are much more focused on getting things done right here and now - they are the de-facto standard API base for all new libraries, and the very few inherent limitations imposed by the compiler make it possible to fit an arbitrary data source or algorithm into the range model. This fits D's ideological pragmatism, but it is already hitting scaling issues, with subtle bugs appearing each time some corner case is missed or a convention ignored.

The best thing about D ranges is that they are a tool that has already proved its worth, and I wish more developers working on other languages and their libraries would use that experience to build something as practical but more elegant on top.