The iteration benchmark works like this: I create a Map<uint64_t, uint64_t>, and after each insertion of a random element the whole map is iterated. Once 50k elements are inserted, elements are removed one by one, and after each removal the map is iterated again. In total, exactly 2.5 billion entries are iterated.

Map<uint64_t, uint64_t> map;
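// num_iters is the 50k from above; rng and result are defined elsewhere in the benchmark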

auto const initialRngState = rng.state();

// measure insert, then iterate the whole map
for (size_t n = 0; n < num_iters; ++n) {
    map[rng()] = n;
    for (auto const& keyVal : map) {
        result += keyVal.second;
    }
}

// reset rng back to initial state
rng.state(initialRngState);

// measure erase, then iterate the whole map
for (size_t n = 0; n < num_iters; ++n) {
    map.erase(rng());
    for (auto const& keyVal : map) {
        result += keyVal.second;
    }
}
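
The 2.5 billion figure follows from summing the map sizes over both phases. A quick sanity check, assuming num_iters is the 50k from above:

#include <cstdint>
#include <cstdio>

int main() {
    uint64_t const num_iters = 50000;
    // insert phase: after the n-th insert the map holds n entries,
    // so 1 + 2 + ... + 50000 entries are iterated in total
    uint64_t const insertPhase = num_iters * (num_iters + 1) / 2;
    // erase phase: after each erase the map holds 49999, 49998, ..., 0 entries
    uint64_t const erasePhase = (num_iters - 1) * num_iters / 2;
    // prints 2500000000
    std::printf("%llu\n", static_cast<unsigned long long>(insertPhase + erasePhase));
}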

Results

Hashes

In this benchmark the hash function should be completely irrelevant: hashing happens only on the 100k inserts and erases, while the iteration that dominates the runtime never calls the hash function. And indeed, the results for the different hashers are practically identical.

Hashmaps

Here it is all about cache locality: the more compactly the data is packed together, the faster iteration becomes. That's why tsl::sparse_map is so blazingly fast, followed only by the other sparse map in this benchmark, spp::sparse_hash_map.

robin_hood::unordered_flat_map comes in third, making it the fastest non-sparse map. It gets that speed from its 1-byte-per-entry overhead structure: while iterating, the map can skip up to 8 empty entries at once. robin_hood::unordered_node_map is slower due to the additional indirection, but it is still faster than the rest. Interestingly, tsl::robin_map is quite slow here; I assume it is simply not optimized for fast iteration.
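
To illustrate the idea, here is a minimal sketch of how an iterator can skip 8 empty slots in one step by loading 8 of the 1-byte info values at once and checking whether they are all zero. The names and layout are illustrative only, not robin_hood's actual internals; the sketch just assumes a flat array of info bytes where 0 marks an empty slot.

#include <cstdint>
#include <cstring>
#include <cstddef>

// Advance idx to the next occupied slot, skipping empty slots 8 at a time.
std::size_t nextOccupied(uint8_t const* info, std::size_t idx, std::size_t capacity) {
    while (idx + 8 <= capacity) {
        uint64_t block;
        std::memcpy(&block, info + idx, sizeof(block)); // load 8 info bytes in one go
        if (block != 0) {
            break; // at least one of these 8 slots is occupied
        }
        idx += 8; // all 8 slots empty: skip them at once
    }
    while (idx < capacity && info[idx] == 0) {
        ++idx; // finish with a byte-by-byte scan
    }
    return idx;
}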

Chart

Each entry shows total time.

  1. blue: iterate the whole map after each insert, until 50k entries are reached
  2. orange: iterate the whole map after each erase, until the map is empty