Go 1.24 introduces a new map implementation, inspired by Google’s Swiss Tables, which brings significant optimizations and performance enhancements to the language’s built-in map type. While Go’s previous map implementation was already efficient, the new design takes it a step further by introducing a clever approach to data organization and access.
To help digest this change, let’s use a relatable analogy that illustrates how the new map works and how it differs from the previous implementation: a library. Just as a library organizes books in a way that makes them easy to find and access, a map organizes data for efficient retrieval.
This analogy will provide a high-level understanding of the key improvements without delving too deeply into technical details. For those interested in a more in-depth exploration, we’ll reference additional resources throughout the explanation.
The Library Analogy
Think of Go’s map as a library designed to store books. Here’s how it works:
1. Tables Are Library Sections
The map starts with one table, which is like a section of the library. If this section gets too crowded, the library adds another section. Each table is divided into smaller units called groups.
2. Groups Are Bookshelves
Each table is made up of multiple groups, which are like bookshelves in the library. A group can hold up to 8 books (key-value pairs). These groups are the fundamental storage units in Go maps.
3. Control Word: The Librarian’s Cheat Sheet
Each bookshelf has a label taped to it, called the control word, which acts as the librarian’s cheat sheet. It contains metadata about the books on that shelf:
- It stores tiny “fingerprints” of each book’s ID (derived from its hash).
- It marks whether slots on the shelf are empty, occupied, or deleted.
This cheat sheet helps librarians quickly locate books without flipping through every slot.
It can be pictured as follows:
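A rough sketch in Go of what one group conceptually holds. The type and constant names below are purely illustrative; the real runtime types in internal/runtime/maps differ in their details:

```go
// groupSlots is the number of books (key-value pairs) per bookshelf.
const groupSlots = 8

// Control bytes with the high bit set mark special states; a full slot
// stores the key's 7-bit fingerprint (H2), so its high bit is clear.
const (
	ctrlEmpty   = 0b1000_0000
	ctrlDeleted = 0b1111_1110
)

// group is one "bookshelf": 8 slots plus one byte of metadata per slot.
type group struct {
	ctrl   [groupSlots]uint8  // the control word: fingerprints and empty/deleted markers
	keys   [groupSlots]string // keys and values live side by side...
	values [groupSlots]string // ...so related data stays close in memory
}

// table is one "library section": a collection of bookshelves.
type table struct {
	groups []group
}

// swissMap is the whole "library": it starts with one section and adds
// more sections as the map grows.
type swissMap struct {
	directory []*table
}
```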
How It Works: Storing and Retrieving Books
Let’s walk through an example of storing and retrieving a book in this library.
Storing a Book
Say you want to store “The Great Gatsby” by F. Scott Fitzgerald in the map.
1. Generate a Hash: The librarian generates a unique ID for `"The Great Gatsby"` using a hash function, e.g., `0xf83c6f3a3c`.
2. Find the Section and Bookshelf: The hash is split into two parts:
   - H1 (57 bits): determines which section (table) and bookshelf (group) the book belongs to.
   - H2 (7 bits): a small fingerprint stored in the control word for quick identification.
   For example:
   - H1 says: “Go to Section 1, Bookshelf 3.”
   - H2 says: “The fingerprint is `3c`.”
3. Place the Book: The librarian places `"The Great Gatsby"` into an available slot on Bookshelf 3 and updates the control word with `3c`.
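Building on the sketch above, here is a hypothetical `splitHash` helper and a toy insert that follow these steps. The names, the 57/7-bit split, and the single-table simplification are illustrative, not the runtime’s actual API:

```go
// splitHash divides a hash into the two parts described above.
func splitHash(hash uint64) (h1 uint64, h2 uint8) {
	h1 = hash >> 7          // upper 57 bits: picks the table and group
	h2 = uint8(hash & 0x7f) // lower 7 bits: the fingerprint for the control word
	return h1, h2
}

// newTable creates a "section" whose slots are all marked empty.
func newTable(numGroups int) *table {
	t := &table{groups: make([]group, numGroups)}
	for gi := range t.groups {
		for i := range t.groups[gi].ctrl {
			t.groups[gi].ctrl[i] = ctrlEmpty
		}
	}
	return t
}

// insert places a book on the shelf chosen by H1 and records its
// fingerprint (H2) in the control word.
func (t *table) insert(key, value string, hash uint64) {
	h1, h2 := splitHash(hash)
	g := &t.groups[h1%uint64(len(t.groups))]
	for i := 0; i < groupSlots; i++ {
		if g.ctrl[i] == ctrlEmpty { // a free slot on this shelf
			g.ctrl[i] = h2
			g.keys[i] = key
			g.values[i] = value
			return
		}
	}
	// A full group would continue with the next group in the probe
	// sequence (see "Handling Collisions" below); omitted for brevity.
}
```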
Retrieving a Book
Now you want to retrieve the book from the map.
1. Find the Section and Bookshelf: The librarian uses H1 from the hash of `"The Great Gatsby"` to go directly to Section 1, Bookshelf 3.
2. Check the Cheat Sheet (Control Word): The librarian looks at the control word to see if any slot’s fingerprint matches `3c`, the fingerprint of `"The Great Gatsby"`. If a fingerprint does not match, they know that slot doesn’t hold the desired book, saving time.
3. Confirm and Return: If there’s a match, they compare keys directly to confirm it’s `"The Great Gatsby"`. Once confirmed, they return its value (`10101..`).
This process avoids unnecessary checks and minimizes memory lookups, making retrieval lightning-fast.
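Using the same illustrative types, a toy lookup mirrors those three steps (the real runtime also handles deleted slots and probing across groups):

```go
// lookup finds a book by checking fingerprints first and only comparing
// full keys when a fingerprint matches.
func (t *table) lookup(key string, hash uint64) (value string, ok bool) {
	h1, h2 := splitHash(hash)
	g := &t.groups[h1%uint64(len(t.groups))]
	for i := 0; i < groupSlots; i++ {
		if g.ctrl[i] != h2 {
			continue // fingerprint mismatch: skip the slot without touching the key
		}
		if g.keys[i] == key { // confirm with a full key comparison
			return g.values[i], true
		}
	}
	return "", false
}
```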
Handling Collisions
What happens if multiple books generate the same H1 (i.e., they hash to the same group)? This scenario is known as a collision.
In our library analogy:
- If a bookshelf is full, the librarian moves to the next available shelf in the same section.
- This is called probing: nearby groups are checked for free slots, following a deterministic probe sequence (see the sketch below).
To keep access efficient, a section is never allowed to get too crowded: once it fills up past its load factor, the library grows it or adds a new section (table) and redistributes some books between sections based on their hashes.
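Here is a minimal sketch of the probe-sequence idea under the same simplified model. Swiss Table designs typically step through groups using a quadratic, triangular-number pattern rather than strictly linear steps; the exact sequence used by the Go runtime may differ in detail:

```go
// probe returns a function that yields the next group index to inspect,
// starting from the group selected by H1 and stepping by +1, +2, +3, …
// (triangular numbers). When numGroups is a power of two, this sequence
// visits every group before repeating.
func probe(h1, numGroups uint64) func() uint64 {
	step, offset := uint64(0), h1%numGroups
	return func() uint64 {
		current := offset
		step++
		offset = (offset + step) % numGroups
		return current
	}
}
```

An insert or lookup that finds a full or non-matching group simply asks the probe for the next group index and tries again.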
Why This Design is Brilliant
The new map implementation in Go 1.24 introduces several optimizations inspired by Swiss Table design:
1. Cache-Friendly Layout
Keys and values are stored together in groups, improving cache locality: data that is used together sits close together in memory, which matches how CPUs fetch memory in cache lines. When looking up an item, its key and value are likely loaded into the cache at once.
2. Fast Probing with Metadata
The control word allows fast rejection of irrelevant slots using SIMD-style operations (Single Instruction, Multiple Data: processing multiple data points with a single instruction). Because all 8 control bytes fit in a single 64-bit word, the fingerprints of every slot in a group can be checked simultaneously, speeding up lookups significantly.
The pre-1.24 implementation had a similar idea in its `tophash` array, but it still required pointer chasing whenever overflow buckets were involved, reducing cache efficiency.
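The classic bitwise trick behind this “check 8 slots at once” step looks roughly like the sketch below; this is a hedged illustration of the technique, and the runtime’s implementation differs in its details:

```go
const (
	lsbBytes = 0x0101010101010101 // the lowest bit of every byte
	msbBytes = 0x8080808080808080 // the highest bit of every byte
)

// matchH2 compares all 8 control bytes against the fingerprint h2 with a
// handful of bitwise operations. The result has bit 7 of byte i set when
// control byte i may equal h2; rare false positives are acceptable
// because the full key is always compared before a value is returned.
func matchH2(ctrl uint64, h2 uint8) uint64 {
	v := ctrl ^ (lsbBytes * uint64(h2)) // bytes equal to h2 become zero
	return (v - lsbBytes) &^ v & msbBytes
}
```

Each set bit in the returned mask points at a slot whose key is worth comparing.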
Memory Layout Example
Here’s a simplified version of how this might look in memory for a single group:
| Control Word | Key0 | Value0 | Key1 | Value1 | … | Key7 | Value7 |
|---|---|---|---|---|---|---|---|
| `3c` | `"The Great Gatsby"` | `10101..` | … | … | … | … | … |

- The control word (`3c`) stores fingerprints for all 8 slots. (We have only one element in the group.)
- Keys (`"The Great Gatsby"`) and values (`10101..`) are stored adjacently within each group for better performance.
Performance Gains
The redesign brings significant improvements over older implementations:
- Faster lookups: Metadata allows skipping irrelevant slots quickly.
- Reduced memory overhead: Group-based storage eliminates the extra overflow-bucket pointers and allows a higher load factor.
- Better scalability: Incremental resizing avoids performance bottlenecks during growth.
For example:
- With the optimizations, lookups are up to ~30% faster compared to previous versions.
- Memory usage is reduced by as much as ~28% compared to older versions of Go maps.
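The exact numbers depend on key types, map sizes, and hardware, so it is worth measuring on your own workload. A minimal benchmark like the sketch below (saved as, say, a hypothetical `map_bench_test.go`) can be run with `go test -bench=MapLookup` under Go 1.23 and Go 1.24 and the results compared:

```go
package bench

import (
	"strconv"
	"testing"
)

// BenchmarkMapLookup measures lookups in a built-in map so the same
// code can be compared across Go versions.
func BenchmarkMapLookup(b *testing.B) {
	const size = 1 << 16
	m := make(map[string]int, size)
	keys := make([]string, size)
	for i := range keys {
		keys[i] = "book-" + strconv.Itoa(i)
		m[keys[i]] = i
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = m[keys[i%size]] // exercise the map lookup path
	}
}
```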
Conclusion
Go 1.24’s map is like a librarian who gets smarter with every update, finding books faster and using space more efficiently:
- Sections (tables) expand as needed.
- Bookshelves (groups) keep related items close together.
- Cheat sheets (control words) help librarians find books faster without flipping through every slot.
This design balances speed, memory efficiency, and scalability beautifully—making Go maps one of the most optimized hash table implementations out there!
Whether you’re building high-performance systems or just curious about how things work under the hood, understanding these concepts can help you appreciate Go’s thoughtful engineering even more.
For a more detailed post about this implementation, check the official Go blog post Faster Go maps with Swiss Tables or ByteSizeGo’s Swiss Table Maps.
References to very good explanations of maps prior to Go 1.24:
- How the Go runtime implements maps efficiently
- Go Maps Explained: How Key-Value Pairs Are Actually Stored
Cover image by Janko Ferlič on Unsplash