Understanding the Snooping‑Based Cache Coherence Protocol
Introduction
The snooping‑based cache coherence protocol ensures that multiple processor caches maintain a consistent view of shared memory. It relies on a shared bus that all cache controllers continuously monitor (snoop) for coherence actions.
Recap of Cache States
- Invalid (I) – No valid data.
- Shared (S) – Clean copy, may exist in several caches.
- Modified (M) – Exclusive dirty copy; must be written back before another cache can read.
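The three states above can be written down as a small enum. This is an illustrative sketch (the `State` enum and `line` record are hypothetical names, not part of any real protocol implementation):

```python
from enum import Enum

class State(Enum):
    INVALID = "I"   # no valid data
    SHARED = "S"    # clean copy, possibly present in several caches
    MODIFIED = "M"  # exclusive dirty copy; must be written back before others read

# A cache line pairs a data value with its coherence state.
line = {"value": None, "state": State.INVALID}
```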
How Snooping Works – Read Misses
- Processor p1 requests variable a (value 7 in main memory). Since a is not in p1’s cache, a compulsory read miss occurs.
- p1 broadcasts a read request on the shared bus.
- All other caches snoop the request. Because none hold a, they ignore it.
- Main memory supplies a to p1; p1 stores it with state Shared.
- When p3 later requests a, the same broadcast happens. p1, still in Shared, does not intervene; memory again supplies a to p3, which also marks the line Shared.
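The read-miss steps above can be sketched as a toy simulation. Everything here is a simplification for illustration: caches are plain dicts mapping an address to a `(value, state)` pair, and the "bus broadcast" is just a loop over the other caches' entries:

```python
memory = {"a": 7}
caches = {"p1": {}, "p3": {}}

def read(proc, addr):
    cache = caches[proc]
    if addr in cache and cache[addr][1] != "I":
        return cache[addr][0]                  # read hit: no bus traffic
    # Read miss: broadcast the request; every other cache snoops it.
    for other, oc in caches.items():
        if other != proc and oc.get(addr, (None, "I"))[1] == "M":
            memory[addr] = oc[addr][0]         # dirty copy is written back first
            oc[addr] = (oc[addr][0], "S")      # owner downgrades to Shared
    value = memory[addr]                       # memory supplies the data
    cache[addr] = (value, "S")                 # line installed in Shared state
    return value

read("p1", "a")   # compulsory miss for p1; others hold nothing and ignore it
read("p3", "a")   # p1 stays Shared and does not intervene; memory supplies a again
```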
Write Operations – Invalidate vs. Update
- Write‑Invalidate (most common)
- p1 increments a (7 → 8). Its line state changes Shared → Modified.
- p1 places an invalidation signal on the bus.
- All other caches (e.g., p3) downgrade their copies from Shared → Invalid.
- With write‑through, the new value is immediately written to main memory; with write‑back, it is written only on eviction.
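Continuing the same toy dict-based model, the write-invalidate steps can be sketched as follows (the `write_through` flag and function name are illustrative, not standard API):

```python
memory = {"a": 7}
caches = {"p1": {"a": (7, "S")}, "p3": {"a": (7, "S")}}  # both hold a clean copy

def write_invalidate(proc, addr, value, write_through=False):
    # Broadcast an invalidation: every other cache drops its copy.
    for other, oc in caches.items():
        if other != proc and addr in oc:
            oc[addr] = (oc[addr][0], "I")      # Shared -> Invalid
    caches[proc][addr] = (value, "M")          # writer's line: Shared -> Modified
    if write_through:
        memory[addr] = value                   # write-through updates memory now

write_invalidate("p1", "a", 8)                 # p1 increments a: 7 -> 8
```

With `write_through=False` (write-back), main memory still holds the stale 7 until p1's line is evicted or another cache requests it.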
Write‑Update
- Instead of invalidating, p1 broadcasts an update request containing the new value.
- Every cache that holds a updates its copy and keeps the line in Shared state.
- Subsequent reads see the latest value without extra miss penalties.
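The write-update alternative changes only the broadcast action: sharers refresh their copies instead of dropping them. A minimal sketch in the same toy model:

```python
caches = {"p1": {"a": (7, "S")}, "p3": {"a": (7, "S")}}  # both hold a clean copy

def write_update(proc, addr, value):
    # Broadcast the new value; every cache that holds the line updates in place.
    for oc in caches.values():
        if addr in oc:
            oc[addr] = (value, "S")            # all copies remain Shared

write_update("p1", "a", 8)                     # p3's copy is refreshed, not invalidated
```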
Coherence Miss Example
After p1’s write‑invalidate, p3’s copy of a is Invalid. When p3 tries to decrement a, it experiences a coherence miss because its cached copy is no longer usable. The processor must fetch the fresh value from memory (or from the cache that now holds the Modified copy).
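The coherence miss above can be traced in the same toy model. Here the snoop finds p1's Modified copy, forces the write-back, and only then lets p3 load the fresh value (a common simplification; real protocols often forward the data cache-to-cache):

```python
memory = {"a": 7}
caches = {"p1": {"a": (8, "M")}, "p3": {"a": (7, "I")}}  # state after p1's write

def read(proc, addr):
    cache = caches[proc]
    if addr in cache and cache[addr][1] != "I":
        return cache[addr][0]
    # Coherence miss: the Invalid copy is unusable, so the request goes on the bus.
    for other, oc in caches.items():
        if other != proc and oc.get(addr, (None, "I"))[1] == "M":
            memory[addr] = oc[addr][0]         # Modified copy written back
            oc[addr] = (oc[addr][0], "S")      # owner downgrades to Shared
    cache[addr] = (memory[addr], "S")
    return memory[addr]

fresh = read("p3", "a")                        # p3 now sees 8, not its stale 7
```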
Write‑Through vs. Write‑Back
| Strategy | When data reaches memory | Typical use case |
|---|---|---|
| Write‑Through | Immediately on each write | Simpler consistency, higher traffic |
| Write‑Back | On cache line eviction or explicit flush | Lower bus traffic, more complex coherence |
Both strategies can be combined with either write‑invalidate or write‑update.
Real‑World Implementations
Modern multiprocessors from AMD and Intel employ variations of snooping protocols, often customizing the invalidate/update policies to balance latency, bandwidth, and power consumption.
Summary of the Protocol Flow
- Read miss → broadcast → memory supplies data → line marked Shared.
- Write hit → state becomes Modified.
- Write‑invalidate → broadcast invalidation → other caches set line Invalid.
- Write‑update → broadcast new value → other caches update and stay Shared.
- Write‑through writes to memory instantly; write‑back defers until eviction.
The protocol guarantees that any processor sees the most recent value of a shared variable, either by invalidating stale copies or by updating them directly.
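The flow summarized above can be condensed into a per-line transition table. The event names below are hypothetical labels chosen for this sketch, not standard terminology:

```python
# (current state, observed event) -> next state for one cache line
NEXT = {
    ("I", "read_miss"): "S",          # miss serviced; line installed as Shared
    ("S", "write_hit"): "M",          # local write under write-invalidate
    ("S", "remote_invalidate"): "I",  # another cache's write invalidates us
    ("S", "remote_update"): "S",      # write-update refreshes us in place
    ("M", "remote_read"): "S",        # dirty copy written back, then downgraded
}
```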
Looking Ahead
The next session will explore directory‑based coherence protocols, which scale better for large systems by avoiding a broadcast bus.