Memory Management Evolution: Fixed Partitions to 64‑Bit Virtual Memory
Early single‑user systems, from the 1950s through the 1980s, ran only one program at a time. The operating system had to clear all of main memory before loading the next program, so any program larger than physical memory simply could not run. Fixed‑partition systems divided memory into multiple partitions so several programs could reside simultaneously, but they quickly ran into fragmentation and “no vacancy” errors that often forced a reboot. Dynamic contiguous‑block allocation improved matters by loading programs until memory filled, yet the gaps left by terminated programs still prevented large applications from loading.
Virtual Memory and Paging
Virtual memory breaks a program into fixed‑size pages that map onto page frames in physical RAM. A page table records where each page resides, enabling the operating system to load only the pages needed for the current execution. This on‑demand paging lets programs exceed the size of physical RAM, because unused pages remain on secondary storage until they are referenced. Memory mapping translates each virtual address to a physical address, so the virtual address space can appear larger than the hardware actually provides.
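The translation step can be illustrated with a minimal Python sketch. The page size and the page-table contents below are made up for the example; a real MMU does this in hardware.

```python
# Sketch of virtual-to-physical address translation via a page table.
# 4 KiB pages are assumed; the mappings are illustrative, not real.

PAGE_SIZE = 4096  # bytes per page

# Hypothetical page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_addr):
    """Split a virtual address into (page, offset) and map it to a frame."""
    page = virtual_addr // PAGE_SIZE
    offset = virtual_addr % PAGE_SIZE
    frame = page_table[page]  # a missing entry would mean a page fault
    return frame * PAGE_SIZE + offset

# Virtual address 4100 sits on page 1 at offset 4, which maps to frame 2.
print(translate(4100))  # 2 * 4096 + 4 = 8196
```

The key point the sketch shows: only the page number is translated; the offset within the page passes through unchanged.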
Processor Architecture and Addressing Capacity
A processor’s bit depth determines the maximum addressable memory: 32‑bit CPUs cap the address space at 4 GB, while 64‑bit CPUs raise the theoretical limit to 16 exabytes. A wider architecture also moves more data per operation, much like adding extra lanes to a highway lets more traffic flow at once.
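The limits quoted above follow directly from the arithmetic of binary addressing, assuming byte-addressable memory:

```python
# Each address bit doubles the number of addressable locations
# (byte-addressable memory assumed).
addressable_32 = 2 ** 32  # bytes addressable with 32-bit addresses
addressable_64 = 2 ** 64  # bytes addressable with 64-bit addresses

print(addressable_32 // 2 ** 30)  # 4   -> 4 GiB
print(addressable_64 // 2 ** 60)  # 16  -> 16 EiB
print(addressable_64 // addressable_32)  # 4294967296 times larger
```

Note that going from 32 to 64 bits does not double the address space; it multiplies it by 2^32.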
Page Replacement Policies
When RAM fills, the operating system must evict pages. Different policies decide which page to replace:
- FIFO (First In, First Out) removes the oldest page in memory.
- LRU (Least Recently Used) discards the page that has not been accessed for the longest time.
- LFU (Least Frequently Used) targets the page with the lowest access count.
- MRU (Most Recently Used) removes the most recently accessed page, which can be efficient for certain sequential workloads.
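Two of these policies, FIFO and LRU, can be compared with a short simulation. The reference string and frame count below are arbitrary choices for illustration:

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO: evict the page loaded earliest."""
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(queue.popleft())  # evict the oldest page
            memory.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, frames):
    """Count page faults under LRU: evict the page untouched the longest."""
    memory, faults = OrderedDict(), 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)  # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict least recently used
            memory[page] = None
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]  # an illustrative reference string
print(fifo_faults(refs, 3))  # 9
print(lru_faults(refs, 3))   # 10
```

On this particular reference string FIFO happens to beat LRU, which is a useful reminder that no single policy wins for every access pattern.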
Performance Limitations and Thrashing
If paging activity becomes excessive, the system enters thrashing, spending more time swapping pages between RAM and storage than executing useful work. Thrashing dramatically degrades performance. Common remedies include closing unnecessary programs or adding more physical RAM. Features like ReadyBoost allow external storage (e.g., a USB stick) to act as an auxiliary swap area, though this extension remains slower than true RAM.
Mechanisms Explained
- On‑demand paging loads only the pages required for the current execution path, keeping the rest on disk until needed.
- Memory mapping provides a translation layer that treats virtual addresses as if they were contiguous, even when they span disparate physical locations.
- ReadyBoost leverages fast flash storage to supplement virtual memory, offering a modest performance boost when RAM is scarce.
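The on-demand loading described above can be modeled with a toy simulation. The "disk" contents, the access sequence, and the fault counter are all invented for the sketch:

```python
# Toy model of on-demand paging: pages start on "disk" and are loaded
# into RAM only on first access, each first access counting as a page fault.
disk = {0: "code", 1: "data", 2: "heap", 3: "rarely-used library"}
ram = {}     # resident pages: page number -> contents
faults = 0

def access(page):
    """Return a page's contents, loading it from disk on first touch."""
    global faults
    if page not in ram:
        faults += 1          # page fault: fetch from secondary storage
        ram[page] = disk[page]
    return ram[page]

for page in [0, 1, 0, 0, 2, 1]:  # page 3 is never touched
    access(page)

print(faults)       # 3 faults: first touches of pages 0, 1, and 2
print(sorted(ram))  # [0, 1, 2] -- page 3 never left the disk
```

Because page 3 is never accessed, it never occupies RAM, which is exactly how a program larger than physical memory can still run.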
Takeaways
- Early single‑user systems could only run one program at a time and required the entire RAM to be cleared before loading another, preventing execution of programs larger than the physical memory.
- Fixed‑partition and dynamic contiguous‑block schemes introduced multitasking but suffered from fragmentation and “no vacancy” errors, often forcing reboots when memory gaps blocked new programs.
- Virtual memory splits programs into fixed‑size pages that map to physical page frames, allowing the OS to run applications larger than RAM by loading only needed pages on demand.
- Moving from 32‑bit processors (4 GB address limit) to 64‑bit expands the theoretical address space to 16 exabytes and widens the data path, much like adding lanes to a highway.
- Thrashing occurs when excessive paging overwhelms the system, and remedies include closing applications or adding more RAM, while policies like FIFO, LRU, LFU, and MRU determine which pages to replace.
Frequently Asked Questions
What is on-demand paging and how does it work?
On-demand paging loads only the pages a program needs for its current execution into RAM, leaving the rest on secondary storage. The operating system tracks page usage with a page table and fetches additional pages when the program accesses them, conserving memory and reducing load time.
Why does 64‑bit addressing provide an exponential increase over 32‑bit?
A 64‑bit address can represent 2^64 distinct locations, while a 32‑bit address covers only 2^32. Because each additional address bit doubles the addressable space, the jump from 32 to 64 bits multiplies it by 2^32, expanding the limit from 4 GB to 16 exabytes and enabling vastly larger programs and data sets.