pk.org: Articles

Big Ideas in the History of Operating Systems

From resource management in the 1950s to layered systems today

Paul Krzyzanowski – August 26, 2025

This is an update of a writeup I put together in the early 2000s, when I was teaching Operating System Design.

Operating systems are the invisible layer that makes modern computing possible. Understanding how they came to be helps explain why computers work the way they do today.

From Program Loaders to Mobile Platforms

Some years ago, before one of my classes, I found myself reading through old computing journals from the 1950s and 1960s. What struck me was how different the concerns of that era were compared to today. Authors focused almost obsessively on efficiency: squeezing every ounce of work out of processors, managing scarce memory, and parceling out precious machine time. That makes sense, since computers were astronomically expensive, hard to access, and nowhere near as powerful as the device sitting in your pocket right now.

Looking back at this early history, you can see how priorities shifted with the times. If you zoom in too closely, the story turns into a patchwork of technical details, forgotten systems, and obscure implementation quirks. If you step back, though, the bigger picture comes into focus. Each era had its zeitgeist, its own sense of what computing was about and what critical problems had to be addressed.


This article isn’t meant to be a scholarly survey or a comprehensive catalog. Instead, it highlights the big ideas and innovations that shaped operating systems over the past 70+ years. I'll identify landmark systems that introduced lasting concepts, while skipping over those that were more incremental. For example, I omitted IBM's System/370, as it was largely a successor to the System/360, rather than a major advance in its own right. There were various early microcomputer operating systems, as well as tiny and real-time kernels that I didn't list.

The goal here is to trace the trajectory of operating systems from their beginnings to the present time. Each era introduced challenges that forced people to rethink what an operating system should be. Many of those solutions still underpin the machines we use every day.

Please let me know if I'm missing anything important, but note that I'm trying to focus on systems that introduced significant concepts rather than compile an exhaustive list of systems and features.

The Dawn of Computing: Manual Operation (1940s-1950s)

In the earliest computers, such as ENIAC and UNIVAC, there was no operating system at all. Programmers had direct access to the hardware and manually controlled every aspect of the machine. To run a program, operators would:

- sign up for a block of machine time
- load the program from punched cards or paper tape, setting switches by hand
- mount any magnetic tapes the program needed
- monitor execution from the console lights and intervene when something went wrong
- collect the printed output and reset the machine for the next user

This approach wasted enormous amounts of expensive computer time. A typical computer might run actual programs only 10-20% of the time, with the rest spent on setup, loading, and transitions between programs. The main problem was maximizing utilization of extremely expensive hardware.

Program Loaders and Early Batch Systems (1950s)

The first primitive "operating systems" were essentially program loaders: stacks of punched cards prepended to each program that contained standard setup routines. These cards would load the program into memory and handle basic input/output operations.

GM-NAA I/O (1956), developed by General Motors and North American Aviation for the IBM 704, was among the first real operating systems. It provided:

- automatic job-to-job transition, so the machine moved on to the next program without operator intervention
- shared routines for common input/output operations
- batched execution of programs submitted together

This solved the immediate problem of setup time, but computers still processed only one job at a time. The core challenge remained: how to keep expensive processors continuously busy.

Batch Processing Systems (Late 1950s-1960s)

Batch processing systems like IBM's IBSYS and FORTRAN Monitor System revolutionized computer utilization. Instead of running one program at a time, operators would collect similar jobs (all FORTRAN programs, for example) and process them in batches.

Key innovations included:

- job control cards that described each job's requirements to the system
- a resident monitor that stayed in memory and sequenced jobs automatically
- offline input/output, with small satellite computers transferring card decks to tape so the main processor read fast tape instead of slow card readers

IBM OS/360 (1964) was a landmark system that introduced the concept of a family of compatible computers running the same operating system. This was revolutionary because it meant software could run across different hardware configurations, though OS/360's complexity led to the famous "mythical man-month" problems documented by Fred Brooks.

The main problem being solved was still utilization. Batch systems could achieve 80-90% processor utilization compared to 10-20% with manual operation.

The Birth of Multitasking (1960s)

As processors became faster, they often sat idle while waiting for slower mechanical devices, such as card readers and printers. The solution was to keep multiple programs in memory simultaneously—when one program waited for I/O, another could use the processor.

Key developments:

- multiprogramming: keeping several jobs resident in memory at once
- hardware interrupts that let the processor switch jobs when an I/O operation completed
- memory protection and relocation hardware to keep jobs from corrupting one another
- spooling, which buffered card and printer traffic on disk or tape

Burroughs MCP (Master Control Program, 1961) was notable for being the first operating system written entirely in a high-level language (ESPOL, a dialect of ALGOL). This demonstrated that operating systems didn't need to be written in assembly language.

The problem being addressed was processor idle time. Even with batch processing, CPUs were often waiting for I/O operations to complete.

Time-Sharing Revolution (1960s-1970s)

The next major shift came from recognizing that interactive computing was more valuable than batch processing for many tasks. Time-sharing systems allowed multiple users to work simultaneously on the same computer, each feeling like they had dedicated access.
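
The core mechanism behind time-sharing is round-robin scheduling: give each user's job a short time slice, preempt it when the slice expires, and move on to the next job in the queue. A minimal simulation sketches the idea (the job names and quantum below are invented for illustration):

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate round-robin time slicing.

    jobs: dict mapping job name -> total CPU time the job needs.
    quantum: the time slice a job gets before being preempted.
    Returns the run order and the time at which each job finished.
    """
    ready = deque(jobs.items())          # FIFO ready queue
    clock = 0
    timeline, finished = [], {}
    while ready:
        name, remaining = ready.popleft()
        used = min(quantum, remaining)   # run until the quantum expires or the job ends
        clock += used
        timeline.append((name, used))
        if remaining > used:
            ready.append((name, remaining - used))  # preempted: back of the queue
        else:
            finished[name] = clock       # job completed at the current clock time
    return timeline, finished

# Three users sharing one processor with a 2-unit quantum.
timeline, finished = round_robin({"alice": 5, "bob": 3, "carol": 2}, quantum=2)
```

With a quantum short enough (tens of milliseconds on real systems), every user's job makes visible progress almost immediately, which is what created the illusion of a dedicated machine.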

MIT's CTSS (Compatible Time-Sharing System, 1961) and later Multics (operational in 1969) pioneered time-sharing concepts:

- time slices enforced by a hardware clock interrupt, giving each user the processor in turn
- swapping user programs between memory and drum or disk
- password-protected logins and per-user file storage
- in Multics, segmented virtual memory, a hierarchical file system, dynamic linking, and protection rings for security

DEC's TOPS-10 (1967) and later TOPS-20 (1976) for the PDP-10 series brought time-sharing to commercial environments. These systems (TOPS-20 in particular) introduced sophisticated virtual memory management and were especially popular in universities and research institutions.

IBM's TSO (Time Sharing Option) for OS/360 added time-sharing capabilities to batch-oriented systems, though it was never as elegant as purpose-built time-sharing systems.

IBM's MVS (Multiple Virtual Storage, 1974) represented IBM's major evolution of OS/360 for the System/370 architecture. MVS introduced:

- a separate virtual address space for each job, isolating programs from one another
- support for mixed batch and interactive workloads on the same machine
- extensive reliability and recovery features aimed at continuous enterprise operation

MVS dominated enterprise computing for decades and established virtual memory as a standard feature of serious operating systems.

The central challenge had shifted from maximizing hardware utilization to supporting interactive users who expected responsive, real-time computing while maintaining the throughput requirements of business data processing.

Minicomputer Operating Systems (1970s)

As computers became smaller and less expensive, different operating system approaches emerged for this new class of "minicomputers" that departments could afford rather than sharing centrally:

Unix, developed at Bell Labs starting in 1969 by Ken Thompson and Dennis Ritchie for the PDP-7 and later the PDP-11, introduced concepts that still dominate operating system design:

- a hierarchical file system in which devices are accessed as files
- pipes, which let small programs be composed into larger tools
- the shell as an ordinary, replaceable user program
- a small kernel, with most functionality provided by user-level programs
- implementation in a high-level language (C, from 1973), making the system portable

Unix prioritized simplicity, portability, and programmer productivity over raw performance, creating a productive development environment that could run efficiently on modest minicomputer hardware.
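
The pipe abstraction is visible at the system-call level. The sketch below uses Python's os module as a thin wrapper over the POSIX pipe() and fork() calls that the shell itself uses to build pipelines (Unix-only; the data being sent is made up for illustration):

```python
import os

# Create a kernel pipe: a pair of file descriptors where bytes written
# to w can be read back from r, with the kernel buffering in between.
# This is the primitive behind the shell's `a | b`.
r, w = os.pipe()

pid = os.fork()
if pid == 0:
    # Child: plays the role of the upstream program.
    os.close(r)                       # close the end we don't use
    os.write(w, b"one\ntwo\nthree\n")
    os.close(w)                       # closing signals EOF to the reader
    os._exit(0)
else:
    # Parent: plays the role of the downstream program.
    os.close(w)
    chunks = []
    while True:
        data = os.read(r, 4096)
        if not data:
            break                     # writer closed its end: EOF
        chunks.append(data)
    os.close(r)
    os.waitpid(pid, 0)                # reap the child
    lines = b"".join(chunks).decode().splitlines()
```

A real shell does exactly this for `a | b`, except each half then calls exec() to run an actual program after wiring the pipe's ends to standard output and standard input.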

DEC's VMS (Virtual Memory System, 1977) for the VAX series took a different approach, emphasizing:

- a rich, record-oriented file system with automatic file versioning
- fine-grained privileges, quotas, and security controls
- strong backward compatibility and integration across DEC's product line
- clustering (added with VAXclusters in 1983), letting several machines share storage and workload

VMS represented the pinnacle of centralized, multi-user operating system design, with enterprise features that wouldn't appear in Unix for years.

The central challenge was supporting both interactive users and batch processing on moderately-priced computers that departments could afford, leading to these two very different philosophical approaches.

Multithreading and Advanced Process Models (1980s)

As processors became more powerful and applications grew more sophisticated, operating systems needed better ways to manage concurrent execution within programs. Traditional processes were too heavyweight: creating a new process required duplicating the entire address space, which was expensive and slow.

Carnegie Mellon's Mach (mid-1980s), led by Rick Rashid, pioneered the separation of processes and threads as distinct concepts:

- the task owned the resources: the address space, open files, and communication ports
- the thread was the unit of execution and scheduling, and a task could contain many of them

Key breakthrough: Multiple threads could execute concurrently within a single process, sharing memory and resources while maintaining separate execution stacks and program counters.
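
That breakthrough is easy to see in any modern threading API. The sketch below (plain Python threading, with invented counts) shows several threads updating one variable in the process's shared memory, something separate processes cannot do without explicit sharing mechanisms, and why a lock is needed when they do:

```python
import threading

# A counter shared by every thread in this process. Threads see the
# same memory, so the read-modify-write on `counter` must be
# serialized with a lock to avoid lost updates.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:                    # one increment at a time
            counter += 1

# Four threads, each with its own stack and program counter,
# all operating on the same data.
threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is now 4 * 10_000: every increment was preserved.
```

Without the lock, concurrent increments could interleave and some updates would be lost, which is exactly the class of problems (and benefits) that shared-memory threading introduced.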

Benefits of multithreading:

- creating and switching between threads is far cheaper than between processes
- threads share data directly through common memory rather than via messages
- a program can keep computing in one thread while another blocks on I/O
- threads let a single program use multiple processors at once

Threading models evolved:

- kernel-level threads (1:1), scheduled directly by the operating system
- user-level threads (N:1), managed by a library without kernel involvement
- hybrid (M:N) models that multiplex many user threads onto fewer kernel threads

Industry adoption: Mach's threading concepts influenced Windows NT (Microsoft hired several Mach developers), spread to modern Unix systems, and became fundamental to multiprocessor and multicore computing. Today, virtually every modern application uses multiple threads.

The driving challenge was enabling applications to fully utilize increasingly powerful hardware while maintaining responsive user interaction.

Personal Computer Operating Systems (1970s-1980s)

The personal computer revolution created entirely new requirements, but interestingly, it also represented a step backward in operating system sophistication. Cost constraints and limited hardware meant abandoning many advanced features that mainframes and minicomputers had developed.

CP/M (Control Program for Microcomputers, 1974) by Digital Research was architecturally similar to 1950s operating systems:

- a single user running a single program at a time
- no memory protection: a misbehaving program could overwrite the system
- a simple command processor and flat file system
- hardware-specific code isolated in a small BIOS, which made CP/M easy to port across machines

Despite these limitations, CP/M established the personal computer software ecosystem and proved that simple operating systems could be commercially successful.

MS-DOS (1981), developed by Microsoft for the IBM PC, began with the same limitations as CP/M:

- single-tasking, with one program owning the machine
- no memory protection, and applications routinely accessed hardware directly
- a conventional memory limit (the famous 640 KB barrier) that constrained programs for years

The key insight was that for personal computers, simplicity and cost were more important than the advanced features of multi-user systems. Users accepted these limitations in exchange for having their own dedicated computer.

Graphical User Interfaces (1980s-1990s)

The introduction of graphical interfaces fundamentally changed operating system requirements:

- event-driven programs that respond to keystrokes, mouse movement, and clicks
- bitmapped displays and shared graphics rendering
- window management, so multiple applications can share the screen

Apple's Mac OS (1984) popularized:

- the desktop metaphor of windows, icons, menus, and a pointer
- mouse-driven interaction aimed at non-technical users
- a consistent look and feel enforced by a shared system toolbox

Microsoft Windows evolved from a DOS application (Windows 1.0, 1985) into a full operating system (Windows 95, 1995).

However, Windows still suffered from the limitations inherited from DOS: lack of memory protection meant that one misbehaving application could crash the entire system.

Windows NT (1993), led by Dave Cutler (formerly of DEC, where he led VMS development), represented a complete architectural restart that brought mainframe and minicomputer concepts to personal computers:

- preemptive multitasking with full memory protection
- a hardware abstraction layer that made the system portable across processor architectures
- built-in networking and a security model with per-object access control

NT demonstrated that personal computers could have the same architectural sophistication as larger systems, though it took years for the hardware to become powerful enough to make this practical for everyday use.

The main challenge was making computers intuitive for users while managing the complexity of graphical interfaces, multiple applications, and diverse hardware.

Modern Unix and Open Source (1980s-Present)

Unix continued evolving through multiple branches:

- BSD, from the University of California, Berkeley, which contributed virtual memory and TCP/IP networking
- AT&T's System V, the basis of most commercial Unixes
- vendor variants such as SunOS/Solaris, AIX, and HP-UX

Linux (started 1991 by Linus Torvalds) combined:

- a Unix-like kernel written from scratch
- the GNU project's compilers, tools, and libraries
- an open, internet-based development model with thousands of contributors

macOS (originally Mac OS X, 2001) merged:

- the Mach and BSD foundations inherited from NeXTSTEP
- Apple's graphical interface and application frameworks

The driving force was combining the reliability and power of Unix with user-friendly interfaces and supporting modern hardware.

Network and Internet Era (1990s-2000s)

As networking became ubiquitous, operating systems adapted to support:

- built-in TCP/IP protocol stacks
- network file and print sharing
- distributed authentication and directory services
- firewalls and other defenses against network-borne attacks

Windows NT/2000/XP and Linux distributions competed on network services, security, and administration tools. The central challenge became managing networked systems and protecting against security threats.

Real-Time Operating Systems (1980s-1990s)

While general-purpose operating systems optimized for throughput and user experience, a parallel evolution occurred for applications requiring deterministic timing: systems where meeting deadlines is more important than average performance.

Real-time operating systems emerged to serve industries like aerospace, automotive, industrial control, and telecommunications where missing a deadline could be catastrophic. Key pioneers included:

QNX (1982) introduced a microkernel architecture where the OS kernel was minimal, with most services running as separate processes. This provided:

- fault isolation: a crashed driver or file system could be restarted without bringing down the system
- predictable, low-latency message passing between components
- a small memory footprint suited to embedded hardware

VxWorks (1987) became dominant in embedded systems, powering everything from Mars rovers to network routers with deterministic, priority-based scheduling and a small footprint.

The fundamental challenge was different from general-purpose systems: instead of maximizing average throughput, real-time systems must guarantee worst-case response times. This required completely different approaches to scheduling, memory management, and interrupt handling.
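
A classic example of this different approach is rate-monotonic scheduling (Liu and Layland, 1973): tasks with shorter periods receive higher fixed priorities, and a simple utilization test guarantees that worst-case deadlines are met. The sketch below implements that test (the task set is invented for illustration):

```python
def rm_schedulable(tasks):
    """Sufficient schedulability test for rate-monotonic scheduling.

    tasks: list of (worst_case_execution_time, period) pairs.
    A periodic task set is guaranteed to meet every deadline under
    rate-monotonic priorities if its total utilization does not
    exceed the Liu-Layland bound n * (2**(1/n) - 1).
    """
    n = len(tasks)
    utilization = sum(c / p for c, p in tasks)   # fraction of CPU demanded
    bound = n * (2 ** (1 / n) - 1)               # ~0.78 for n=3, ->ln 2 as n grows
    return utilization <= bound, utilization, bound

# Three periodic tasks: (execution time, period), e.g. 1 ms of work every 4 ms.
ok, u, bound = rm_schedulable([(1, 4), (1, 5), (2, 10)])
```

The test is sufficient but not necessary: a task set above the bound may still be schedulable, which exact response-time analysis can confirm. The point is the contrast with general-purpose schedulers, which offer no such guarantee at all.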

Modern real-time systems such as FreeRTOS continue this tradition in embedded devices, proving that real-time principles remain essential for IoT, autonomous vehicles, and industrial automation.

Embedded and IoT Operating Systems (1990s-Present)

As processors became small and inexpensive enough to embed in everyday devices, a new class of operating systems emerged to serve resource-constrained environments: devices with limited memory, processing power, and often battery life.

Early embedded systems often ran without operating systems, but as devices grew more complex, specialized OSes emerged:

Palm OS (1996) pioneered personal digital assistants with instant-on operation, pen-based input with handwriting recognition, and one-button synchronization with a desktop PC.

Windows CE (1996) brought a familiar Windows interface and a subset of the Win32 programming API to handheld and embedded devices.

The Internet of Things revolution created new requirements for operating systems managing billions of connected devices.

TinyOS (2000) introduced event-driven programming for sensor networks, replacing threads with lightweight event handlers so the entire system could fit in a few kilobytes of memory.

FreeRTOS became the most popular embedded OS by emphasizing a tiny footprint, portability across hundreds of microcontrollers, and a permissive open-source license.

Modern IoT platforms like Zephyr, Mbed OS, and RIOT address contemporary challenges: secure boot and over-the-air updates, low-power networking protocols, and managing fleets of devices at scale.

The key insight is that embedded/IoT systems often require the opposite trade-offs from traditional computers: optimizing for power consumption, cost, and reliability rather than performance and features.

Mobile Operating Systems (2000s-Present)

Smartphones and tablets created entirely new operating system requirements:

iOS (2007) introduced:

- a touch-first interface designed around direct manipulation
- application sandboxing, with each app isolated from the others and from the system
- curated software distribution through the App Store
- aggressive power management to extend battery life

Android (2008) emphasized:

- an open-source platform adopted by many hardware manufacturers
- a Linux kernel beneath a managed application runtime
- apps built against a Java-based API rather than the native hardware

The key challenges for mobile operating systems are:

- battery life, which makes power management a first-class design concern
- protecting highly personal data on easily lost or stolen devices
- isolating third-party apps from one another
- staying responsive on memory-constrained, thermally limited hardware


Current and Future Challenges

Modern operating systems face unprecedented complexity as computing has evolved far beyond the traditional CPU-memory-storage model. Today's systems must orchestrate diverse specialized processors, manage cloud-scale distributed resources, and maintain security in an increasingly hostile environment.

Contemporary operating systems must address:

Heterogeneous Computing Architecture: scheduling work across CPUs, GPUs, and specialized accelerators such as neural processing units, often with cores of different speeds on a single chip.

Cloud and Distributed Computing: virtualization, containers, and orchestration that treat entire data centers as a single pool of resources.

Security and Privacy: defending against hardware side-channel attacks, sandboxing untrusted code, and using hardware features such as trusted execution environments to protect user data.

Current Research Directions:

- Unikernel architectures: single-address-space operating systems for cloud applications
- eBPF and programmable kernels: allowing user-supplied code to run safely inside the kernel
- Persistent memory integration: treating storage-class memory as an extension of main memory
- Quantum-safe cryptography: preparing for post-quantum computing security threats
- Energy-proportional computing: scaling power consumption with computational demand

The fundamental challenge has shifted from managing a single computer's resources to orchestrating complex, distributed, heterogeneous systems while maintaining security, performance, and usability across diverse deployment environments.

Conclusion

Operating systems have evolved from simple program loaders to sophisticated platforms managing complex interactions between hardware, applications, and users. Each era's challenges—from maximizing hardware utilization in the 1950s to managing mobile device power consumption today—have driven fundamental innovations that continue to influence modern system design.

The progression shows a clear pattern: as hardware became more capable and less expensive, the focus shifted from hardware efficiency to user productivity, and finally to user experience. Today's operating systems must balance performance, security, usability, and power efficiency while supporting an ecosystem of applications and services that would have been unimaginable in earlier decades.