
Containment

Isolating programs

Paul Krzyzanowski – 2025-10-26

Introduction

We examined how attackers can hijack programs through command or input injection. Even when input validation is done correctly, vulnerabilities can remain, and a process may still be compromised.

Once that happens, the access control mechanisms we covered are no longer effective. The operating system assumes a process acts within its assigned privileges. If an attacker controls the process, the attacker can read or write files, open network connections, or execute other programs with that process's privileges.

Containment limits what a process can do after compromise. It defines isolation boundaries so that a faulty or malicious program cannot affect the rest of the system. Containment does not prevent compromise but confines its impact.

Containment exists at several layers: application sandboxes that confine a single process, operating-system mechanisms (namespaces, control groups, and capabilities) that isolate groups of processes, containers that package entire applications, and virtual machines that isolate complete operating systems.

We'll begin with the foundation of these techniques: application sandboxing and its evolution.


Application Sandboxing

A sandbox is a restricted execution environment that mediates interactions between an application and the operating system. It limits resource access, system calls, and visible state. If a process is compromised, the sandbox constrains its effect.

The idea developed gradually—from early filesystem-based confinement to kernel-level and language-level environments that restrict both native and interpreted code.

Early Filesystem Containment

Before general-purpose sandboxes existed, Unix provided filesystem-level containment through the chroot system call.

chroot

Introduced in 1979, chroot(path) changes a process’s view of the root directory to the specified path. After calling:

chroot("/home/httpd/html");

any absolute path beginning with / is resolved relative to /home/httpd/html. Child processes inherit this environment, forming what became known as a chroot jail.

chroot was designed for building and testing software in self-contained trees. Administrators later used it to limit the file access of network services. A web server running inside a chroot jail could only serve files within its directory tree.
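As a rough sketch of this pattern (the path is the one from the example above; the file name and UID 33 are illustrative, 33 being the www-data account on Debian systems), a service entering a jail typically calls chroot, changes its working directory into the new root, and then drops root privileges so that the escape techniques described below are unavailable:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    /* Enter the jail, then make sure the working directory is inside it. */
    if (chroot("/home/httpd/html") != 0) { perror("chroot"); exit(1); }
    if (chdir("/") != 0) { perror("chdir"); exit(1); }

    /* Drop root so device-node and re-chroot tricks are no longer possible.
       UID/GID 33 is illustrative (www-data on Debian). */
    if (setgid(33) != 0 || setuid(33) != 0) { perror("drop privileges"); exit(1); }

    /* "/" now refers to /home/httpd/html on the real filesystem. */
    FILE *f = fopen("/index.html", "r");   /* really /home/httpd/html/index.html */
    if (f) fclose(f);
    return 0;
}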

chroot affects only the filesystem namespace. It does not restrict privileges or system calls. A process with root privileges inside the jail can escape through well-known methods: creating device nodes with mknod to access the raw disk, calling chroot again while holding an open file descriptor to a directory outside the jail, or manipulating processes that live outside it.

It provides no limits on CPU, memory, or I/O use. The environment restricts filenames, not behavior.

Creating a working chroot environment also requires copying all dependent executables, libraries, and configuration files. Tools like jailkit simplify setup but add no security guarantees. chroot is still used for testing or packaging but not for reliable containment.

FreeBSD Jails

FreeBSD Jails extended chroot by combining filesystem isolation with process and network restrictions. Each jail has its own root directory, hostname, and IP configuration. Processes within a jail can be limited from mounting filesystems, loading kernel modules, or creating raw sockets. Even the root user inside the jail is restricted by the jail’s configuration.

FreeBSD Jails introduced finer control over process privileges but remained coarse in scope. They lacked resource management and were specific to BSD systems. They addressed what a process could see but not what it could do.

Sandboxing: controlling system calls

The next development focused on controlling system calls: the interface through which processes interact with the kernel.

We will touch upon three approaches: user-level interposition, kernel-level filtering, and process virtual machines (interpreters).

1. System Call-Based Sandboxes

The system call interface defines the actual power of a process. Every interaction with resources — file access, socket creation, or memory allocation — goes through a system call. Restricting or filtering these calls allows precise control over process behavior.

A system call sandbox intercepts calls and applies a policy before allowing execution. The enforcement can occur in user space or within the kernel.

User-Level Interposition: Janus and Systrace

Early implementations, such as Janus (University of California, Berkeley) and Systrace (OpenBSD), operated entirely in user space. They used the ptrace debugging interface to monitor processes. Each system call was intercepted, its arguments checked against a policy, and either allowed or denied.

A policy might allow file access under /var/www but deny network activity. This provided fine-grained control without kernel modifications.

The approach had significant weaknesses: every monitored system call required extra context switches between the traced process, the monitor, and the kernel, adding substantial overhead; argument checks were subject to time-of-check-to-time-of-use races, since another thread could modify the arguments in memory after they were validated but before the kernel used them; and the monitor had to track kernel state such as file descriptors and working directories to interpret calls correctly.

User-level interposition demonstrated feasibility but was not robust enough for production use.

2. Kernel-Integrated Filtering: seccomp and seccomp-BPF

Linux moved sandbox enforcement into the kernel with Secure Computing Mode (seccomp), introduced in 2005. It allows a process to specify which system calls it can make, with the kernel enforcing the policy directly.

Original seccomp

The first version permitted only four system calls: read(), write(), _exit(), and sigreturn(). Any other call terminated the process. This was practical only for specialized tasks.

seccomp-BPF

Modern Linux systems use seccomp-BPF, which adds programmable filtering through Berkeley Packet Filter (BPF) bytecode. BPF was originally designed for efficiently filtering network packets but was adapted for system call filtering.

The process installs a filter that the kernel executes whenever it attempts a system call. The filter inspects the system call number and its arguments and returns one of several actions: allow the call, return an error to the caller, raise a signal, notify a tracer, log the call, or kill the thread or process.

A program enables seccomp with:

prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);

or

seccomp(SECCOMP_SET_MODE_FILTER, 0, &prog);

Once installed, filters cannot be removed or relaxed; a process can only add further filters that make the policy stricter.
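The sketch below shows what such a filter looks like in practice. It is a minimal, hedged example rather than a production policy: it allows only write() and exit_group() and kills the process on any other call. It assumes Linux 4.14 or later (for SECCOMP_RET_KILL_PROCESS), and a real filter would also verify the architecture field of seccomp_data before trusting the system call number.

#include <stddef.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    struct sock_filter filter[] = {
        /* Load the system call number. (A real filter checks the arch field first.) */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
        /* Allow write() and exit_group(); kill the process on anything else. */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_write, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_exit_group, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL_PROCESS),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };

    /* Required so an unprivileged process may install a filter. */
    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
    prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);

    write(1, "still allowed\n", 14);       /* permitted by the filter */
    syscall(__NR_exit_group, 0);           /* exit through an allowed call */
}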

Advantages

Enforcement happens entirely inside the kernel, so there is no user-level monitor to race against and the per-call overhead is small. A process can install a filter for itself without special privileges (when combined with the no_new_privs flag), and filters are inherited by child processes.

Limitations

A filter can examine only the system call number and the register values of its arguments; it cannot dereference pointers, so it cannot base decisions on pathnames or buffer contents. Writing a correct policy is also difficult: it requires knowing every system call the program and its libraries may legitimately make, and the policy is fixed once installed.

seccomp-BPF is now widely used in browsers, container runtimes like Docker, and service managers to reduce kernel attack surfaces.

macOS App Sandbox

Apple’s App Sandbox, implemented through the Seatbelt framework, applies similar restrictions using a declarative model. Applications are signed with entitlements that define allowed actions: file access, network connections, device use, or interprocess communication. The kernel enforces these entitlements at system call boundaries.

Developers request capabilities instead of defining filters. This simplifies configuration but provides less flexibility. The macOS sandbox enforces predictable behavior for applications distributed through the App Store and other controlled environments.

3. Language-Based Sandboxing and Process Virtual Machines

Some sandboxes operate entirely in user space by running code inside a managed execution environment. These process virtual machines provide language-level isolation by interpreting or compiling bytecode to a restricted instruction set.

Examples include the Java Virtual Machine (JVM), Microsoft's .NET Common Language Runtime (CLR), Python’s interpreter, and JavaScript engines in web browsers.

These environments emulate a CPU and manage memory internally. Programs run as bytecode, which may be interpreted or compiled just-in-time, and cannot directly access hardware or invoke system calls. All external interaction goes through controlled APIs.

Java: The JVM verifies bytecode before execution, ensuring that all operations stay within defined type and memory bounds.

Python: The interpreter can confine execution by controlling access to modules such as os and subprocess.

JavaScript: Browser engines restrict access to the filesystem and network, allowing only specific APIs such as XMLHttpRequest or Fetch.

Strengths

Managed runtimes enforce memory safety and type checking, eliminating whole classes of bugs such as buffer overflows in the sandboxed code. The same bytecode runs anywhere the runtime exists, and the APIs exposed to the program can be restricted at a fine granularity.

Limitations

Protection depends entirely on the integrity of the runtime: a flaw in the interpreter, the just-in-time compiler, or a native library it calls can undermine the sandbox. Interpretation and JIT compilation add overhead, and code that drops into native extensions is no longer constrained.

Language-based sandboxes operate above the kernel and often coexist with kernel-level sandboxes. For example, a web browser runs JavaScript inside an interpreter sandbox while using seccomp or Seatbelt to confine the browser process itself.

Comparing Sandboxing Approaches

Sandbox Type | Enforcement Layer | Typical Example | Strengths | Limitations
Filesystem-based | OS filesystem namespace | chroot, BSD Jails | Simple, legacy-compatible | No control of system calls or privileges
System call-based | Kernel | seccomp-BPF, macOS Seatbelt | Fine-grained, efficient | Complex or static configuration
Language-based | Runtime interpreter | JVM, Python, JavaScript | Memory-safe, portable | Dependent on runtime integrity

Each approach makes different trade-offs between security, performance, and flexibility.

Application sandboxing evolved from restricting what a process can see to restricting what it can do. The next stage extends this principle to groups of processes by isolating namespaces, managing resources, and dividing system privileges.


OS-Level Isolation Primitives

System call sandboxes confine individual processes, but most applications consist of multiple cooperating processes. A browser, for example, uses separate rendering, GPU, and networking processes. A web service may spawn worker and logging processes. To contain such systems, the operating system must isolate groups of processes and the resources they share.

Modern Unix-like systems, especially Linux, provide three kernel mechanisms for this purpose: namespaces, control groups (cgroups), and capabilities.

Together, they define which resources a process can see, how much of them it can use, and what privileged actions it may perform. These features form the foundation for containers.

Namespaces

A namespace gives a process its own private copy of part of the system’s global state. Processes that share a namespace see the same view of that resource, while those in different namespaces see distinct views. Each namespace type isolates one kernel subsystem: a specific part of the kernel that manages a resource such as processes, networking, or the filesystem.

Linux supports several types of namespaces:

Namespace | Isolates | Example Effect
PID | Process IDs | Each namespace has its own PID 1; processes cannot see or signal those in another namespace.
Mount | Filesystems | Each namespace can mount or unmount filesystems independently.
UTS | Hostname and domain name | A process can have its own hostname.
Network | Interfaces, routing tables, sockets | Each namespace has a private network stack.
IPC | System V and POSIX IPC objects | Shared memory or semaphores are visible only within the same namespace.
User | User and group IDs (UIDs and GIDs) | Processes can map their internal UIDs to different real UIDs on the host.
Cgroup | The cgroup hierarchy | Controls visibility of control-group resources.

Each namespace acts like a self-contained copy of a subsystem. For example, a new network namespace starts with only a loopback interface. Administrators can connect it to the host or other namespaces with virtual Ethernet pairs: creating a virtual network cable between two network stacks. A process in a PID namespace sees only its own process tree, with PID 1 behaving as an init process for that namespace.
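A small sketch makes this concrete. Using the UTS namespace (the simplest case), a process can give itself a private hostname that is invisible to the rest of the system; this assumes Linux and root privileges (or CAP_SYS_ADMIN):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    /* Move this process into a new UTS namespace. */
    if (unshare(CLONE_NEWUTS) != 0) {
        perror("unshare");
        return 1;
    }

    /* This change is visible only within the new namespace. */
    sethostname("sandbox", strlen("sandbox"));

    char name[64];
    gethostname(name, sizeof(name));
    printf("hostname inside the namespace: %s\n", name);  /* prints "sandbox" */
    /* A shell on the host still reports the original hostname. */
    return 0;
}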

Namespaces let multiple isolated environments run on a single kernel, providing the illusion of separate systems without hardware virtualization. However, they hide and partition resources; they do not limit consumption. That role belongs to control groups.

Control Groups (cgroups)

A control group, or cgroup, manages and limits resource usage. While namespaces define what a process can see, cgroups define how much of each resource it can use.

A cgroup is a hierarchy of processes with limits on resource usage. Each type of resource -- CPU, memory, disk I/O -- is managed by a controller that measures consumption and enforces restrictions.

The kernel exposes these through virtual filesystems that report usage and apply restrictions.

Some examples of cgroup controllers include:

Controller | Resource Managed | Example Use
cpu | CPU scheduling and quotas | Limit CPU time for a service.
memory | Physical and swap memory | Cap memory usage to prevent exhaustion.
blkio | Block I/O bandwidth | Restrict disk throughput.
pids | Process count | Limit process creation to prevent fork bombs.
devices | Device access | Allow or deny use of specific device files.
freezer | Process suspension | Temporarily stop and resume process groups.
net_cls / net_prio | Network tagging and priorities | Control bandwidth or traffic classification.

A service can belong to several cgroups: for example, one limiting CPU, another controlling memory. The kernel tracks usage per group and enforces limits through scheduling and memory reclamation. If a process exceeds its memory quota, the kernel’s out-of-memory handler terminates it without affecting other groups.
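As an illustration, the cgroup v2 interface is just a set of files. The sketch below is an assumption-laden example: it presumes cgroup v2 is mounted at /sys/fs/cgroup, the memory controller is enabled there, and the program runs as root. It creates a group named demo, caps its memory at 256 MB, and moves the current process into it:

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Write a single value into a cgroup control file. */
static void write_file(const char *path, const char *value) {
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return; }
    fputs(value, f);
    fclose(f);
}

int main(void) {
    mkdir("/sys/fs/cgroup/demo", 0755);                          /* create the cgroup */
    write_file("/sys/fs/cgroup/demo/memory.max", "268435456");   /* 256 MB limit */

    char pid[32];
    snprintf(pid, sizeof(pid), "%d", getpid());
    write_file("/sys/fs/cgroup/demo/cgroup.procs", pid);         /* join the group */

    /* From this point, the kernel limits this process (and its children) to
       256 MB; exceeding the limit invokes the out-of-memory handler for the group. */
    return 0;
}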

Namespaces and cgroups together isolate processes functionally and economically: each process group sees only its own resources and consumes only what it is permitted.

Capabilities

Traditional Unix privilege management treated the root user (UID 0) as all-powerful. The kernel checked only whether the process’s effective user ID was zero. This binary model violated the principle of least privilege: a process either had full control or none.

Concept

Capabilities break up root’s all-powerful privilege into specific pieces. The kernel no longer assumes that UID 0 can do everything by default; each privileged operation now requires the matching capability.

Each capability represents authorization for a specific class of privileged operation, such as configuring network interfaces or loading kernel modules.

The system can also grant an individual capability to a non-root process so it can perform just that one privileged action without running as full root.

Under this model, UID 0 (root) alone no longer implies complete control. The kernel checks both the user ID and capability bits before allowing any privileged action.

Linux defines over 40 distinct capabilities, each governing a specific class of privileged operations. Some common examples include:

Capability | Privilege Granted
CAP_NET_ADMIN | Modify network configuration.
CAP_SYS_MODULE | Load and unload kernel modules.
CAP_SYS_TIME | Change the system clock.
CAP_SYS_BOOT | Reboot the system.
CAP_NET_RAW | Use raw network sockets.
CAP_DAC_OVERRIDE | Bypass file permission checks.

For instance, if a process with UID 0 lacks CAP_SYS_MODULE, it cannot load a kernel module.

Capability Sets

Each process maintains four capability sets — bitmaps that record which privileges it holds and how they propagate:

Set | Description
Permitted | Capabilities the process may use; the upper bound of its privileges.
Effective | Capabilities currently active and checked whenever a privileged operation occurs.
Inheritable | Capabilities that may be passed to new programs across an execve() call (which replaces the current program image).
Ambient | A subset that is automatically preserved across execve() if allowed by the program being executed. This simplifies keeping capabilities when launching helper programs.

The effective set determines what a process can do at runtime; the others control privilege inheritance when executing new programs.

Applying Capabilities

Capabilities can be attached to executables or to running processes.

  1. File capabilities: Executable files can carry extended attributes storing capability information. When such a file is executed, its listed capabilities are added to the process’s permitted and effective sets. For example:
sudo setcap cap_net_bind_service=+ep /usr/bin/python3

This allows the program to bind to low-numbered TCP ports (< 1024) without full root privileges. The +ep adds the capability to both the effective and permitted sets.

  2. Process capabilities: A privileged process can modify its own sets using the capset() system call or the libcap library. For instance, a service might start as root to open a port, then drop all capabilities except CAP_NET_BIND_SERVICE before continuing. Once dropped, capabilities cannot be regained unless the process executes another binary that has them defined on the file.

  3. Namespace interaction: Entering a user namespace alters capability behavior. Inside the namespace, a process can appear to be root, but its capabilities apply only within that namespace, not to the host.

Capabilities and the Root User

Under the capability model, the kernel authorizes privileged operations based on capabilities rather than assuming that UID 0 always implies full access. In this context, root refers to the privileged user account (UID 0), not to the root of the filesystem.

Dropping Capabilities

Processes can permanently relinquish capabilities using the capset() system call or prctl(PR_CAPBSET_DROP, ...). Once dropped, a capability cannot be reacquired without executing a new program.

This allows a process to perform initialization that requires privilege and then continue safely with minimal rights. It can be a valuable mechanism when applying the principle of least privilege.

Example

A web server needs to bind to port 80 but no other privileged operation. By granting it only CAP_NET_BIND_SERVICE, it can open that port while running as a non-root user. Even if compromised, it cannot mount filesystems, modify network routing, or change the system clock.
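A minimal sketch of that idea using the libcap library (link with -lcap; the details are illustrative, not a complete server): running as root after binding the port, the process clears every capability except CAP_NET_BIND_SERVICE and continues with only that privilege.

#include <stdio.h>
#include <sys/capability.h>

int main(void) {
    cap_value_t keep[] = { CAP_NET_BIND_SERVICE };

    cap_t caps = cap_init();                              /* start with an empty capability state */
    cap_set_flag(caps, CAP_PERMITTED, 1, keep, CAP_SET);  /* keep only this one... */
    cap_set_flag(caps, CAP_EFFECTIVE, 1, keep, CAP_SET);  /* ...and make it active */

    if (cap_set_proc(caps) != 0) {                        /* apply the reduced sets */
        perror("cap_set_proc");
        return 1;
    }
    cap_free(caps);

    /* The process can still bind to ports below 1024, but it can no longer
       mount filesystems, modify routing, change the clock, or load modules. */
    return 0;
}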


Namespaces, control groups, and capabilities work together to provide process containment within the kernel.

These three mechanisms are the essential building blocks of modern containerization. Systems like Docker and LXC combine them to create lightweight, isolated execution environments, which we'll examine in the next section.

Containerization

The isolation mechanisms provided by namespaces, control groups, and capabilities make it possible for a single Linux kernel to run many separate environments safely.

Containerization builds on these mechanisms to package applications and their dependencies into lightweight, portable units that behave like independent systems. Each container has its own processes, filesystem, network interfaces, and resource limits, yet all containers run as ordinary processes under the same kernel.

Containers were introduced primarily to simplify the packaging, deployment, and distribution of software services. They made it possible to bundle an application and its dependencies into a single, portable image that could run the same way in development, testing, and production. The underlying mechanisms—namespaces, cgroups, and capabilities—were developed for resource management and process control, not for security.

As container runtimes matured, these same mechanisms also provided practical isolation, making containers useful for separating services, though not as a strong security boundary.

This combination of portability, manageability, and isolation has made containers central to modern software deployment.

Motivation and Background

Traditional virtualization runs multiple operating systems on one machine by emulating hardware. Each virtual machine includes its own kernel and system libraries. This offers strong isolation but at a cost: every VM duplicates the same system components, consuming memory and startup time.

Containers achieve similar separation with less overhead. Instead of virtualizing hardware, they virtualize the operating system interface—the process and resource view provided by the kernel. From the application’s perspective, it appears to be running on its own system, but the kernel simply presents a filtered and limited view of the shared host environment.

In practical terms, this layered design allows thousands of isolated services to run on one host without the duplication inherent in full virtual machines.

How Containers Work

Understanding how containers work requires seeing how these kernel mechanisms combine in practice.

At their core, containers are a structured way to combine kernel features into a managed runtime. Each container starts as an ordinary process, but it is launched inside new namespaces, placed in specific cgroups, and given a controlled set of capabilities. The result is an isolated execution environment with well-defined limits.

When a container process starts, it typically performs the following actions:

  1. The container runtime (such as Docker, containerd, or LXC) creates the required namespaces for the process.

  2. It assigns the process to one or more cgroups that define resource limits.

  3. It adjusts capability sets so the process has only the privileges needed inside the container.

  4. It mounts a filesystem image, which contains the container’s user-space environment—typically a minimal Linux distribution or application root directory.

The process inside the container then executes as if it were running on its own system, unaware that other containers share the same kernel.
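To make those steps concrete, here is a minimal sketch (an illustration, not a real runtime) that performs a subset of them: it clones a child into new PID, UTS, and mount namespaces and runs a shell there. It assumes Linux and root privileges; an actual runtime would also set up cgroups, capability sets, a root filesystem image, and a network namespace.

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];   /* stack for the cloned child */

static int child(void *arg) {
    (void)arg;
    sethostname("container", strlen("container"));   /* private UTS namespace */
    printf("inside: pid = %d\n", getpid());          /* prints 1: new PID namespace */
    execlp("/bin/sh", "/bin/sh", (char *)NULL);      /* the "container" workload */
    perror("execlp");
    return 1;
}

int main(void) {
    int flags = CLONE_NEWPID | CLONE_NEWUTS | CLONE_NEWNS | SIGCHLD;
    pid_t pid = clone(child, child_stack + sizeof(child_stack), flags, NULL);
    if (pid < 0) { perror("clone"); return 1; }
    waitpid(pid, NULL, 0);
    return 0;
}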

Container runtimes automate the setup of these kernel mechanisms and apply consistent, minimal-privilege defaults, which reduces the likelihood of misconfiguration while improving both operational simplicity and security. For example, each container starts with a limited set of capabilities, so even a process running as root inside the container cannot perform privileged actions such as loading kernel modules or reconfiguring the host network. Administrators can adjust these settings to add or drop specific capabilities as needed.

Filesystems and Images

Each container has its own root filesystem, typically built from an image. An image is a prebuilt snapshot that contains all the files, libraries, and configuration needed for an application.

For example, if you want to run a web server in a container, you might use an "nginx" image that includes the nginx web server and its dependencies. To run a Python application, you could start with a "python:3.11" image that provides the Python interpreter and standard libraries, then add your application code on top.

Images are usually layered using copy-on-write filesystems (where shared data is only duplicated when modified): multiple containers can share common base layers, with each container storing only its modifications. This layering makes images efficient to store and transmit.

Images can be stored in public or private registries and downloaded to any host that needs them. Docker Hub is the largest public registry, hosting millions of images for common software like databases, web servers, and programming language environments. A command like docker pull nginx downloads the nginx image from Docker Hub, making it available to run locally. This distribution model—packaging software into images and sharing them through registries—is a key reason containers became widely adopted.

Container Runtimes

The software that manages containers is called a container runtime. It is responsible for creating namespaces, setting up cgroups, applying capability limits, and starting the container process. The runtime uses standard kernel APIs without requiring special kernel modules.

Common examples include Docker, containerd, and LXC.

Networking

Containers use network namespaces to provide isolated network stacks. Each container gets its own private network environment, as if it had its own network card. This isolation allows multiple containers to use the same port numbers without conflict.

For example, you could run three different web applications, each listening on port 80 within its own container. From each application's perspective, it owns port 80. The host system routes external traffic to these containers, typically by mapping different external ports (like 8080, 8081, 8082) to port 80 in each container's namespace.

Containers can also communicate with each other or with external networks through virtual network connections managed by the host.

Security Implications

Containers improve isolation but do not create a full security boundary. All containers share the same kernel, so a vulnerability in the kernel could allow one container to affect others. Within a container, the “root” user has administrative control inside that namespace but not on the host. However, kernel bugs or misconfigured capabilities can weaken that boundary.

To strengthen isolation, systems often combine containers with additional mechanisms: seccomp filters that restrict the system calls a container may make, mandatory access control frameworks such as SELinux or AppArmor, user namespaces that map root inside the container to an unprivileged user on the host, and lightweight virtual machines that give each container its own kernel.

Containers provide meaningful isolation for ordinary services but are not appropriate for untrusted or hostile code without additional containment layers.

Beyond Isolation: Practical Benefits

Containers provide significant real-world advantages beyond isolation.

They make applications easier to deploy, replicate, and maintain while improving resource efficiency and consistency across environments.

The same kernel features that provide containment -- namespaces and cgroups -- also make containers predictable to manage and easy to orchestrate at scale.

Relationship to Virtual Machines

Containers are often compared to virtual machines because both provide isolated environments, but they work at different layers. We will look at virtual machines in the following section; for now, the key distinction is that containers share a kernel while virtual machines emulate hardware and run their own operating systems.

Feature | Containers | Virtual Machines
Isolation level | Process and OS interface | Full hardware emulation
Kernel | Shared with host | Separate for each VM
Startup time | Seconds or less | Tens of seconds to minutes
Resource overhead | Low | High
Security boundary | Moderate | Strong
Typical use | Application deployment | Multi-OS or high-trust separation

Virtual machines provide stronger isolation because each runs its own kernel on virtualized hardware. Containers are lighter and faster but rely on the same kernel. In practice, many systems combine both: running containers inside virtual machines to balance efficiency with strong isolation.


Containerization packages an application together with its dependencies into a portable, self-contained environment that runs the same way on any compatible host. By combining namespaces, cgroups, and capabilities, containers isolate processes while allowing them to share the same operating system kernel efficiently.

Containers were developed to simplify software deployment and management, and their design naturally provides strong isolation and predictable resource control. They are the practical realization of operating system–level virtualization and have become the standard model for deploying modern distributed systems.


Virtualization

Containment so far has relied on sharing a single operating system kernel. A process sandbox confines one program; containers isolate entire applications while still depending on the same kernel. The next step is to move the boundary of isolation one level lower—to the hardware itself. This is the realm of virtual machines (VMs).

A virtual machine emulates an entire computer system: CPU, memory, storage, and network interfaces. Each VM runs its own operating system and kernel, independent of the host. From the guest operating system’s perspective, it has full control of the hardware, even though that hardware is simulated. This approach provides strong isolation because the guest cannot directly access the host’s memory or devices.

A Brief History

Virtual machines are older than most of the technologies that depend on them today. The idea emerged in the 1960s with IBM’s mainframes.

At the time, mainframes were scarce and expensive resources shared among development teams. IBM engineers needed a way for multiple teams to develop operating systems concurrently without interfering with one another. Virtualization provided each team with a private, isolated machine for testing and debugging new kernels on the same physical computer.

IBM developed the CP-40 and CP-67 systems to allow multiple users to share one expensive mainframe. Each user’s operating system, called a guest, ran in a virtual environment provided by a small, controlling kernel known as a Virtual Machine Monitor (VMM). This design evolved into IBM’s VM/370 in 1972, one of the first commercial virtualization platforms.

The concept largely disappeared on smaller systems during the 1980s and 1990s when hardware became cheap and operating systems assumed full control of the machine.

Virtualization returned in the early 2000s as servers became powerful enough to host multiple workloads on one machine. Data centers were filled with underused servers, each running a single application for isolation.

Virtualization made it possible to consolidate these workloads safely, improving utilization and reducing costs. The resurgence was led by VMware, followed by Xen, KVM, and later Hyper-V, and it soon became the foundation for both convenient software testing and multi-tenant cloud platforms.

How Virtualization Works

At its core, virtualization creates the illusion that each operating system has exclusive access to the hardware. A software layer called a hypervisor, or Virtual Machine Monitor (VMM), sits between the hardware and the guest operating systems. It intercepts privileged operations, manages memory and device access, and schedules CPU time among the guests.

When a guest operating system issues an instruction that would normally access hardware directly, such as configuring memory or I/O, the hypervisor traps that instruction, performs it safely on the guest’s behalf, and returns the result.

With modern hardware support, most instructions run directly on the CPU, with the hypervisor only intervening for privileged operations. This allows near-native performance while maintaining separation between guests.

Modern processors include hardware support for virtualization, such as Intel VT-x and AMD-V. These extensions allow the CPU to switch quickly between executing guest code and hypervisor code, reducing the overhead of trapping and emulating privileged instructions.

Types of Hypervisors

Hypervisors come in two general forms:

Type | Description | Example Systems
Type 1 (bare-metal) | Runs directly on hardware and manages guest operating systems. The hypervisor is effectively the host OS. | VMware ESXi, Microsoft Hyper-V, Xen
Type 2 (hosted) | Runs as an application under a conventional operating system and uses that OS’s device drivers. | VMware Workstation, VirtualBox, Parallels

Type 1 hypervisors are more efficient and are used in data centers and clouds.

Type 2 hypervisors are easier to install on desktop systems and are typically used for testing, development, or running alternative OSes.

For example, a developer on macOS might use VirtualBox to run a Linux VM for testing server software, or a Windows user might run Ubuntu in VMware Workstation.

Virtual Machines vs. Containers

Students often see containers and VMs described as competing technologies, but they address different layers of the system:

Feature | Containers | Virtual Machines
Isolation level | Process and OS interface | Full hardware environment
Kernel | Shared with host | Separate for each VM
Startup time | Seconds or less | Tens of seconds to minutes
Resource overhead | Low | High (duplicate OS and kernel)
Security boundary | Moderate | Strong
Portability | Requires a compatible kernel (e.g., Linux) | Cross-platform
Typical use | Deploying applications | Running multiple OSes or high-trust separation

A container isolates processes but depends on the host kernel for enforcement. A virtual machine isolates an entire operating system with its own kernel, libraries, and device drivers. This makes virtualization more secure and more flexible, since it can run different operating systems on the same hardware, but also heavier to manage.

Each virtual machine is a complete copy of an operating system that must be configured, patched, and maintained independently. Administrators have to manage multiple kernels, apply updates, and ensure consistent security policies across all guests. Containers avoid that duplication by sharing the host kernel, so there is only one operating system to maintain.

For example, a Linux host can run both Windows and Linux VMs simultaneously because each guest includes its own kernel and drivers. A container cannot do this; it must use the host kernel's system calls and execution environment.

Advantages of Virtualization

Virtualization lets many workloads share one physical machine while remaining strongly isolated from one another. Each guest can run a different operating system, and a virtual machine can be snapshotted, cloned, or migrated to another host, which simplifies testing, consolidation, and recovery. These features made virtualization the foundation of modern data centers and cloud infrastructure.

Security Implications

Virtualization offers strong isolation because the hypervisor mediates all access to hardware. A guest cannot normally read or modify another guest’s memory or the hypervisor itself. However, vulnerabilities still exist: bugs in the hypervisor or its device-emulation code can allow a guest to escape its virtual machine, and side-channel attacks on shared hardware, such as caches and speculative execution, can leak information between guests.

Hypervisors are typically small and security-hardened, but their central role makes them high-value targets.

Virtualization and Containment

From the perspective of containment, virtualization represents a deeper boundary. Process-level and container-level mechanisms rely on kernel enforcement; virtualization adds a distinct kernel for each guest and isolates them with hardware-level checks. This separation makes virtualization the preferred choice for workloads that require strong security guarantees, multi-tenant separation, or different operating systems.

In practice, many systems combine layers: containers run inside virtual machines, and those virtual machines run under a hypervisor on shared hardware. This layered approach provides both efficiency and assurance.


Virtualization abstracts physical hardware to create multiple isolated operating systems on one host. Each virtual machine runs its own kernel and user space, providing strong separation at the cost of additional resources. It builds on the same principle as sandboxing and containers—controlling what a process or system can access—but shifts enforcement to the hardware level.

Modern operating systems also use virtualization internally to isolate critical system components, extending the principle of containment to the operating system itself.