Application Sandboxing

Protecting against rogue apps

Paul Krzyzanowski

October 11, 2020

Introduction

The goal of an application sandbox is to provide a controlled and restricted environment for code execution. This can be useful for applications that may come from untrustworthy sources, such as games from unknown developers or software downloaded from dubious sites. The program can run with minimal risk of causing widespread damage to the system. Sandboxes are also used by security researchers to observe how software behaves: what the program is trying to do and whether it is attempting to access any resources in a manner that is suspicious for the application. This can help identify the presence of malware within a program. The sandbox defines and enforces what an individual application is allowed to do while executing within its sandbox.

We previously looked at isolation via jails and containers, which use mechanisms that include namespaces, control groups, and capabilities. These constitute a widely-used form of sandboxing. However, these techniques focus on isolating an application (or group of processes) from other processes, restricting access to parts of the file system, and/or providing a separate network stack with a new IP address.

While this is great for running services without the overhead of deploying virtual machines, it does not sufficiently address the basic needs of running normal applications. We want to protect users from their applications: give users the ability to run apps but restrict what those apps can do on a per-app basis.

For example, you may want to make sure that a program accesses only files under your home directory with a suffix of “.txt”, and only for reading, without restricting the entire file system namespace as chroot would do, which would require creating a separate directory structure for shared libraries and other standard components the application may need. As another example, you might want an application to have access only to TCP networking. With a mechanism such as namespaces, we cannot exercise control over the names of files that an application can open or their access modes. Namespaces also do not allow us to control how the application interacts with the network. Capabilities allow us to restrict what a process running with root privileges can do but offer no ability to restrict more fundamental operations, such as denying a process the ability to read a file even if that file has read access enabled. The missing ingredient is rule-based policies to define precisely what system calls an application can invoke – down to the parameters of the system calls of interest.

Instead of building a jail (a container), we will add an extra layer of access control. An application will have the same view of the operating system as any other application but will be restricted in what it can do.

Sandboxing is currently supported on a wide variety of platforms at either the kernel or application level. We will examine four types of application sandboxes:

  1. User-level validation
  2. OS support
  3. Browser-based application sandboxing
  4. The Java sandbox

Note that there are many other sandbox implementations. This is just a representative sampling.

Application sandboxing via system call interposition & user-level validation

Applications interact with their environment via system calls to the operating system. Any interaction that an application needs to do aside from computation, whether legitimate or because it has been compromised, must be done through system calls: accessing files or devices, changing permissions, accessing the network, talking with other processes, etc.

An application sandbox will allow us to create policies that define which system calls are permissible to the application and in what way they can be used.

If the operating system does not provide us with the required support and we do not have the ability to recompile an application to force it to use alternate system call libraries, we can rely on system call interposition to construct a sandbox. System call interposition is the process of intercepting an app’s system calls and performing additional operations. The technique is also called hooking. In the case of a sandbox, it will intercept a system call, inspect its parameters, and decide whether to allow the system call to take place or return an error.

Example: Janus

One example of doing validation at the user level is the Janus sandboxing system, developed at UC Berkeley, originally for SunOS but later ported to Linux. Janus uses a loadable, lightweight kernel module called mod_janus. The module initializes itself by setting up hooks to redirect system call requests to itself. A hook is simply a mechanism that redirects an API request somewhere else and allows it to return for normal processing afterward. For example, a function can be hooked to simply log the fact that it has been called. The Janus kernel module copies the system call table and redirects the vector of calls to mod_janus.

A user-configured policy file defines the allowable files and network operations for each sandboxed application. Users run applications through a Janus launcher/monitor program, which places the application in the sandbox. The monitor parses the policy file and spawns a child process for the user-specified program. The child process executes the actual application. The parent Janus process serves as the monitor, running a policy engine that receives system call notifications and decides whether to allow or disallow the system call.

Whenever a sandboxed application makes a system call, the call is redirected by the hook in the kernel to the Janus kernel module. The module blocks the thread (it is still waiting for the return from the system call) and signals the user-level Janus process that a system call has been requested. The user-level Janus process' policy engine then requests all the necessary information about the call (calling process, type of system call, parameters). The policy engine makes a policy decision to determine whether, based on the policy, the process should be permitted to make the system call. If so, the system call is directed back to the operating system. If not, an error code is returned to the application.

Challenges of user-level validation

The biggest challenge with implementing Janus is that the user-level monitor must mirror the state of the operating system. If the child process forks a new process, the Janus monitor must also fork. It needs to keep track of not just network operations but the proper sequencing of the steps in the protocol to ensure that no improper actions are attempted on the network. This is a sequence of socket, bind, connect, read/write, and shutdown system calls. If one fails, chances are that the others should not be allowed to take place. However, the Janus monitor does not have the knowledge of whether a particular system call succeeded or not; approved calls are simply forwarded from the module to the kernel for processing. Failure to handle this correctly may enable attack vectors such as trying to send data on an unconnected socket.

The same applies to file operations. If a file failed to open, read and write operations should not be allowed to take place. Keeping track of state also gets tricky if file descriptors are duplicated (e.g., via the dup2 system call); it is not clear whether any requested file descriptor is a valid one.

Pathname parsing of file names has to be handled entirely by the monitor. We earlier examined the complexities of processing "../" sequences in pathnames. Janus has to do this in order to validate any policies on permissible file names or directories. It also has to keep track of relative filenames since the application may change the current directory at any time via the chdir system call. This means Janus needs to intercept chdir requests and process new pathnames within the proper context. Moreover, the application may change its entire namespace if the process calls chroot.

File descriptors can cause additional problems. A process can pass an open file descriptor to another process via UNIX domain sockets, and the receiving process can then use that file descriptor (via a sendfd and recv_fd set of calls). Janus would be hard-pressed to know that this happened since that would require understanding the intent of the underlying sendmsg system calls and cmsg directives.

In addition to these difficulties, user-level validation suffers from possible TOCTTOU (time-of-check-to-time-of-use) race conditions. The environment present when Janus validates a request may change by the time the request is processed.

Application sandboxing with integrated OS support

A better alternative to having a user-level process decide whether to permit system calls is to incorporate policy validation in the kernel. Some operating systems provide kernel support for sandboxing. These include the Android Application Sandbox, the iOS App Sandbox, the macOS sandbox, and AppArmor on Linux. Microsoft introduced the Windows Sandbox in December 2018, but this functions far more like a container than a traditional application sandbox, giving the process an isolated execution environment.

seccomp-BPF

seccomp-BPF, which stands for SECure COMPuting with Berkeley Packet Filters, is a sandboxing framework that is available on Linux systems. It allows the user to attach a system call filter to a process and all of the descendants of that process. Users can enumerate allowable system calls and also allow or disallow access to parameters to those calls. A limitation is that seccomp cannot dereference addresses passed as parameters, so it cannot compare strings such as file names. seccomp has been a core part of Android security since the release of Android O in August 2017.

seccomp uses the Berkeley Packet Filter (BPF) interpreter, a framework that was initially created for network socket filtering. With socket filtering, a user can create a filter to allow or disallow certain types of data to come through the socket. Because BPF was built around this model, seccomp sends “packets” that represent system calls to the BPF interpreter. The filter allows the user to define rules that are applied to these system calls. These rules enable the inspection of each system call and its arguments and specify an action to take. Actions include allowing the call to run or rejecting it. If the call is not permitted, rules can specify whether an error is returned to the process, a SIGSYS signal is sent, or the process is killed.

seccomp is not designed to serve as a complete sandbox solution but is a tool for building sandboxes. For further process isolation, it can be used with other components, such as namespaces, capabilities, and control groups. The biggest downside of seccomp is its use of BPF. BPF is a full interpreter – a processor virtual machine – that supports reading/writing registers, scratch memory operations, arithmetic, and conditional branches. Policies are compiled into BPF instructions before they are loaded into the kernel. It provides a low-level interface, and the rules are not simple condition-action definitions. System calls are referenced by number, so it is important to check the system architecture in the filter, as Linux system call numbers may vary across architectures. Once the user gets past this, the challenge is to apply the principle of least privilege effectively: restrict unnecessary operations but ensure that the program still functions correctly, which includes things like logging errors and other extraneous activities.

The Apple Sandbox

Conceptually, Apple’s sandbox is similar to seccomp in that it is a kernel-level sandbox, although it does not use the Berkeley Packet Filter. The sandbox comprises:

  • User-level library functions for initializing and configuring the sandbox for a process
  • A server process for handling logging from the kernel
  • A kernel extension that uses the TrustedBSD API to enforce sandbox policies
  • A kernel extension that provides support for regular expression pattern matching to enforce the defined policies

An application initializes the sandbox by calling sandbox_init. This function reads a human-friendly policy definition file and converts it into a binary format that is then passed to the kernel. Once the sandbox is initialized, any function calls that are hooked by the TrustedBSD layer will be passed to the sandbox kernel extension for enforcement. Note that, unlike Janus, all enforcement takes place in the kernel. Enforcement means consulting a list of sandbox rules for the process that made the system call (the policy that was sent to the kernel by sandbox_init). In some cases, the rules may involve regular expression pattern matching, such as rules that define filename patterns.
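Apple's policy files are written in a Scheme-like language (the Sandbox Profile Language, SBPL). The fragment below is a hypothetical sketch of what such a profile looks like — operation names and exact syntax vary across macOS versions, so treat this as illustrative rather than a working profile:

```scheme
(version 1)
(deny default)                        ; start from deny-everything
(allow file-read* file-write*
       (regex #"^/private/tmp/"))     ; files under /private/tmp only
(allow network-outbound)              ; outbound connections permitted
(deny process-exec)                   ; never spawn other programs
```

Note the deny-by-default structure and the regular-expression rule on filenames — this is what the pattern-matching kernel extension evaluates at enforcement time.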

The Apple sandbox helps avoid comprehension errors by providing predefined sandbox profiles (entitlements). Certain resources are restricted by default and a sandboxed app must explicitly ask the user for permission. This includes accessing:

  • the system hardware (camera, microphone, USB)
  • network connections, data from other apps (calendar, contacts)
  • location data, and user files (photos, movies, music, user-specified files)
  • iCloud services

For mobile devices, there are also entitlements for push notifications and Apple Pay/Wallet access.

Once permission is granted, the sandbox policy can be modified for that application. Some basic categories of entitlements include:

  • Restrict file system access: stay within an app container, a group container, any file in the system, or temporary/global places
  • Deny file writing
  • Deny networking
  • Deny process execution

Browser-based Sandboxing: Chromium Native Client (NaCl)

Since the early days of the web, browsers have supported a plug-in architecture, where modules (containing native code) could be loaded into the browser to extend its capabilities. When a page specifies a specific plug-in via an <object> or <embed> element, the requested content is downloaded and the plug-in that is associated with that object type is invoked on that content. Examples of common plug-ins include Adobe Flash, Adobe Reader (for rendering pdf files), and Java, but there are hundreds of others. The challenge with this framework is how to keep the software in a plug-in from doing bad things.

An example of sandboxing designed to address the problem of running code in a plug-in is the Chromium Native Client, called NaCl. Chromium is the open source project behind the Google Chrome browser and Chrome OS. NaCl is a browser plug-in designed to allow safe execution of untrusted native code within a browser, unlike JavaScript, which is run through an interpreter. It is built with compute-intensive applications in mind, or interactive applications that use the resources of a client, such as games.

NaCl is a user-level sandbox and works by restricting the type of code it can sandbox. It is designed for the safe execution of platform-independent, untrusted native code inside a browser. The motivation was that some browser-based applications will be so compute-intensive that writing them in JavaScript will not be sufficient. These native applications may be interactive and may use various client resources but will need to do so in a controlled and monitored manner.

NaCl supports two categories of code: trusted and untrusted. Trusted code can run without a sandbox. Untrusted code must run inside a sandbox. This code has to be compiled using the NaCl SDK or any compiler that adheres to NaCl’s data alignment rules and instruction restrictions (not all machine instructions can be used). Since applications cannot access resources directly, the code is also linked with special NaCl libraries that provide access to system services, including the file system and network. NaCl includes a GNU-based toolchain that contains custom versions of gcc, binutils, gdb, and common libraries. This toolchain supports 32-bit ARM, 32-bit Intel x86 (IA-32), x86-64, and 32-bit MIPS architectures.

NaCl executes with two sandboxes in place:

  1. The inner sandbox uses Intel’s IA-32 architecture’s segmentation capabilities to isolate memory regions among apps so that even if multiple apps run in the same process space, their memory is still isolated. Before executing an application, the NaCl loader applies static analysis on the code to ensure that there is no attempt to use privileged instructions or create self-modifying code. It also attempts to detect security defects in the code.

  2. The outer sandbox uses system call interposition to restrict the capabilities of apps at the system call level. Note that this is done completely at the user level via libraries rather than system call hooking.

Process virtual machine sandboxes: Java

A different type of sandbox is the Java Virtual Machine. The Java language was originally designed as a language for web applets, compiled Java programs that would be downloaded and run dynamically upon fetching a web page. As such, confining how those applications run and what they can do was extremely important. Because the author of the application would not know what operating system or hardware architecture a client had, Java would compile to a hypothetical architecture called the Java Virtual Machine (JVM). An interpreter on the client would simulate the JVM and process the instructions in the application. The Java sandbox has three parts to it:

The bytecode verifier verifies Java bytecodes before they are executed. It tries to ensure that the code looks like valid Java byte code with no attempts to circumvent access restrictions, convert data illegally, bypass array bounds, or forge pointers.

The class loader enforces restrictions on whether a program is allowed to load additional classes and ensures that key parts of the runtime environment are not overwritten (e.g., the standard class libraries). The class loader ensures that malicious code does not interfere with trusted code and that trusted class libraries remain accessible and unmodified. It implements ASLR (Address Space Layout Randomization) by randomly laying out the runtime data areas (stacks, bytecodes, heap).

The security manager enforces the protection domain. It defines what actions are safe and which are not; it creates the boundaries of the sandbox and is consulted before any access to a resource is permitted. It is called at the time an application makes a call to specific methods so it can provide run-time verification of whether a program has been given rights to invoke the method, such as file I/O or network access. Any actions not allowed by the security policy result in a SecurityException being thrown. The security manager is the component that allows the user to restrict an application from accessing files or accessing the network, for example. A user can create a security policy file that enumerates what an application can or cannot do.
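Such a security policy file grants permissions to code loaded from a given location. The example below uses Java's standard policy-file syntax; the directory and host names are hypothetical:

```java
// Grant code loaded from this directory limited file and network access.
grant codeBase "file:/home/alice/app/-" {
    // Read-only access to a single data directory
    permission java.io.FilePermission "/home/alice/data/*", "read";
    // Allow outbound TCP connections to one host and port only
    permission java.net.SocketPermission "example.com:443", "connect";
};
```

Anything not explicitly granted — writing files, connecting elsewhere, spawning processes — causes the security manager to throw a SecurityException.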

Java security has shown that building a sandbox is deceptively simple-looking; it turned out to be a complex task. After over twenty years of bugs, one hopes that the truly dangerous ones have been fixed. Even though the Java language itself is pretty secure and provides dynamic memory management and array bounds checking, buffer overflows have been found in the underlying C support library, which has been buggy in general. Varying implementations of the JVM environment on different platforms make it unclear how secure any specific client will be. Moreover, Java supports the use of native methods, libraries that you can write in compiled languages such as C that interact with the operating system directly. These bypass the Java sandbox.

Last modified January 17, 2024.