Computer security, also called cybersecurity, is the practice of protecting computers, software, and data from being accessed, altered, or destroyed by unauthorized people or processes. Security ensures that systems behave as intended, keep information private, and provide services when needed.
When we use computers, we usually assume that the results will be correct and that our information will not be stolen or corrupted. But computers do not enforce this on their own. Security is the set of principles, mechanisms, and policies that make it possible. To understand computer security, we start with the CIA Triad, a simple but powerful model that highlights three essential goals: confidentiality, integrity, and availability.
The CIA Triad
Confidentiality
Confidentiality means restricting information so that only authorized people or processes can access it. An everyday example is an online bank account, which requires a password and uses encryption to keep outsiders from reading communication between you and the bank.
Confidentiality often overlaps with privacy, anonymity, and secrecy.
- Privacy gives individuals control over how their personal information is shared. Medical records are not secret, but you should have control over who can see them.
- Anonymity hides a person’s identity even if their actions are visible. For example, you may see the content of a survey response but not know who submitted it.
- Secrecy refers to hiding the very existence of information. A government project labeled “secret” may not even be acknowledged to exist outside a small group of authorized individuals.
A related concept is exfiltration, the unauthorized transfer of data out of a system. While confidentiality is about keeping information hidden in the first place, exfiltration describes how attackers remove it once protections have been bypassed. For example, malware may copy sensitive company files and transmit them to an external command-and-control server.
Confidentiality has been a long-standing concern. Julius Caesar used substitution ciphers to conceal his military orders. During World War II, Germany relied on the Enigma machine to encrypt its communications, which the Allies eventually cracked. Today, we rely on strong cryptographic protocols like TLS to ensure that when you shop online, nobody else can see your credit card number.
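To make the idea of a substitution cipher concrete, here is a minimal sketch of a Caesar-style shift cipher in Python (illustrative only; the message and shift are made up, and real systems rely on vetted protocols like TLS rather than hand-rolled ciphers):

```python
def caesar_encrypt(plaintext: str, shift: int) -> str:
    """Shift each letter by a fixed amount; other characters pass through unchanged."""
    out = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

def caesar_decrypt(ciphertext: str, shift: int) -> str:
    """Decryption is simply a shift in the opposite direction."""
    return caesar_encrypt(ciphertext, -shift)

print(caesar_encrypt("ATTACK AT DAWN", 3))  # DWWDFN DW GDZQ
print(caesar_decrypt("DWWDFN DW GDZQ", 3))  # ATTACK AT DAWN
```

With only 25 usable shifts, such a cipher can be broken by trying every key, which is exactly why confidentiality today depends on modern cryptography rather than simple substitution.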
Integrity
Integrity ensures that data and systems remain accurate and unaltered unless someone with authorization makes a change. If you send money through an app, integrity ensures the recipient and the amount are not tampered with in transit. If a hospital database changes a patient’s blood type due to a malicious edit, the consequences could be deadly.
Integrity has multiple facets:
- Data integrity prevents unauthorized modification or deletion of information.
- Origin integrity verifies that information truly comes from its claimed source. This is why digital signatures are used to confirm that a message came from a specific sender (a small sketch follows this list).
- Recipient integrity ensures that information reaches the intended recipient and not an imposter.
- System integrity ensures that hardware, software, and processes are functioning as expected.
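As a concrete illustration of data and origin integrity, the following sketch uses an HMAC from Python's standard library: sender and receiver share a secret key, so a valid tag shows both that the message was not modified and that it came from someone holding the key. This is a minimal sketch with an invented key; real protocols also need key management and replay protection.

```python
import hmac
import hashlib

SECRET_KEY = b"shared-secret-key"  # hypothetical key, distributed out of band

def sign(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(message), tag)

msg = b"transfer $100 to account 42"
tag = sign(msg)

print(verify(msg, tag))                              # True: intact and from a key holder
print(verify(b"transfer $9999 to account 13", tag))  # False: the data was altered
```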
A failure of integrity occurs when unauthorized changes alter information or behavior. For example, attackers may corrupt financial records in a database or install a rootkit that changes system files to conceal malicious activity. Viruses that overwrite executable files also represent integrity violations.
Availability
Availability ensures that systems and data are accessible when needed. A secure system that nobody can use is not useful.
Attacks on availability are common. A Denial of Service (DoS) attack overwhelms a server with requests so it cannot respond to legitimate users. A more powerful variant is the Distributed Denial of Service (DDoS) attack, which harnesses thousands or millions of compromised computers (a botnet) to flood a target with traffic. In October 2016, the Mirai botnet launched one of the largest DDoS attacks in history, knocking popular services like Twitter and Netflix offline by overwhelming the company Dyn, which managed domain name services.
A famous historical example is the Morris Worm of 1988. It spread rapidly across Unix systems and, due to a coding mistake, often reinfected the same machine multiple times. The extra load consumed resources and slowed systems to a crawl, effectively denying availability. While it did not damage or corrupt data, it made thousands of computers unusable.
Availability also matters outside of attacks. In late 2021, Kyoto University in Japan lost 77 terabytes of research data when a faulty backup program deleted files from its supercomputer. Even though the data was not stolen or altered, the failure of availability had devastating consequences.
The CIA Triad reminds us that security is not one-dimensional. For example, confidentiality and integrity can be achieved by locking a computer in a safe and turning it off, but that eliminates availability. Security must balance all three.
Security System Goals
Security design is also guided by three operational goals:
- Prevention stops attacks from succeeding. Password authentication, access controls, and encryption are all preventive measures.
- Detection identifies and reports attacks. Intrusion detection systems can alert administrators when unauthorized activity occurs (a small file-integrity example follows this list). Even if prevention works, detection helps us understand what is being attempted.
- Recovery restores systems after an attack or failure. Backups, incident response plans, and forensic investigations are all part of recovery.
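As a small illustration of detection, the sketch below records a baseline of file hashes and later reports any file whose contents have changed, which is a toy version of what file-integrity monitors do. The monitored paths are examples only.

```python
import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_baseline(paths: list[Path]) -> dict[str, str]:
    """Record a known-good hash for each monitored file."""
    return {str(p): hash_file(p) for p in paths}

def detect_changes(baseline: dict[str, str]) -> list[str]:
    """Report files whose current hash no longer matches the baseline."""
    return [name for name, digest in baseline.items()
            if hash_file(Path(name)) != digest]

# Hypothetical usage: monitor two configuration files.
watched = [Path("/etc/passwd"), Path("/etc/ssh/sshd_config")]
baseline = build_baseline(watched)
# ... later, perhaps from a scheduled job ...
for changed in detect_changes(baseline):
    print(f"ALERT: {changed} has been modified")
```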
For example, the 2021 ransomware attack on Colonial Pipeline shut down fuel distribution along the U.S. East Coast. Prevention failed when attackers gained access through a compromised password. Detection revealed the problem. Recovery involved shutting down operations, paying a $4.4 million ransom, and restoring from backups. This example shows how all three goals interact in practice.
Policies, Mechanisms, and Assurance
A policy defines what is allowed and what is not. A mechanism enforces that policy.
- Technical mechanisms include operating system access controls, cryptography, and intrusion detection systems.
- Procedural mechanisms include ID checks, audits, and separation of duties.
Policies can be written in natural language but are stronger when expressed in precise specifications or formal policy languages.
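To make the distinction concrete, here is a minimal sketch in which the policy is a small data structure stating who may do what, and the mechanism is the code that enforces it on every access. The users, resources, and permissions are invented for illustration.

```python
# Policy: a statement of what is and is not allowed.
POLICY = {
    ("alice", "payroll.db"): {"read", "write"},
    ("bob",   "payroll.db"): {"read"},
}

# Mechanism: the code path that enforces the policy whenever access is requested.
def check_access(user: str, resource: str, action: str) -> bool:
    allowed = POLICY.get((user, resource), set())
    return action in allowed

print(check_access("alice", "payroll.db", "write"))  # True: permitted by the policy
print(check_access("bob",   "payroll.db", "write"))  # False: denied by the mechanism
```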
When we enforce policies through mechanisms, we also need precise terminology to describe who is making a request and what is carrying it out. Security models often distinguish between a principal and a subject:
- A principal is any entity that can be uniquely identified and authenticated by the system. Principals are the "who" behind access requests: users, processes, or even devices.
- A subject is the active entity that actually performs operations on resources on behalf of a principal. For example, when you log into a system (principal), your web browser (subject) is the process that reads files, opens sockets, and makes requests in your name.
This distinction matters because access control rules are written in terms of principals, but enforcement occurs through the actions of subjects.
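A brief sketch of how this distinction can appear in code: the rules are stated in terms of principals, while a subject (here modeled as a process-like object) carries a principal's identity when it performs operations. The names and rules are hypothetical.

```python
from dataclasses import dataclass

# Access rules are written in terms of principals.
RULES = {
    "alice": {"grades.csv": {"read", "write"}},
    "bob":   {"grades.csv": {"read"}},
}

@dataclass
class Subject:
    """An active entity (e.g., a browser or shell process) acting for a principal."""
    principal: str

    def open_file(self, filename: str, mode: str) -> None:
        allowed = RULES.get(self.principal, {}).get(filename, set())
        if mode not in allowed:
            raise PermissionError(f"{self.principal} may not {mode} {filename}")
        print(f"process acting for {self.principal} performs {mode} on {filename}")

Subject("alice").open_file("grades.csv", "write")     # permitted
try:
    Subject("bob").open_file("grades.csv", "write")   # bob's rules do not allow writes
except PermissionError as err:
    print("denied:", err)
```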
All security depends on assumptions: that authentication is correct, that compilers generate valid instructions, that administrators configure systems properly. If assumptions fail, mechanisms may not enforce the intended policy.
Assurance is the confidence that the system truly enforces the policies correctly. Since large software systems may contain millions of lines of code, we rarely have formal proofs. Instead, assurance relies on careful design, testing, code audits, and penetration testing.
A famous failure of assurance was the Heartbleed bug in 2014. OpenSSL, a widely used cryptographic library, had a small coding error that allowed attackers to read parts of server memory. Policies and mechanisms said that communications should be encrypted and secure, but assurance was lacking because a bug undermined the mechanism.
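The actual flaw was a missing bounds check in C, but the logic is easy to mimic: the server echoed back as many bytes as the client claimed to have sent, without checking the claim against the real payload. A simplified toy model, not the actual OpenSSL code:

```python
def heartbeat_buggy(payload: bytes, claimed_len: int) -> bytes:
    # Toy model of server memory: the payload followed by unrelated secret data.
    memory = payload + b"...SECRET-KEY-MATERIAL-AND-SESSION-TOKENS..."
    # BUG: trusts the client-supplied length instead of len(payload).
    return memory[:claimed_len]

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    # The check that provides assurance: reject inconsistent requests.
    if claimed_len > len(payload):
        raise ValueError("claimed length exceeds actual payload")
    return payload[:claimed_len]

print(heartbeat_buggy(b"hello", 40))  # leaks bytes far beyond the 5 actually sent
```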
Security Engineering
Security is a form of engineering. Like any engineering task, it involves tradeoffs. Stronger locks cost more money (e.g., a Medeco M3 deadbolt costs over $200 while a Kwikset model is under $17). A vault may resist attacks longer, but no vault is invulnerable.
Burglary-resistant safes carry a "TL" (tool-resistance) rating that indicates how long the safe can withstand an attack with mechanical tools. For instance, a safe with a TL-15 rating is certified to resist an expert attack for 15 minutes; a TL-30 safe resists for 30 minutes. Buyers choose based on their threat model and budget.
Similarly, no computer system can be perfectly secure. Given enough time and effort, practically any defense can be subverted. Sometimes it is cheaper to restore from a backup than to prevent every possible attack.
Security engineering involves two key steps:
- Architecture: designing secure systems, anticipating threats, and identifying weaknesses.
- Implementation: building mechanisms and policies into actual systems.
The challenge is that attackers are creative and do not follow the rules. They can exploit bugs, misconfigurations, and even human weaknesses. Engineers must think like attackers when designing defenses.
Risk Analysis - let's be practical
Security must be practical. Risk analysis weighs the value of assets against the likelihood and cost of attacks. Protecting your computer from accidental deletion is easy. Protecting it from a determined nation-state agency is far more difficult.
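One common way to make this weighing concrete is an annualized loss expectancy: the expected cost of a single incident multiplied by how often it is expected to occur per year, compared against the cost of a defense. The numbers below are invented for illustration.

```python
def annualized_loss_expectancy(single_loss: float, incidents_per_year: float) -> float:
    """Expected cost of one incident times expected incidents per year."""
    return single_loss * incidents_per_year

# Hypothetical numbers: a breach costing $200,000, expected about once every 10 years.
ale = annualized_loss_expectancy(200_000, 0.1)
print(f"Expected loss per year: ${ale:,.0f}")  # $20,000

# A control costing $50,000 per year to address only this risk would not pay for itself;
# one costing $5,000 per year likely would.
```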
Risk depends on the environment. A laptop that never connects to the Internet has low exposure to online attackers. A corporate server open to the world has high exposure. Risks also change over time as new vulnerabilities are discovered.
For example, the Log4J vulnerability discovered in 2021 was present in a logging library used in tens of thousands of applications. Once publicized, the risk to unpatched systems skyrocketed. Organizations had to decide how quickly to patch and whether to shut down vulnerable services until patches were applied.
Risk analysis also considers acceptability. Few people would agree to retina scans or DNA tests every time they log into a system, even though such methods would provide strong authentication. Security must balance protection with convenience and cost.
Trusted Computing Base and Supply Chain Security
Every secure system depends on a Trusted Computing Base (TCB): the hardware, firmware, and software that enforce security policies. This includes processors, operating systems, compilers, and device drivers. If the TCB is compromised, the entire system is at risk.
For example, if a bootloader is modified by malware, it can alter the operating system before it loads, making any higher-level protections meaningless.
A related concept is the trust boundary, the point where data or control passes between trusted and untrusted entities. For instance, when a web application accepts input from an Internet client, that input crosses a trust boundary and must be validated. Many serious vulnerabilities arise from failures at these boundaries.
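A minimal sketch of validation at a trust boundary: a request handler that receives a user-supplied account ID checks it against an explicit pattern before using it, rather than trusting the client. The field names and handler are hypothetical.

```python
import re

ACCOUNT_ID_PATTERN = re.compile(r"[0-9]{1,12}")  # digits only, bounded length

def handle_transfer_request(form: dict) -> str:
    """Everything in 'form' arrived over the network and is untrusted."""
    account_id = form.get("account_id", "")
    if not ACCOUNT_ID_PATTERN.fullmatch(account_id):
        return "400 Bad Request: invalid account id"
    # Only after validation does the value cross into trusted code.
    return f"200 OK: transfer queued for account {account_id}"

print(handle_transfer_request({"account_id": "12345"}))
print(handle_transfer_request({"account_id": "1; DROP TABLE accounts"}))
```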
The supply chain adds another challenge. Modern systems depend on components from around the world. If a library, chip, or driver is compromised before it reaches you, security may already be broken.
One of the most famous examples is the 2020 SolarWinds attack. SolarWinds is a Texas-based company that produces software to help organizations manage their IT infrastructure. Attackers broke into SolarWinds’ development systems and inserted malicious code into updates of a popular product called Orion. These updates were digitally signed and distributed to SolarWinds’ customers, who trusted them as legitimate. In reality, the updates contained a backdoor that gave attackers remote access to customer networks.
The impact was massive. About 18,000 organizations installed the compromised updates, including U.S. government agencies such as the Treasury Department, the Department of Homeland Security, and parts of the Pentagon, as well as Fortune 500 companies and universities. The attack was attributed to a sophisticated Russian intelligence operation.
The SolarWinds incident was a wake-up call: even if your own systems are secure, an attacker can compromise a trusted vendor, slip malicious code into a routine update, and infiltrate your environment. It showed how fragile supply chain trust can be and why auditing and monitoring even trusted software updates is critical.
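Digitally signed updates shift trust to the vendor's signing keys and build pipeline, which is exactly what the SolarWinds attackers abused. One partial safeguard is to verify downloaded artifacts against a hash pinned from an independent channel before installing them; the file name and hash below are made up for the sketch.

```python
import hashlib

# Expected digest obtained out of band (vendor bulletin, second mirror, prior audit).
PINNED_SHA256 = "0123456789abcdef..."  # placeholder value for illustration

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file on disk."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

if sha256_of("vendor-update.bin") == PINNED_SHA256:
    print("Hash matches the pinned value; proceed with installation.")
else:
    print("Hash mismatch: do not install; investigate the update.")
```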
Human Factors and Incentives
Finally, people are often the weakest link in security. Users may choose weak passwords, click on phishing links, or misconfigure systems. Insiders may be bribed or act maliciously. Attackers often rely on social engineering, tricking people into revealing information or granting access.
As Bruce Schneier famously put it, “Security is a chain: it’s only as secure as the weakest link.” That weakest link might be cryptography, but just as often it is a person or a poorly configured system.
Organizations sometimes adopt security theater. These are measures that look protective but do little in practice. For example, requiring complex but frequently changing passwords often leads to users writing them down or reusing slight variations.
Economic incentives also shape security. A store that loses customer credit card numbers in a breach may not face serious consequences, so it has little incentive to invest in stronger security. Software vendors typically disclaim responsibility for damages in their license agreements, shifting risk to users. In some cases, companies even buy insurance instead of investing in prevention, reasoning that the financial risk is manageable.
A well-known example of misaligned incentives was the CIA’s “Vault 7” leak in 2017, when hacking tools were stolen and published. A later investigation found that the agency’s hackers prioritized building cyberweapons over securing their own systems. The human factor—not a flaw in cryptography—caused one of the largest leaks in CIA history.