In her amazing book “The Science of Can and Can’t,” Chiara Marletto discusses counterfactuals and dedicates an entire chapter to information. She explores what music, DNA, lighthouses, emails, language, and computers all have in common: they are systems capable of carrying information. Marletto then delves into the deeper question of what properties are required for a system to carry information.
Marletto explains that information processing fundamentally involves what is possible and impossible—what can and cannot happen. Her key insight is that information exists only when there are multiple possible states a system could be in, and some configurations are forbidden or impossible. For example, a bit can be 0 or 1, but it cannot be both simultaneously. These constraints on what’s possible create the capacity to encode and process information.
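In software terms, this idea maps onto the principle of making illegal states unrepresentable. A minimal Python sketch (the `Bit` class is purely illustrative, not from Marletto's book) shows how forbidding configurations is exactly what gives a system the capacity to carry information:

```python
class Bit:
    """A carrier of information: exactly two permitted states, all others forbidden."""
    ALLOWED = {0, 1}

    def __init__(self, value):
        if value not in self.ALLOWED:
            # Forbidding every other configuration is what makes the bit informative.
            raise ValueError(f"impossible state: {value!r}")
        self.value = value

# Possible states can be set and reliably distinguished...
b = Bit(1)

# ...while impossible configurations (a bit that is "both") are rejected outright.
try:
    Bit("both")
except ValueError as err:
    print(err)
```

If every value were allowed, reading the "bit" would tell you nothing; the constraint is the information capacity.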
A counterfactual is a conditional statement, scenario, or thought experiment that describes what could have happened but didn’t, or what would happen under different circumstances or conditions. It’s essentially a “what if” about an alternative possibility. In the IT space, we are already very familiar with counterfactuals. Here are three examples that are integral to the culture at CSP and are used to enhance the culture of our clients:
- Assessing Risk: By imagining different attack scenarios, and possible risks relevant to each industry, we better understand potential vulnerabilities and can prepare accordingly.
- Improving Strategies: Counterfactuals help evaluate the effectiveness of security measures and policies by considering how different approaches might have altered past outcomes. We continuously weigh these security measures against the need to produce and collaborate effectively.
- Learning from Incidents: After a security breach, counterfactual analysis provides insights into what we could have done differently to prevent or mitigate the attack.
There are also several important ways in which counterfactuals relate to IT and cybersecurity in particular:
- Authentication: Authentication relies on counterfactuals—a system must distinguish between authorized and unauthorized access attempts based on what credentials could possibly be valid. Security comes from making certain states (such as successful unauthorized access) effectively impossible while allowing legitimate states. For example, in a Zero Trust model, conditional access policies define what is possible: IF a user is on a non-corporate device, THEN access is denied.
- Encryption: Encryption works by making it computationally infeasible for an attacker to derive the plaintext without the key, while keeping decryption trivial for authorized parties who hold it. Security stems from this asymmetry in what different parties can and cannot do.
- Threat Modeling: When security architects perform threat modeling, they engage in structured counterfactual thinking by asking: What if an attacker tries this attack vector? What if these security controls are bypassed?
- Security Vulnerabilities: Security vulnerabilities often arise when developers fail to properly constrain what states are possible. Issues like buffer overflows and SQL injection involve the system entering states that should have been impossible.
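The IF/THEN conditional access rule from the Authentication bullet can be sketched as a simple policy check. The field names and conditions below are illustrative, not taken from any specific Zero Trust product:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_is_corporate: bool
    credentials_valid: bool

def evaluate(request: AccessRequest) -> bool:
    """Grant access only when every required condition holds.

    Security lives in the impossible branch: a request that fails any
    condition can never reach the 'granted' state.
    """
    if not request.credentials_valid:
        return False
    # Zero Trust style rule: IF the user is on a non-corporate device, THEN deny.
    if not request.device_is_corporate:
        return False
    return True

print(evaluate(AccessRequest("alice", device_is_corporate=True, credentials_valid=True)))    # True
print(evaluate(AccessRequest("mallory", device_is_corporate=False, credentials_valid=True)))  # False
```

The policy is a counterfactual made executable: it defines which access states are possible and makes all others unreachable.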
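The asymmetry described in the Encryption bullet can be illustrated with a toy one-time-pad XOR cipher. This is a teaching sketch only, not production cryptography; real systems use vetted algorithms such as AES:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each data byte with the corresponding key byte.
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"meet at dawn"
key = secrets.token_bytes(len(plaintext))  # one random key byte per plaintext byte

ciphertext = xor_bytes(plaintext, key)

# Possible: the key holder recovers the plaintext exactly.
assert xor_bytes(ciphertext, key) == plaintext

# Impossible (without the key): with a truly random pad, every plaintext of
# this length is equally consistent with the ciphertext, so the ciphertext
# alone reveals nothing. A guessed key yields garbage.
wrong_key = secrets.token_bytes(len(plaintext))
print(xor_bytes(ciphertext, wrong_key) == plaintext)
```

The same operation is trivial or hopeless depending on what the party holds; that difference in what each party *can* do is the entire security guarantee.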
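The Security Vulnerabilities bullet mentions SQL injection, where attacker input drives the system into a state the developer never intended to be reachable. A minimal `sqlite3` sketch (the table and values are invented for the example) shows how a parameterized query forbids that state:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

attacker_input = "' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query itself,
# an execution state that should have been impossible.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + attacker_input + "'"
).fetchall()
print(len(unsafe))  # 2 -- every row leaks

# Constrained: a parameterized query treats the input strictly as data,
# so the "query rewritten by input" state cannot occur.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(len(safe))  # 0 -- no user has that literal name
```

The fix is not cleverness but a constraint: the parameterized form removes an entire class of states from the space of what the system can do.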
The core insight that information processing depends on clear boundaries between possible and impossible states provides a useful framework for thinking about cybersecurity. Good security requires carefully reasoning about, and enforcing, what should and should not be possible in a system. This is both an art and a science, and it also requires building strong relationships with clients in order to understand their business objectives.
This perspective highlights that cybersecurity isn’t just about building walls but about creating and maintaining constraints on both system and human behavior.
References:
The Science of Can and Can’t – Chiara Marletto
How to Rewrite the Laws of Physics in the Language of Impossibility – Quanta Magazine