This CSO article reports on the massive NotPetya ransomware attack, which Cybereason research estimates to have cost businesses globally around 1.2 billion dollars.
One paragraph in particular caught my eye:
To complicate matters, having cyber insurance might not cover everyone’s losses. Zurich American Insurance Company refused to pay out a $100 million claim from Mondelez, saying that since the U.S. and other governments labeled the NotPetya attack as an action by the Russian military their claim was excluded under the “hostile or warlike action in time of peace or war” exemption.
You can read the official U.S. press release here. What interests me is not just the fine print of insurance policies, although it can have huge financial consequences for companies, as in this case. Philosophically and politically, the more interesting question is what constitutes an act of war in the cyber domain. In this scenario, insurance money is paid out based on whether the cyberattack is considered a warlike act or not. The phrasing “warlike action in time of peace or war” anticipates a difference between such warlike attacks and “actual” war, as these “warlike” acts do not have to take place during wartime.
Traditionally, wars occur between two identifiable nations. If they play fairly, they can even officially declare war before knocking on someone’s door. It is important that these parties are identifiable, so that they can be held accountable, under the Geneva Conventions for example. In the case of cyberattacks, however, there can be significantly more ambiguity concerning the identity of the attacker.
Take for example the well-known cyberattack on DigiNotar, a certificate authority in the Netherlands (for public key encryption). Due to a hack, fraudulent certificates had been issued, compromising the trustworthiness of DigiNotar certificates and resulting in their removal from, for example, all major browsers. To complicate matters, the Dutch government internally used many DigiNotar-issued intermediary certificates that chained up to the Dutch government CA itself (see for example Firefox’s communication about this). The DigiNotar certificates becoming untrusted consequently threatened to destabilize the Dutch government, as official services such as the tax system and the online ID management system for Dutch citizens (DigiD), which is used to access government services, threatened to become inaccessible. In other words, the hack was a threat to the stability of the Dutch state. Is this a warlike act? Or is it an act of war?
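To make the cascading effect of distrusting a certificate authority a bit more concrete, here is a minimal conceptual sketch of chain-of-trust validation. It uses no real cryptography, and the certificate and CA names are illustrative placeholders rather than the actual PKIoverheid hierarchy; the point is only that once an intermediate CA is distrusted, everything chained beneath it breaks at once.

```python
# Minimal conceptual sketch of chain-of-trust validation (no real cryptography).
# Names like "Staat der Nederlanden Root CA" are illustrative placeholders.

TRUST_STORE = {"Staat der Nederlanden Root CA", "Some Other Root CA"}

# Each certificate simply records who issued it.
CHAIN = {
    "belastingdienst.nl": "DigiNotar PKIoverheid CA",   # tax office site
    "digid.nl": "DigiNotar PKIoverheid CA",             # citizen login service
    "DigiNotar PKIoverheid CA": "Staat der Nederlanden Root CA",
}

def is_trusted(name: str, distrusted: frozenset = frozenset()) -> bool:
    """Walk up the issuer chain; trust only if we reach a root CA in the
    trust store without passing through a distrusted issuer."""
    while name not in TRUST_STORE:
        if name in distrusted:
            return False
        issuer = CHAIN.get(name)
        if issuer is None:          # chain ends outside the trust store
            return False
        name = issuer
    return name not in distrusted

print(is_trusted("digid.nl"))                                            # True
# After browsers pull DigiNotar, every dependent service fails validation:
print(is_trusted("digid.nl", distrusted=frozenset({"DigiNotar PKIoverheid CA"})))  # False
```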
Interestingly, a presumably Iranian hacker claimed the attack here and stated that his motivation was political: revenge for the Srebrenica massacre and the part the Dutch government played in it. It seems then that destabilizing the Dutch government was not just a side effect, but a direct target of the attack. One can wonder how convincing it is that such a young person would successfully perform a hack on a major certificate authority all by himself. Especially if one hypothesizes about government involvement, and takes into account that the announced target of the attack was the Dutch government, then this attack can potentially be interpreted as an act of war.
The following quote from here argues against jumping to such conclusions:
“Security expert Robert Graham, who’s swapped e-mails with Ich Sun and ultimately confirmed that he was indeed the one who pulled off the Comodo hack, thinks otherwise. He accuses Comodo and reporters who have covered this story of jumping to conclusions about the Iran connection. “We make the assumption that anyone who supports the government there works for the government and that’s just not true,” said Graham, CEO of Errata Security. “My theory is he’s exactly what he says he is. That’s what the evidence points to. There’s no evidence that says he would have to be part of a state-sponsored effort. The attack is not that complex. It’s just what your average pen-tester would do.”
Interestingly, the later investigation report by Fox-IT, which can be downloaded here from a Dutch government website, showed that “Around 300.000 unique requesting IPs to google.com have been identified. Of these IPs >99% originated from Iran” (p. 8). It turned out that practically all victims of the attack on a Dutch certificate authority were in fact Iranian Gmail users. The target then was not the Dutch government after all. The Dutch certificates were used for a massive man-in-the-middle attack on Iranian civilians.
The take-away is that calling something an act of war in the cyber domain is to some extent a matter of interpretation, as the relevant actors become increasingly less identifiable. That act of interpretation, however, has huge potential consequences. In the context of the cited article those consequences are mostly economic, for companies whose damages might not be covered by their insurance. But the potential political consequences are the most worrisome. As digital systems become more interwoven with essential infrastructures and with other digital systems, warfare will also become increasingly digital. Accordingly, those with the knowledge and capabilities to work with and influence computer systems de facto have political power. And when the relevant parties of “warlike” acts in the digital domain can no longer be identified as government parties, the distinction between war and terrorism blurs, as that distinction heavily relies on the violence of the former being warranted by a nation, whereas that of the latter is directed against a state or nation.
This made me remember a reflection by Derrida on how technoscience blurs the rigorous distinction between war and terrorism, in a book I read about five years ago (it made an impression, apparently). I looked it up again. The following passage is from the book “Philosophy in a Time of Terror” (2003) by Giovanna Borradori. In the words of Jacques Derrida:
No geography, no “territorial” determination, is thus pertinent any longer for locating the seat of these new technologies of transmission or aggression. To say it all too quickly and in passing, to amplify and clarify just a bit what I said earlier about an absolute threat whose origin is anonymous and not related to any state, such “terrorist” attacks already no longer need planes, bombs, or kamikazes: it is enough to infiltrate a strategically important computer system and introduce a virus or some other disruptive element to paralyze the economic, military, and political resources of an entire country or continent. And this can be attempted from just about anywhere on earth, at very little expense and with minimal means. The relationship between earth, terra, territory, and terror has changed, and it is necessary to know that this is because of knowledge, that is, because of technoscience. It is technoscience that blurs the distinction between war and terrorism. (p. 101)
My friend Rits did me a big favor by making some digital portraits of me. His website is currently in quarantine because he was too late with re-registering his domain name, but if you read this post after Sunday 17 March you can check out his website. While doing me a favor, Rits made sure that he enjoyed himself.
See right for a first proof of concept. After seeing this sketch, I had complete faith in the end result.
You can see the final results on my home page. If you feel adventurous, make sure to switch to the dark theme by clicking on the switch button on top for an amazing GIF. The normal portrait did not fit with the dark theme, so we decided it was best to design another portrait for that. We couldn’t quite figure out how to avoid creepy hollow eyes with high contrast against a dark background. After long deliberation, we came up with a creative solution…
Here are some of the other variations Rits proposed before we settled on the final portraits:
A few days ago Djoerd Hiemstra gave a guest lecture on estimating the size of big data problems, within the context of a Big Data course I am currently following. As preparation, we read the 1998 paper by Sergey Brin and Lawrence Page (read it here) in which they introduced the anatomy of their search engine called “Google”. We did so in particular because it is interesting to compare their estimates of the size and scalability of Google with the colossus it has become today.
However, at the end of his guest lecture he pointed out two “fun facts” that I’d like to quickly share here.
The original paper on PageRank that fundamentally changed how the web looks today was rejected by the SIGIR 1998 conference.
In Appendix A of the paper mentioned above, the authors discuss the dangers of advertising for search engines.
The first point is awkward and warrants a discussion of how valuable it is to be accepted at an academic conference. Despite being rejected, the contents of the paper and the company that followed from it completely reshaped the social reality of many.
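For readers who have never looked at what that rejected paper actually proposed, here is a minimal sketch of the PageRank idea in a simplified, normalized form: a page’s rank is built from the ranks of the pages linking to it, spread over their outgoing links and damped by a “random surfer” factor. The toy link graph below is made up purely for illustration and is not from the paper.

```python
# Minimal sketch of the PageRank idea: rank flows along links, damped by d.
# The toy link graph is invented for illustration only.

def pagerank(links: dict, d: float = 0.85, iterations: int = 50) -> dict:
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            # Sum the rank passed on by every page that links to p.
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new_rank[p] = (1 - d) / len(pages) + d * incoming
        rank = new_rank
    return rank

toy_web = {
    "A": ["B", "C"],   # page A links to B and C
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```

Page C ends up with the highest score in this toy graph, simply because most other pages point to it; that is the whole intuition behind ranking by link structure rather than by page content alone.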
The second point is also interesting and has an ironic note to it, given the direction in which Google has since headed. Follow the link above to read it for yourself (the paper is freely accessible), but here are two fragments:
Out of historical experience, the authors
expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers (p. 18).
And:
In general, it could be argued from the consumer point of view that the better the search engine is, the fewer advertisements will be needed for the consumer to find what they want. This of course erodes the advertising supported business model of the existing search engines. However, there will always be money from advertisers who want a customer to switch products, or have something that is genuinely new. But we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm. (p. 18).
A year ago I attended a lecture by Timothy Morton. I had not seriously read anything by Morton except a quite extravagant paper called “From modernity to the Anthropocene: ecology and art in the age of asymmetry”, which flamboyantly combined Hegel, art and ecology in a manner I do not recall. The lecture was equally flamboyant, and can perhaps best be described as a confused rant that simultaneously felt very genuine and personal. The lecture can be listened to in full here. Some time after the lecture, I watched an interview Morton did with his publisher, in which his conception of holism in particular struck me as refreshing.
The briefest description of holism is undoubtedly the phrase: “The whole is greater than the sum of its parts.” It concerns the emergence of a synergetic phenomenon that cannot be properly understood by referring only to its constituent parts. In other words, it concerns a phenomenon that is not reducible. In philosophy I often dealt with this idea in my studies of hermeneutical phenomenology, but in Artificial Intelligence it is equally relevant for understanding the emergence of intelligence from the complex interactions of smaller units (e.g. neurons) that are in themselves not intelligent. But although this fundamental idea is common to many different disciplines, this connection does not imply a simple consensus but rather a common question mark. So let us not assume the meaning of holism is self-evident: it implies a complete mereology, the metaphysics of the ongoing dialogue between wholes and parts.
In the following I want to quickly offer a twelve-step program of Morton’s “perverse” conception of holism, based on the interview and the lecture a year ago, as far as I can remember it.
Ecological speech often has a theistic element: the idea that the whole is always greater than the sum of its parts. This whole is then usually called “Nature”, preferably with a capital N.
But does this theistic admiration of the whole not also imply that the parts are to some extent expendable? Take for example James Lovelock’s idea of ‘Gaia’. Thinking of the whole as greater than its parts attributes some sort of “agency” to this whole, giving it relative independence from its constituent components. When seeing “the bigger picture”, the extinction of species can be conceptualized as a mere shift in components; humans might go extinct, but Nature will survive and find some new balance. Perhaps the “agency” of the whole can in that sense be characterized as the act of balancing out. Insofar as this balancing is only visible from the perspective of the whole (which is not attainable “from within”), it seems destructive and chaotic, since things like extinction are part of the job. In any case the whole is here somehow conceived as something grandiose. “Nature” is given as an example here, but according to Morton any entity would suffice, only the scale differs: humans, cars, butterflies, my laptop, you name it. The side-effect of this view is that parts are in principle considered to be expendable components, subjected to this grandiose whole. But is this the right meaning of “the whole is always greater than the sum of its parts”?
Morton’s dilemma: he does not want to simply consider entities as subjected to their whole (whatever that may be), but at the same time he definitely is a holist in the sense that he does not believe entities can be reduced to their parts. In other words, if he wants to keep calling himself a holist, what precisely does his holism then mean?
The perverse twist: what if the whole is less than the sum of its parts?
This is an intuitive truth for object-oriented ontology according to Morton. (Ontology for Morton refers to the study of how things exist, not the study of what exists. The latter Morton calls “object-policing.” This also resonates with the phrase “The how is the what” that Morton kept repeating during his public lecture in Nijmegen; a basic phenomenological insight that he cleverly adapted.) Morton’s object-oriented ontology says: if something exists, it exists in the same manner as everything else that exists. That is, all existing objects have a gap between how they are and how they appear. This gap is irreducible and yet transcendental: appearance and being, despite the gap between them, inextricably go together.
Morton takes his hand as an example of an object. The hand is one whole, but each of the fingers that are part of the hand is also itself considered a whole, not simply a part. This is where Morton makes his perverse inversion of holism. Conclusion: there is more “whole” in the sum of the parts than there is in the “whole” of the hand… 5x whole > 1x whole. The whole of the hand is less than the parts.
This subversive conceptual reasoning, which raises suspicion about the idea that the whole is greater than the parts, has a political dimension for Morton insofar as a strong belief in the “greater good” of the whole can lead to the justification of violence. Ontology, Morton likes to say, is political. And it should never justify subjection to a whole.
The mantra of object-oriented ontology is that everything is an object, and that everything exists in the same way. It is therefore incompatible with a more classical idea of holism, because that classical conception attributes to the whole a different way of being. In this classical sense, saying that the whole is more than its parts could also mean that ontologically speaking the parts exist to a lesser degree, that they are “lower” in being. Conversely, when Morton says that the whole is less than its parts, this “less” is not intended to imply an ontological difference in the way of being, but rather a quite radical equality: if everything exists fundamentally in the same manner, then the “more” in holism almost (or completely?) becomes a numerical notion: there are simply more parts than wholes.
The latter insight is also why object-oriented ontology and ecological thought seem to be natural allies: both human and non-human beings, of whatever kind, exist in the same way. Nature in this sense is thus not something “other” than the sphere of human existence (the term “sphere” is already unfavorable for Morton’s thought because it seems to imply something that is closed off).
We as humans exist inside various wholes of differing scales but are not subjected to them, e.g. the biosphere, or liberalism or capitalism, which manifest themselves physically in various forms around the globe (they are all also objects, nothing more and nothing less). However, we should be careful when talking about ever bigger entities, such as the ecological crisis, in what Morton calls a “my god is bigger than yours” competition, again referring to the theistic element of holism that has as a side-effect that we feel overwhelmed by and subjected to the whole.
Instead, Morton suggests, we should try to redirect our attention away from those big entities to smaller ones, because although they are physically big, they are ontologically smaller according to his perverse holism. When we walk in a forest, we encounter flora and fauna, trees, deer, fungi, but we never encounter “Nature”. Accordingly, Morton’s insights concerning holism should change how we can meaningfully practice things like nature preservation.
That’s my recap of Morton’s argumentation related to holism, condensed into a twelve-step program. At this point, I am mostly left with questions, which might be because I’ve never read any of Morton’s books. In any case, I deemed a twelve-step program appropriate for a philosopher who seemed to be in the middle of a drug-induced manic episode and frantically kept insisting he’s a bad philosopher (or not one at all).
Discussions about privacy and security issues are in the news daily, related to some scandal, data leaks, new regulations (GDPR), increased surveillance in response to terrorism, etc. But what do these concepts of privacy and security actually mean, and how do they relate to each other? Everyone probably has some intuitive notion of these concepts, but on closer inspection they are more complex than one would expect. Discussions about privacy and security should begin with: what privacy, what security? These questions, despite their perhaps “dry” conceptual nature, are important for anyone interested in what is at stake in the privacy and security discussions going on right now. In fact they are also important for those who do not share this interest, because the results of these discussions will affect them nevertheless. The goal of this blog post is to provide some pointers, distinctions, and questions; not answers. I have only relatively recently begun to engage with these topics myself, and by writing this post I hope to test and develop my current understanding, which is very much in progress, perhaps even infantile.
A first distinction needs to be made between “security” and “privacy”. Security relates to the regulation of access to some system. In a digital context these are computer systems, from the servers of a secret agency to the personal computer of your grandma, if she has one. This is not the same as privacy, which can mean many things but in general relates to persons or individuals. So a first basic distinction perhaps is that digital security pertains to various aspects of a communication channel itself, whereas privacy relates to the individuals involved in this communicative process, mediated by some technology.
Nevertheless, security is a relevant topic for privacy. To a significant extent, the security of communication between persons is a precondition for guaranteeing privacy. But it is surely not a sufficient condition. For example, with good cryptography one can make sure that third parties do not read the content of your communications. But the very fact of your communication may contain information that intrudes on your privacy, e.g. information about who you know, or about your location (e.g. where you work or live); information that you perhaps did not intend to share and which might be very sensitive. In that sense, cryptography, or computer security in general, solves problems only by moving them to another domain.
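A small sketch can make the point about metadata tangible. It assumes the third-party Python package cryptography (for its Fernet symmetric encryption), and the addresses and message are of course invented: the encrypted body is unreadable without the key, but whoever routes or observes the traffic still learns who talked to whom, and when.

```python
# Sketch: encrypting the content of a message protects confidentiality, but
# the surrounding metadata still leaks who talks to whom, and when.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet
import time

key = Fernet.generate_key()          # shared between sender and recipient
cipher = Fernet(key)

message = {
    # Content: unreadable to anyone without the key.
    "body": cipher.encrypt(b"Let's meet at the station at nine."),
    # Metadata: necessarily visible to whoever delivers or observes the traffic.
    "sender": "alice@example.org",
    "recipient": "bob@example.org",
    "timestamp": time.time(),
}

# An eavesdropper without the key cannot decrypt message["body"]...
# ...but still learns that Alice contacted Bob, and exactly when:
print(message["sender"], "->", message["recipient"], "at", message["timestamp"])
```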
In some discussions security and privacy seem to exclude each other. Acts of terrorism never fail to spark debate on whether to give surveillance agencies more power to snoop on civilians, i.e. reduce their privacy, under the banner of increasing security against people with malicious intent. In that sense, the question becomes: security or privacy? But what we need is security and privacy.
In the following I want to map out some useful concepts related to security and privacy that I have encountered so far while reading on this topic.
Considered naively, computer security can easily be thought of as a monolith stating “here and no further”. But in reality, security is a fluid concept that should be understood relative to an attacker with a given amount of resources. In the limiting case, absolute security requires resistance against an attacker with infinite resources. The notion of such “absolute” security is meaningless, as it arguably requires a complete blockage of access. However, the point of securing real systems is not to block access as such, but to regulate access to whatever assets or capabilities that system has. That is, it needs to allow access, but only to the right people.
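To illustrate “regulating rather than blocking”, here is a minimal sketch of a deny-by-default access check. The roles and permissions are purely illustrative, not taken from any real system: the point is only that security lets the right people in instead of keeping everyone out.

```python
# Minimal sketch: security as regulating access rather than blocking it.
# Roles and permissions are purely illustrative.

PERMISSIONS = {
    "patient": {"read_own_record"},
    "doctor":  {"read_own_record", "read_patient_record", "update_patient_record"},
    "admin":   {"manage_accounts"},
}

def is_allowed(role: str, action: str) -> bool:
    """Allow an action only if the role explicitly grants it (deny by default)."""
    return action in PERMISSIONS.get(role, set())

print(is_allowed("doctor", "update_patient_record"))  # True: the right people get in
print(is_allowed("patient", "read_patient_record"))   # False: access is regulated
print(is_allowed("visitor", "read_own_record"))       # False: unknown roles get nothing
```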
So the concept of security only makes sense against the backdrop of a potential attacker. But on top of that, whether something can be called secure depends on the context of the system, its purpose and the needs of its users. The security goals of various systems might differ, and the way in which we call system A secure can be different from the way we call system B secure. Let’s look at some examples of different security goals in different contexts. For critical systems in hospitals to be secure, their availability needs to be guaranteed at all times. If that same hospital stores medical information about you, its confidentiality is strictly required. Now imagine you need a blood transfusion, but someone changed the information on your blood type in the medical system, i.e. its integrity has been breached, with potentially lethal consequences. In another example, to reliably transfer money using online banking, confidentiality is less of an issue than the authenticity of the sender, bank, and recipient: when you transfer money to the bank you want to be sure you do not in fact transfer it to a criminal “man in the middle”. More easily overlooked is the principle of non-repudiation: after you have transferred your money, you cannot later deny you did so.
You can think of many contexts where one security goal is absolutely required, whereas another may be less relevant. In other words, in different contexts “security” means different things, depending on the relative importance of the aforementioned security goals: confidentiality, integrity, authenticity, non-repudiation, availability. The goal of this section was merely to convey the basic intuition that the concept of security is less univocal than it may seem, and to provide some first distinctions.
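As a small, concrete example of two of these goals, here is a standard-library sketch of an HMAC: a shared-key authentication tag that lets the receiving system detect tampering (integrity) and, because only holders of the shared secret can produce a valid tag, also gives a basic form of authenticity. The key and the record format are invented for illustration.

```python
# Sketch: integrity and (shared-key) authenticity with an HMAC, using only the
# standard library. The key and record contents are illustrative.
import hmac
import hashlib

shared_key = b"a-secret-only-hospital-systems-know"

def tag(message: bytes) -> bytes:
    """Compute an authentication tag over the message with the shared key."""
    return hmac.new(shared_key, message, hashlib.sha256).digest()

record = b"patient=1234;blood_type=O-"
record_tag = tag(record)

# The receiver recomputes the tag and compares in constant time.
print(hmac.compare_digest(tag(record), record_tag))      # True: record intact

tampered = b"patient=1234;blood_type=AB+"                # integrity breach
print(hmac.compare_digest(tag(tampered), record_tag))    # False: tampering detected
```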
But even if the communication is secure, what information do you give to these systems? How is it stored, and how is it used? Do you keep any control over this? Imagine the aforementioned hospital leaking your medical information to your health insurer: you can bet your fees will go up. A hospital doing so would be an extreme case. But now imagine a free fitness app doing the same after storing information about your health and condition (e.g. heart rate during running, or weight, etc.). The people installing that fitness app, after accepting the “terms and conditions” that they quite reasonably did not read, might unknowingly have consented to their data being sold for such purposes. So although that app would be “free” in the sense of gratis, it does come at a cost.
Privacy as a concept seems tightly entwined with the idea of an individual. The above example concerns sensitive information about individuals, and indeed most discussions about privacy nowadays concern the use of personal data by various companies and the control individuals keep over that use. This sense of privacy can be traced back to Westin’s definition of privacy in 1967 as “the claim of individuals […] to determine for themselves when, how, and to what extent information about them is communicated to others.” Privacy in these contemporary discussions thus means something like “control over your data”, an issue that has become particularly acute in the digital era. It is interesting that most people would probably not bother to hide their shopping cart when doing groceries, whereas knowledge of your online shopping behavior more quickly becomes a privacy issue. Privacy can thus mean something different online and offline. Simultaneously, the integration of digital devices into our lives increasingly blurs the line between online and offline, and also between public and private.
Perhaps this difference has something to do with a perceived sense of anonymity you have when sitting at home browsing the internet. In the domestic sphere of the house, you act from within a relatively protected and secluded situation, which suggests privacy in the “old fashioned” sense of “being left alone” (as defined by Warren and Brandeis already in 1890).
This situation shows a clear incongruity between different conceptions of privacy: you are indeed left alone, but at the same time you are not in full control of your own personal data, and moreover this data is used to manipulate you without you realizing it, for example in the results you see for internet searches (cf. the “filter bubble”), or in targeted advertising. You not only lose control over the information you expose to the internet, but also over what information is shown to you. Perhaps the net is neutral, but the information you “find” (or: are presented with) certainly is not. The famous meme “on the internet, no one knows you’re a dog” may have been valid years ago, but has over time become the description of a comfortable illusion.
It depends on your definition of privacy whether it is violated or not in the scenario I just sketched. But most interestingly, what it shows is that the digital era has effectively initiated a transformation in the concept of privacy that is occurring as we speak. A digital-savvy portion of the population is constantly sounding alarms left and right about privacy issues, whereas others not only do not experience a breach of privacy, but also think the discussion is nonsensical because they “have nothing to hide”.
These initial considerations barely scratch the surface. I want to think more about privacy in the coming months in all its nuance and complications. For example, what is the relationship between privacy and intimacy both in the “analogue” and digital domain? What are the links between related concepts such as secrecy or confidentiality, which are all partly overlapping but not the same? How does privacy relate to an individual’s freedom? It is also possible to develop perspectives that rely less on the individual, as I did here. How does privacy take shape in negotiations between individuals in communities, i.e. how is privacy also essentially a social process? And how can such a multi-faceted dynamic concept be adequately represented in law, which seems primarily concerned with regulating data processes?
A philosophical discussion that takes a step back before becoming activist is relevant, because how can we really guarantee privacy in the applications we implement, or protect it by law, without having a clear view of what it is? That is not a plea for passivity, as it is unreasonable to expect of anything worthwhile that it can be captured in a few clear concepts. Neither does it invalidate raising your voice before being completely informed, because a certain degree of “conceptual entropy” (trying out a new term here) is unavoidable in a debate that is very much alive and in progress. Instead, I would say that conceptual reflections can contribute to informed action, which in my opinion always includes expressions of doubt rather than offering false certainties. A bit of reflection is needed to be a successful user of the internet, consciously shaping our pluralistic digital identities in all the systems we interact with, rather than becoming their victim.