Matt Might wrote 6 blog tips for busy academics, and I intend to follow all of them. This post follows two tips specifically.
Tip 2: “Reply to public” as post
Many of the academics that “don’t have time to blog” seem to have plenty of time to write detailed, well-structured replies and flames over email.
Before pressing send, ask yourself, should this answer be, “Reply,” “Reply to all,” or “Reply to public”?
If you put effort into the reply, don’t waste it on a lucky few. Share it.
And also a part of tip 3:
Any question asked more than once is a candidate for a blog post
Today I graded assignments about perceptrons learning to model logical functions, such as A /\ B or A \/ B. As a warm-up question, first-year students were asked how many boolean functions we can define for two and three inputs respectively, and, in the case of two inputs, how many of those boolean functions a perceptron can model. I noticed that quite a few people did not answer these questions correctly, and moreover I received emails asking me to explain the answer because the final exam is coming up in two days. And so I heard Matt Might’s voice calling me out to write this. I hope it is of use to someone out there. I suggest you give it a try yourself before looking at the answer!
How many boolean functions are there for n inputs?

I like to think of a function informally as a mapping from inputs to outputs such that each possible input has exactly one output.
A Boolean is a data type that can take on two values that usually represent a truth value, for example in classical logic or programming.
Classical logic makes the assumption of the excluded middle, namely that any proposition P is either true or not true (false): P \/ ~P.
In computer science and programming, truth is usually denoted with a 1 and non-truth with a 0.
So a boolean function is a mapping that takes a number n of inputs and returns true (1) or false (0).
We could write that as such:
f: {0,1}^n -> {0,1}
We can see that the number of inputs n determines the space of possible inputs.
The question of how many boolean functions there are for n inputs can thus be formulated as follows: in how many ways can we map the set of all possible inputs to the set of possible outcomes?
Another name for such a mapping is a truth table.
For example, this is the truth table of the logical disjunction A \/ B:

A B | A \/ B
1 1 | 1
0 1 | 1
1 0 | 1
0 0 | 0
This truth table corresponds to one boolean function, because it maps each possible input to exactly one output.
Another way of asking how many boolean functions we can make with n inputs is thus: how many of these truth tables are possible?
Notice that the disjunction above is a boolean function with 2 inputs that we here called A and B.
Each input can take two values because it is either true or false, so there are in total 2^n possible options for the inputs.
In other words, for 2 inputs we know that our truth table has 2^2 = 4 rows.
But notice that the ordering of the output column in the truth table matters! For example, if we switch the last two outputs of the disjunction, we end up with a different truth table and thus a different boolean function, which happens to be the material implication:
A B | A -> B
1 1 | 1
0 1 | 1
1 0 | 0
0 0 | 1
So given that each truth table has 2^n rows, we now need to know how many possible sequences of 1s and 0s we can have in the output column.
This is equivalent to flipping a coin 2^n times and writing down all possible outcome sequences of heads and tails.
How many of those sequences are possible?
Well, the outcome is again either 1 or 0, so for each row we have two options. We already established that we have 2^n rows.
So for n inputs, we have 2^n rows and 2 output options per row, giving 2^(2^n) possible truth tables, and hence that many boolean functions.
For 1 input, it’s not much work to draw out all 2^(2^1) = 4 options:
A | o1 o2 o3 o4
0 | 0  0  1  1
1 | 0  1  0  1
Likewise, for two inputs we have 2^(2^2) = 2^4 = 16 possible boolean functions, and for three inputs 2^(2^3) = 2^8 = 256 possible boolean functions.
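To double-check the arithmetic, here is a small Python sketch (my own illustration, not part of the assignment) that literally enumerates every possible output column for n inputs and counts the resulting functions:

```python
from itertools import product

def boolean_functions(n):
    """Yield every n-input boolean function as a mapping from
    input rows to an output bit (one function per output column)."""
    rows = list(product([0, 1], repeat=n))             # the 2**n truth table rows
    for outputs in product([0, 1], repeat=len(rows)):  # all 2**(2**n) output columns
        yield dict(zip(rows, outputs))

for n in (1, 2, 3):
    count = sum(1 for _ in boolean_functions(n))
    print(n, count, 2 ** (2 ** n))  # the enumeration matches the formula
```

For n = 1, 2, 3 this counts 4, 16, and 256 functions respectively, agreeing with 2^(2^n).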
Now, the more interesting follow-up question was: how many of these boolean functions with two inputs can be modeled by a single-layered perceptron?
Perceptrons can model logical functions by classifying everything on one side of a decision boundary as true and everything on the other side as false. Using the perceptron learning rule we can learn this decision boundary in a supervised manner by iterating over examples from the truth table of the function we want to model, but that’s a topic for another day. Such a decision boundary looks like so:
From the 16 possible boolean functions with two inputs, perceptrons can thus model those whose layout allows all positive instances to be separated from the negative instances.
The only two functions for which this is not possible are the XOR and its negation, the XNOR.
Boolean functions where each input is mapped to true, or each to false, can actually be modeled with a decision boundary far off to the side.
So single-layered perceptrons can model 16 - 2 = 14 boolean functions.
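This count can be verified with a brute-force sketch: a two-input perceptron outputs true exactly when w1*a + w2*b + bias > 0, and for the four truth table rows a small integer grid of weights and biases is enough to realize every linearly separable function. Note this grid search is my own illustration of separability, not the perceptron learning rule itself:

```python
from itertools import product

inputs = list(product([0, 1], repeat=2))  # the 4 rows: (0,0), (0,1), (1,0), (1,1)

def separable(outputs):
    """Is there a line w1*a + w2*b + bias > 0 reproducing this output column?"""
    for w1, w2, bias in product(range(-2, 3), repeat=3):  # small grid suffices here
        if all((w1 * a + w2 * b + bias > 0) == bool(o)
               for (a, b), o in zip(inputs, outputs)):
            return True
    return False

# Check all 16 output columns, i.e. all boolean functions of two inputs.
count = sum(separable(f) for f in product([0, 1], repeat=4))
print(count)  # 14: everything except XOR (0,1,1,0) and XNOR (1,0,0,1)
```

Exactly the XOR column (0, 1, 1, 0) and the XNOR column (1, 0, 0, 1) fail the search, leaving the 14 separable functions.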
I have made progress in my understanding of Go templating, and in particular its scope limitations (see for example this). This allowed me to implement some new features that I was struggling with before. In this post I give an overview of the new features, together with their implementation in Hugo. Most of them are small tweaks that extend existing functionality. However, since I joined the IndieWeb, I also added a completely new aspect to this website, namely so-called “microposts”. Twittering is not my style, but I did crave a place to share interesting bookmarks and other blogs in a more dynamic fashion than your classic blogroll (which I also have, by the way). I had to write some code to support microformats2, which is what my microposts use.
Below I summarize the new features and provide related code snippets. Perhaps they are of use to you!
On the homepage I display the most recent blog posts (I call them “engrams”). For each post I wanted to add a preview of its tags. I limited the preview to two tags only, because otherwise the tags overflow on mobile phones. If the post has more than two tags, dots will be displayed.
You can navigate the site by clicking on the tags, try it out! Clicking on a tag will reload the same page, but you will see that the previewed blog posts all correspond to the chosen tag.
The following assumes you are looping over your posts:
<aside>{{ .Date.Format "January 2, 2006"}}
{{ if not (eq .Params.tags nil) }}
{{ range first 2 .Params.tags }}
<a href="{{ "/tags/" | relLangURL }}{{ . | urlize }}/"
style="text-decoration:none">#{{ lower . }}</a>
{{ end }}
{{ if gt (len .Params.tags) 2 }}
...
{{ end }}
{{ end }}
</aside>
Hugo offers a handy summary option that automatically generates an “abstract”. If the summary is too long, you can manually truncate it to a particular number of characters.
<div class="preview">
{{ range $index, $value := first 6 (where .Pages ".Type" "posts") }}
<p>
<a href="{{ .Permalink }}">{{ .Title }}</a>
{{ if .Params.guest }} (by {{ .Params.author }}) {{ end }}
{{ if .Draft }} <span style="color:#FF4136;">(unpublished)</span> {{ end }}
</p>
{{ if (eq $index 0) }}
<blockquote>{{ truncate 350 .Summary }}
<p><a href="{{ .RelPermalink }}">Read more</a></p>
</blockquote>
{{ end }}
{{ end }}
<br>
<p> See <a href="{{ .Site.BaseURL }}/archives"> archives</a> for more ... </p>
</div>
On my homepage I have an overview of the tags of all posts, so that one can pick a tag of interest and browse through corresponding posts.
Previously I looped over all my posts, and then immediately rendered their tags.
The result of this naive approach is that the tag overview will have many duplicate tags.
In a normal programming language this is a trivial issue: you would keep track of a list of tags and make sure to not add duplicate tags (or perhaps work with a set), before rendering anything.
However, Go templating has its own unique way of defining the scope of variables.
For example, when you range over tags, the broadest scope you can access from within that loop ({{ . }}) is that tag.
This means it is not straightforward to work with variables outside of that scope.
That is… until I found out about Hugo’s scratchpad, which allows you to define custom variables on the scope of the whole page.
You can add data of interest under a particular key that you define yourself.
One detail I had to get right in order to make this work is to ensure that tags are added to a list, rather than replacing the previous value. So rather than using the .Scratch.Set method, I used the .Add method. The .Add method assumes we are working with a list though, whereas our tags are strings. So before adding a tag, I convert it to a list with the slice function.
<div class="tags">
<h2 id="tags"> Tag roulette </h2>
<br>
{{$tags := newScratch }}
{{ range .Site.Pages }}
{{ if eq .Type "posts"}}
{{ range .Params.tags }}
{{ $name := lower . }}
{{ $array := $tags.Get "tags" }}
{{ if not (in $array $name)}}
{{ $tags.Add "tags" (slice $name)}}
<a href="{{ "/tags/" | relLangURL }}{{ . | urlize }}/">{{ $name }}</a>
{{ end }}
{{end}}
{{ end }}
{{ end }}
</div>
The only thing that still bothers me is that I did not figure out how to do {{ $array := $tags.Get "tags" }} inline.
The most important element here is to distinguish pages of the type “micro” from regular posts. The layout “content_only” calls a partial that I wrote for displaying html using microformats2 (see next section).
<div>
<h2 > Micros </h2>
{{ range first 3 (where .Site.RegularPages ".Type" "micro") }}
<div class="hover-box">
<p>{{ .Render "content_only" }}</p>
</div>
{{ end }}
<p> See <a href="{{ .Site.BaseURL }}/microblog"> microblog</a> for more ... </p>
<br>
</div>
I wanted to display different types of micros in different manners. For example, I wanted bookmarks to show a book symbol with the URL of the bookmark. For events I want to show a calendar, and for music events (a subcategory) I want to show music notes instead. For replies, I want to provide the URL of the post I am replying to. For likes, I want to show a heart.
This is work in progress, but for now I wrote the following partial:
<body>
{{ if not .Params.event }}
<div class="h-entry">
<div class="u-author h-card" style="display:none">
<a href="{{ .Site.BaseURL }}" class="u-url p-name">Edwin Wenink</a>
</div>
<div class="micro">
<a href="{{ .Permalink }}">
<h4>{{ .Title}}</h4>
<aside>{{ .Date.Format "January 2, 2006"}}</aside></a>
{{ if .Params.reply }}
<p>In reply to → <a class="u-in-reply-to" href="{{ .Params.target}}">{{ .Params.target }}</a></p>
{{ end }}
{{ if .Params.like }}
<p>Edwin ❤ <a class="u-like-of" href="{{ .Params.target }}"> {{ .Params.target }}</a></p>
{{ end }}
{{ if .Params.bookmark }}
<p>📖 <a class="u-url u-uid" href="{{ .Params.target }}">{{ .Params.target }}</a></p>
{{ end }}
{{ else }}
<div class="h-event">
<div class="micro">
<h4 class="p-name">
<a class="u-url" href="{{ .Params.target }}">
{{ if eq .Params.category "music" }}
♬
{{ else }}
📆
{{ end }}
{{ .Title }}</a>
</h4>
<a href="{{ .Permalink }}">
<aside><time class="dt-start">{{ .Date.Format "January 2, 2006 15:04" }}</time></aside>
</a>
{{ end }}
<p class="e-content">
{{ if .Content }}
↬ {{ .Content | markdownify }}
{{ end }}
</p>
</div>
</div>
</body>
Hugo makes this feature extremely easy by providing built-in functions. The with function is particularly handy, because it knows how to deal with nil. This ensures that when we are at the latest post, we will not cause any errors by trying to find the next post, which does not exist.
<div>
{{$posts := ($.Site.GetPage "section" "posts").Pages.ByPublishDate.Reverse}}
<!--Grab the most recent-->
{{ range first 1 $posts }}
<p><b>Latest</b>: <a href="{{ .Permalink }}">{{ .Title }}</a></p>
{{ end }}
{{ with .NextInSection }}
<p><b>Next:</b> <a href="{{ .Permalink }}">{{ .Title }}</a></p>
{{ end }}
{{ with .PrevInSection }}
<p><b>Previous:</b> <a href="{{ .Permalink }}">{{ .Title }}</a></p>
{{ end }}
</div>
What would be a cool improvement for the future is also linking to a relevant post with a similar tag.
The most recent feature (I started on it today) is a preview of the latest comments on my website. The challenge for this feature was that comments are stored in a separate data folder in a nested manner, where each post has its own comment directory. Sorting the comments of a single post by date is trivial, but it is harder to find the latest comments overall, across all posts. Again, I could not solve this problem before I figured out how to use Hugo’s scratchpad. A nice feature I added is that clicking on each preview brings you to the exact location of the comment. I also distinguish between comments on the original post and replies to comments of other people.
<div>
{{ $all_comments := newScratch }}
{{ range $commented_posts := $.Site.Data.comments }}
{{ range . }}
{{ $all_comments.Add "comments" (slice . ) }}
{{ end}}
{{ end }}
<h2> Latest comments </h2>
<br>
<aside>Last 4 of {{ len ($all_comments.Get "comments") }} comments in total:</aside>
<p>
{{ range first 4 (sort ($all_comments.Get "comments") ".date" "desc") }}
{{ if .reply_to}}
{{ .name }} replied to <a href="{{ "posts/" | absLangURL }}{{ ._parent | urlize }}#{{._id}}">{{._parent}}</a> on {{ dateFormat "Monday, Jan 2, 2006" .date }}<br>
{{ else}}
{{ .name }} commented on <a href="{{ "posts/" | absLangURL }}{{ ._parent | urlize }}#{{._id}}">{{._parent}}</a> on {{ dateFormat "Monday, Jan 2, 2006" .date }}<br>
{{ end}}
{{ end }}
</p>
</div>
There are still things to do though. I want to display the name of the post in a prettier manner, rather than showing its URL. In case of replies, it would also be nice to retrieve the name of the person replied to, but this has low priority and is rather complex due to the way my comment system is set up (see this post).
Recent successes in the production of so-called “deep fakes” have sparked both the imagination and the fears of many. The word “deepfake” is a contraction of “deep learning” and “fake”, indicating the use of Artificial Intelligence (AI) to synthesize images and videos that are not real, while being barely or not at all recognizable as fabricated. For example, the recently launched website thispersondoesnotexist.com [12] by Philip Wang showcases AI-generated non-existing faces that are extremely realistic. Notably, the underlying neural network technique based on Generative Adversarial Networks (GANs) is published [8] and publicly available - including code - to those who are interested in implementing similar applications. Currently, an app called FakeApp is being developed with the goal to make the “technology available to people without a technical background or programming experience.” [2]. At the same time, there are serious concerns that as this technology becomes even better, not only images but also videos can be completely faked. In the current state of the art it is already possible to “face swap” existing faces in videos, allowing for example the face of President Trump to be inserted in an arbitrary video. Despite leading to some very entertaining videos, this technology is simultaneously a next step in the production of fake news and has the potential to thoroughly disrupt democratic discourse.
In this essay I first highlight the main threats of deepfakes to democratic discourse. I claim that what these threats have in common is that they result from a deepfake’s potential to mediate what we perceive to be “real”. Secondly, I discuss how awareness of these negative societal consequences elicits different stances towards the underlying AI-technology, in particular concerning the responsibility that developers have in openly publishing (or not) these technologies. Thirdly, I argue that a philosophy of technological mediation is not only an adequate framework for understanding how deepfakes threaten democratic discourse through mediating what is “real”, but also for expressing the full complexity of the question of who is responsible for negative societal consequences.
Societally undesirable applications of deepfake technology have already emerged, and more negative consequences are anticipated to emerge as the technology matures. One major negative application threatening individuals is the creation of fake porn videos of celebrities, which are now actively being banned from reddit and porn sites as they amount to non-consensual porn [2] [1, p.18]. But on a societal level, there are major concerns that deep fakes might significantly disturb the type of political discourse that is essential for democracy to function. Bobby Chesney and Danielle Citron [1] are the first to extensively explore the relationship between deep fakes and democratic discourse. Deepfakes first of all enlarge threats to democracy that are already present in what some consider to be a “post-truth” era, in which fake news can be as effective for achieving political goals as actual news based on facts. This threatens democratic discourse because, as Chesney et al. adequately express: “One of the prerequisites for democratic discourse is a shared universe of facts and truths supported by empirical evidence. In the absence of an agreed upon reality, efforts to solve national and global problems will become enmeshed in needless first-order questions like whether climate change is real. The large scale erosion of public faith in data and statistics has led us to a point where the simple introduction of empirical evidence can alienate those who have come to view statistics as elitist.” [1, p.21]. Deepfakes in this sense contribute to what Chesney et al. call intellectual vandalism in the marketplace of ideas [1, p.21]. That development is undesirable for democracy irrespective of its particular form, but is particularly worrisome for those supporting a pluralist or deliberative democracy, as they see opinion forming in a free and open dialogue or debate as essential to democracy [9, 4-5].
But secondly, deep fakes can undermine fair and democratic intellectual competition in this marketplace of ideas even more effectively than “normal” fake news does. Imagine a deepfake video spreading on the evening before elections, showing one of the candidates committing a serious crime. Due to the power of social media such a video can go “viral” and do serious damage to the electability of a candidate. In modern media, “innocent until proven guilty” often hardly holds, and one can be convicted in the public eye, without fair trial, of a crime that was not committed. A well-timed deepfake can heavily disrupt fair democratic elections in this manner before there is a chance to debunk it. But even if a deepfake is exposed as false, its disruption of fair elections can still be effective by having set a cognitive bias in the minds of the electorate [1, p.19].
Using deepfakes to disrupt democratic discourse will be even more effective if they target situations that are already extremely tense. Imagine for example a deepfake of “an Israeli official doing or saying something so inflammatory as to cause riots in neighboring countries, potentially disrupting diplomatic ties or sparking a wave of violence.” [1, p.20]. Once such a situation has escalated, even though the cause was “fake news”, it is extremely hard to de-escalate. In contexts where such distrust is already present, deep fakes can further erode trust in institutions of open democratic discourse. As Chesney et al. point out, in such tense situations the likelihood that opposing camps will believe negative fake news about the other side is higher, and it only increases as deepfakes exploit this mechanism to further enlarge social divisions [1, p.23]. Not surprisingly, techniques to detect deepfakes are being developed to counteract these risks, for example by the US military research agency DARPA [4]. But due to the flexibility of GAN neural networks, it is likely that whatever technology is developed for detecting fake videos can also be used as a feedback mechanism, ultimately only improving the quality of deepfakes [4]. These examples show that combatting the threats of deepfake technology to democracy cannot be an exclusively technological story. Despite technological counter-measures, deepfakes still threaten democracy by setting cognitive biases and eroding the commonly agreed upon reality that serves as the background for a meaningful democratic dialogue. I argue in this essay that the mentioned threats to democratic discourse are grounded in a deepfake’s potential to mediate what humans perceive to be “real”. Furthermore, through mediating what is “real”, deepfake artefacts can co-determine human praxis. Because of how fundamental this theme is, I think we also need a philosophical story to understand the impact of deepfakes.
In the following sections I first explore two diametrically opposed ways of coping with the societal impact of deepfakes. I then show how a theory of technological mediation is an appropriate philosophical framework for understanding this impact, and moreover that it is able to grasp the complexity of the question how to bear responsibility for it.
When one develops a technology that has a large societal impact, a quite fundamental ethical question is to what extent the developer is responsible for that impact. In an interview, Philip Wang of thispersondoesnotexist justifies promoting the GAN technique used for deepfakes by pointing out that those “who are unaware are most vulnerable to this technology” [7]. This taps into what can be called a deterministic view on technology, which lets societal necessity follow quite automatically from technological potentiality, with the motto: “if it can be done, it will be done”. In the field of AI, deterministic attitudes are well represented as AI-technology is increasingly changing society. To the deterministically minded person, even those who worry about these societal changes and remind us of the dangers are nevertheless equally subject to the great historical impetus of technological progression. And this person then reasons: if the technology will emerge in society at some point anyway, then the best thing we can do now is raise awareness. In this way we, as a society, can adapt to the technology - rather than adapting the technology to human needs.
Other developers of AI-technology share the concerns for its potential negative societal impact, but conceive of their own responsibility differently. For example, the OpenAI research organization, which is dedicated to making sure AI benefits humanity, announced last month that they developed an AI that can write paragraphs of text that “feel close to human quality and show coherence over a page or more of text” [6]. However, contrary to the publications about video deepfakes, the OpenAI organization decided not to release the used datasets, nor the trained model or the used code, due “to concerns about large language models being used to generate deceptive, biased, or abusive language at scale” amongst other “malicious applications of the technology” [6]. They did however release a smaller trained model with less potential for abuse in order to still display the technical innovations that “are core to fundamental artificial intelligence research” [6]. As scientists, they do not want to counteract progression of the field. This experiment in responsible AI disclosure amounts to a more instrumentalist view on technology: its development is controlled by humans, instead of being an autonomous deterministic force to which humans have to adapt.
The primary hope of the decision to withhold the AI is that this will give the AI community as well as governments more time to come up with ways to prevent or penalize malicious use of AI technologies, quite similar to the practice of responsible disclosure in cryptography, where organizations are given time to repair security weaknesses before they are publicized. Interestingly, OpenAI’s explicit concern for the societal impact of their technology is framed in the context of political actors waging “disinformation campaigns” by generating fake content, requiring that “the public at large will need to become more skeptical of text they find online, just as the “deep fakes” phenomenon calls for more skepticism about images” [6]. In their policy OpenAI thus explicitly responds to the media attention surrounding deepfake neural networks that become better at deceiving people and are increasingly publicly available. Although not free of some hint of determinism, the OpenAI initiative asserts a responsibility for actively controlling technological development in AI, to make sure that it brings forth useful instruments that are to the benefit, and not the detriment, of humanity.
The contrast in the positions between a) the open publishing of deep fake technology including trained models and code, and b) the controlled disclosure of text-generating networks, again shows that the development of these technologies does not only raise technical issues, but also societal ones. In both cases, the researchers are aware of the societal dangers of their technology, but take responsibility for it in different ways. In a deterministic vein, there is no reason to control disclosure of technology: someone else will publish it anyway, and it is better to inform people as soon as possible. From a more instrumentalist point of view, the act of disclosure is not as neutral: since humans have at least some control over technology, they also share responsibility for possible negative consequences within reasonable limits. After all, the technology itself is just a neutral instrument. Whether it is put to good use depends on humans.
Both views have in common that they conceptualize the human-technological relationship in terms of a subject-object divide, in which subject and object are external to each other, irrespective of whether the subject is human or some technology. But I think that these terms are no longer sufficient for understanding the complexity of deepfakes that heavily blur the demarcation between what is “real” and what is not, and consequently also not sufficient for understanding how this is the foundation of a threat to democracy. Accordingly, if we are to conceptualize the responsibilities of developers of such technologies, we need to take into account how these technologies mediate reality and human praxis.
In this section I argue that the philosophy of technological mediation as put forward by Verbeek [11][10] is appropriate for conceptualizing the threat of deepfakes to democracy in terms of their mediation of human praxis. Technological mediation “concerns the role of technology in human action (conceived as the ways in which human beings are present in their world) and human experience (conceived as the ways in which their world is present to them)” [10, p.363]. That technological artefacts mediate means that they “are not neutral intermediaries but actively coshape people’s being in the world”, and that they do so in two directions: they mediate how the world appears to humans (perception) and how humans give shape to their own reality by acting in the world through the use of technological artefacts (praxis) [10, p.364]. The mediation of deepfakes can be shown in both directions, and I will indicate how they are interrelated in the example of democracy.
First of all, what the name “deepfake” expresses is that a given image or video is perceived to be “real”, while what is represented does not exist in the represented capacity: i.e. it is “fake”. I chose this specific formulation because a deep fake of Trump does not necessarily mean that Trump does not exist, but merely that he did not say or do what is represented in the deep fake video or image.
Now imagine a video of a man committing a serious crime, with the face of Trump swapped in. In the case of a successful deepfake, we do not see a man with Trump’s face superimposed. Instead we perceive this man as Trump. The “as” in that sentence indicates an important insight from hermeneutic philosophy: the beings in our world always already appear to us as meaningful in a quite practical sense. The stereotypical example, based on Heidegger’s early philosophy, is that we see a hammer not as a composite object with one wooden handle and one metal head, but intuitively and immediately take it as something we can hit nails with [10, cf. p.364]. Philosophical hermeneutics regards this as an act of interpretation that is not some scholarly exercise, but one that quite fundamentally determines how beings become present to us in the context of a world [cf. 5]. The particularity of deepfakes is that their technology mediates this process by making us pre-reflectively take something “fake” as something “real”. What is important is that, against instrumentalism, a deepfake’s deceiving character is not simply due to the bad intention of its designer. Instead, mixing up fiction and reality is a core feature of the GAN technology that actively influences the relationship between a human and its world. In the theory of technological mediation, the technology itself is not a completely neutral tool. As it helps to shape what counts as “real”, it quite fundamentally sets a horizon for human moral and/or political action. A deepfake can thus be said to have its own “technological intentionality” [10, p.456] that affords (not causes!) the interpretation of “fake” as “real”.
But against determinism, this technological intentionality does not imply that the technological artefact autonomously decides our social realities, as if the technological artefact takes care of its own interpretation. As Verbeek makes clear, following Don Ihde, this technological intentionality only takes form in the interaction with humans [10, p.456]. Stating that technological intentionality does not coincide with human intentionality is analogous to the hermeneutic insight that the meaning of a text is not equal to the intention of its author. Despite this independence from the author’s intention however, it is equally naive in hermeneutics to say that the meaning of a text resides solely in the text itself as some pure ideal content, which would then be the same and equally complete even if nobody ever read it. Instead, and herein lies the analogy, a text’s meaning unfolds in the interaction with a reader. With respect to deepfake technology, this also means that its effects cannot be fully predicted independently of any real-world interaction of humans with deepfake artefacts. I argue that in this manner a deepfake mediates how we perceive beings in the world by affording an interpretation of the fake as the real. If effective, a deepfake is not seen as just a video, but as representing an event in the world as we perceive it around us. But this interpretative step is anything but neutral. If we revisit the example of a deepfake of Trump performing a criminal act, we can see that this does not only imply we perceive the criminal as Trump, but that it also implies we now might perceive Trump as a criminal. We can then see how the hermeneutical effect of deepfakes underlies its effects in praxis:
If the fake is interpreted as real, then the real is reinterpreted in terms of the fake.
So if a candidate for a democratic election is shown in a deepfake to perform e.g. criminal acts (something fake is interpreted as real), then this candidate is potentially reinterpreted and reassessed by citizens as if he were a criminal (the real interpreted in terms of the fake). The aforementioned cognitive bias could also be interpreted along these lines: it is a re-valuation of something in the world because the deepfake artefact meddled with the interpretative process by which we take something as something.
Deepfakes thus contribute to the further blurring of the demarcation between real and fake news. As a result, even real and genuine discourse can become suspect, as it is now fair game to the question “fake or real?” But can we then still establish what we said was necessary for democracy? Can we in the future still have the certainty of an agreed upon reality, on the basis of which we can have a meaningful dialogue in the marketplace of ideas within a democracy?
I have argued that the threat of deepfakes to democracy can be framed in terms of technological mediation, as we have regarded serious threats to democracy as a result of interpreting something fake as real. That means that deepfake technological artefacts can mediate both the (hermeneutic) experience of the surrounding world and the actions humans take in it. But the perspective of technological mediation only makes the question of who is responsible for (unintended) negative consequences more complex. On the one hand, developers of these technologies cannot be held fully responsible for negative consequences of technology, because they cannot fully predict how the interaction with users will play out. But neither can developers realistically waive all responsibility by claiming that the development of AI is a historical movement shaping our social realities, independent of human interaction. Instead, as AI increasingly changes our social and political reality in unexpected ways, the more accurate position is to admit that responsibility is somehow distributed between developers, the technology itself, and its users. And especially if AI systems take on more autonomy in the future, the question of sharing responsibility with moral machines becomes increasingly urgent and intriguing.
Although such an open conclusion is not satisfying, it is the more honest position. When it comes to moral responsibility (rather than a more limited legal account), issues around deepfakes join the ranks of complicated ongoing debates about ethical responsibility in accidents with self-driving cars, or killer drones. The unresolved paradox is that unforeseen negative consequences may occur due to the learning capacity of AI, whereas at the same time this flexibility is intended and precisely the main innovation of state-of-the-art AI. And yet, we can reasonably ask developers to foresee certain undesirable applications of their technologies. From the viewpoint of technological mediation, both the stance of Philip Wang and that of the OpenAI foundation have their own place. The decision of OpenAI to withhold their AI technology results from a reasonable anticipation of negative consequences, awaiting further democratic discussion before full disclosure. At the same time, this attitude should not tip the balance towards censorship. Withholding a technology from society in order to protect democracy seems paradoxically undemocratic and patronizing if not based on a sustained debate. Informing the general population about the threats of a technology is also desirable, but should not proceed from a deterministic motivation. It is good, not because we have to learn to adapt to an uncompromising technology, but because it can spark a democratic debate with all involved stakeholders about how to design a better interaction with the technology [cf. 10].
[1] Robert Chesney and Danielle Keats Citron. Deep fakes: A looming challenge for privacy, democracy, and national security. SSRN Electronic Journal, 2018. doi: 10.2139/ssrn.3213954.
[2] Samantha Cole. We are truly fucked: Everyone is making AI-generated fake porn now, 2018. URL https://motherboard.vice.com/en_us/article/bjye8a/reddit-fake-porn-app-daisy-ridley. (accessed: 2019-03-21).
[3] Maarten Franssen, Gert-Jan Lokhorst, and Ibo van de Poel. Philosophy of technology. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, fall 2018 edition, 2018. URL https://plato.stanford.edu/archives/fall2018/entries/technology/. (accessed: 2019-03-27).
[4] Will Knight. The defense department has produced the first tools for catching deepfakes, 2019. URL https://www.technologyreview.com/s/611726/the-defense-department-has-produced-the-first-tools-for-catching-deepfakes/. (accessed: 2019-03-23).
[5] C. Mantzavinos. Hermeneutics. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, winter 2016 edition, 2016. URL https://plato.stanford.edu/archives/win2016/entries/hermeneutics/. (accessed: 2019-03-27).
[6] OpenAI. Better language models and their implications, 2019. URL https://openai.com/blog/better-language-models.
[7] Danny Paez. 'This Person Does Not Exist' creator reveals his site's creepy origin story, 2019. URL https://www.inverse.com/article/53414-this-person-does-not-exist-creator-interview. (accessed: 2019-03-21).
[8] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks, 2019. URL https://arxiv.org/abs/1812.04948. (accessed: 2019-03-21).
[9] Jan A. G. M. Van Dijk. Digital democracy: Vision and reality. Innovation and the Public Sector, 19:49-62, 2012. doi: 10.3233/978-1-61499-137-3-49.
[10] Peter-Paul Verbeek. Materializing morality: Design ethics and technological mediation. Science, Technology, & Human Values, 31(3):361-380, 2006. doi: 10.1177/0162243905285847. URL https://doi.org/10.1177/0162243905285847.
[11] Peter-Paul Verbeek. Mediation theory. 2019. URL https://ppverbeek.wordpress.com/mediation-theory/. (accessed: 2019-03-23).
[12] Philip Wang. ThisPersonDoesNotExist, 2019. URL https://thispersondoesnotexist.com/. (accessed: 2019-03-21).
Gert-Jan van der Heiden, THE TRUTH (AND UNTRUTH) OF LANGUAGE: Heidegger, Ricoeur, and Derrida on Disclosure and Displacement. 300 pp. Paperback. Duquesne University Press. ISBN 978-0-8207-0434-0.
In philosophy, equivocal language can count on resistance and criticism. It is often considered unnecessary and contrary to philosophy's main imperative to be clair et distinct, if I may borrow Descartes' famous phrase. The unnecessary use of unclear and equivocal language is a criticism often levelled at philosophers who are known to be difficult to read and understand, such as Martin Heidegger and Jacques Derrida, who happen to be two protagonists of Gert-Jan van der Heiden's reworked edition of his doctoral thesis. Van der Heiden investigates how language can disclose beings to our understanding, but is also characterized by several displacements that problematize the idea that language can present reality unequivocally. Fortunately for us, van der Heiden has succeeded in writing a book that excels in clarity, which is a major accomplishment considering the difficulty of his subject and his choice of authors.
Van der Heiden sets out to investigate the relationship between truth and language in contemporary hermeneutic philosophy. This branch of philosophy is called 'hermeneutic' because its major intuition is that we have no access to the world and the beings existing in it outside of linguistic structures ('hermeneutics' is traditionally the art of text interpretation). When reality is structured like a 'text', so to speak, hermeneutics deals with our access to reality and becomes of philosophical interest. Our language use is then not simply a representation of a reality otherwise unaffected by understanding and interpretation. With this conception of language another conception of truth arises, one that does not presuppose the presence of things but rather concerns their coming into presence, which is then seen as a primordial function of language: it lets things be. This is a different conception of language than one that understands sentences only as assertions about a pre-existing world. In that case truth is understood as the correspondence between language and reality, and untruth as the lack thereof. When language is understood in its power to let things be in the first place, this disclosing function denotes, according to Heidegger, an experience of truth as aletheia (disclosedness) that the ancient Greeks already had, but that was pushed to the background by a conception of truth as correspondence (or varieties thereof) in the history of philosophy that followed. This conception of truth brings with it another form of untruth. Untruth is in this case not a lack of correspondence, but rather the simple concealment that is necessary in order for things to be unconcealed. This simple concealment makes truth as disclosure possible, and is called untruth precisely because it is the space out of which truth lights up, which of course cannot be measured according to truth itself.
Here we have an indication of the title: both the truth and the untruth of language are at stake.
When we take the sketched 'linguistic turn' for granted, we can understand the two major tracks Van der Heiden identifies in this hermeneutic philosophy. On the one hand, language becomes the medium through which things are disclosed and show themselves. On the other hand, language causes all sorts of displacement. Metaphorical language, for example, transfers a word from its proper domain into another. When Van der Heiden discusses metaphoricity, it is not so much for the sake of beautiful poems or engagement with art, but rather to address the metaphorical power of language to displace itself and the things it discloses. It is not accidental that the notion of writing takes a central position in Van der Heiden's treatment of displacement, because it is writing par excellence that embodies the displacing characteristics of language that pose a danger to the desired clarity and univocality of philosophical concepts. Written language distances us from the original place and time of utterance, allowing distortion of the intended meaning and thus facilitating misunderstanding. For Plato this was reason enough to say that serious philosophy should not be written down. Writing it down would distort the full understanding of truth, and would create the risk that philosophical truth be ridiculed by the masses, who would also gain (superficial) access to it. Language as it is spoken apparently does not have these dangers. When I teach someone my philosophical insights, I am present to correct them, and the truth of what I say takes place in the here and now. When truth is thought of as something absolute, this apparently immaterial taking place of language in my saying indeed seems to be the most undiluted presentation.
Seen from that perspective, the displacing qualities of language are a danger to its ability to disclose something truthfully. It is fitting that Van der Heiden's book begins with a treatment of Heidegger, because it is he who stresses this ability of language to disclose things. But although Heidegger in general is trying to overcome, insofar as that is possible, the tradition that thought of writing as a danger to the immediate pureness of truth, Van der Heiden argues that Heidegger still privileges speech above writing. Thinking truth as disclosure presupposes also the thought of 'something' concealed. Without this concealment there would be no occurrence of truth in the sense of unconcealment. But one of the dangers that has always been attributed to writing is that, although it is not fit for truth, it appears to be so. Phrased in these terms, the danger of writing is that it acts as a disguise: it shows something, but only in a covered-up way, while pretending to show it correctly. This concealment (pseudos) is not the concealment (lethe) that is necessary for truth as unconcealment (a-letheia), but rather the concealment that covers up the more primordial disclosure of things and the simple concealment involved in it, which Heidegger understands following the model of saying. So it seems that in the end the displacements involved in language are secondary to a most primordial disclosure of the being of things in language. In order to let itself be grasped by this disclosure, thinking has to find its proximity to poetry, struggling with language to seek a genuine way of saying that does not displace the primordial disclosure of being.
This turn to poetry in Heidegger's late works is quite famous (some would say notorious), but strangely enough the metaphoricity that we usually associate with poetry is not embraced at all by Heidegger. Van der Heiden succeeds in providing a very clear overview of Heidegger's thought on metaphor, without ever losing the healthy distance required to differentiate himself, a philosopher writing about Heidegger, from a fanatic disciple (a so-called 'Heideggerian'). In order to understand Derrida's comments on metaphor, for example, it is very important to understand why Heidegger renounces metaphor. According to Heidegger, metaphors imply a distinction of domains that is metaphysical, that is to say, they imply a transference from the domain of what we are familiar with ('the sensible') to an unfamiliar domain of abstraction ('the intelligible'). The distinction itself is metaphysical, and posits the intelligible as a separate domain. The ultimate conclusion of this separation of domains is that in the end we cannot access the intelligible domain in itself, but only from the perspective of what we already know. For Heidegger this is typical of the very metaphysics he tries to overcome: it tries to answer the question of what the being of a being is by looking at a being that is familiar to us. But then 'being' in general is only knowable for us insofar as we have metaphorical access to it, because it resembles something we already know. We then implicitly act as if the being of a being is itself a being. In Plato's famous allegory of the cave, for example, the notion of the Idea of 'the Good' is metaphorically accessed through the image of the sun.
Van der Heiden provides a very clear overview of Heidegger, and highlights all the relevant points for setting up a discussion with Ricoeur and Derrida, but not without questioning, for example, Heidegger's thought that metaphors only exist within metaphysics (this only seems obvious in the case of philosophical metaphors). Van der Heiden never fails to remain clear-headed and always provides a fresh and clear overview of the discussed authors. To be honest, this is a quality that cannot be overestimated when you think of the literary hocus-pocus and bewildering erudition going on, especially while reading Heidegger and Derrida, but often also in the secondary literature that deals with them.
Heidegger is in a way the hinge around which the theme of this book unfolds, for his thought forms the background against which both Ricoeur and Derrida develop their own thought. But both Ricoeur and Derrida understand the displacement of language as a productive element, rather than as something that risks disguising the original disclosure of being through language. It is fitting that Van der Heiden spends two chapters on metaphor and on mimesis, because in these themes the lines of disclosure and displacement intersect. They are connected both to the creative and productive aspects of language and to its displacing aspects. In the case of metaphors, a word is transferred and thus displaced to another domain, but by doing so it provides a new understanding. Mimesis presents something anew, but is at the same time a representation and thus a displacement. So in these cases displacement is not seen as a disguise of disclosure, but rather as itself constitutive of a new or repeated disclosure.
The choice to focus on Derrida and Ricoeur in discussing disclosure and displacement is a good one, because they both take on the heritage of Heidegger in characteristic ways. Generally speaking, we can say that Ricoeur follows up on Heidegger in line with the 'hermeneutic' tradition, while Derrida appropriates the 'hermeneutic' way of thinking language in order to deconstruct it. Van der Heiden shows convincingly that for Ricoeur the disclosure resulting from the displacement involved in a metaphor is taken up in a process of interpretation that aims at deciphering a hidden ideal meaning of a text. So here disclosure does not so much give us meaning in the first place, but is guided by a meaning of the text that precedes it. Metaphors help us grasp this ideal meaning.
Derrida lays more emphasis on other aspects of Heidegger's heritage, and takes displacement to be more fundamental than Ricoeur does. Derrida follows Heidegger's thought that disclosure and truth are only possible on the basis of a preceding untruth. But Derrida radicalizes this thought by arguing that every disclosure can only be a disclosure on the basis of a previous displacement, because in order for something to be given in language, this language is always a repetition, which involves a transmission and translation from context to context. (Those who are interested should investigate Derrida's notion of 'iterability'.) As I said earlier, the relation between the clarity of the philosophical concept and the displacing powers of metaphor is full of tensions. Derrida drives this tension to its extreme by arguing that the metaphor does not simply exist within metaphysics, but rather points to the original displacements that make philosophical language possible in the first place.
I had to indulge in giving these abstractions because, frankly, this book is very abstract. The matters discussed in The Truth (and Untruth) of Language are mainly of theoretical philosophical concern, so surely this book is not for everyone. It is written in an academic style and for academic purposes. Reading this book will not satisfy a reader looking for a revolutionary reading, a bag of literary tricks, or fun storytelling; for the first two, one could always read works of Derrida himself. But then again, I would say this is in no way a shortcoming on Van der Heiden's part. Academic language can be revolutionary in this case, because it brings together authors that can themselves be quite enigmatic, to say the least, with a comprehensibility that is not often achieved. This philosophy of truth is hermeneutic, but certainly not hermetic.
‘I don’t know what the question is any more. Between Lucy’s generation and mine a curtain seems to have fallen.’ (Coetzee 2000, 210).
In J.M. Coetzee's novel Disgrace, the lives of the academic David Lurie and his daughter Lucy are thrown into disarray by an attack on Lucy's farm in the South African countryside, where David is staying temporarily after his dismissal over an inappropriate relationship with a student. In the attack David is wounded at his ear, and Lucy is raped by the three black assailants. Afterwards Lucy acts as if nothing is wrong and tries to pick up her rural life again. She does not speak about it, does not want to speak about it. Only at the end of the book does she cautiously express herself, yet even then words fall short:
‘I can’t talk anymore, David, I just can’t,’ she says, speaking softly, rapidly, as though afraid the words will dry up. ‘I know I’m not being clear. I wish I could explain. But I can’t. Because of who you are and who I am, I can’t. I’m sorry.’ (Coetzee 2000, 155).
The rape, an event that only Lucy underwent at a specific moment, marks her for life. Lucy can at most put the event into words with a violent metaphor: 'Pushing the knife in; exiting afterwards, leaving the body behind covered in blood' (Coetzee 2000, 158). At the same time, that literal incision into her body is a line of demarcation between David and Lucy, one that estranges them from each other. Up to the rape David and Lucy are on good terms, but afterwards the rape keeps forming the point at which the conversation between them falters. Despite his good intentions, the physical trauma Lucy has suffered seems to lie beyond any possibility of understanding for David. As Lucy says:
'Stop it David! I don't need to defend myself before you. You don't know what happened.' (Coetzee 2000, 134).
Despite the threat of a new attack, Lucy is determined to keep living in the countryside, while for David it is beyond question that she had best leave for a safer place. The utterly singular event of the rape disrupts the relationship between father and daughter. It is not a lack of rationality or simple unwillingness that prevents David and Lucy from coming closer to each other, but an otherness introduced by the rape: 'because of who you are and who I am'.
Confronted with the limits of his understanding, David tries to grasp the meaning of what has happened, and offers Lucy the following interpretation:
'It was history speaking through them,' he offers at last. 'A history of wrong. Think of it that way, if it helps. It may have seemed personal, but it wasn't. It came down from the ancestors.' (Coetzee 2000, 156).
The utterly dramatic moment of the rape is thus charged with a meaning that is not only personal but also historical. In an extremely violent way, Lucy is inserted into a historical conversation that was already under way before she took part in it: a South African conversation between white and black, between a history of domination and slavery, of discrimination and apartheid. This is no innocent conversation, but one about the future of South Africa, circling around the question: how are we to live together, with the memory of a history of apartheid in the back of our minds? Beneath it lies another question: how can we, from within the prejudices our different traditions bring with them, open ourselves to the other, especially when doing so also involves dangers? This is one of the most fundamental problems of hermeneutic philosophy, and it only gains in societal relevance with the rise of identity politics.
After Lucy's rape, the discussion begins about what she should do now. David thinks Lucy should flee to the safe (and predominantly white) Netherlands, where racial tensions are less acute than in South Africa. Lucy, however, decides to stay, turns out to be pregnant by her rapists, and seeks protection from Petrus, a black man who at first had the status of a helper and who towards the end of the book increasingly becomes an independent landowner.
We have already seen that the rape demarcates, sets David and Lucy apart. We now see a possible ground for that demarcation: Lucy has entered into a conversation about her future that David can no longer follow. The rape forced her into that conversation; she had to come to terms with it. He cannot possibly understand Lucy's choice to keep living in the same place. David does not enter the conversation about the future the way Lucy does. For: 'he is too old to heed, too old to change.' (Coetzee 2000, 209). David is no longer willing to transcend his prejudices and open himself to the other. The rape has placed him in the position of an outsider, a fundamental theme that recurs in Coetzee's novels and an important reason why he won the Nobel Prize in Literature.
Lucy, by contrast, accepts the rape in a bitter way. Through the rape, seen in David's interpretation as a collision of different histories, she has gained access to the heart of a historical conversation. In it she has seen what it means for her to live in South Africa. At one point Lucy even says:
‘But isn’t there another way of looking at it, David? What if… what if that is the price one has to pay for staying on? Perhaps that is how they look at it: perhaps that is how I should look at it too. They see me as owing something. They see themselves as debt collectors, tax collectors. Why should I be allowed to live here without paying? Perhaps that is what they tell themselves.’ (Coetzee 2000, 158).
In a certain way she resigns herself, however cruel it is, to the rape. She resigns herself to needing the protection of a black man (Petrus) if she, as a single white woman, is to hold her ground. She pays no special attention to the young boy who was present at the rape, when it turns out that this boy is family of Petrus. She files no official charges. She is pregnant by her rapists, but has no plans for an abortion. David, by contrast, disagrees with every one of these steps, and tensions arise every time he is around Lucy. Lucy here symbolizes a future of reconciliation at a high price, and David a false future that wants to linger in the past.
Both Lucy and David are marked by the rape and the attack. 'They have marked me' (Coetzee 2000, 158), says Lucy. Both bear a mark, are irreparably marked with disgrace. Towards the rape there are two attitudes: that of forgetting with a view to the future, and that of remembering. Lucy wants to forget the mark, the incision made into her body, not dwell on it, and move on. David wants to remember. When, at a party of Petrus, he recognizes a black boy from the attack and the rape, Lucy simply wants to leave, but he seeks out the confrontation. The passage of that confrontation ends with: 'He lifts a hand to his white skullcap. For the first time he is glad to have it, to wear it as his own.' (Coetzee 2000, 135). His burned ear, his mark, his physical reminder of the rape, of the attack and of his inability to do anything about it, is a reminder of the disgrace into which he has fallen through his dishonourable dismissal and the situation with his daughter. David does not want to forget that memory, but wants justice for deeds from the past. But that justice has no place in the South African practice in which Lucy finds herself, and even disrupts that practice. Lucy holds this against him: 'Everything had settled down, everything was peaceful again, until you came back.' (Coetzee 2000, 208).
The question of the mark and how to deal with it is raised in a remarkable way in Coetzee's Disgrace, but not answered. The book opens a labyrinth of questions. Must the past be remembered at all costs? Can we face a fruitful future that way? Or must we forget for the sake of peaceful coexistence? But can we even forget our past, when it has become ingrained in our language, culture and prejudices?
The future still lies open; the conversation goes on. On the one hand there is Lucy's future child, itself a mark of the rape. In the literal mixing of black and white, the child offers a perspective on the future. However, the black boy who was present at Lucy's rape shouts, after a run-in with David: 'We will kill you all!' (Coetzee 2000, 207). In this opposition, but also in the one between Lucy and David, between different generations, a South African tension between past and future reveals itself, a tension I have here let circle around Coetzee's description of a rape.
Finally: the rape itself is not hermeneutic; in its abysmal violence speaks at most the inability to speak. The rape nevertheless receives a hermeneutic significance in Disgrace, because it opens, in a deeply problematic way, the space for a conversation. That conversation has the character of a laborious therapy, of a process of working through that will continue for a long time to come. As David says to Petrus: 'It is not finished. On the contrary, it is just beginning. It will go on long after I am dead and you are dead.' (Coetzee 2000, 202). The conversation goes on, even if without us.
Coetzee, J.M. 2000. Disgrace. London: Vintage.