Joran started a programming traineeship after graduating from theater school. I asked him to write something about how he experiences this transition - Edwin
I’ve been dabbling in programming for a while now, seeing as I recently started working in it full-time. I reckoned I should be able to write an article for this website because of that, so I tried to come up with something clever to discuss, until I came to my senses and realised I’m a total newb. Whatever computer-related revelation I’ve had over the past few months would be somewhat out of place on Edwin’s site full of computer cleverness.
For example, you might know the power symbol as the button that turns things on. Fascinating stuff. But then I had a revelation when I learned that that symbol is a combination of a 1 and a 0, which I had no clue about! This experience of being clueless about and amazed by the smallest things is, however, actually quite representative of my experience as a new programmer. It’s a pretty crazy experience being thrown head-first into the world of computers with no relevant knowledge or previous experience whatsoever, so surely there’s something interesting to be found there. Let’s see…
The main thing I noticed as a novice programmer is just how much terminology you have to get familiar with simply to be able to roughly follow what is going on. Of course, whenever you enter a new field of work, having to learn some new jargon is to be expected, but even taking this into account, I think the world of IT is pretty insane.
The first wave of new words consists of a plethora of technical terms, most of which I initially thought I understood. It turned out I really didn’t… Words like class, interface, router, server, browser, directory, method, parameter, garbage collection, integer, bootstrap, long, object, image, short and double… Then one needs to know a few abbreviations like CPU, JVM, OO, UI, HTML, CSS, XML, YAML, API, WAR, JAR, PHP, SQL, MVC, JSON, DOD, DOR, ORM, POD, MVP, GNU, SAP, PO, HANA, ABAP, DSP, MDBC, JDBC, TDD, SOAP, REST, WADL and HTTP. And to top it all off there’s a never-ending cascade of names: Jenkins, GitHub, Java, JavaScript, Python, Oracle, RedHat, Openshift, Spring, Slack, Sun, Eclipse, IntelliJ, Ansible, Angular, ThymeLeaf, Gradle, Maven, KeyCloak, Citrix, OnePassWord, iTerm, RaspBerryPi, Arduino, Kobolt, Unix, Hibernate, JUnit, Mockito, JHipster, Javalin, Jackson, Lombok and so on and so forth… and now imagine needing all of this to convey some crucial bit of information to you!
All this basically amounts to a sort of dialect that is incomprehensible to anyone who isn’t in the know. I realise that to an experienced programmer this probably reads like someone complaining about such unfathomable words as apple, window and tree (and yes, those were the first three things I saw when I looked around), but I have to spend every day laboriously googling all the enigmatic words I hear just to try and get up to speed. Oh yeah, they also mention the Architect a lot, which makes me feel like I’m in the Matrix. It’s great.
Of course, getting to know so many new concepts is not just tiring; it can also be fascinating. The most interesting technical term I’ve heard over the last few months is slave. Every once in a while someone unthinkingly uses it and is then corrected, because you’re supposed to call it an agent or a helper now. At first this baffled me. The master/slave terminology is sometimes used when talking about a system in which one device unilaterally controls other devices. So whenever someone mentions it, they’re obviously referring to such a mindless device rather than to an actual slave, so why the sensitivity? The other day, though, I stumbled upon a clip from Last Week Tonight that shed some light on the matter: in it, John Oliver mocks the voice-over of a very old video about robots for repeatedly calling them mechanical slaves, by saying: “Slaves, slaves, slaves! Oh how I have missed them! If you close your eyes you can forget that they’re mechanical…”
I suddenly understood why it might be a good idea to ditch the term. I do think it’s an intriguing phenomenon though… Is it just that we want to distance ourselves from the term in and of itself or might we also be crossing a different line, albeit inadvertently? That is, are we entering an era in which we feel increasingly uneasy about calling machines slaves because they seem increasingly human? Most machines we use today can still fairly be treated as nothing but objects, but could you for example say the same about Alexa or Siri? As the machines we use are getting more intelligent and more like living beings everyone in society is at some point going to have to change the way they regard them and I like to believe that the mechanical slave discussion is an early sign of the beginning of that process. Of course, some people have already been aware of such developments for a long time and they have been discussing them at length, for example on webpages not dissimilar to this one. I’m curious to see where it will take us. For now I’ll just continue learning the words so I can discover new layers of meaning and gather more insights into the magical world of programming.
Joran.
P.S. This is a podcast I found on the origin of the term robot, in case anyone’s interested.
For a working group I teach at Radboud University, students were asked to read “The Coders Programming Themselves Out of a Job”. This article discusses the ethical considerations around people automating their own jobs, either partly or completely. What are these considerations, and how do they relate to common work ethic?
On the one hand, these people fulfill their job description to perfection through the scripts they wrote. One could applaud this clever increase in efficiency and the accompanying consistent performance. Moreover, the freed-up time can of course be well spent, whether on family life or on learning new professional skills (which might also be valuable for the employer in the end!). On the other hand, as these people literally program themselves out of a job, they end up freely spending their time on non-work-related activities while on the payroll, sometimes for years on end. The question then is whether they are cheating and deceiving their employer. And should they notify their employer that they are no longer spending time on their job? Such disclosure is not without risk. Many contracts treat everything developed on company time as the company’s intellectual property, so after disclosing that you have automated your own job, the company might not only claim the script as theirs, but might also dissolve your job, and potentially those of your peers as well.
We could say that these clever programmers participate in a form of grassroots automation that emerges bottom-up, rather than being issued top-down by some executive in a reorganization. This creates a slightly different set of issues than the more straightforward story of “my job got replaced by a machine”. The difference is that in the latter case the process of automation is publicly evangelized, whereas what makes the case of self-automation poignant is the ethical question of disclosure: should I tell others, or my employer? Should I share my scripts? And why, or why not? These considerations concerning work automation only become more relevant as AI technologies become more widely available.
Another relevant aspect is that in the case of automation across the board, the beneficiary is usually the employer, whereas in this mode of grassroots self-automation, it is the employee who reaps the benefits. But if these clever employees get the job done more efficiently, why then do people often keep quiet about their self-automation, and feel that what they are doing is somehow ethically wrong, or ambiguous at the least? The article states:
Even if a program impeccably performs their job, many feel that automation for one’s own benefit is wrong. That human labor is inherently virtuous — and that employees should always maximize productivity for their employers — is more deeply coded into American work culture than any automation script could be.
This resonates deeply with me, as I too am one of those people who attribute a lot of value to work. And many people in my environment are plagued by a constant sense of guilt: have we worked enough? Shouldn’t we work more?
Through the above-mentioned article, I came across the essay “In Praise of Idleness” by Bertrand Russell, written in 1932 but, I would say, even more relevant today. The gospel of technological automation is that the same amount of work can be done in, say, half the time, and that this should lead to an increase in wealth and happiness for everyone. But instead of everyone then working half days, part of the population (those on the “right side” of automation) only seems to work longer days, whereas others become unemployed and see their quality of life plunge (those made “redundant” by automation). If automation only contributes to the good of employers, then technology will only deepen social divides along new lines, between “normal” workers and those who are tech-savvy.
In that context, consider how relevant these words from 1932 sound:
If at the end of the War the scientific organization which had been created in order to liberate men for fighting and munition work had been preserved, and the hours of work had been cut down to four, all would have been well. Instead of that, the old chaos was restored, those whose work was demanded were made to work long hours, and the rest were left to starve as unemployed. Why? Because work is a duty, and a man should not receive wages in proportion to what he has produced, but in proportion to his virtue as exemplified by his industry. This is the morality of the Slave State, applied in circumstances totally unlike those in which it arose.
And:
Modern technique has made it possible for leisure, within limits, to be not the prerogative of small privileged classes, but a right evenly distributed throughout the community. The morality of work is the morality of slaves, and the modern world has no need of slavery.
What we should then think about is how technology and automation can increase the quality of life in a distributed and democratic manner. From this perspective, we can understand the hesitance to disclose self-automation to one’s employer for two reasons: the employer may claim the script as company intellectual property, and the employer may dissolve the now-automated job (and possibly those of one’s peers as well).
What is cool about the grassroots approach of self-automation is that it makes technology follow through on its promise of shorter working days and more happiness, e.g. working half days with more time to spend with your family. How can we be against that? And why do many people, including me, associate this reduction of work with a loss of status and ambition? Much to think about. The main challenge for the future, and I think one that is extremely relevant in our contemporary society, is to distribute these advantages amongst peers in such a way that everyone benefits from the load being taken off our shoulders.
Or in the words of Russell:
a great deal of harm is being done in the modern world by the belief in the virtuousness of work, and that the road to happiness and prosperity lies in an organized diminution of work.
N.B. The question the students had to answer was: a) In what ways could you automate your work as a student, and b) would you feel ethically obliged to disclose this automation to your study program? I had a lot of fun grading their work.
I largely quit social media because it transformed from something that helped me keep in touch with people and stimulated social interaction into an endless stream of promoted clickbait content. The little “real” personal content that made it to my feed was usually dull and did not facilitate any interesting conversations. With the creation of this website I’m trying to take control of my online identity, making this domain a main hub for various dispersed identities, somewhat along the lines of these principles.
Being free to do with this website what I want, in a way that is meaningful for me, gives me joy. In addition, this website really is personal because I built it myself, which comes with a sense of pride (in the positive sense of the word). This is my first website, and I had no idea what I was doing when I started out. I had never written any html or css, but with the help of a friend during a nightly Skype session (during which he fell asleep) I had a very simple one-page site online within a few days. It was really, really bad. But it was out there, and it was mine. Over time I kept improving it bit by bit, and by now I’ve reached a point where I’m quite content with the website’s features and its minimalistic look (although I’m constantly fighting the urge to delete almost all css and go barebones).

By now, I even find myself encouraging others to make their own personal website, and helping them out in the process. Over the last weeks, I helped my friend and philosopher Boris make his own website. I know that for the average philosopher all hands-on tech stuff is, well… Let’s just say that in general they like thinking about technology more than using it. If you keep your website simple enough, however, you can learn to maintain it yourself, even if you are a philosopher. Even though it might be a bit rough in the beginning, it will have all the more charm because it is authentic. Especially for academics, sober (or one might say: Spartan) websites have a long history (see for example this article).

We need people to make their own websites again to keep the web an interesting and diverse place, and to offer some resistance against the boring uniformity of yet another generic Wordpress blog or Facebook page. Boris’ work and thoughts are interesting and deserve a cool website. He published his new website last week, and I’m convinced it’s the beginning of a nice digital journey where a lot will be learned. Check it out here.
Anyway, the bottom line is that what makes my website personal is not only that it contains personal content, but that it facilitates more meaningful interaction with people than so-called “social” media (and that does not mean more interaction). For me, the point of my blog is to help me shape my thoughts on topics of interest, but specifically in such a way that I can involve others in this process. The overarching goal is to enable dialogue and interaction with other people through a sensible digital identity, whether that means reinforcing existing relations or perhaps making new ones. I am currently playing with the idea of representing some of this dialogue directly on the website itself, by also allowing others to post on my domain, in the spirit of my plea that more people should have their own little place on the web. In short, I want to offer friends digital residence on my domain. I do not yet completely know how this will work out, but my current experimental idea is this:
I made an example of such a homepage here.
As for the content of the blogs, I consider them to be an exercise in imperfection. They are not as unstructured as notes, but neither are they fully developed like articles. They are somewhere in between, more like essays in their original sense: they are attempts, exercises in writing and thinking that do not shy away from incompleteness. It’s yet another reason why I like to think of them in terms of a dialogue: in real conversations answers are not known in advance; they can be confused, open-ended. It is no coincidence that blog writing has such a conversational style. It reflects thinking on-the-way (yes, that is a reference to Heidegger), and in one sense it is a text that ideally does not want to be written down. Its conversational style is a resistance to the suggestion of completeness that accompanies writing, and tries to be spoken language: a voice amongst other voices.
For me personally, perfectionism has in the past prevented me from daring to publish or submit anything. An additional benefit of these blog posts is that they are a good exercise in letting go. I write them as quickly as possible and then immediately publish them, so that they do not stay on the shelf. There is another benefit: in recent years I have struggled to formulate the relevance of philosophy. After starting a second university education, I have come to believe that philosophy is at its best when it does not only engage with other philosophers, but (also) engages with other disciplines, shaping and enriching them from within. I hope that as this blog develops, it will be a testimony to this thought.
When one starts studying logic, one is likely to be surprised by the workings of the so-called material implication, p -> q (if p, then q). Unlike the implication used in natural language, which can for example indicate causation, the material implication has a more restricted meaning: it is true unless p is true and q is false. This is ultimately a matter of definition, meant to resolve ambiguities present in natural language. A very simple and short sentence such as “visiting relatives can be boring” can already have two very different meanings: either it is boring to visit relatives, or relatives that are visiting can be boring. Logic seeks to resolve such ambiguities by explicitly agreeing on a formal interpretation of symbols.
The meaning of the material implication is reflected in this truth table:
| p | q | p -> q |
|---|---|--------|
| 0 | 0 | 1 |
| 1 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 1 | 1 |
A consequence of this stricter definition of the implication is that its formal interpretation is sometimes at odds with the way implications are interpreted in natural language. This can lead to the experience of a paradox. The part of the material implication that is counter-intuitive to most people is that p -> q is true whenever p is false, regardless of whether q is true. This is called a vacuous truth in logic: if p is false, then p -> q asserts a true property of something that never occurs.
Since the material implication is defined as “it is not the case that p is true and q is not” ( ~(p /\ ~q) ), the only way to show that the implication does not hold is by a counterexample where p holds and q does not. However, if p never holds, we are not able to give such a counterexample, and the implication is said to hold vacuously. The implication then is an “empty truth”: true because we cannot show it to be false, but conveying no information.
Consider an analogous example. If you said to your father that he is your favourite (biological) father, this would be true. But it is equally true that he is your least favourite father, and these two statements thus do not convey any information about the father (or your attitude towards him). Both statements can be thought of as empty truths: they are comparisons with another, non-existent father and are thus true “automatically”.
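To make this concrete, here is a minimal sketch in Python (the helper name `implies` is my own invention) that derives the truth table above directly from the definition ~(p /\ ~q):

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material implication: "not (p and not q)",
    # i.e. true unless p is true and q is false.
    return not (p and not q)

# Enumerate all four truth assignments and print the truth table.
for p, q in product([False, True], repeat=2):
    print(f"p={int(p)} q={int(q)}  p -> q = {int(implies(p, q))}")
```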
We can show some more statements that are counter-intuitive but hold according to this definition of the material implication (see here). We can see in the above truth table that the material implication is true whenever p is false. If the expression p is a contradiction, it will always be false. Hence, if we have a contradiction, we can conclude any formula q: (p /\ ~p) -> q.
Although this is “logical”, it leads to very weird results when translated to natural language. For example: if it rains and it does not rain, then my cat can fly. This is called ex falso sequitur quodlibet: anything follows from a contradiction.
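A quick sanity check in the same Python sketch (again with the hypothetical `implies` helper) confirms this: the antecedent p and not p is false under every assignment, so the implication is vacuously true no matter what q stands for:

```python
def implies(p: bool, q: bool) -> bool:
    # Material implication: true unless p is true and q is false.
    return not (p and not q)

# A contradiction is false under every assignment, so (p /\ ~p) -> q
# holds for any q -- even "my cat can fly".
for p in (False, True):
    for q in (False, True):
        assert implies(p and not p, q)
print("ex falso: a contradiction implies anything")
```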
At the same time, we can also see in the truth table that when q is true, the implication always holds, regardless of the truth of p: q -> (p -> q). But this also sounds a bit counter-intuitive: if q is true, then any p implies q.
For example, when we say in natural language “if I am sick, then I go to the doctor”, we assume there is a clear (causal) relation between these propositions. The above formula would then say: if I go to the doctor, then it holds that if I am sick, I go to the doctor. That is clear enough. But logically speaking, it is equally true that “if I go to the doctor, then if I am not sick, I go to the doctor.” This has quite a different ring to it: people usually do not go to the doctor because they are not sick (unless they are hypochondriacs).
So far we have that p -> q is always true if p is false, or if q is true. Thus we can rewrite p -> q as ~p \/ q. This follows pretty directly from the definition of the logical implication, i.e. “it is not the case that p is true and q is not true”: ~(p /\ ~q) is equivalent to ~p \/ ~~q (De Morgan), which is equivalent to ~p \/ q (double negation).
You can easily see the equivalence of these formulas, as they have the same truth table:
| p | q | ~p | p -> q | ~p \/ q |
|---|---|----|--------|---------|
| 0 | 0 | 1 | 1 | 1 |
| 1 | 0 | 0 | 0 | 0 |
| 0 | 1 | 1 | 1 | 1 |
| 1 | 1 | 0 | 1 | 1 |
We can use this, for example, to show that if p does not imply q, then p holds and q does not hold: ~(p -> q) is equivalent to p /\ ~q.
This is also quite surprising! For example, this could mean: If Edwin eating a lot of cheese does not imply that Edwin lives in the Netherlands, then Edwin eats a lot of cheese but Edwin does not live in the Netherlands.
We can show that the left and right statements are equivalent, for example as follows:

Proof: ~(p -> q) is equivalent to ~(~p \/ q) (rewriting the implication), which is equivalent to ~~p /\ ~q (De Morgan), which is equivalent to p /\ ~q (double negation).
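For those who prefer brute force over derivations, the same equivalence can be checked by enumerating all four truth assignments in Python (once more using the hypothetical `implies` helper):

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material implication: true unless p is true and q is false.
    return not (p and not q)

# Check that ~(p -> q) and (p /\ ~q) agree on every assignment.
for p, q in product([False, True], repeat=2):
    assert (not implies(p, q)) == (p and not q)
print("~(p -> q) is equivalent to p /\\ ~q")
```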
N.B. the double negation step ~~p -> p does not hold in intuitionistic logic.
If you master all of this, you can make (and explain) jokes like these:
This is a short commentary I wrote in 2017, on Patrick Lin’s “Why Ethics Matters for Autonomous Cars”.
The book Autonomous Driving formulates a set of use cases that serve as a reference point for discussing the technical, legal and social aspects of autonomous driving (Wachenfeld et al. 2016). In addition, Lin sketches scenarios that require ethical choices to be made by autonomous vehicles (AVs). An advantage of this approach is that these scenarios can take the form of thought experiments, which are not obstructed by the fact that as of yet no fully autonomous vehicles are in use: they nevertheless draw out moral intuitions that are in turn useful for helping us formulate how we expect robots to react in similar situations (Malle 2016, 250). One classical thought experiment in particular, the trolley dilemma, seems to be experiencing a revival in the context of AVs (Lin 2016; Contissa, Lagioia, and Sartor 2017; Nyholm and Smids 2016). However, despite its popularity, I argue that ethical questions concerning AVs go beyond the scope of the original trolley dilemma.
In its classical form, the trolley problem situates an observer near a switch, overlooking a trolley on its way to kill five unsuspecting people working on the tracks. Using the switch would divert the trolley so that it kills only one person. The question then is: is it correct for the observer to pull the switch? It is easy to imagine such a trolley problem for AVs, where the “switching” decision has to be made by the AV, and indeed Lin does so (Lin 2016, 78-79). However, whereas trolley problems usually concern the question of what the right choice is, in the case of AVs there is also the underlying question of who makes the choice and who is correspondingly responsible for it.
Two cases sketched in Autonomous Driving, let’s call them A and B, are particularly interesting in this regard. Case A concerns fully automated driving with the addition of a human driver who is able to take over driving control at any moment. In case B, the driving task is performed completely independently of the passenger, which also entails that the passenger cannot take over driving control (Wachenfeld et al. 2016, 19). In case A, the trolley problem arguably maintains its original form: when a human is able to take over control, the switching decision remains the responsibility of the human driver, but only as long as we ignore the practical issue that handing over control to the human driver is unlikely to be fast enough (Lin 2016, 71). In case B, however, it cannot simply be the human driver who is responsible for the life-and-death decision, and thus case B extends beyond the scope of the original trolley problem.
The AV would make such life-and-death decisions based on programmed algorithms and cost functions. I would say that the real ethical decisions are therefore no longer made in a split second, as in the trolley problem, but are instead moved to the design stage, where such time constraints do not apply (Nyholm and Smids 2016, 1280-2). A key difference from the trolley problem is thus that responsibility for possible deaths is distributed over a set of stakeholders. The answer to the question of who is to blame has large consequences, for example for producers of AVs (vulnerability to lawsuits) or insurance companies (dealing with damage claims). I agree with Lin that regardless of the answer, transparency of the decision-making should be central to AV ethics (Lin 2016, 79).
In that sense, one recent suggestion is particularly interesting as an addition to Lin’s deliberations in the context of the defined use cases. In case B, a way to involve the user in the design process again is an “ethical knob” that makes the preference for the survival of passengers or third parties explicit, in moral modes ranging from altruistic to impartial to egoistic (Contissa, Lagioia, and Sartor 2017). These modes determine the “autonomous” decision-making of the AV, but are deliberately set by the user. In this manner, the moral load of using the AV becomes transparent to the user, which in turn may be a decisive factor in quantifying guilt in lawsuits and insurance claims. To conclude, in designing AVs, robot ethics and machine morality can be connected by fine-tuning the moral competences AVs have with respect to a dynamic set of situations and preferences (cf. Malle 2016).
672 words.
N.B. click here for an alternative solution to the trolley problem.
Contissa, Giuseppe, Francesca Lagioia, and Giovanni Sartor. 2017. “The Ethical Knob: ethically-customisable automated vehicles and the law.” Artificial Intelligence and Law 25 (3):365-378. doi: 10.1007/s10506-017-9211-z.
Lin, Patrick. 2016. “Why Ethics Matters for Autonomous Cars.” In Autonomous Driving, edited by Markus Maurer, J. Christian Gerdes, Barbara Lenz and Hermann Winner, 69-82. Springer-Verlag Berlin Heidelberg.
Malle, Bertram F. 2016. “Integrating robot ethics and machine morality: the study and design of moral competence in robots.” Ethics and Information Technology 18 (4):243-256. doi: 10.1007/s10676-015-9367-8.
Nyholm, Sven, and Jilles Smids. 2016. “The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem?” Ethical Theory and Moral Practice 19 (5):1275-1289. doi: 10.1007/s10677-016-9745-2.
Wachenfeld, Walther, Hermann Winner, J. Christian Gerdes, Barbara Lenz, Markus Maurer, Sven Beiker, Eva Fraedrich, and Thomas Winkle. 2016. “Use Cases for Autonomous Driving.” In Autonomous Driving, edited by Markus Maurer, J. Christian Gerdes, Barbara Lenz and Hermann Winner, 9-38. Springer-Verlag Berlin Heidelberg.