A Meditation On Arrogance In Tech
Every day, I come across articles in my various news feeds about the latest moral or ethical outrage in tech. They’re usually different kinds of moral or ethical outrages, but I think they all stem from the same root problem: arrogance on the part of the tech industry. As a general rule, our industry tends not to think about the negative effects the things we create can have on society in general and on the lives of individuals in particular. We tend to view the idea of tech taking over everything as a good thing, we assume that whatever we’re creating is part of that goodness, and when someone does bad things with it, we come back with something along the lines of “We can’t control what people do with our stuff.” We pretend that the things we create are neutral and that we as an industry are a shining city on a hill. Neither of these beliefs is true; they’re prime examples of telling ourselves what we want to hear instead of acknowledging the realities of the situation. Technology is not neutral, and we have a lot of power over society and over individuals, thanks to tech being as pervasive as it is and getting more pervasive by the day. And while we can’t prevent every bad actor from using what we create, we could work a lot harder to make it more difficult to do horrible things with our creations. We’re not building nuclear bombs, after all.
If we want to become an industry the public can trust, we need to get our stuff together, and we can start by stripping away the arrogance. I’m thinking of a certain type of founder here, but really, we’re all susceptible to it. It’s easy to convince ourselves that we’re drowning in awesome sauce, that everything we do has zero ethical problems, but that’s mostly because we generally refuse to consider ethics in tech at all; since we’re not considering it as a discipline, there must not be a problem. But ignoring a problem doesn’t mean it doesn’t exist, and the longer we continue to ignore this as an industry, the worse it’s going to get. Eventually, as people become more technologically literate, they’ll begin to look more closely at what we do outside of code as well as inside it. We cannot continue to freewheel through people’s lives, reduce them to one-dimensional users, and then walk away from the consequences that inevitably result from our actions. None of this is to say that the things we create are evil by default. I still believe that we can do a lot of good in the world. But we need to concentrate on making sure that we’re actually creating things that are beneficial, and guard as extensively as we can against our creations being used for evil. In short, we need to grow up and start demonstrating that we have the capacity to act ethically and responsibly.
OK. This is a great screed. What is missing is at least ONE specific example.
Yes, I am a technocrat. And, yes, I know that while technology and science are AMORAL (not IMMORAL), it falls to the users of technology and science to manage the morality of their actions.
The development of gunpowder made it possible to move mountains and clear paths. It also made it possible for humans to kill other humans (and other living things). The ability to render speech from text or text from speech means we can be more productive, and that those who can’t read (whether through impairment or illiteracy) can still obtain knowledge. But it also means that robots can displace humans. The ability to manipulate DNA means we can eradicate harmful mutations in humans, or create a super-human race.
The choice of which things we use, and how we use them, is a critical one. It is a choice that all of us, technocrats, anti-technocrats, and those in between, must make in our daily lives.
Just off the top of my head: Facebook’s algorithm behaving badly, both in what it shows to users (surfacing things like posts concerning your very young child’s death within the last year, for example) and in the spread of fake news. Twitter refusing to ban racists and harassers. Uber. The Bodega startup, which is going full steam ahead with its “bodega replacement technology” without considering the impact that will have on the quality of the neighborhoods it wants to install it in. Computer science curricula including absolutely nothing about web accessibility, or application accessibility in general, except for maybe a passing mention or one lecture. None of these are amoral decisions, and it’s not like tech doesn’t have the very examples you mention above to learn from. And saying something like “Well, technology is amoral” isn’t going to make a bit of difference to someone whose neighborhood gets, for example, one of these “bodega in a box” vending machines, and who then has to close their store because the machines are everywhere and sell everything for a lot less money because they have no staff. I think the only way you can call technology amoral at this point is if you take the people behind the tech out of the equation. Obviously we can’t yet do that. So we’re left with having to admit, I think, that the “technology is amoral” argument doesn’t hold water for all practical purposes.
Very good article here. Thanks for the heads-up on this; I definitely have to agree. I think, though, that part of it is a combination of corporate America and the ways of the human mind not advancing. The only examples I can think of right now are those contained in the Honor Harrington series of books: humans living for hundreds of years, humans and telepathic creatures able to communicate, space travel being a regular thing, and humans feeling the emotions of others. I would love to see our society advance toward something like what’s described in David Weber’s books, though before we can get there, we have to redefine the word privacy.