Just off the top of my head: Facebook’s algorithm behaving badly, both in what it shows to users (surfacing things like posts about your very young child’s death within the last year, for example) and in the spread of fake news. Twitter refusing to ban racists and harassers. Uber. The Bodega startup, which is going full steam ahead with its “bodega replacement technology” without considering the impact it will have on the quality of the neighborhoods it wants to install it in. Computer science curricula including absolutely nothing about web accessibility, or application accessibility in general, except maybe a passing mention or a single lecture.

None of these are amoral decisions, and it’s not as if tech doesn’t have the very examples you mention above to learn from. And saying something like “Well, technology is amoral” isn’t going to make a bit of difference to someone whose neighborhood gets one of these “bodega in a box” vending machines and who then has to close their store because the machines are everywhere and sell everything for a lot less money, since they have no staff to pay.

I think the only way you can call technology amoral at this point is to take the people behind the tech out of the equation. Obviously we can’t yet do that. So we’re left having to admit, I think, that the “technology is amoral” argument doesn’t hold water for any practical purpose.