You say AI and people immediately start thinking about Skynet, The Matrix, or HAL 9000. We have been programmed by decades of literature and film to associate the concept of AI with human-like intelligence and thought processes that see these creations go rogue and decide to “kill all humans”. Boston Dynamics’ scary dancing robots and “dogs” probably aren’t helping here either! But as this book shows, the reality is closer to the kind of decision made in “I, Robot”, where an AI decides to save humanity by locking it up to protect us from ourselves: AIs make decisions that seem weird to us because they’re following rules of logic we failed to anticipate.
In this book Janelle Shane looks at the more mundane reality of AI. Where we are nowadays would be better described as machine learning (ML) or applied AI rather than general AI, meaning these things learn to be good at one thing and tend to suck at everything else. Teach an AI recipes and it might come up with something that looks like a proper recipe; try to get the same AI to write Harry Potter and the cast will be weirdly obsessed with baking (a genuine example from the book).
But this limitation, and how people think about it, leads to real-world problems, and this book does a great job of showing them. These algorithms are only as good as the data set they’re trained on and the rules they’re given, and both come from humans with all their inherent and implicit biases. It’s incredibly easy to go badly wrong and create an AI that believes every image is a penguin, because it learned black + white = penguin and never cared about shapes or context. Show it pictures of penguins and robins and you’d be impressed by its ability to tell the birds apart; show it penguins, nuns, and zebra crossings and you’ll be a bit surprised by how many penguins exist.
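For anyone curious what that failure mode looks like in practice, here’s a minimal toy sketch (my own illustration, not from the book): a classifier is trained on two made-up features, “colourfulness” and a “bird shape” score. Because colour alone separates the penguins from the robins in the training data, the model learns to ignore shape entirely and will happily label anything monochrome a penguin.

```python
# Toy sketch of the "everything black and white is a penguin" failure mode.
# All data here is synthetic and hypothetical; it just mimics the book's example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each "image" is two invented features: colourfulness and a bird-shape score.
# Penguins are low-colour, robins are high-colour; shape is equally noisy for both,
# so colour is the only signal the model has any reason to use.
penguins = np.column_stack([rng.uniform(0.0, 0.2, 100), rng.uniform(0.4, 1.0, 100)])
robins = np.column_stack([rng.uniform(0.6, 1.0, 100), rng.uniform(0.4, 1.0, 100)])

X = np.vstack([penguins, robins])
y = np.array([1] * 100 + [0] * 100)  # 1 = penguin, 0 = robin

model = LogisticRegression().fit(X, y)

# On penguins vs robins the model looks brilliant...
print("accuracy on birds:", model.score(X, y))  # ~1.0

# ...but a nun and a zebra crossing are also black-and-white, non-bird things.
nun = np.array([[0.05, 0.1]])
zebra_crossing = np.array([[0.05, 0.0]])
print("nun:", model.predict(nun))                        # [1] -> "penguin"
print("zebra crossing:", model.predict(zebra_crossing))  # [1] -> "penguin"
```

The model isn’t wrong by its own lights; it found a rule that perfectly fits the data it was shown. The problem is the data never contained anything monochrome that wasn’t a penguin.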
Now that’s an amusing downside, but the real harm shows up as implicit racism and sexism, and this book shows why these tools should not be used in recruitment or policing. If you’re a firm where 90% of your board is white, male, and Ivy League educated, then guess what: an algorithm trained to find people to work with you will pick the white, male, Ivy League candidates, because that’s what you looked like. Our risk from AI is not that it’s smart; it’s that we and it are just too dumb right now to get it right.
This is an interesting read for anyone with an interest in the topic, and it doesn’t assume you know much about it. It’s well structured, has cute illustrations, and contains a reference to Murderbot (which is a win!).