By: Tom Breloff
Re-posted from: http://www.breloff.com/software-one-point-five/
I recently read Andrej Karpathy’s recent blog post proclaiming that we are entering an era of “Software 2.0”, where traditional approaches to developing software (a team of human developers writing code in their programming language of choice… i.e. v1.0) will become less prevalent and important.
Instead, the world will be run by neural networks. Why not? They’re really great at recognizing objects in images, winning at board games, and even writing movie scripts. (Well maybe not movie scripts.)
I can’t decide if he’s being naive or if we should be scared (no… not of an army of infinitely intelligent super-robots).
Is he naive?
Neural networks are very powerful. There’s no question. But human software engineers do more than just pattern-match inputs to outputs. In software development, it’s not enough to produce correct outputs 99% of the time (though even that is seemingly unachievable for most complex tasks). Imagine if your bank deposits only landed in the right account 99% of the time. Or if an air traffic control tower only assured your plane would land safely 99% of the time.
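To put “99%” in perspective, here’s a back-of-the-envelope sketch (the deposit volume is a made-up number, purely for illustration):

```python
# What "99% correct" means at scale. The volume below is an assumption.
daily_deposits = 10_000_000  # hypothetical bank's daily deposit count
accuracy = 0.99
misrouted = daily_deposits * (1 - accuracy)
print(f"{misrouted:,.0f} deposits land in the wrong account every day")
# prints: 100,000 deposits land in the wrong account every day
```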
There are too many tasks that require near-certain guarantees on performance. And most importantly, many of those tasks require full human understanding of the processes and algorithms which determine the outcome. This is something we simply cannot expect from end-to-end neural (statistical) models.
I think he’s naive for claiming that statistical modeling can replace good ol’ fashioned software engineering.
Should we be scared?
Neural networks are fragile, complicated, opaque, compute-heavy, and easily tricked. They are simultaneously hard to understand and easy for bad actors to manipulate. But… they get some amazing results in certain domains (most notably sensorial tasks like vision, hearing, and speech).
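“Easily tricked” is not hyperbole: adversarial examples are a well-documented failure mode, where a tiny, carefully-chosen perturbation makes a state-of-the-art classifier confidently mislabel an image. Here is a minimal sketch of the fast gradient sign method (Goodfellow et al., 2014), assuming a hypothetical PyTorch classifier `model`, an input tensor `x`, and its true class index `label`:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.01):
    """Nudge each pixel by +/-epsilon in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # The change is imperceptible to a human, yet often flips the prediction.
    return (x + epsilon * x.grad.sign()).detach()
```

A dozen lines of standard autodiff are enough to fool a model that beats humans on benchmarks, which is exactly why opaque statistical software deserves caution.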
Humans are gullible animals. We have implicit biases, and constantly change the facts to match our understanding of the world. In a world filled with Software 2.0, where the software programs are written by statistical models, the output of that software will start to look like magic. So much so that people will start to believe that it is magic.
Throughout history, people have been happy to worship and serve a power greater than them. What if people start to believe in computing magic, and trust important life decisions to a statistical model? Insurance companies might deny your coverage because a neural network told them a procedure wouldn’t help you. Employers will discriminate based on expected performance. Police will monitor and arrest people through statistical profiling, predicting crime that hasn’t yet happened. Courts will prosecute and sentence based on expectations of repeat offense.
You might be saying… “This is already happening!” I know. I think we should be scared of relying on statistical models without properly accounting for their biases and shortcomings.
It’s both.
Just like the spreading IoT time bomb, placing blind trust in Software 2.0 is a trojan horse. We let it into our lives without fully understanding it, and it puts us at risk in ways we don’t realize.
The path forward is in developing human-led technology. Building machines that can help and advise, but do not assert full control. We shouldn’t worship a machine, and we shouldn’t put our blind trust in statistical methods. Humans are more than just pattern matchers. We can transfer our experience to new environments. We can plan and reason, without having to fail at a task millions of times first.
Instead of rushing to Software 2.0, let’s view neural networks in their proper context: they are models, not magic.