In many cases the answer is no.
The computer doesn’t understand common sense, and won’t listen to the person who may.
Back to Boeing for a second: There are two rivals at the top of the aviation industry — Boeing and Airbus.
For decades they have followed different philosophies in aircraft design.
Boeing comes from an era when planes still had fully mechanical cable-and-pulley controls, while Airbus is a relatively newer entity.
In Boeing planes computers aided the pilot but the pilot was ultimately in control — until now.
Airbus planes from the beginning were designed with the computer overseeing the pilot's decisions and possibly having the last word.
The MCAS system implicated in Boeing’s accidents was the first case of the computer working in the background and being stubborn on a Boeing plane.
And the pilots hadn’t been told or trained for it.
All software needs to be tested.
And all software has bugs, otherwise we would never have to install security updates and urgent patches.
If the computer does make life or death decisions and it also has the last word, the bar for testing goes way up.
That has cultural implications in terms of budget, schedule, and who gets to say ‘not ready yet’.
Airbus would have that well in place after many years of designing planes with a higher degree of computer influence.
Boeing may have been caught off guard on that front, not being culturally prepared for that type of software testing.
They apparently got caught off guard by the failure rate of sensors that should have driven higher safeguards, and misjudged not the individual functions of the system, but the cumulative impact after multiple resets.
Exactly the type of thing that is hard to test, because it goes beyond the tidy flow chart and instead covers complex system interactions.
[Based on NY Times reporting]

The thing is, in traditional programming, the human breaks down the problem into many small steps and if-then-else decisions.
Then the program is written to run through these steps at breakneck pace.
Once the program is written it has to be tested with as many scenarios as possible and verified that the intended result occurs.
Problem A is that this is very complicated and time-consuming.
So often only a fraction of this can be done, measured as test coverage.
The rest is left to hope, confidence, and chance.
And in most cases if something is missed, well there is always a software update a few weeks away.
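To make the style concrete, here is a toy sketch of if-then-else programming and scenario testing. This is purely hypothetical illustration — the function, thresholds, and rules are invented for this example and have nothing to do with real avionics logic:

```python
# A toy illustration of traditional if-then-else programming: the developer
# must enumerate every rule by hand. Hypothetical sketch only -- NOT real
# avionics logic; names and numbers are invented.

def trim_command(angle_of_attack, autopilot_on, pilot_override):
    """Decide a nose-down trim adjustment from hand-written rules."""
    if pilot_override:
        return 0.0              # rule 1: the pilot has the last word
    if not autopilot_on and angle_of_attack > 15.0:
        return -2.5             # rule 2: high AoA in manual flight
    if angle_of_attack > 20.0:
        return -2.5             # rule 3: extreme AoA regardless of mode
    return 0.0                  # default: no adjustment

# A handful of scenario tests -- each exercises one branch. That is our
# "test coverage." The combinations nobody wrote down (a failed sensor,
# repeated resets) stay untested, left to hope and confidence.
assert trim_command(25.0, False, True) == 0.0
assert trim_command(18.0, False, False) == -2.5
assert trim_command(5.0, True, False) == 0.0
```

Even this tiny function has more input combinations than anyone would test exhaustively; a real system has millions.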
Problem B is that it’s very difficult to foresee all the permutations both during design and then also during testing.
How do we know that we truly thought of everything that could happen? Well, the reality is we don't.
We can build statistical models and gain a higher degree of confidence.
But we must accept that there will be unforeseen circumstances.
What to do then? Well, the computer itself only follows orders (first the programmer's and second the user's); it doesn't think.
So we would have to leave it to a human to assess the situation and make the best decision possible under the circumstances.
Which of course requires the human to be able to take control of the situation.
Any pilot should be able to fly the plane by hand, meaning without auto pilot or computer guidance.
They’re trained for that.
In both of the accidents they apparently tried to do that.
The computer wouldn’t let them — based on information that is available so far.
And that should also be our concern with AI, the next wave of computers taking hold in our daily lives.
The general public doesn’t know much about AI, and the word itself is quite misleading.
Artificial Intelligence gives the impression that the computer may be on par with humans at some point.
That it might have common sense traditional programs lack.
Maybe some day in the distant future they may.
The key difference between a traditional program and an AI program is that in the traditional software the developer has to foresee the entire complex problem the software is solving in every single detail.
That faces a scaling challenge in the ever bigger tasks we give computers.
AI can solve some of these bigger problems, because not everything has to be pre-considered and coded.
The programs can be taught patterns of input and output and then find ways to apply that same logic to cases that were not considered individually during the development process.
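The contrast can be sketched with a minimal learning example. The nearest-neighbor "model" below is a hypothetical toy, not a real vision system — the point is only that no rule for the new input was ever written down; the label is inferred from patterns in the training data:

```python
# A minimal sketch of learning from examples instead of hand-coding rules.
# Hypothetical toy model; features and labels are invented for illustration.

def train(examples):
    """'Training' here is just remembering labeled (features, label) pairs."""
    return list(examples)

def predict(model, features):
    """Label an unseen input by its closest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda ex: distance(ex[0], features))[1]

# Features: (size, speed). No if-then-else rule says what (2.1, 1.4) is --
# the answer comes from similarity to patterns seen during training.
model = train([
    ((2.0, 1.0), "pedestrian"),
    ((2.2, 1.5), "pedestrian"),
    ((4.5, 15.0), "car"),
    ((5.0, 20.0), "car"),
])
print(predict(model, (2.1, 1.4)))   # a case never coded individually -> pedestrian
```

The flip side, of course, is that the model is only as good as the patterns it was fed — which is exactly where the next example comes in.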
Uber’s fatal accident in Arizona a while back can possibly shed some light on that.
To train self-driving cars the algorithms are fed with millions of images of standard traffic scenarios and road conditions.
That helps them analyze the camera feed and decode what the car is facing and what it should do about it.
The problem is that a lot of the imagery being fed during training is not diverse enough for the occasional strange thing.
Like a woman crossing in the dark of night in the middle of the block while pushing a bike.
How many photos of that can you find on Google? A human driver may see this for the first time too.
But a human driver is still superior in making the best possible decision in the moment.
The human brain has more life experience than today’s AI systems.
Can an AI program eventually parallel a human's judgment? Yes, once we can train it on seven billion lifetimes of experiences, with their possible and desired outcomes, over a century of history.
But that is a whole other level of scale.
This is why we should be very skeptical of computers in our lives.
They no doubt have made our lives so much better in every imaginable way.
But we do need ways for common sense to prevail.
We do need someone we can argue with if the computer is wrong and stubborn.
And we need a way to override the computer if necessary with the proper oversight.
The simplest example of this is the IVR system (the automated phone menu for customer support).
How often have you found your answer in the automated menu? Rarely, I assume.
We’ve all learned to press ‘0’ or say ‘Representative’ to hit the escape button and talk to a human if we have a non-standard problem.
Which by the way is usually the case when we call a support number.
I find IVR systems a very misguided and frustrating cost-saving tactic.
But the worst is if the system doesn’t let you press zero or talk to a human.
I once had T-Mobile's system, after three failed attempts to understand my foreign accent, simply hang up on me with 'Sorry, we couldn't understand you, goodbye!'
The definition of a computer winning the battle but losing the war.
Luckily it was a simple service issue, not a life or death situation.
These days I use AT&T.
The computer is your friend.
But it can also be your enemy.
Let’s not find out the hard way.