A.I. folly - people die!
** Partially gleaned from © 2018 Guardian News and Media Limited or its affiliated companies. All rights reserved.
Article: Franken-algorithms: the deadly consequences of unpredictable code **
Date: 8/31/2018 2:27:45 PM
Franken-algorithms: the deadly consequences of unpredictable code
=================
“No one knows how to write a piece of code to recognize a stop sign. We spent years trying to do that kind of thing in AI – and failed! It was rather stalled by our stupidity because we weren’t smart enough to learn how to break the problem down. You discover when you program that you have to learn how to break the problem down into simple enough parts that each can correspond to a computer instruction [to the machine].
We just don’t know how to do that for a very complex problem like identifying a stop sign or translating a sentence from English to Russian – it’s beyond our capability. All we know is how to write a more general purpose algorithm that can learn how to do that given enough examples.”
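To make Walsh's contrast concrete, here is a minimal sketch in Python. The features (redness, octagonality) and numbers are invented for illustration, not real vision code: the hand-written rule needs a human to invent thresholds, while the learning approach infers them from labelled examples.

```python
# A toy illustration (invented features, not real vision code) of the
# two approaches Walsh contrasts: a hand-written rule versus a rule
# learned from labelled examples.

def stop_sign_by_rule(redness, octagonality):
    # Hand-coded approach: a human must invent explicit thresholds.
    # Real images defeat this -- lighting, angle, occlusion, faded paint.
    return redness > 0.8 and octagonality > 0.9

def train_perceptron(examples, epochs=100, lr=0.1):
    # Learning approach: a general-purpose update rule infers the
    # thresholds from examples instead of having them written in.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Labelled toy data: (redness, octagonality) -> 1 if a stop sign.
examples = [((0.9, 0.95), 1), ((0.85, 0.9), 1),
            ((0.2, 0.1), 0), ((0.9, 0.1), 0), ((0.1, 0.9), 0)]
w, b = train_perceptron(examples)
print("learned weights:", w, "bias:", b)
```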
Hence the current emphasis on machine learning. We now know that Elaine Herzberg, the pedestrian killed by an automated Uber car in Arizona, died because the algorithms wavered in correctly categorizing her. Was this a result of poor programming, insufficient algorithmic training or a hubristic refusal to appreciate the limits of our technology?
The real problem is that we may never know.
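The wavering itself is easy to picture. Below is a minimal sketch with entirely invented frames and thresholds (nothing here reflects Uber's actual software), showing how a planner that restarts its confidence clock every time an object's label changes can run out of road before it ever commits to a response.

```python
# A minimal sketch (invented numbers and labels) of how wavering
# classification can delay a safety response: if the planner resets
# whenever the object's label changes, it never accumulates enough
# consistent evidence to act in time.

frames = [  # (time in seconds, classifier's label for the same object)
    (0.0, "vehicle"), (0.5, "other"), (1.0, "bicycle"),
    (1.5, "other"), (2.0, "bicycle"), (2.5, "pedestrian"),
]

CONFIRMATION_TIME = 1.0  # assumed: seconds of a stable label before acting

stable_label = None
label_since = 0.0
for t, label in frames:
    if label != stable_label:
        stable_label, label_since = label, t  # wavering resets the clock
    if t - label_since >= CONFIRMATION_TIME:
        print(f"t={t}: acting on confirmed '{label}'")
        break
else:
    print("no label ever stayed stable long enough to trigger a response")
```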
{Having worked with computer-controlled military, major industrial, and data systems for over 40 years, I have seen many cases where algorithms were all over the field in machine control. The results have ranged from questionable to downright wrong. On top of that were the ever-present corrupted data and commands, or just plain static and loss of control. Basically, it means machines will not, in the near future, be absolutely correct in their functions.}
“And we will eventually give up writing algorithms altogether,” Walsh continues, “because the machines will be able to do it far better than we ever could.
Software engineering is in that sense perhaps a dying profession. It’s going to be taken over by machines that will be far better at doing it than we are.”
Walsh believes this makes it more, not less, important that the public learns about programming, because the more alienated we become from it, the more it seems like magic beyond our ability to affect.
When shown the definition of “algorithm” given earlier in this piece, he found it incomplete, commenting:
“I would suggest the problem is that algorithm now means any large, complex decision-making software system and the larger environment in which it is embedded, which makes them even more unpredictable.”
A chilling thought indeed. Accordingly, he believes ethics to be the new frontier in technology, foreseeing “a golden age for philosophy” – a view with which Eugene Spafford of Purdue University, a cybersecurity expert, concurs.
“Where there are choices to be made, that’s where ethics comes in. And we tend to want to have an agency that we can interrogate or blame, which is very difficult to do with an algorithm.
This is one of the criticisms of these systems so far, in that it’s not possible to go back and analyze exactly why some decisions were made because the internal number of choices is so large that how we got to that point may not be something we can ever recreate to prove culpability beyond doubt.”
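Spafford's point about the number of internal choices can be put in back-of-the-envelope terms. The figures below are assumptions chosen only for illustration; the arithmetic shows why replaying every path to prove culpability is hopeless.

```python
# A back-of-the-envelope sketch of Spafford's point: even a modest chain
# of binary decisions yields more distinct execution paths than could
# ever be replayed or audited individually. Numbers are illustrative.

decisions_per_second = 100   # assumed branch points evaluated each second
seconds = 60                 # one minute of operation
paths = 2 ** (decisions_per_second * seconds)
print(f"distinct paths through one minute: about 10^{len(str(paths)) - 1}")
```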
{Being able to prove something beyond doubt is more often impossible than not. This is where the perverse idea of supposed mind reading comes in. Machines can at times be proactive, or predictable, when a certain set of events known to them occurs in a certain order. But there is no magic way for a machine to know exactly what even a human can or will do.}
The counter-argument is that, once a program has slipped up, the entire population of programs can be rewritten or updated so it doesn’t happen again – unlike humans, whose propensity to repeat mistakes will doubtless fascinate intelligent machines of the future. Nonetheless, while automation should be safer in the long run, our existing system of tort law, which requires proof of intention or negligence, will need to be rethought.
A dog is not held legally responsible for biting you; its owner might be, but only if the dog’s action is thought foreseeable. In an algorithmic environment, many unexpected outcomes may not have been foreseeable to humans – a feature with the potential to become a scoundrel’s charter, in which deliberate obfuscation becomes at once easier and more rewarding.
Pharmaceutical companies have benefited from the cover of complexity for years (see the case of Thalidomide), but here the consequences could be both greater and harder to reverse.
{Reversing certain actions is next to impossible with humans, and far more complex still with human-machine interfaces! Much of this area of machine-control algorithms is more wishful thinking than anything else, and the result, as my Granddaddy used to say, "is like skating on thin ice".}
** This is a time to use great caution!
{* added by blogger!}