“To err is human; to really foul things up requires a computer.” William E. Vaughan made this observation back in 1969. Handing control to an automated system carries the potential for that system to go awry and cause serious harm before anyone catches it.
Automation is not new, but it is becoming a lot more widespread thanks to the integration of digital and physical systems. The upside of automation at scale is great efficiency. But the downside of relying on a set-it-and-forget-it system is that someone may fail to set it properly.
A system that simply follows through, with no intervention and no way to stop the machinery, can have destructive effects. Technology can thus create the kind of situation depicted in “The Sorcerer's Apprentice,” in which what appears to make life easier actually runs out of control.
Fired by the Machine
Automation without intervention is what resulted in one tech worker finding himself out of a job without cause last year. Ibrahim Diallo noticed that his security access cards had stopped working at the office and eventually discovered it was because he was no longer considered an employee. “The Machine Fired Me” is the title he gave to his extended blog post on the event.
Ultimately, the cause of Diallo’s termination was not some algorithm’s assessment of who should be eliminated. The problem wasn’t within the system; it was one of human error. The system was simply carrying out its automated response to a human’s failure to enter Diallo’s contract renewal information.
It’s not that the machine decided he should be fired for something in particular. It simply carried out the steps programmed into it for someone whose status showed up as no longer employed. As he clarifies in the comments, this is not really AI but “automated script.” (To learn about how AI can help (instead of hurt) in businesses, check out What AI Can Do for the Enterprise.)
Automation and Job Disruption
This kind of effect is not exactly what people envision when describing the wonderful benefits we can anticipate in an automated future. The usual optimistic outlook is that, as tasks are taken on by automation, jobs will be redefined rather than terminated by automated systems. But the reality is that some jobs will be eliminated, and the people who held them won’t necessarily be able to make a seamless transition to new careers in a largely automated industry.
Job disruption is one of the more minor risks that Elon Musk envisions in the rise of AI, though his vision of the impact on jobs is far more pessimistic than most. In Musk’s view, AI needs strict regulation because it poses “a fundamental, existential risk for human civilization.”
Applications from Consumer Electronics
Despite Musk’s own tech credentials, some experts in the field, like Rodney Brooks, who served as founding director of MIT’s Computer Science and Artificial Intelligence Lab and cofounded both iRobot and Rethink Robotics, say Musk is wrong about the threat of AI and about how robotics actually operates.
In an interview with TechCrunch, Brooks indicated that it is foolhardy to call for regulations before the technology has matured to the point at which we can identify exactly what must be regulated. He challenged Musk directly: “Tell me, what behavior do you want to change, Elon?”
Brooks did concede that robots will bring about job displacement. But he also thinks that it is possible to shift the paradigm in industry to follow that of consumer electronics.
The way he put it in the TechCrunch interview was: “We have a tradition in manufacturing equipment that it has horrible user interfaces and it’s hard and you have to take courses, whereas in consumer electronics [as with smartphones], we have made the machines we use teach the people how to use them.”
That is what he said should be the goal for changing the way we relate to “industrial equipment and other sorts of equipment, to make the machines teach the people how to use them.”
What Brooks suggests may point us in the direction of a solution to the problem of human error that sets off the automated process that seems to spiral out of control. To refer back to Mickey’s misadventure in “The Sorcerer’s Apprentice,” the problem all stems from the person who activates the system but has no real way of communicating with it to get it to stop or change direction.
But if the interface is designed along the lines of consumer electronics rather than traditional industrial models, it could put control back into human hands. To be truly effective, the interface must be not only accessible, but also designed to keep people in the loop about what is happening, providing data on the updates it has received and the actions it has taken.
How It Could Work
In the case of Diallo’s accidental termination, that would mean the automated system would not simply lock him out and then send an email to his recruiter saying he was terminated. It would first recognize that the contract renewal was not entered on the anticipated date. Before commencing termination actions, it would alert the manager and recruiter that the renewal was missing, along with the consequences that would follow within the day if no action were taken.
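The flow described above, in which the system pauses and alerts humans during a grace period rather than terminating immediately, can be sketched in a few lines. This is a minimal illustration, not how any real HR system works; the `Employee` class, the `notify` stand-in, and the one-day grace period are all assumptions made for the example.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

GRACE_PERIOD = timedelta(days=1)  # window in which humans can intervene

@dataclass
class Employee:
    name: str
    contract_end: date
    renewal_filed: bool = False
    alerts: list = field(default_factory=list)

def notify(employee: Employee, message: str) -> None:
    # Stand-in for emailing or messaging the manager and recruiter.
    employee.alerts.append(message)

def review_contract(employee: Employee, today: date) -> str:
    """Decide what the automation should do, keeping humans in the loop."""
    if employee.renewal_filed or today < employee.contract_end:
        return "active"
    if today < employee.contract_end + GRACE_PERIOD:
        # The renewal is missing, but instead of acting at once the system
        # flags the gap and gives people a chance to fix the human error.
        deadline = employee.contract_end + GRACE_PERIOD
        notify(employee, f"No renewal on file for {employee.name}; "
                         f"termination proceeds on {deadline} unless corrected.")
        return "pending-review"
    # Only after the alert window has passed does the automation act.
    return "terminated"
```

The key design choice is the middle branch: the missing renewal produces an alert and a "pending-review" status instead of an immediate lockout, so a forgotten form becomes a fixable notification rather than a fait accompli.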
That kind of alert would allow people to make informed decisions about whether to let the automation proceed or to intervene and rectify the human error that was the original cause of the problem. But people also have to do their part: respond to the alert and take the proper action. In other words, the answer to the question Brooks posed about behavior applies to humans; they need to be less passive in the face of automation. (For more on how humans and machines can work cooperatively, see Channeling the Human Element: Policy, Procedure and Process.)
As Diallo wrote in the comments section of his blog, the reason this was able to get to the point it did was that people refused to go against the machine:
Another thing that goes overlooked, is that even though everyone knew that it was a human error that triggered it, and that it was purely a mistake, they chose to follow the emails. It’s like putting a ‘smoking allowed’ sign in the hospital and people respect the sign instead of using common sense.
Accordingly, the policies we need to adopt for automation to work with and for humanity, rather than against it, are twofold: On the machine side, we need interfaces that are accessible and informative, and on the human side, we need people who are empowered to recognize when something is not right and to step in and rectify it.