Consider this imaginary scenario. Scientists have engineered a new type of AI capable of doing most of the work of a police officer—a real-life Robocop. In particular, all traffic stops can now be handled by robots. The authorities decide to implement this wonderful new technology as quickly as possible, thereby rendering most of the current police force unemployed. This decision comes into effect just a few weeks after the invention is announced. Hundreds of thousands of police are now unemployed, unpaid, and desperate. They have few options, so most turn to crime (they still have their weapons and are knowledgeable about criminal activity). The crime rate skyrockets, with murders rising sharply. Society descends into a hell of crime and violence as one-time police officers do battle with police robots and the few human police officers remaining on the force.
Obviously, the situation was badly mishandled by those in charge, to put it mildly. It was completely unethical to fire the police force in the peremptory way described and stunningly shortsighted not to foresee the likely consequences of doing so. What should have been done instead? Supposing that the redundant police officers had no immediately available alternative employment, the right thing to do was to guarantee them a living wage until they could phase into something else (say, private sector security jobs). Also, the robots should have been introduced more gradually and strategically so as to minimize disruption. These prescriptions are obvious—but the excitement about the benefits of AI to police work blinded people to the possible results of a sudden replacement. A catastrophe ensued. The objection that it would be costly to pay the fired police officers reflected a terrible misjudgment, given the likely results of not doing so. In retrospect, everyone can appreciate this, but at the time expediency and enthusiasm clouded vision.
This is the model to adopt in considering the likely effects of AI technology on society. It cannot be allowed to trigger mass unemployment without some sort of safety net. Driverless cars are likely to have just this effect unless carefully managed, because a new technology can have very sudden impacts (consider the demise of the typewriter or the vinyl record). Given that the pace of AI development is picking up, the authorities need to think hard about how to minimize its societal impact—and that will necessarily involve financial support for redundant workers. It is no use letting market forces take care of the problem, as the case of the redundant police force makes clear. That way catastrophe lies. The pace of technological innovation cannot be halted, but its consequences can be mitigated and even turned to advantage. This, however, requires careful thought and a proper sense of humanity. Philosophers can help.
An internationally acclaimed philosopher and teacher, McGinn was educated at Manchester University (Psychology, BA and MA, 1972) and Oxford University (Philosophy, BPhil, 1974), and went on to teach philosophy at University College London, Oxford University, UCLA, Princeton, and Rutgers. He was a philosophical advisor to George Soros from 2008 to 2013.