How can we help?

In considering any policy proposal, it is extremely important to anticipate possible future consequences. The problem, of course, is that predicting the future is challenging at best, and humans are notoriously bad at it. Examples of catastrophes caused by this kind of human deficit include the Challenger space shuttle disaster (O-rings failed at low temperature), 9/11 (failure to anticipate the use of commercial airliners as weapons), and Volkswagen’s recent debacle with diesel emission testing. Anticipating the consequences of actions is not inherently impossible, but it requires special skills, imagination, and a particular mindset. As philosophers, we are trained to envisage and consider all possibilities, however outlandish, so as to cover all of “logical space.” This requires a special combination of openness and rigor, critical components of philosophical thinking. These skills can be fruitfully applied to current policies and practices, and they can be taught in order to improve the analytical skills of executives and decision makers.

As an example of catastrophe anticipation and avoidance, consider the future of genetic engineering. It is easy to see how genetic selectiveness might affect the citizens of a given country, by giving an advantage to those selected for superior intelligence, strength, and the like. This might produce civil unrest, deepening economic inequality, resentment, and injustice. But a less obvious consequence concerns international relations. What if another country decided to exploit this new technology to improve its human capital? We would have no control over this effort, since that country would be sovereign within its own borders; we could not pass any laws restricting it. Its aim might be world domination, economic above all but also military. This might require a relatively modest boost of ten IQ points per capita achieved over a couple of generations, or less if it were really determined. Such a change would clearly confer a massive competitive edge over countries unwilling to follow this route to human improvement. The incentive to take preemptive action against the outlaw country would be considerable, even to the point of war. This would be a potent source of international discord. Even the knowledge that such a eugenic enterprise was in the works would have enormous political ramifications (suppose it were China planning to upgrade its population genetically).

Given that genetic engineering makes this a theoretical possibility, a potential catastrophe is looming. We would do well to take it seriously before anything gets out of hand. I would propose international talks now to address the issue with potential genetic rivals, possibly with a view to signing treaties that prohibit the kind of genetic engineering in question. It may seem like science fiction today, but history has a way of turning powerful technologies into the worst disasters, however harmless and desirable they seem at first glance. Above all, we must think hard about remote consequences.