In Cixin Liu's The Three-Body Problem, it is discovered that an alien civilization is on its way to Earth, intent on conquering the planet in order to secure its species' survival. Suppose such a thing were true: there is, at this moment, an alien species en route to Earth, planning to eradicate us and take over our world. Since we cannot know how likely this is and cannot gather evidence about it--alien craft will be too small to see until they are nearly upon us--we are in the land of dubious probabilities. If we assume there are numerous advanced alien species which can travel near (or faster than) the speed of light, the probability is higher. If we assume there are few advanced alien species and they can only move at a small fraction of the speed of light, like Voyager 1, then the probability is negligible.
How should we guard against such a possibility? Because the invasion was a certainty in Liu's book, governments effectively took money away from social welfare programs in favor of military spending and took away present freedoms in favor of survival-oriented control. (This strategy ultimately failed; people weren't willing to lose their freedoms even at the cost of extinction.) But in the hypothetical AI case, this would be a grotesque overreach if the threat were only a tiny possibility. Still, the argument might go, considering it is possible, we should probably do something to ward against it. As a kind of insurance, we might say, we should expend resources so that we can address the issue if it becomes necessary.
This is the basic argument many--call them the "insurance salesmen"--make about the risks posed by AI. Even if a critic argues it is improbable that AI will actually pose an extinction-level risk, they cannot rule it out a priori. Thus, the salesmen are on good grounds when they say, as Tegmark repeatedly did in the Munk debate, that we need to insure against the risk of AI. The alternative--expending no resources on the matter--would seem needlessly careless.
I think the salesmen's overall argument is fine, so far as it goes. But return to the invading alien example: what would it mean to insure against a possible, though highly uncertain, alien invasion? Probably not much; maybe a small subgroup in a large nation's space force devoted to studying the issue while also dealing with a host of other future-oriented issues. But the overall strategy would be to keep the status quo, focusing on predictable human threats. And this makes sense, because the status quo already spends a lot of money on the military, and there is significant (and likely growing) overlap between the R&D needed to protect against the risk of aliens and the R&D needed to protect against the risks posed by other nations in space.
This provides a useful way to evaluate the AI risk argument: we have already funded a number of organizations--like the Future of Life Institute--to focus on the theoretical dangers of AI. Moreover, the status quo already involves substantial resources spent on making AI safe, and efforts spent making AI safer at present would likely carry over to making AI safer in the future. In short, the AI risk is already insured against. If anything, critics can argue we're overpaying: if the risks are low, we shouldn't be funding these AI-risk groups so heavily.
So are we done? Why are the salesmen so insistent if we're already insured? One cynical explanation is that they just want more money, pushing governments and billionaires to throw cash at their institutes. But it increasingly seems like they want power: they are trying to influence the way AI is made so that it rests in the hands of significantly fewer people, all of whom share the same beliefs. Critics consistently (and rightly) point out that this is a recipe for crony capitalism, using politics to increase the size and power of big tech. In this case, "AI Safety" could be cynically used to engage in regulatory capture.
But I don't think that captures all of the salesmen. Many are clearly also opposed to big tech and even to the development of AI, so they aren't intentionally promoting regulatory capture. It seems most plausible to me that they really are hoping to staff the future regulators. The goal is effectively to form non-profits which, in time, will write the legislation creating the political institutions that regulate AI and then, later, populate those institutions. In this regard, the real game is shaping the conversation in ways that ensure people think we need this kind of regulator.
This takes us back to our first point, though. If we are facing a certain future of conflict with AI, then establishing these AI agencies and granting them extraordinary powers to regulate who can create AI makes sense. But if the risk is merely hypothetical, this is a gross usurpation of power, one that would allow ideologically motivated bureaucrats to dictate huge sections of social, economic, and political life. The latter is utterly unjustified by the insurance argument, which, at best, licenses us to have a couple of think tanks and maybe someone at the Pentagon messing around with war games. We need much stronger arguments for forming regulatory agencies along these lines.