Jake Browning

Avoiding a False Dilemma about AI Doomerism

Updated: Aug 8, 2023

There is a surge of articles demanding that people stop worrying about AI doom and focus instead on real-world, present-day harms (e.g., Stop talking about tomorrow’s AI doomsday when AI poses risks today). A common rejoinder is that such arguments don't even address the concerns of the AI Safety community (i.e., those often dubbed "doomers"). Safety advocates contend we should be doing both: addressing real, short-term risks and preparing for future scenarios.


This is a fair response. While some dismiss it as a distraction, it is appropriate to care about both short-term and long-term risks for any technology: we can ensure natural gas is cleaner today while hoping to eliminate it tomorrow. So there is nothing inappropriate about actively budgeting for future problems. Even for those (such as myself) who are skeptical that AI poses an existential danger, there is no in-principle reason to reject thinking about future risks.


The difficulty, as Evgeny Morozov points out, is that the long-term solutions on offer often diminish our current institutions and their ability to fight present-day AI risks. The solution typically involves setting up a parallel network of AI Safety experts: non-profits, Big Tech companies who say the right things, and new government regulatory agencies focused on future harms. This is often done under the presumption that only people who have studied these systems in depth and explored their potential harms are truly capable of figuring out what to do. The push consolidates power in the hands of those committed to the right beliefs about AI risk.


The mistake of this approach is blasting (or threatening) those who are focused on present concerns, such as AI ethicists, congresspersons, regulators, and government institutions. These actors are often portrayed as out of step, lacking expertise, and too hidebound to address the increasingly complex nature of AI--effectively concerned with lesser risks, not the kind that worry the real experts. The end result is heightened animosity between the groups. And it is fully deserved; neither side respects the other.


But a simpler, healthier way through is available: the AI Safety community should actively support and promote current institutions. Moreover, they should do so by their own creed: bolstering present institutions and supporting those concerned about present harms is the best way to create the kinds of institutions capable of dealing with any AI risk. This is especially true of government institutions: ensuring they can actively shape the development of dangerous AI, starting with the small risks happening right now, is the best pathway to empowering them to address long-term risks later.


This might seem dismissive of the AI Safety community's concerns, but it isn't. Their attempt to set up parallel institutions is simply a politically naive strategy. If current government institutions aren't empowered to address current AI risks, no agency will have the trust and respect to handle them tomorrow. The AI Safety movement often proposes solutions, such as bombing data centers in other countries, that will never have broad, popular support. If extreme actions are ever necessary, having a populace that trusts its institutions is essential. And to build that trust, there is no substitute for hard work and experience dealing with problems--precisely what the AI Ethics community is trying to do.


A helpful illustration: the Paris Climate Agreement has effectively no enforcement mechanisms against countries that fail to live up to their promises. Other states can tut-tut a bad actor, but it is up to each country to abide by its commitments. If a government doesn't have robust, respected institutions, it will back down whenever living up to those commitments becomes unpopular. And for good reason: countries often seem perfectly willing to impose costs on working-class people while sparing business. If people do not trust institutions to have their interests at heart, they won't support actions that might penalize them. Building that trust requires government institutions to push back against the present harms created by industry, showing that no special interest controls the process.


This isn't a glamorous solution. Op-eds and TED talks about AI Safety attract eyeballs, and many billionaires are happy to fund existential-risk foundations. But this talk is a distraction--and irrelevant for preventing long-term harms--unless it also actively promotes current actions to combat the dangers of AI. If the AI Safety community is concerned about the future, it needs to spend just as much time bolstering present institutions as solving conjectural problems.


Moreover, they should do so regardless of whether the respect is reciprocated. The AI Safety community often demands respect, touting how many experts support it, without providing receipts that it is willing to fight AI harms here and now. Trust in one's expertise can't be built without a track record of success. If the AI Safety movement really wants respect, it has to earn it--and bad-mouthing critics isn't going to help. It just hurts the cause.


The simple takeaway: if you are concerned about the long term, build up present-day institutions and fight present-day harms. Any other approach really is misguided and should be dismissed out of hand; anyone who won't help the present isn't serious about helping the future.






