Jake Browning

Is it bad faith? The Politics of Regulating AI and SuperAI

Updated: Jul 3, 2023

The central puzzle of the current AI doomsayers is that their apocalyptic predictions are often compatible with designing, building, and deploying current systems that seem prima facie dangerous. The disconnect is especially pronounced when signatories like Sam Altman can call for regulations to stave off AI Armageddon while also threatening to pull out of Europe if regulators hold ChatGPT to data privacy standards. How can doomsayers want regulations while avoiding regulations? Is the whole move in bad faith?


The truth, though, is more prosaic: contemporary large language models, while impressive, are intrinsically stupid. They may show "sparks" of superintelligence, but their understanding is ultimately shallow. The threats they pose are banal rather than existential: disinformation, job redundancies, and the like. No one is worried that ChatGPT poses an existential risk. Thus, regulations on superAI won't have any effect on current AI, because the current stuff isn't super at all.


But the other issue is that current AI, because it is stupid, is already doing things that are arguably illegal. Hence the proposals for regulation from various governments. The difficulty for people like Altman is that current models cannot be fixed; no amount of alignment work seems to stop them from misbehaving: making up libelous and slanderous claims, plagiarizing and reproducing copyrighted content, spitting out offensive and biased comments, and generally being unreliable. As many have noted, these failures are effectively baked into the design of these systems, and it is not clear how they can be avoided.


As a result, Altman et al. need to oppose regulating these chatbots like other products. These chatbots should have a huge "for entertainment purposes only!" label flashing on them at all times, yet they are being integrated into search engines and other truth-sensitive programs as if they were reliable. Any other company that knowingly pushed nonsense as fact would be held accountable for it, so Altman et al. need these systems to be exempted from those standards. They want an "AI regulation" that is more accepting of these systems' intrinsic faults.


This is compatible with claiming that truly dangerous superAI is possible and should be regulated. The two are, in fact, complementary: if a new AI regulatory agency were designed and staffed by current AI people who understand the tech, it could regulate current chatbots according to standards they can actually meet, effectively granting them wiggle room for all their bullshit, while also preventing "truly dangerous" future tech from destroying us all. There is no inconsistency in saying that current tech should not be held to "impossible" standards, such as the data privacy, copyright, and slander laws that current LLMs can't satisfy, while also saying that a future superintelligence must be held to very high standards.


The takeaway is that there is no bad faith here: they believe superintelligence is possible, but they also think current chatbots are pretty stupid. They also don't know how to make current chatbots less stupid. To avoid having them pulled from commercial use by regulation, they simply don't want them held to existing standards requiring content to be reliable, non-discriminatory, copyright-respecting, and so on.




