The Rapid Rise and Lingering Challenges of Generative AI
The generative AI boom began in 2022, but 2023 was when the panic truly set in. OpenAI’s release of ChatGPT in late 2022 propelled the technology into the mainstream, captivating consumers and prompting a wave of government scrutiny.
Unsurprisingly, generative AI’s challenges mirror those social platforms have faced over the past two decades. Meta and other tech giants have struggled for years with misinformation, labor exploitation, and nonconsensual content. Now, with AI in the mix, those same problems are taking on a new and daunting dimension.
Enter the world of outsourcing, where generative AI companies are building on the problematic foundations laid by social media giants. Content moderation tasks that were once outsourced to low-paid workers in the Global South now extend to training AI models. That arrangement adds yet another layer of distance, making it harder for researchers and regulators to understand how these AI systems are actually built.
Outsourcing also obscures where the intelligence in a product really comes from. When a piece of content disappears, who removed it: an algorithm or a human moderator? When a chatbot helps a customer, how much credit belongs to the AI, and how much to the worker providing support behind the scenes?
The response of generative AI companies to criticism echoes that of the social platforms. They pledge safeguards and acceptable use policies, much like the terms of service that govern online content. But as we’ve seen, such measures are easily circumvented, as the holes exposed in Google’s Bard chatbot demonstrate.
The question remains: can chatbot providers build models reliable enough to break free from the reactive cycles that have plagued social platforms? Industry leaders have committed to solutions such as adding digital “watermarks” to AI-generated media, but experts remain skeptical. However promising these measures seem, they are susceptible to circumvention and offer only partial remedies.
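To make the watermarking idea, and its fragility, concrete, here is a minimal, purely illustrative Python sketch of one published family of text-watermarking schemes: generation is nudged toward a pseudo-random “green” subset of the vocabulary keyed to the preceding token, and a detector later checks whether a suspiciously large share of tokens falls in that subset. Every name and constant below is an assumption made for illustration, not any vendor’s actual system.

```python
import hashlib
import random

GREEN_FRACTION = 0.5  # share of the vocabulary marked "green" at each step (illustrative)

def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    # Seed a PRNG with the previous token so the generator and the detector
    # can reconstruct the same pseudo-random "green" subset of the vocabulary.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * GREEN_FRACTION)))

def green_rate(tokens: list[str], vocab: list[str]) -> float:
    # Fraction of tokens that land in the green list keyed by their predecessor.
    # Watermarked text is generated to skew well above GREEN_FRACTION;
    # ordinary text hovers near it.
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    hits = sum(1 for prev, tok in pairs if tok in green_list(prev, vocab))
    return hits / len(pairs)
```

The weakness experts point to is visible in the detector itself: paraphrasing the text, whether by hand or with another model, pulls the green-token statistic back toward chance and erases the signal.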
To complicate matters further, generative AI supercharges disinformation and undermines the credibility of legitimate media. With these tools in hand, anyone can fabricate videos of politicians, candidates, and news anchors saying things they never said. Fake news proliferates, and the line between reality and fabrication blurs.
Companies like Meta and YouTube have introduced policies mandating clear labeling for AI-generated political advertisements. However, these policies fail to address the myriad other ways fake media can be produced and shared.
The problem only deepens as major tech companies shrink their trust and safety teams and scale back fact-checking programs, leaving fewer resources to combat deceptive and malicious uses of AI-generated content.
The recklessness once epitomized by Facebook’s “Move fast and break things” motto, and the unintended consequences that followed, now echo in the development of AI. Generative AI products are being built, trained, tested, and deployed with little transparency.
Regulators around the globe are trying to react to generative AI more promptly than they did to social media, but they still lag behind the rapid pace of development. As a result, the new wave of generative AI companies has little regulatory incentive to slow down.
This situation underscores a lesson about society more than about the technology itself. When capitalism rewards the pursuit of profit regardless of the social costs, the frightening thing is not what the technology can do, but how it is put to use.
The road ahead for generative AI will be challenging, but as we navigate this rapidly evolving landscape, we must prioritize transparency, accountability, and ethical deployment. Only then can we harness the technology’s potential while limiting its unintended consequences.