The AI Dilemma: What You Need to Know About Tristan Harris and Aza Raskin’s Latest Work
We recently released a video commentary on “The AI Dilemma,” a presentation by Tristan Harris and Aza Raskin about some of the possible negative impacts of the rapid deployment of generative AI tools.
They are two of the co-founders of the Center for Humane Technology, a tech watchdog group that raised alarms about social media and its impact on society. They are also the hosts of the podcast Your Undivided Attention, where they explore the challenges and opportunities of humane technology.
Their latest work focuses on the emergence of new forms of artificial intelligence (AI), especially large language models (LLMs), which can generate text, images, code, and more from a few words or prompts. They warn about the potential risks and harms of these AI systems, such as misinformation, manipulation, bias, and unintended consequences. They call for more responsibility, transparency, and regulation of AI development and deployment, as well as more public awareness and education about its capabilities and limitations. They also advocate for AI that works for human benefit: AI that enriches our lives and helps us address global challenges like climate change and public health.
In this blog post, I will summarize some of their key insights and arguments, as well as their recent testimony before Congress on this topic. I hope this will help you understand why this is an important issue that affects all of us, and what we can do to shape a better future with AI.
What are LLMs and why are they different from previous AI?
LLMs are a type of AI that can learn from massive amounts of data, most of it from the internet, and generate coherent and convincing outputs based on a given input or prompt. For example, one of the most famous LLMs is GPT-4, the newest iteration of the AI that underpins ChatGPT, a chatbot that can converse with humans on various topics.
LLMs differ from earlier AI systems that were built to automate narrow tasks, such as reading license plate numbers or searching for cancers in MRI scans. These new AI systems are showing the ability to teach themselves new skills, without human supervision or guidance. For example, GPT-4 has learned how to play chess, write code, and compose music, just by learning to predict the next piece of text on the internet.
This is surprising and impressive, but also alarming. As Harris and Raskin point out, these AI systems have emergent capabilities that nobody asked for or expected. They also have no understanding of, or accountability for, what they generate or how it affects humans. They are driven by a single objective: predict the next token as accurately as possible. Because they imitate whatever patterns dominate their training data, and sensational, controversial, or extreme content is plentiful online, they can reproduce it just as readily as factual, balanced, or nuanced text.
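To make the “predict the next token” idea concrete, here is a minimal Python sketch. The toy bigram table below is a stand-in of my own invention, not how GPT-4 actually works; real LLMs learn billions of parameters over internet-scale text. But the generation loop is the same basic idea: predict the most likely next token, append it, repeat.

```python
# A toy illustration of next-token prediction (not GPT-4's real mechanics).
# The "model" is a bigram table counting which word tends to follow which.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the mat .".split()

# Count, for each word, how often each other word follows it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(prompt: str, n_tokens: int = 6) -> str:
    """Greedy decoding: repeatedly append the most likely next token."""
    tokens = prompt.split()
    for _ in range(n_tokens):
        candidates = follows.get(tokens[-1])
        if not candidates:
            break
        # Maximizing likelihood means reproducing whatever patterns
        # dominate the training data, true or false, measured or extreme.
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(generate("the"))  # echoes the most common patterns in the tiny corpus
```

The point of the sketch is the comment inside the loop: nothing in the objective rewards truth, only fidelity to the training data, which is why Harris and Raskin worry so much about what dominates that data.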
What are the dangers and harms of LLMs?
Harris and Raskin identify several dangers and harms of LLMs that we should be aware of and concerned about. Here are some of them:
- Misinformation: LLMs can generate fake news, false claims, or misleading statistics that can spread online and influence people’s beliefs and behaviors. For example, GPT-4 can write convincing articles about topics like politics, health, or science, without any regard for truth or evidence.
- Manipulation: LLMs can manipulate people’s emotions, preferences, or actions by generating personalized and persuasive messages that appeal to their biases or vulnerabilities. For example, GPT-4 can write tailored emails or ads that can persuade people to buy products, vote for candidates, or join movements.
- Bias: LLMs can reflect and amplify the biases and prejudices that exist in the data they learn from. For example, GPT-4 can generate sexist, racist, or homophobic outputs that can harm marginalized groups or reinforce stereotypes.
- Unintended consequences: LLMs can have unforeseen or undesirable impacts on human society and culture by changing how we communicate, learn, create, or relate to each other. For example, GPT-4 can undermine our trust in information sources, our creativity and originality, our critical thinking skills, or our social bonds.
What are some solutions to address these issues?
Harris and Raskin propose some solutions to address these issues and ensure that AI works for human benefit. Here are some of them:
- Regulation: They call for more government oversight and regulation of AI development and deployment. They suggest creating a new agency for technology oversight that can monitor and audit AI systems for safety, fairness, accountability, and transparency. They also recommend requiring tech companies to disclose how their algorithms work and what data they use.
- Transparency: They call for more public awareness and education about AI capabilities and limitations. They suggest creating tools and platforms that can help people verify the sources and accuracy of information generated by AI. They also recommend empowering users with more choices and control over their data and attention.
- Responsibility: They call for more ethical and humane design of AI systems. They suggest creating standards and guidelines that can ensure that AI systems respect human values, dignity, and rights. They also recommend involving diverse and inclusive stakeholders in the design and evaluation of AI systems.
What can we do as individuals?
As individuals, we can also do our part to shape a better future with AI. Here are some actions we can take:
- Be informed: We can educate ourselves and others about the benefits and risks of AI, and how it affects our lives and society. We can also stay updated on the latest developments and debates on this topic, and participate in public discussions and consultations.
- Be critical: We can question and verify the information we encounter online, especially if it is generated by AI. We can also challenge and report any harmful or misleading outputs that we see or receive from AI systems.
- Be creative: We can use AI as a tool to enhance our creativity and expression, not to replace or diminish it. We can also appreciate and support human-made art, culture, and knowledge that AI cannot replicate or replace.
Conclusion
AI is a powerful and transformative technology that can bring many benefits to humanity, but also many challenges and dangers. We need to be aware of the potential impacts of AI, especially LLMs, on our society and culture, and take action to ensure that AI works for human benefit, not against it.
Tristan Harris and Aza Raskin are two of the leading voices on this issue, and their latest work is worth paying attention to. They have testified before Congress on this topic, and shared their insights and recommendations on how to address the AI dilemma.
I hope this blog post has helped you understand why this is an important issue that affects all of us, and what we can do to shape a better future with AI. If you want to learn more, you can check out their podcast, Your Undivided Attention, or their website, Center for Humane Technology.
Thank you for reading!
This post was written with the assistance of ChatGPT (GPT-4).