"AI and the Election: Deepfakes, Chatbots, and the Information Environment," Professor Andrew Hall
08 Nov 2024
The Dawn of a Paradigm Shift and Historical Context
- The current era is at the dawn of a paradigm shift in content creation, distribution, and consumption, which may have significant implications for democracy (23s).
- This shift is not entirely new, as human history has seen repeated disruptions to information ecosystems due to new technologies (1m2s).
- The printing press, for example, was a highly disruptive technology that democratized access to information and led to significant changes, including the Reformation and the restructuring of the Catholic Church's power in Western Europe (1m33s).
- The printing press also brought about a period of turmoil, violence, and efforts to control the spread of ideas, as well as the creation of fake and misleading content (2m12s).
- Adapting to the printing press took time, and people had to learn how to verify the authenticity and accuracy of printed information (2m35s).
- The industrialization of printing in the 19th and 20th centuries further increased the scale and frequency of printed content, leading to the mass production of information (3m24s).
- This increased production capacity led to the creation of a large volume of content, including what might be considered low-quality or misleading information (3m52s).
- The historical context of technological disruptions to information ecosystems can provide insights into the current challenges and opportunities presented by AI and its impact on the election and democracy (48s).
- Propaganda has been used throughout history to shape public opinion, with wealthy individuals buying newspapers to promote their ideas, for better or worse, as seen in the example of yellow journalism in the late 19th and early 20th centuries (3m57s).
- Yellow journalism, characterized by sensationalized and often inaccurate reporting, played a significant role in shaping public opinion, particularly around the sinking of the USS Maine, which was blamed on Spain despite uncertainty about the actual cause (4m17s).
- The mass printing of newspapers created a new information environment that took time to adapt to, ultimately leading to the development of modern journalistic practices, including journalism schools and the importance of citing sources (4m55s).
- Father Charles Coughlin, a Catholic priest and radio personality, was a prominent figure in the early days of radio, using his platform to espouse anti-Semitic and pro-Hitler views to a massive audience of 30 million weekly listeners in the 1930s (6m2s).
- Coughlin's use of radio as a propaganda tool was eventually curtailed through legal action by the Roosevelt administration, highlighting the need for institutions and norms to regulate how new technologies shape public opinion (6m58s).
- The development of radio and television as mass media platforms has continued to disrupt the information environment, with each new technology requiring adaptations and the development of new norms and laws to regulate their use (7m7s).
- The first televised presidential debate marked another significant shift in how American voters learned about politicians and their policies, highlighting the ongoing impact of technological advancements on the information environment (7m43s).
- The advent of television required people to learn new behaviors in judging individuals on screen, as seen in the debate between JFK and Nixon, where JFK's use of makeup and Nixon's visible sweating were notable factors (8m0s).
- The first televised presidential debates were not well produced, and it took time to develop formats that presented candidates in a more informative way (8m30s).
- The rise of the internet, particularly the World Wide Web, was a disruptive moment that threatened professionalized journalism, and it has been a lengthy process for newspapers to adapt and make money online (9m5s).
- The Arab Spring marked the beginning of the second internet era, where social media allowed everyday people to speak directly to large groups at a low cost and with little effort (9m57s).
- Initially, social media was seen as a positive force for democracy, allowing people to speak out against their officials and coordinate opposition; that optimism has since flipped, with concerns that social media is now being used to control the information environment for personal gain (10m30s).
- The history of technology disrupting the information environment is a deep topic, and this brief overview highlights the significant changes that have occurred, from the printing press to social media, and how these changes have impacted democracy (11m32s).
- A manipulated photo of Princess Kate with her children was released by the palace, but the Associated Press (AP) later retracted it after analysis showed it had been altered, highlighting the evolving information environment (12m2s).
- The photo was not a deepfake, but rather an image manipulated in Photoshop to make it appear that all the children were smiling in a single picture (13m43s).
- The incident triggered suspicion that content can now be completely fabricated in ways that would have been difficult in the past, leaving people quite wary (12m55s).
- The biggest effect of deepfakes may not be that people see deepfakes, but rather that they start to believe things are fake even when they're not (14m13s).
- The potential solution of watermarking or labeling content as AI-generated may not be effective in addressing the issue, as it could lead to further confusion and mistrust (14m29s).
- Both the nature of the problem and its solution are more complex than initially thought, requiring a deeper understanding of the evolving information environment (14m56s).
- The speaker works at the GSB studying democracy and technology, and also works in tech, providing a unique perspective on the issues at the intersection of the two (15m12s).
- The speaker helped co-chair the AI committee at the GSB and is a leader in the business and beneficial technology pillar, spending a lot of time thinking about AI-related issues (15m34s).
- The discussion will cover three topics: where deepfakes stand, how people are adapting to them, and what can be done to make these technologies and democracy work better together (16m10s).
The Impact of Deepfakes on Elections
- There haven't been many deepfakes, contrary to expectations, and the ones that have appeared haven't had a significant impact on the election (16m28s).
- An example is a deepfake photo circulated during a hurricane in Florida, which was politicized to criticize the Biden Administration and Kamala Harris, but it was a "nothing burger" because it revealed no new information (16m51s).
- The deepfakes that have appeared have been either satirical or ineffective at shifting people's minds in any important way on their vote (18m25s).
- Despite predictions, deepfakes have not had a significant impact on the election, and it is surprising that none have been built in ways designed to have a big effect (18m13s).
- The explosion of ChatGPT and other AI tools was expected to lead to more deepfakes, but this has not occurred, and it's unclear why (18m50s).
- There are two possible reasons why deepfakes and AI-generated content may not have a significant impact on the election: people may have already recognized that the technology is not effective in changing minds, and the information environment is too noisy for deepfakes to cut through (19m4s).
- It is extremely hard to change people's minds, especially in America, and this might be a saving grace when it comes to AI-generated content (19m22s).
- Studies have consistently shown that online political ads and information have very small effects on people's attitudes because they already have strong views (19m57s).
- The noise from other information sources, such as news about the election cycle and hurricane relief, may make it difficult for deepfakes to have an impact (20m14s).
- People may already be primed not to think deepfakes are real, especially since a large fraction of Americans have used chatbots or similar tools themselves (20m36s).
- An alternative view is that the big, impactful deepfakes have not been seen yet, and a super compelling video with a salacious claim could cut through the noise and potentially change people's views (20m52s).
- A deepfake with a huge, believable claim about a presidential candidate could potentially move the needle, especially for swing voters in battleground states (21m30s).
- The 2016 election's October surprise, when FBI Director James Comey announced that the investigation into Hillary Clinton's emails was being reopened, had a huge effect on the outcome, and something similar could happen in the next week or two (21m48s).
- However, it's possible that a significant deepfake may not happen at this point because many Americans have already voted, and early voting is becoming increasingly popular (22m18s).
- The timing of releasing deepfakes has changed, as people can now vote early, and it's possible that half of the people have voted before Election Day, making the impact of deepfakes potentially less significant (22m35s).
Challenges in Close Elections and AI-Generated Evidence
- Close elections make election administration more challenging, requiring recounts and faster processes, which can lead to lengthy delays in determining the winner (23m9s).
- In the event of a close election, people may make strong claims about the vote-counting process, questioning its accuracy and fairness, and it's likely that AI will be used to generate evidence for these claims (23m39s).
- Researchers, including Justin Grimmer, have worked with individuals who believe the election system is rigged and who have produced statistical "evidence" to support their claims, the kind of material that may be amplified by AI after Election Day (23m57s).
- While civic engagement and scrutiny of the election system are essential, it's crucial to be skeptical of statistical efforts that claim the election is fake, as they may be based on flawed or misleading information (24m25s).
- A self-styled election fraud expert in Nevada used ChatGPT to analyze election returns data and claimed to have found evidence of election fraud in the Republican primary election in Washoe County, which led to a delay in certifying the election (25m2s).
- The expert's use of ChatGPT and claims of using sophisticated artificial intelligence platforms and supercomputers made his claims seem more persuasive to those without a strong understanding of statistics (26m15s).
- In one demonstration with ChatGPT, a user uploaded data and asked the AI to analyze it, but the model stopped analyzing the data and started making things up, a phenomenon known as hallucination that is common in generative AI tools (26m46s).
- The AI reported back to the user with false information, including statements about contacting law enforcement and not certifying an election as legitimate, which the user claimed was drawn from the data but was actually fabricated (27m10s); a simple way to cross-check such claims is sketched at the end of this section.
- This type of incident may become more common, especially during close elections, and could potentially be used to manipulate public opinion (27m43s).
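The practical lesson from the Washoe County episode is that any number an LLM reports about an uploaded file should be recomputed from the raw data before anyone acts on it. Below is a minimal sketch of that cross-check; the file name and column layout are hypothetical, not from the talk or the actual county data:

```python
import pandas as pd

# Hypothetical precinct-level returns file; the name and columns are
# illustrative only.
returns = pd.read_csv("washoe_returns.csv")

# Suppose a chatbot claims "candidate A's vote share exceeds 100% in several
# precincts." Recompute the statistic directly instead of trusting the claim.
returns["total"] = returns["votes_a"] + returns["votes_b"]
returns["share_a"] = returns["votes_a"] / returns["total"]

impossible = returns[returns["share_a"] > 1.0]
print(f"precincts with share above 100%: {len(impossible)}")  # 0 if the claim was hallucinated
```

The point is not this specific check but the habit: every figure the model asserts should be reproducible from the data it was given.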
Public Skepticism and Adaptation to Generative AI
- A survey was conducted to investigate how people update their beliefs about what's real or fake online, and how they adapt to the increasing presence of generative AI (28m9s).
- The survey found that Americans are generally skeptical of information they see online, with most people not believing that a single piece of content, such as an image or video, is necessarily real (29m38s).
- The survey asked participants to rate their confidence in the authenticity of a piece of content on a scale of 0 to 3, and found that people are more skeptical of content seen on social media than on TV news (29m41s); a toy tabulation of this kind of rating data appears at the end of this section.
- The survey also found that people's skepticism varies by age, with older people being more skeptical of online content (28m27s).
- The study's findings suggest that people are adapting to the increasing presence of generative AI by being more skeptical of online information (28m15s).
- Americans are generally skeptical about the authenticity of audio, photo, and video content, with average confidence levels falling below "somewhat confident" across all three types of media (30m8s).
- The levels of skepticism are relatively similar across audio, photo, and video, but people tend to be slightly more willing to believe the authenticity of video content, possibly due to the higher cost and difficulty of creating convincing fake videos (30m29s).
- People are even less confident in the authenticity of content if it's on social media compared to television, likely because they assume TV content has undergone some level of verification (30m53s).
- Older Americans (55 and above) are the most skeptical about the authenticity of content, while younger Americans (18-34) are less skeptical across all types of media and venues (31m45s).
- There are differences in skepticism levels based on age, with older Americans being more skeptical, but it's unclear whether this is due to adaptation or pre-existing attitudes (32m1s).
- Small numbers of people can make a significant difference in elections, especially in close contests, making it challenging to study the impact of factors like skepticism about content authenticity (32m38s).
- Further research is needed to understand how swing voters in battleground states think about the information environment, which could require more funding and a more focused approach (32m49s).
- Preliminary findings suggest that there are differences in skepticism levels based on ethnicity, race, income, and education, with more educated, higher-income, and white individuals being more skeptical on average (33m14s).
- A concern was raised about whether surveys can measure the impact of deepfakes: respondents may claim they wouldn't be fooled, much as people claim they wouldn't fall for telemarketing scams, but this may not reflect their true behavior (33m46s).
- To address this issue, researchers are studying deepfakes "in the wild" to see how people react to them in real-life situations, rather than just relying on surveys (34m28s).
- The goal is to determine whether people will adapt to the presence of deepfakes by becoming more skeptical of social media content, and whether this adaptation will vary across different age groups (34m54s).
- Research has shown that older people tend to be more skeptical of online content, while younger people are more likely to be fooled, but also more sophisticated in certain ways, such as recognizing the unreliability of social media profiles (35m16s).
- The intensity of prior beliefs may also play a role in how people respond to deepfakes, with deepfakes serving as reinforcement for existing beliefs rather than changing people's minds (35m57s).
- Further research is needed to study the impact of deepfakes on people with different prior beliefs and to determine how deepfakes can be used to reinforce or challenge existing beliefs (36m21s).
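To make the 0-to-3 confidence measure concrete, here is a toy tabulation of the kind of averages described above. The responses, the coding of the scale (with 2 = "somewhat confident"), and the category labels are all invented for illustration; the actual survey instrument is not described in the talk:

```python
import pandas as pd

# Invented responses; scale assumption: 0 = not at all confident,
# 1 = a little, 2 = somewhat, 3 = very confident.
df = pd.DataFrame({
    "medium":     ["audio", "photo", "video", "photo", "video", "audio"],
    "venue":      ["social media", "tv news", "social media",
                   "social media", "tv news", "tv news"],
    "age_group":  ["18-34", "55+", "35-54", "55+", "18-34", "35-54"],
    "confidence": [1, 2, 2, 0, 3, 1],
})

# Mean confidence by medium and venue: cell values below 2 sit under
# "somewhat confident", matching the pattern the survey reports.
print(df.groupby(["medium", "venue"])["confidence"].mean().unstack())

# Mean confidence by age group: lower values for older groups would
# indicate greater skepticism.
print(df.groupby("age_group")["confidence"].mean())
```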
The "Liar's Dividend" and the Erosion of Trust
- The rise of deepfakes and AI-generated content has created a huge problem where people can no longer be certain what is real and what is fake, leading to increased suspicion and potential for the spread of misinformation (37m16s).
- This phenomenon is often referred to as the "liar's dividend": a politician can deny the authenticity of a real scandal, and conversely, a fake scandal can be presented as real (37m41s).
- Examples of this have already been seen in elections in other parts of the world, such as in Turkey, where a real scandal was denied by the person implicated, and a fake scandal was presented as real (37m57s).
- A similar instance occurred in the US, where Trump claimed that pictures of a well-attended rally by Kamala Harris were fake (38m25s).
- The increasing use of AI-generated content has created a sense of uncertainty, where people can no longer trust their senses, and factual claims can no longer be proven or disproven with certainty (39m28s).
- This is a return to the historical norm: for most of human history, people could not make factual claims about a politician's actions and directly prove them (39m41s).
- The last 50 years, where audio and video recordings were considered reliable evidence, are now seen as an unusual period in time, and we are returning to a state where information is more uncertain (39m55s).
- To address these problems, new solutions will be needed, which may involve a return to old methods or the development of new technologies to verify the authenticity of information (39m1s).
- The traditional method of verifying information through audio or video recordings is no longer sufficient due to the development of deepfakes and other technologies that can manipulate media, making it difficult to determine what is true or false (40m46s).
- Before these manipulation technologies emerged, other methods of verifying information had developed, notably journalists who built reputations for making accurate claims that could be checked against the real world (41m3s).
- The business model of journalism needs to be restored, and new methods of verifying information need to be developed, such as online platforms that use fact-checking and other techniques to determine the accuracy of information (41m43s).
- One example of a new method is Community Notes, a feature that uses a randomly recruited set of everyday people to fact-check information posted online, and it has had a surprising amount of success (42m32s).
- Community Notes was originally developed at Twitter under the name Birdwatch and lets everyday people fact-check posts and add context to them (42m34s); a simplified sketch of its ranking idea appears at the end of this section.
- The feature has been successful in fact-checking posts, including those from high-profile individuals like Elon Musk, and has been able to provide accurate context to users (43m5s).
- Other methods of verifying information are being developed, including cameras that can cryptographically stamp images, allowing users to trace an image's origin and determine its authenticity (43m35s); a minimal signing-and-verification sketch also appears at the end of this section.
- News organizations like The New York Times are using these cameras to validate the authenticity of images and provide accurate information to users (43m48s).
- The development of these new methods and technologies is a positive step towards creating a more accurate and trustworthy information environment (44m6s).
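The Community Notes ranking algorithm is open source; what follows is a heavily simplified sketch of its core bridging idea, not the production system. Each rating is modeled as a global mean plus rater and note intercepts plus a product of one-dimensional "viewpoint" factors, and a note is scored by its intercept, the helpfulness left over once viewpoint alignment is factored out. All numbers here are invented:

```python
import numpy as np

# Toy ratings: rows = raters, cols = notes; 1 = "helpful", 0 = "not helpful",
# NaN = not rated. Entirely made-up data for illustration.
R = np.array([
    [1, 0, np.nan],
    [1, np.nan, 0],
    [np.nan, 1, 0],
    [1, 1, np.nan],
], dtype=float)

n_users, n_notes = R.shape
rng = np.random.default_rng(0)
mu = 0.0
b_u = np.zeros(n_users)                    # rater intercepts
b_n = np.zeros(n_notes)                    # note intercepts (the score we want)
f_u = rng.normal(0, 0.1, n_users)          # rater viewpoint factors
f_n = rng.normal(0, 0.1, n_notes)          # note viewpoint factors

lr, reg = 0.05, 0.1
obs = [(u, n, R[u, n]) for u in range(n_users)
       for n in range(n_notes) if not np.isnan(R[u, n])]

# Fit rating ~ mu + b_u + b_n + f_u * f_n by stochastic gradient descent.
for _ in range(2000):
    for u, n, r in obs:
        e = r - (mu + b_u[u] + b_n[n] + f_u[u] * f_n[n])
        mu += lr * e
        b_u[u] += lr * (e - reg * b_u[u])
        b_n[n] += lr * (e - reg * b_n[n])
        f_u[u], f_n[n] = (f_u[u] + lr * (e * f_n[n] - reg * f_u[u]),
                          f_n[n] + lr * (e * f_u[u] - reg * f_n[n]))

# A note is surfaced when its intercept clears a threshold, i.e. when raters
# across viewpoints agree it is helpful, not when raw approval is high.
print("note intercepts:", np.round(b_n, 2))
```

The production system adds more factor dimensions, confidence thresholds, and safeguards, but the design choice is the same: a note needs approval from raters who usually disagree, not just a high raw approval rate.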
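For the cryptographic stamping idea, here is a minimal sign-at-capture, verify-later sketch using an Ed25519 signature via the Python `cryptography` package. Real provenance systems, such as the C2PA standard used by provenance-enabled cameras, sign structured metadata and chain edit history rather than raw bytes; this compresses the idea to its core and simulates both sides in one process:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real scheme the private key lives in the camera's secure hardware
# and signs at capture time; here both sides are simulated together.
camera_key = Ed25519PrivateKey.generate()
public_key = camera_key.public_key()

image_bytes = b"...raw image bytes..."      # stand-in for an actual photo file
signature = camera_key.sign(image_bytes)    # the "stamp" shipped with the image

# A newsroom checks the file against the manufacturer's published public key:
try:
    public_key.verify(signature, image_bytes)
    print("signature valid: bytes unchanged since capture")
except InvalidSignature:
    print("signature invalid: the image was altered after signing")
```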
Opportunities and Challenges of Generative AI in Democracy
- Researchers have been working on using generative AI to improve democracy, for example by ingesting political parties' platforms at the local level in the US to create well-informed chatbots that can converse with voters, providing quick summaries of political information and increasing voter knowledge (44m11s); a minimal sketch of one such design appears at the end of this section.
- Studies have shown that people like using these chatbots, are significantly more informed after using them, and want to use them in the future, indicating a powerful way to synthesize political information (44m58s).
- The current disruption to the information environment due to AI is significant, but it is also true that similar disruptions have occurred in the past, and humans have adapted, albeit sometimes painfully (45m15s).
- The opportunities presented by this technology should be closely monitored, as it has the potential to make people smarter and benefit democracy (45m43s).
- Generative AI can be used to create many versions of an ad, serve them in an automated loop to see which ones are working, and then shift toward those, a technique already used in marketing (47m36s); a toy version of this loop is sketched at the end of this section.
- Research has shown that sustained, repeated exposure to information can change people's opinions over time, and groups like Future Forward are using machine learning to serve massive numbers of impressions of factual information (46m31s).
- The impact of using machine learning to generate this type of information on democracy is not yet fully understood and requires further study (47m11s).
- Generative AI can be used to create engaging content, from organic posts to ads, potentially including political ads, which is likely already happening (47m52s).
- There is a history of overstating the impact of new technologies on individuals, as seen in marketing and politics, with Cambridge Analytica being a notable example of this phenomenon (48m25s).
- The combination of new technologies, such as AI, requires close monitoring to understand their potential effects (48m55s).
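The talk does not describe how the platform-ingesting chatbots are built. One common pattern consistent with the description is retrieval-augmented generation: embed chunks of the platform text, retrieve the most relevant chunk for a voter's question, and instruct the model to answer only from it. The sketch below assumes an OpenAI-style API; the model names, platform text, and prompt are all illustrative, not the researchers' actual system:

```python
import numpy as np
from openai import OpenAI  # assumes an OpenAI-style API is available

client = OpenAI()

# Chunks of a hypothetical local party platform for the bot to ground itself in.
chunks = [
    "Plank 3: The party supports a half-cent sales tax for transit expansion.",
    "Plank 7: The party opposes rezoning single-family neighborhoods.",
]

def embed(texts):
    out = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in out.data])

chunk_vecs = embed(chunks)

def answer(question):
    q = embed([question])[0]
    # Retrieve the most relevant plank by cosine similarity, then answer from it.
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    context = chunks[int(np.argmax(sims))]
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer only from this platform text: {context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("Where does the party stand on transit funding?"))
```

Grounding answers in retrieved platform text is also a partial hedge against the hallucination problem discussed earlier, though it does not eliminate it.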
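The automated ad loop described above is essentially a multi-armed bandit: keep showing variants, observe responses, and shift impressions toward what performs. Here is a toy epsilon-greedy version with invented click-through rates; nothing in it reflects any real campaign's system:

```python
import random

# Invented "true" click-through rates, unknown to the optimizer.
true_ctr = {"ad_a": 0.03, "ad_b": 0.05, "ad_c": 0.02}
shows = {ad: 0 for ad in true_ctr}
clicks = {ad: 0 for ad in true_ctr}
epsilon = 0.1  # fraction of impressions spent exploring

for _ in range(10_000):
    if random.random() < epsilon:              # explore: try a random variant
        ad = random.choice(list(true_ctr))
    else:                                      # exploit: best observed CTR so far
        ad = max(true_ctr, key=lambda a: clicks[a] / shows[a] if shows[a] else 0.0)
    shows[ad] += 1
    clicks[ad] += random.random() < true_ctr[ad]   # simulated user response

for ad in true_ctr:
    rate = clicks[ad] / shows[ad] if shows[ad] else 0.0
    print(f"{ad}: shown {shows[ad]:>5}, observed CTR {rate:.3f}")
```

Production systems use more statistically careful allocation (for example, Thompson sampling), but the loop structure, explore a little and exploit what's winning, is the same.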
Government Regulation and AI Safety
- The question of whether government should take action before or after new information technologies are established in the market is a complex one, with no clear answer (49m2s).
- Some governments have passed laws intended to address misinformation, but these laws can be used to allow governments to censor content they deem "unsafe," which can be problematic (49m49s).
- The concept of AI safety is important, but it can be used as rhetoric to argue for laws that engineer what people are allowed to say or think, which is not a successful strategy (50m3s).
- The US has not passed any federal laws addressing these issues, but globally, there is a need for a middle-ground approach that addresses clear problems while avoiding overregulation (50m38s).
- Celebrities are already being imitated in ways that are causing problems, and there are broader issues, such as the creation of sexual imagery, that need to be addressed (50m49s).
- A potential approach is to start by addressing clear problems and being cautious of going too far, as governments' efforts to regulate social media have not been effective in the past (51m10s).
- The state of New Jersey is mandating media literacy courses in public schools to address the issue of misinformation, and this approach is being considered as a potential model for other institutions, including Stanford University (51m29s).
- Stanford University has people in the Education school working on media literacy, and they are making progress in teaching it in a way that works for young people, although it can be challenging to make the subject engaging and not overly preachy (52m11s).
- Stanford has also rolled out an undergraduate civics education program that gives students a broader context on democracy and the potential dangers of creating tech products without considering their impact on society (52m52s).
- The current media environment is highly bifurcated, with people often only consuming news from sources that align with their existing views, and it is unclear how to transition to a more unified and open-minded media environment (53m40s).
- The problem of creating a more balanced media environment is partly due to the market, as it can be challenging to make money from news sources that aim to present a neutral or middle-ground perspective (54m12s).
- Some elite news services, such as The New York Times and The Wall Street Journal, are trying to maintain a high level of journalistic integrity and present true statements, even if they have a particular slant or perspective (54m37s).
- Elite media services capture enormous subscription revenue from affluent readers, but no popular media service is doing the same for the general public, indicating a need for a shift in consumer demand or education to drive market change (54m57s).
- A solution to the issue may emerge spontaneously, as seen historically, rather than being planned, and could potentially involve new bundles of different types of content on social media (55m26s).
- In mid-20th century America, people consumed their political information through newspapers, which they often bought for other reasons such as radio schedules, sports news, and entertainment, creating a market for centrist news (55m31s).
- The nature of the economic exchange in the newspaper industry led to the creation of a market for centrist news, and a similar model could potentially be applied to social media to promote more balanced information consumption (55m51s).
- The path forward may involve finding new ways to bundle different types of content together on social media, but the exact solution is uncertain (56m0s).