By Kim Gill
It can produce the glossiest photos your smartphone could ever create, transport you to your favorite fantasy realm, and write books and movie scripts that would take the human brain months or even years to conceptualize. No one can deny the power of artificial intelligence (AI) as an emerging technology. AI has dominated the tech discourse for months, even as its potential dangers have come into question. However quickly it can produce extraordinary content, people are worried that this technology will be used to replace, erase, and change people's livelihoods and identities.
This past week, AI dominated the news cycle as some of the most prominent CEOs in the tech industry, including Elon Musk, Mark Zuckerberg, and Bill Gates, held a closed meeting with the U.S. Senate for the first AI Insight Forum to discuss the potential dangers of AI and how to regulate it properly. The forum's mission was to build a foundation for bipartisan tech policy. However, critics took issue with the forum's setup and questioned its legitimacy. The meeting was closed to cameras and the press, and senators were not allowed to ask live questions, having to submit written questions beforehand instead.
Despite the forum's setup, the senators expressed shared concern that the technology could be used for nefarious purposes. Senators from across the aisle met with the press after the forum and addressed these concerns. According to The Hill, Senator Richard Blumenthal (D-Connecticut) said, “These tech executives, giants of the tech industry, are coming forward because they understand the American people are rightfully and understandably fearful about what AI could mean to their freedom and economic opportunity.”
Senator Mike Rounds (R-South Dakota), on the other hand, said, “AI’s not going away. It’s going to be here for a long, long time. The challenge that we have is when we try to catch up on, first of all, the development here in the United States, how can we stay ahead of the rest of the world? Can we lead the rest of the world? And second of all, can we do it in such a fashion that’ll benefit the people?” per The Hill.
These statements underscore valid fears the public has about the technology. It's currently one of the biggest concerns for the Writers Guild of America (WGA) and the Screen Actors Guild–American Federation of Television and Radio Artists (SAG-AFTRA), which have been on strike for the past few months. One of their main concerns is that studios will use AI to write scripts for movies and television shows, replacing writers and actors with AI to cut studio costs.
In an interview with Variety this past week, actor Sean Penn presented a controversial example of how digital scans and voice data could be misused. “So you want my scans and voice data and all that. OK, here’s what I think is fair: I want your daughter’s, because I want to create a virtual replica of her and invite my friends over to do whatever we want in a virtual party right now. Would you please look at the camera and tell me you think that’s cool?” Penn said. His comments drew heavy criticism for invoking an underage girl to make his point. Still, his analogy illustrated a disturbing way the technology could be used and why it needs to be regulated.
One of the major problems surrounding AI is the difficulty of discerning whether what we see online is real, especially heading into the upcoming election cycle. AI-generated images of Donald Trump in prison circulated online during his second indictment, and in the past month, AI-generated images of Black Lives Matter and far-right protests have spread alongside propaganda messages. With disinformation running rampant, people fear AI could influence future elections. They also fear being tied to crimes they didn't commit.
One thing is for sure: we must regulate AI before it runs amok. How long that will take is the question. In the meantime, social media platforms must make efforts to help the public properly discern what is authentic from what is artificial.