Half of American Adults Who Use Social Media Want Better Labels for AI Posts, CNET Finds

Anyone who has scrolled through social media lately knows that AI is everywhere. But we're not always good at recognizing it when we see it. That's a big problem, and our frustration with AI is growing.

AI slop has infected every platform, from soulless images to weird videos to suspiciously smooth text. The overwhelming majority of US adults who use social media (94%) believe they have encountered content created or altered by AI, but only 44% say they are confident they can tell real photos and videos apart from AI-generated ones, according to an exclusive CNET survey.

Read more: AI Slop Is Ruining the Internet. These People Are Fighting To Save It.

People are fighting back against AI content in many different ways. Some solutions focus on better labels for AI-generated content, since it's harder than ever to trust our eyes. Of the 2,443 respondents who use social media, half (51%) believe we need better AI labels online. Some (21%) believe AI-generated content should be banned from social media entirely. Only a small group (11%) of respondents say they find AI content useful, informative or entertaining.

AI isn't going anywhere, and it's reshaping the internet and our relationship with it. Our research shows we still have a long way to go in dealing with that shift.

Key findings

  • Most US adults who use social media (94%) believe they have encountered AI content on social media, but fewer than half (44%) are confident they can distinguish real photos and videos from fake ones.
  • Most US adults (72%) said they take action to determine whether a photo or video is real, but the rest do nothing, a share that is highest among Boomers (36%) and Gen Xers (29%).
  • Half of US adults (51%) believe that AI-generated and edited content needs better labeling.
  • One in five (21%) believe that AI content should be banned from social media, without exception.

Watch this: AI is Inseparable from Reality. How Do We Spot Fake Videos?

American adults don't feel they can spot AI media

Seeing is no longer believing in the age of AI. Tools like OpenAI's Sora video generator and Google's Nano Banana image model can create hyperrealistic media, and chatbots churn out text that feels like it was written by a real person.

So it's understandable that a quarter (25%) of US adults say they don't trust their ability to distinguish real photos and videos from AI-generated ones. Older generations, including Boomers (40%) and Gen X (28%), are the least confident. People without much knowledge of or exposure to AI may feel unsure of their ability to spot it accurately.

People take action to verify content in different ways

AI's ability to mimic real life makes it even more important to verify what you see online. Nearly three in four US adults (72%) say they take some form of action to determine whether a suspicious photo or video is real, with Gen Z the age group most likely (84%) to do so. The most obvious and popular method is to closely examine images and videos for visual cues or artifacts. More than half of US adults (60%) do this.

But AI's rapid progress is a double-edged sword: the models have improved quickly, eliminating the telltale errors we once relied on to spot AI-generated content. The em dash was never a reliable sign of AI, but extra fingers in photos and continuity errors in videos were once glaring red flags. Newer AI models generally don't make those pedestrian mistakes, so we all have to work harder to figure out what's real and what isn't.

ai-slop-cnet-survey-actions-taken.png

You can check for labels to help identify AI content.

Cole Kan/CNET/Getty Images

As the visual cues of AI disappear, other forms of verification are becoming more important. The next two most common methods are checking labels or disclosure information (30%) and searching for the content elsewhere on the internet (25%), such as on news sites or through reverse image searches. Only 5% of respondents reported using a deepfake detection tool or website.

But 25% of US adults do nothing to determine whether the content they see online is authentic. That lack of action is highest among Boomers (36%) and Gen Xers (29%). This is troubling, as we've seen that AI is an effective tool for deception and fraud. Understanding where a post or piece of content comes from is an important first step in navigating an internet where anything can be faked.

Half of US adults want better AI labels

Many people are working on solutions to stem the flood of AI slop, and labeling is a huge area of opportunity. Today, labeling largely relies on social media users disclosing that their posts were made with the help of AI. Platforms can also try to detect and label AI content themselves, but that is technically difficult and can produce unexpected results. This may be why 51% of US adults believe we need better labeling of AI content, including deepfakes. Support was strongest among Millennials and Gen Z, at 56% and 55%, respectively.

mind-ai-slop-cnet-survey.png

Very few (11%) found AI content useful, informative or entertaining.

Cole Kan/CNET/Getty Images

Other solutions aim to control the flood of AI content shared on social media. All major platforms allow AI-generated content, as long as it doesn't violate their standard content guidelines: nothing illegal or offensive, for example. But some platforms have introduced tools to limit the amount of AI-generated content you see in your feed; Pinterest released its filters last year, while TikTok is still testing its own. The idea is to let everyone choose whether to allow or exclude AI-generated content from their feed.

But 21% of respondents believe AI content should be banned from social media entirely, with no exceptions. That number is even higher among Gen Z, at 25%. When asked whether AI content should be allowed but strictly regulated, 36% said yes. That low percentage may be explained by the fact that only 11% find AI content to provide meaningful value, whether entertaining, educational or useful, and that 28% say it provides little or no value.

How to limit AI content and spot potential deepfakes

Your best defense against being fooled by AI is a keen eye and trusting your gut. If something seems off, too polished or too good to be true, it probably is. But there are other steps you can take, such as using a deepfake detection tool. There are many to choose from; I recommend starting with the Content Authenticity Initiative's tool, as it works with many different file types.

You can also check the account that shared the post for red flags. AI slop is often shared by mass slop producers, and that's usually easy to see in their feeds, which tend to be filled with weird videos that have no continuity or connection to one another. You can also check whether anyone you know follows the account, or whether the account follows anyone else (following no one is a red flag). Spam posts or scam links are also signs that an account isn't legitimate.

If you want to limit the AI content you see in your social feeds, see our guides to blocking or muting Meta AI on Instagram and Facebook and filtering AI posts on Pinterest. If you encounter slop, you can mark the post as "not interested," which should signal to the algorithm that you don't want to see more of it. Beyond social media, you can disable Apple Intelligence, the AI on Pixel and Galaxy phones, and Gemini in Google Search, Gmail and Docs.

Even if you do all this, you may still get fooled by AI from time to time; don't feel bad about it. There's only so much we can do as individuals to fight the rising tide of AI slop, and we're all bound to get it wrong sometimes. Until we have a universal system for reliably detecting AI, we have to rely on the tools we have and on teaching one another what we know.

Methodology

CNET commissioned YouGov Plc to conduct the research. All figures, unless otherwise stated, come from YouGov Plc. The total sample size was 2,530 adults, 2,443 of whom use social media. Fieldwork was conducted online from February 3 to 5, 2026. Figures are weighted and representative of all US adults (ages 18 plus).


