Google responds to companies using AI-generated content

Posted by Edith MacLeod on 18 Jan, 2023
Guidance reiterates focus on content made for people, not search engines.

Google and ChatGPT. Image background generated using DALL·E 2

ChatGPT exploded onto the scene late last year as something of a game-changer, provoking a great deal of discussion in the SEO community about what it means for SEO and search (see our own blog post on ChatGPT and SEO).

As people and companies experiment with ChatGPT and explore its capabilities, there’s been much interest in Google’s stance.

Twitter user Tony Hill cited examples of companies (Bankrate, CNET, and CreditCard.com) using AI to generate some of their content. He posted screenshots from Bankrate and the other companies with wording clearly stating “the article was generated using automation technology and thoroughly edited and fact-checked by an editor on our editorial staff”.

Bankrate AI-written content.

Source: Twitter

This then prompted a direct question to Google from Blair MacGregor:

“Would help to get some kind of on-the-record statement from Google about this, relative to their previous guidance on AI-generated content. Is this considered non-violative based on it being "backstopped" by a human? Other reasons? Genuinely curious.”

Source: Twitter

Google’s Danny Sullivan responded with a thread reiterating guidance that content should be written for people rather than search engines. The issue is not how content is generated, but its quality and whether it attempts to game the system.

“As said before when asked about AI, content created primarily for search engine rankings, however it is done, is against our guidance. If content is helpful & created for people first, that's not an issue.”

He referenced a previous tweet saying the problem wasn’t with AI-generated content per se, but with content written for search engines:

“We haven't said AI content is bad. We've said, pretty clearly, content written primarily for search engines rather than humans is the issue. That's what we're focused on. If someone fires up 100 humans to write content just to rank, or fires up a spinner, or a AI, same issue…”

He went on to mention the Helpful Content system, saying that if content is not helpful, that system would catch it. Google’s spam policies address low-quality, automatically generated content:

“Our spam policies also address spammy automatically-generated content, where we will take action if content is "generated through automated processes without regard for quality or user experience".”

Finally, he mentioned Google’s updated E-E-A-T guidelines, which are used to rate quality:

“For anyone who uses *any method* to generate a lot content primarily for search rankings, our core systems look at many signals to reward content clearly demonstrating E-E-A-T (experience, expertise, authoritativeness, and trustworthiness).”

You can read the whole of Danny Sullivan's Twitter thread here.

Further reading: in its blog post, CNET outlines its aims in running the experiment of creating content using AI.
