First announced in September as part of the FTC’s Operation AI Comply sweep, the final consent order against Rytr, an AI “testimonial and review” service, has just been approved. The final order bars the company from marketing or selling any service dedicated to – or promoted as – generating consumer reviews or testimonials and, as is typical, requires twenty years of compliance reporting.
At issue, as described in the complaint, is the Rytr.me website, an artificial intelligence-enabled “writing assistant.” The service generates written content for its users under different “Use Cases,” allowing the user to manually select, copy, paste, and use the generated content. The use cases offered span many types of content, including “Email,” “Product Description,” “Blogs,” “Articles,” “Story Plot,” “Google Search Ads,” and – of concern to the FTC – “Testimonial & Review.” For this use case, the user could select from a variety of output settings, including language, tone (e.g., “formal,” “cautionary,” “critical,” “convincing,” “worried,” “urgent,” “funny”), and level of creativity, to which the user would add keywords, phrases, and titles. Based on this input, the service would generate genuine-sounding, detailed reviews quickly and with little user effort.
The problem, though, as with much AI-generated content, is hallucination: as alleged, Rytr’s service “generates detailed reviews that contain specific, often material details that have no relation to the user’s input.” As a result, the service would generate false reviews that users could easily copy and publish. Worse, “…these false reviews feature details that would deceive potential consumers deciding to purchase the service or product described.” The service also sets no limit on the number of reviews a user with the unlimited output subscription can generate and copy. The complaint alleges that “at least some of its subscribers have utilized the Rytr service to produce hundreds and in some cases thousands of reviews.”
The FTC determined that this service causes or is likely to cause substantial harm to consumers and has no, or only de minimis, reasonable and legitimate use. It alleged that the service “pollutes” the marketplace with a “glut of fake reviews.” Indeed, making the case for a finding of unfairness, the FTC alleged as follows: “Honest competitors who do not post fake reviews can lose sales to businesses that do, which can result in reduced consumer choice and lower quality products and services. Consumers cannot reasonably avoid these injuries because the reviews Respondent’s service generates appear authentic enough to make it difficult or impossible for consumers to distinguish a real review from a fake one. The harm caused by Respondent’s service is not outweighed by countervailing benefits to consumers or to competition; indeed, there are no legitimate benefits to the public from a service that generates an unlimited number of false reviews.” Accordingly, the complaint charged Rytr with providing the means and instrumentalities for the commission of deceptive acts and practices, and with unfair and deceptive acts and practices, in violation of Section 5 of the FTC Act.
I think this enforcement action is fascinating. Certainly, a writing “assistant” – in and of itself – should not be considered an instrumentality of falsehood and deception, at least no more so than a good editor. Prompts and suggested word and tone choices do not deception make. But the Rytr tool apparently made it too easy for reviewers to be lazy and, worse, nefarious. It generated fake reviews that users could publish without being required to authenticate or correct them. And for those users who generate reviews as a business, for money, the tool made it very, very easy to flood the market with bad information.
The interesting question is whether the tool, if hallucination-free, would still be problematic. What if the tool included prompts that required the user to include more details about their actual experience with the product or company? What if the service didn’t invent and add material product information, or a characterization of the reviewer’s feelings about it, but instead just suggested additions or re-phrasings, as the ubiquitous grammar checkers we all use do? Indeed, when does AI go from being a useful tool to a tool of deception? And will that distinction become clearer once generative AI tools get better and less inclined to just make sh*t up?
For now, as this and the other Operation AI Comply enforcement actions make clear, it is unwise to turn a blind eye to (and, worse, profit from) AI’s hallucinations and misuse by bad actors.