The Federal Trade Commission has been paying increased attention to marketers' use of artificial intelligence. In a series of blog posts, the FTC has highlighted a range of concerns about the ways in which marketers can misuse AI. About a week ago, the FTC released a new post that raises some important issues marketers should take into account. The FTC explained, "If we haven't made it obvious yet, FTC staff is focusing intensely on how companies may choose to use AI technology, including new generative AI tools, in ways that can have actual and substantial impact on consumers."
Noting that marketers are starting to use AI to influence consumers' beliefs, emotions, and behavior -- "tapping into unearned human trust" -- the FTC warned marketers not to steer people unfairly or deceptively into making harmful decisions. The FTC wrote, "Companies thinking about novel uses of generative AI, such as customizing ads to specific people or groups, should know that design elements that trick people into making harmful choices are a common element in FTC cases . . . ." In other words, marketers shouldn't use AI to manipulate consumers into taking actions contrary to their intended goals. It's an important reminder that the FTC's authority isn't limited to deception; it also covers unfair practices that cause substantial injury to consumers that is not reasonably avoidable by consumers themselves (and is not outweighed by countervailing benefits).
The FTC also warned marketers about placing ads within generative AI output. The FTC has long told marketers that they shouldn't use deceptive formats that confuse people about whether they're seeing advertising. In other words, consumers have a right to know when they're being advertised to. What does that mean in the context of AI? The FTC explained, "it should always be clear that an ad is an ad, and search results or any generative AI output should distinguish clearly between what is organic and what is paid." Put simply, if a consumer is seeing content generated by AI -- and that content includes sponsored results -- you'd better make sure the sponsored content is clearly labeled as advertising.
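To make that concrete, here is a minimal sketch of what labeling paid placements in generative output could look like in practice. All of the names here (Result, render_results, the "[Sponsored]" tag) are hypothetical illustrations, not a reference to any particular product or to an FTC-prescribed format.

```python
# Hypothetical sketch: clearly labeling paid placements mixed into
# AI-generated output. Names and label text are illustrative only.
from dataclasses import dataclass


@dataclass
class Result:
    text: str
    sponsored: bool  # True if this placement was paid for


def render_results(results: list[Result]) -> str:
    """Render each result, prefixing paid placements with a clear label."""
    lines = []
    for r in results:
        label = "[Sponsored] " if r.sponsored else ""
        lines.append(f"{label}{r.text}")
    return "\n".join(lines)


print(render_results([
    Result("Organic answer generated by the model.", sponsored=False),
    Result("Acme Widgets: 20% off this week.", sponsored=True),
]))
```

The point of the sketch is simply that the distinction between organic and paid content is made in the output itself, where the consumer actually sees it.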
The FTC also said consumers have a right to know whether they are communicating with a real person or a machine. If you're offering customer service through an AI-powered chatbot, the FTC will want you to make sure it's clear to consumers that they're not texting with a real person but are instead receiving automated replies. (It will be interesting to see how the law develops in this area. Absent material deception, or some sort of substantial harm to consumers, do consumers always have the right to know that they're dealing with a computer?)
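As a rough illustration, a chatbot could make that disclosure up front, before the conversation begins, rather than burying it. This is a hypothetical sketch assuming a simple text-in/text-out message loop; none of these names come from a real framework.

```python
# Hypothetical sketch: disclosing that replies are automated before
# any substantive conversation takes place. Names are illustrative.

BOT_DISCLOSURE = (
    "You're chatting with an automated assistant, not a person. "
    "Replies are generated automatically."
)


def start_session(send) -> None:
    """Send the disclosure before any other message in the session."""
    send(BOT_DISCLOSURE)


def handle_message(user_text: str, send) -> None:
    """Generate and send an automated reply (placeholder logic here)."""
    send(f"Automated reply to: {user_text!r}")


if __name__ == "__main__":
    start_session(print)
    handle_message("Where is my order?", print)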
The FTC also emphasized the importance of focusing on ethics and responsibility. Presumably referencing recent news about a big tech company firing its AI ethics team, the FTC wrote, "Given these many concerns about the use of new AI tools, it's perhaps not the best time for firms building or deploying them to remove or fire personnel devoted to ethics and responsibility for AI and engineering." The FTC warned that, if it investigates a company's conduct, the company will have a much harder time convincing the agency that it acted responsibly if it didn't actually have a team focused on ensuring that its AI was used responsibly.
Finally, the FTC reminded marketers that, when employing AI tools, they should consider how those tools are actually used -- and the potential harm they may cause (even in ways that are unintended). The FTC explained, "your risk assessment and mitigations should factor in foreseeable downstream uses and the need to train staff and contractors, as well as monitoring and addressing the actual use and impact of any tools eventually deployed."
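One way engineering teams sometimes operationalize "monitoring and addressing the actual use" of a deployed tool is to log each invocation for later review. The sketch below is an assumption about what that might look like, not an FTC requirement; the wrapper and log format are hypothetical.

```python
# Hypothetical sketch: wrap a deployed AI tool so its actual use can be
# reviewed later. The wrapper and JSONL log format are illustrative only.
import json
import time
from typing import Callable


def with_usage_log(tool: Callable[[str], str], log_path: str) -> Callable[[str], str]:
    """Wrap a text-in/text-out tool, appending each call to a JSONL log."""
    def wrapped(prompt: str) -> str:
        output = tool(prompt)
        record = {"ts": time.time(), "prompt": prompt, "output": output}
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return output
    return wrapped
```

Of course, a log like this only helps if someone actually reviews it and acts on it, which is the thrust of the FTC's reminder about training staff and monitoring deployed tools.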
"it should always be clear that an ad is an ad"