The Federal Trade Commission (FTC) recently released a business blog post titled "Chatbots, deepfakes, and voice clones: AI deception for sale," highlighting that the FTC Act's prohibition on deceptive or unfair conduct can apply if businesses make, sell, or use a tool that is effectively designed to deceive – even if deception is not its intended or sole purpose. The FTC reminds businesses to pay particular attention to the following:
- Offering the product at all. If you develop or offer a synthetic media or generative AI product, consider at the design stage and thereafter the reasonably foreseeable – and often obvious – ways it could be misused for fraud or cause other harm. Then ask yourself whether such risks are high enough that you shouldn’t offer the product at all.
- Warning about or disclosing potential risk is not enough – deterrence measures are required. Take all reasonable precautions before your AI product hits the market. Merely warning customers about misuse, or having them make disclosures, is not sufficient. Deterrence measures are necessary, and they must be durable, built-in features – not bug fixes or optional features that third parties can undermine through modification or removal.
- Is human emulation necessary? If your tool is intended to help people, ask yourself whether it really needs to emulate humans or whether it can be just as effective looking, sounding, and acting like a bot.
- Are you over-relying on post-release detection? Recognizing AI-generated or fake content is challenging, and detection tools lag behind the AI technology itself. The burden shouldn't fall on consumers to figure out whether a generative AI tool is being used to trick them.
- Are you misleading people about what they're seeing, hearing, or reading? If you're an advertiser, you might be tempted to employ AI tools to sell, but if your ads mislead consumers via doppelgängers – such as fake dating profiles, phony followers, deepfakes, or chatbots – they may violate the law in ways that could result (and have resulted!) in FTC enforcement actions.
As more and more businesses adopt AI solutions, it is important to understand the risks associated with these cutting-edge tools. The FTC is taking these concerns seriously and has issued other guidance on how to use AI technology in a responsible, non-discriminatory, and transparent way. This advice from the FTC is yet another reminder of the potential challenges associated with AI, and it highlights the importance of taking a proactive approach to risk mitigation in this space.
"While the focus of this post is on fraud and deception, these new AI tools carry with them a host of other serious concerns, such as potential harms to children, teens, and other populations at risk when interacting with or subject to these tools. Commission staff is tracking those concerns closely as companies continue to rush these products to market and as human-computer interactions keep taking new and possibly dangerous turns."