• Taylor Swift Deepfakes (Again): At the risk of alienating everyone who is not a Swiftie, we’re here to talk about Taylor Swift again. In our last roundup of AI news, we discussed a deepfake of Taylor Swift advertising a Le Creuset giveaway. This week brings yet another deepfake of the singer, this time fake, sexually explicit images that were disseminated across several social media platforms. Although fans attempted to crowd out the images by posting related keywords, one image shared on X was viewed 47 million times before the account was suspended.

    Attempts to regulate the proliferation of deepfakes, whether through user reporting of content to platforms or restrictions in generative AI companies’ terms and conditions, have not stopped the flow of these false images. While some states have restricted the use of AI in certain contexts, the lack of a federal law has limited the effectiveness of these state measures. And developers of AI tools have not managed to entirely prevent their use for illegal or inappropriate purposes. These high-profile deepfake cases will likely renew interest in legislation such as the proposed “Preventing Deepfakes of Intimate Images Act.” In fact, the White House has already commented on the Swift case, expressing “alarm” over the false images. In the absence of stronger federal regulation, AI developers and platforms should tighten their policies and guardrails to protect the public.
• Midjourney: This week The New York Times reported that the AI image generator Midjourney was creating images “nearly identical” to copyrighted material, even without references to specific works in the prompts. For example, when prompted to create a “popular movie screencap,” the tool produced an image of Iron Man in the same pose as a copyrighted Marvel image. Similar tests were run on other AI image generators, including OpenAI’s ChatGPT and Microsoft Bing. Interestingly, when prompted with clearly copyrighted material (“Create an image of SpongeBob SquarePants”), the generators seemed to intentionally introduce distinctions between their images and the copyrighted material. When prompted without reference to copyrighted material (“Create an image of an animated sponge wearing pants”), the generator produced an image closer to the copyrighted original.

    Developers of AI tools should implement guardrails to prevent accidental infringement of third parties’ intellectual property rights. Particularly given the rise in copyright lawsuits against AI companies, terms and conditions prohibiting infringement may not be enough. Companies should regularly test and monitor the outputs of their tools and carefully evaluate the training data used in their models.