A Federal Trade Commission staff report released last month and discussed during the Commission's most recent open meeting examines certain policies and practices of social media and video streaming service providers—namely, those concerning data collection and use, advertising, the use of automated decision-making technologies, and the treatment of children and teens—and makes key recommendations to address risks identified by Commission staff.
The comprehensive report, available here, is based on responses to Orders to File Special Reports under Section 6(b) of the FTC Act issued in December 2020 to nine of the largest social media and video streaming services, including Amazon, Facebook, YouTube, and others. The Orders requested information about, among other things, (i) how these companies collect, track and use personal and demographic information, (ii) how they determine which ads and other content are shown to consumers, (iii) whether and how they apply algorithms or data analytics to such personal and demographic information, and (iv) how their practices impact children and teens. Based on the information provided by respondents, as well as publicly available materials and the Commission’s extensive experience with streaming services, the staff report includes detailed analyses of each of these issues.
Staff issued the following key findings, noting that each finding may not be applicable to every one of the subject companies:
- Many of the companies collected, and could indefinitely retain, “troves of data” from and about both users and non-users, including personal and demographic information as well as information about their interests and activities elsewhere on the Internet. Data collected by some of the companies appeared to include information input by users themselves as well as data purchased from brokers and other third parties.
- Many of the companies sold advertising services to third parties based on their users’ personal information—often without the users’ knowledge. Noting that consumers’ use of these services is often conditioned on targeting, and that opting out of such targeting is often unavailable, FTC staff is concerned that users may not understand how much privacy they are giving up, largely to facilitate targeted advertising, when using the streaming services.
- Consumers often lacked meaningful control over how their personal information was used for the companies’ AI-fueled systems, which power functions like content recommendation, search and advertising.
- Many of the companies ignored the reality that children use their services, while also failing to provide transparency around their compliance with the Children’s Online Privacy Protection Rule. Additionally, most of the companies allowed teens on their platforms, yet placed no restrictions on their accounts and collected personal information from them in the same manner as they do from adults.
Based on these findings, FTC staff recommended that these service providers:
- Implement baseline privacy protections, including: (i) minimizing data collection to only that data necessary for the services; (ii) implementing concrete data retention and deletion policies; (iii) limiting data sharing with affiliates and other third parties; and (iv) adopting consumer-friendly privacy policies that are clear, simple and easily understood.
- Implement safeguards around advertising practices, including ones that prevent the collection, use and disclosure of sensitive consumer information for targeted advertising. According to FTC staff, companies can start by carefully examining their policies and practices regarding ad targeting based on sensitive categories of information, construing the categories of information considered to be sensitive broadly.
- Be transparent about, and grant users control of, the data that powers their automated decision-making systems.
- Ensure greater protection of children and teens, including by: (i) treating the COPPA Rule as representing minimum standards; (ii) affording teens more protections; and (iii) providing parents and guardians with a simple way to access and delete their child’s personal information.
The Commission voted 5-0 to issue the staff report, though several Commissioners wrote separately to concur or dissent in part. For instance, while concurring in the decision to publish the report, Commissioner Melissa Holyoak expressed concern with how its analysis and recommendations may directly or indirectly affect free speech online. And Commissioner Andrew N. Ferguson dissented in part from the sections of the report concerning targeted advertising and artificial intelligence, labeling the report’s claim that consumers can be profoundly threatened and suffer extreme harm by being shown a targeted ad an “unjustified, [] gratuitous attack on the online economy made with the goal of justifying heavy-handed regulation.”