Companies using “bots” should know that California’s “bot labeling” law becomes operative later this year on July 1, 2019. The law requires clear disclosures of use of chatbots in certain commercial and political communications with consumers.
In general, bots are software applications that run automated tasks over a network, such as the Internet, and can interact with computer systems or users; they can be used for a variety of purposes, including scraping website data and communicating with individuals online.
In particular, “chatbots” are programmed scripts that communicate with users online and imitate human conversation. Chatbots are increasingly used on messaging and social media channels such as Facebook Messenger, Twitter, Skype, SMS/text message, WhatsApp, Instagram, YouTube, and Slack. Some chatbots respond to simple keyword commands and can assist with customer support and routine questions. Others rely on artificial intelligence and can personalize communications with users as if the bot were human.
The law, Cal. Bus. & Prof. Code §17941 et seq. (SB 1001), prohibits using a bot to communicate with a person in California online with the intent to mislead that person about the bot’s artificial identity in order to incentivize a purchase or influence a vote in an election. A person can comply with the law, however, by clearly and conspicuously disclosing that a bot is being used.
“Bot” is specifically defined in the law to mean an automated online account where all or substantially all of the actions are not the result of a person. The law does not impose any duty on service providers of online platforms.
A federal law, the Better Online Ticket Sales Act, prohibits using bots to scalp tickets by automatically purchasing them for resale at higher prices, and other states have introduced bills to regulate bots. California’s law, however, is the first of its kind. Proponents argued that consumers are often deceived when they cannot discern whether they are interacting with a bot. Opponents countered that the law was, among other things, too vague.
The law applies to use of bots in commercial transactions or to influence a vote in an election.
In particular, sponsors of the law were concerned about how bots were being used to spread fake and misleading news, reshape political debates, and influence advertising audiences. The legislative history notes that in February 2018, the U.S. Department of Justice charged 13 Russian nationals and three companies with creating fake profiles on social media to post contentious comments related to religion, race, and politics, and then using bot accounts to like, share, and retweet the posts to help them gain traction. According to that information, Russian agents published over 130,000 messages on Twitter, uploaded more than a thousand videos to YouTube, and disseminated inflammatory posts that reached 126 million users on Facebook.
Companies using chatbots to communicate with consumers online should review their practices and policies to ensure compliance with the new law and, if needed, consult with experienced counsel to put adequate disclosures in place. More broadly, any company using bots or other artificial intelligence should work with its attorneys and other professionals to stay abreast of legal requirements and updates.
This entry was posted on Wednesday, January 16, 2019 and is filed under FTC Advertising Law Compliance, Internet Law News.