Bots can no longer perpetuate misleading messages in California — without identifying themselves first

Senator Bob Hertzberg
Jul 2, 2019


Over 3 billion people around the world log onto social media accounts. This collective yearning to be connected continues to drive explosive progress and technological advancement for sites like Facebook and Twitter, for better and, sometimes, for worse.

This interconnectedness comes with a price.

In the fall of 2016, news outlets began reporting a growth in the number of automated accounts — or bots — posting messages related to the major U.S. presidential candidates.

Sixteen months later, U.S. Special Counsel Robert Mueller indicted 13 Russians and three companies, charging them with election interference that was largely executed through an army of social media bots that used fake accounts to spread misinformation. One of those companies, the Internet Research Agency, has been churning out memes, YouTube videos, Facebook posts, and Twitter accounts in an attempt to sway political campaigns and conversations around the world since at least 2013.

It is no secret that social media bots are being weaponized to spread fake and misleading news, lend false credibility to people and ideas, and reshape political debates.

That’s why, last year, I authored and Governor Jerry Brown signed SB 1001, the BOT Act of 2018, to shed light on bots by requiring them to identify themselves as automated accounts.

According to a Pew Research Center study released as the bill was moving through the Legislature last year, 66 percent of tweeted links to popular websites are posted by automated accounts.

Believe it or not, this bill was met with challenges. As a lawmaker who has introduced a great deal of legislation on issues like bots, blockchain, and data privacy, I am acutely aware of the unique challenge lawmakers face when it comes to regulating technology: we must protect consumers without stifling innovation and progress in the tech space.

As a result, the BOT Act does not prohibit the existence of bots, but rather simply requires them to identify themselves.

Bots are not inherently evil; many entities use automated technology to share important information, like seismic activity alerts, electronic receipts, or results of online searches.

Source: Twitter account @earthquakeBot, which is in compliance with the BOT Act

But it is clear that, despite repeated media reporting on this issue and high-profile hearings in Congress, the problem is not going away. Bots continue to misrepresent public sentiment and perceptions of topics, mute dissenting opinions, and distract from current events.

Just last week, hundreds of incendiary tweets poured into the public conversation after the second night of the Democratic presidential primary debate, disputing Senator Kamala Harris’ ethnic heritage.

Source: Twitter user @RVAwonk

Some savvy Twitter users were able to call out these tweets for what they were: a coordinated attack by hordes of bots.

But what about other users, who might not fully understand the extent to which bots are taking over their social media feeds? What about those who can’t tell that these accounts, which appear in all other ways to be normal accounts, are not in fact operated by a person?

We must protect social media users by providing them with tools to understand where their information is coming from: a human, or an automated account disguised as one. This law is a strong first step.

Misinforming the public and meddling in our elections are where policymakers must draw the line. Our democracy depends on it.


Written by Senator Bob Hertzberg

Clean energy entrepreneur and former Assembly Speaker currently representing the San Fernando Valley in the California State Senate
