
The Defamation Lawsuit is Essential to Our Future

Billionaire Chamath Palihapitiya recently tweeted that we “may be one defamation lawsuit away from canceling cancel culture.” Palihapitiya was suggesting that a person defamed by a New York Times reporter should sue. The reporter had falsely alleged on Twitter that this person used the R-slur during a session on the social media app Clubhouse. Palihapitiya’s point is that the only way to hold the corporatist media to account for profit-driven, non-stop misrepresentation and false reporting is through defamation lawsuits.

Most people attacked by the corporatist media are classified as “public figures” or “limited-purpose public figures.” For one of these people to hold the media to account for misrepresentation and false reporting, they must prove that the reporter made a defamatory statement and made it with “actual malice.” Actual malice is generally defined as “an evil intent or motive arising from spite or ill will; or culpable recklessness or willful and wanton disregard of the rights and interests of the person defamed.” This is hard to prove, because reporters often do just enough work to pretend they acted in good faith. There are, however, ways to attack the corporatist media when its reporters write hit pieces instead of news.

When a reporter relies on biased or anonymous sources, issues threats or other negative statements, demonstrates ill will or hostility, or is a rival, such conduct may support a claim of actual malice. A reporter’s bias might also support an allegation of actual malice. For example, if a reporter has a business relationship with one party and then writes a hit piece on that party’s business competitor, the business competitor can point to the reporter’s bias to prove actual malice. By these standards, many hit pieces may be actionable.

So, what is to be done? We need more suits against corporatist media outlets that engage in writing hit pieces. These efforts might expand the law and improve the chances of success in such litigation. Justice Clarence Thomas has suggested that the law is ripe for change. Moreover, if reporters know they will be sued for their actions, they might be more careful about what they write. The problem is the time and expense of pursuing litigation. Most people cannot afford to pay an attorney by the hour, and most attorneys can only handle so many contingency-fee cases, especially ones that present the unique challenges of defamation litigation.

What is needed is a public interest law firm. Such a firm is a private firm, like any other, but it is focused on representing a particular cause. It is not profit oriented but issue oriented. Such a firm would rely on outside funding to operate, as its cases would not necessarily make money. Perhaps billionaires such as Palihapitiya could spare a little change to empower a public interest law firm dedicated to taking on the corporatist media. This effort could rebalance the relationship between the corporatist media and those it attacks.


Swaying with the Algorithm: How Twitter Allows Abuse and Manipulation

How reflective of your likes and interests is your Twitter feed? And who is behind deciding what you see in the first place? The social media platform would say “you,” but a skeptical public isn’t so sure. Over the past several months, Twitter’s algorithm practices have been questioned by everyone from CNN to PBS to the Washington Post to Twitter users themselves. There is a strong argument that social media algorithms helped incite the recent post-election violence. Why? Because something, as they say, is rotten in the state of cyberspace. Hate speech and harassment disguised as paid content, along with “helpful” content suggestions that miss the mark, are regular occurrences on the social media giant, and its algorithm is taking the blame.

What’s an algorithm, exactly?

As defined by Wikipedia, an algorithm “is a finite sequence of well-defined, computer-implementable instructions, typically to solve a class of problems or to perform a computation.” Sounds innocent enough, right? It is. It’s nothing more than an aspect of computer science.
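To make the definition concrete, here is a minimal sketch in Python. It is purely illustrative (the function name and inputs are invented for this article, not taken from any platform’s code), but it shows what “a finite sequence of well-defined instructions” looks like in practice:

```python
def average_likes(like_counts):
    """A tiny algorithm: a finite sequence of well-defined steps.

    Input: a list of like counts. Output: their average.
    """
    total = 0
    for count in like_counts:        # step 1: add up every value
        total += count
    return total / len(like_counts)  # step 2: divide by the number of values

print(average_likes([3, 10, 2]))  # prints 5.0
```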

However, Yale data scientist Elisa Celis, who studies fairness and diversity in artificial intelligence, explains that companies like YouTube, Facebook, and Twitter refuse to reveal exactly what is in their algorithms’ code. Most, she says, seem to “revolve around one central tenet: maximizing user engagement­­—and, ultimately, revenue.”

So, are Twitter’s algorithms nothing more than a money-making tool? On the surface, yes. The platform learns a user’s behaviors while they engage with its content: the articles shared, the search terms used, and so on. The idea is to take that data and translate it into relevant products and services.
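As a rough illustration of that engagement-first logic, consider the Python sketch below. Everything in it is hypothetical, since Twitter does not publish its ranking code; the `predicted_engagement` score simply stands in for whatever a real model learns from your shares, searches, and clicks:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # hypothetical score learned from user behavior

def rank_feed(posts):
    """Show the most 'engaging' posts first.

    Note what is missing: the sort has no concept of whether a post is
    wholesome or hateful; it optimizes the one number it is given.
    """
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Cat video compilation", 0.42),
    Post("Outrage-bait political thread", 0.91),  # outrage often scores highest
    Post("Recipe link", 0.17),
])
for post in feed:
    print(post.text)  # the outrage-bait thread prints first
```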

“These things aren’t malicious, and they’re not out of control,” states Celis in PBS “Nova” reporter Katherine J. Wu’s article, “Radical ideas spread through social media. Are the algorithms to blame?” “But it’s also important to acknowledge that these algorithms are small pieces of machinery that affect billions of people.” As Wu puts it, at what point does personalization cross over into polarization? The algorithms can’t tell the difference between boating and bigotry, and they aren’t trying to.

Who is to blame?

Like any tool, Twitter’s algorithm can be used for benevolent, benign, or malicious purposes. The question is how influenced we are by it and, more importantly, who is behind the influence. “If the global reach of social media were being used merely to spread messages of peace and harmony—or just to make money—maybe there wouldn’t be any [harm]. But the purposes are often darker,” writes Bloomberg reporter Shelly Banjo.

According to the tech companies that implement them, these programs exist only to help and serve you, the user. In essence, they are saying, “Yes, turning a profit is the ultimate goal, but not before bringing you relevant, customized stories, news information, and products based on your likes and dislikes. You’re the one in control, not us. And if you act out based on content fed to you, then that’s your fault, not ours. It’s your interests and online behavior that caused it to appear in the first place.”

Do (but don’t) be influenced by media

It’s the same illogical mentality behind product placement in television and movies: don’t be influenced by the sex and violence on the screen, just by the BMW and the Coke that happen to be there. If content leads a person to act out in any way other than shopping, especially a negative way, that’s on them. Wu notes, “It would be an oversimplification to point to any single video, article, or blog and say it caused a real-world hate crime. But social media, news sites, and online forums have given an indisputably powerful platform to ideas that can drive extreme violence.”

Maybe all you do is look at hilarious cat videos and share links to your favorite recipes. Think your feed is safe? Think again. In “Facebook, Twitter and the Digital Disinformation Mess,” Banjo also highlights how “social media manipulation campaigns” have been utilized by governments and political parties in 70 countries, including China, Russia, India, Brazil, and Sri Lanka. Circumventing and outsmarting social media firewalls and algorithms, state-sponsored smear campaigns in these countries utilize artificial intelligence and internet bots to flood targeted news feeds with extremist messages and videos. The technology to do this exists, and it’s happening now.

Yet not all algorithms exist to sway your purchasing decisions or serve tech-giant masters. One promising solution was presented by Binghamton University late last year. Computer scientist Jeremy Blackburn and a team of researchers and faculty “have developed machine-learning algorithms which can successfully identify bullies and aggressors on Twitter with 90 percent accuracy.” The approach is not perfect, but this technology also exists, and it is a bright ray of hope.
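The Binghamton team’s model is not reproduced here, but the general shape of such a classifier is well known. The following is a minimal sketch using Python’s scikit-learn library; the example tweets and labels are invented, and the actual research reportedly draws on much richer user and network data than raw text:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled tweets; a real study would train on thousands of examples.
texts = [
    "loved the game last night, great match",
    "you are pathetic and everyone hates you",
    "check out this recipe, it is delicious",
    "go away, nobody wants you here, loser",
    "congrats on the new job!",
    "I will make your life miserable",
]
labels = [0, 1, 0, 1, 0, 1]  # 1 = abusive, 0 = benign

# Turn each tweet into TF-IDF features, then fit a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score unseen tweets; with a training set this small, the output is
# illustrative only, not the 90 percent accuracy reported in the study.
print(model.predict(["what a wonderful day"]))
print(model.predict(["nobody wants you here"]))
```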

Abuse on Twitter a regular occurrence

This concern over the unchecked power of Twitter and its peers, and of their algorithms, crosses party lines and media bias, affecting celebrities and everyday citizens alike. (Even actor Sacha Baron Cohen uses the Trump-popularized phrase “fake news,” stating in his op-ed piece for the Washington Post that online, “everything can appear equally legitimate.”) He isn’t alone in his criticisms. Fed up with the onslaught of abuse and hate speech, fellow celebrities including Ed Sheeran, Millie Bobby Brown, and Wil Wheaton have limited their presence on Twitter—and have been quite vocal about doing so.

While one might argue that living in the public eye comes with consequences, those outside the limelight are equally disgruntled with the social media platform’s refusal to address rampant harassment. Every day, average users question why so little on the platform works to combat abuse. They are especially critical of Twitter CEO Jack Dorsey, calling him out for continually refusing to address cyberbullying concerns. In The Atlantic article “Twitter’s New Features Aren’t What Users Asked For,” author Taylor Lorenz shares one frustrated user’s tweet: “The annoying thing is that every few months Jack comes out with a big speech about how they’re going to fix twitter, and ever[y] time they just continue to get it wrong.”

And what of the onslaught of abuse and harassment suffered by private citizens who find themselves thrust into the spotlight as a result of sloppy reporting? Or the peer-to-peer cyberbullying occurring across the personal devices of children and teenagers every day? What fills the Twitter feeds of their tormentors? As Wu states, “[Algorithms] don’t have a conscience that tells them when they’ve gone too far. Their top priority is that of their parent company: to showcase the most engaging content—even if that content happens to be disturbing, wrathful, or factually incorrect.” Are abusers fed more and more volatile articles and videos, which in turn fan the flames of the hate and anger they unleash on others?

Twitter slow to respond to user demands

Although Twitter states that combating abuse is a “work in progress,” the company instead implements unhelpful updates and changes that, in some instances, only make it easier to engage in harassment. Lorenz adds, “While the company continues to dedicate time and resources to making minor changes aimed at boosting engagement, easy fixes for harassment are ignored.” Most recently, Twitter purged an untold number of QAnon conspiracy theorists, but this one-time housecleaning will not change how algorithms steer the speech on Twitter.

Lorenz reports that in 2016, Online Abuse Prevention Initiative founder Randi Lee Harper laid out several improvement options in a Medium post. Although Twitter eventually addressed most of them, several suggestions aimed at minimizing harassment were ignored. Instead, some of the “updates” the social media platform chose to roll out were mostly cosmetic:

  • changing its user avatars from square-shaped to circular
  • redesigning Moments
  • adding topic tags to the Explore page
  • spamming users’ timelines with a “happening now” section
  • adding endless notifications
  • upping the character limit to 280
  • promoting live videos of sports events
  • revamping its algorithm to give older tweets more prominence


Taking Twitter to Task

Close on that last one, Twitter, but you miss the mark again. An algorithm revamp, but of a different sort, is what the public is demanding. Social media is new on the media scene (compared with television, movies, and radio), its persuasive power has remained largely unchecked, and the law is desperately trying to catch up.

In his op-ed piece, Baron Cohen brings to light a chilling fact: the large technology companies behind these platforms are, for the most part, beholden to no one—not even the law:

“These super-rich ‘Silicon Six’ care more about boosting their share price than about protecting democracy. This is ideological imperialism—six unelected individuals in Silicon Valley imposing their vision on the rest of the world, unaccountable to any government and acting like they’re above the reach of the law. Surely, instead of letting the Silicon Six decide the fate of the world over, our democratically elected representatives should have at least some say.”

The “Silicon Six” Baron Cohen refers to are American billionaires and tech giant CEOs and/or founders: Mark Zuckerberg (Facebook), Sundar Pichai (Google), Larry Page (Google), Sergey Brin (Google), Susan Wojcicki (YouTube), and Jack Dorsey (Twitter). Similarly, Wu notes that one of the biggest reasons to be wary of social media companies’ algorithms is that “[only] a limited subset of people are privy to what’s actually in them.”

In his article for The Verge, reporter Casey Newton writes that while Baron Cohen’s efforts to amend Section 230 of the Communications Decency Act (the driving force behind his speech and opinion piece) are somewhat misguided, he raises some valuable points. Newton agrees with him not only about the dangers of algorithmic recommendations on social platforms but also that the aforementioned “Silicon Six” have been permitted so much influence “thanks to a combination of ignorance and inattention from our elected officials.”

Data journalist Meredith Broussard, communications expert Safiya Noble, and computer scientist Nisheeth Vishnoi (all interviewed for Wu’s article for “Nova”) feel that social media algorithms should be tested and vetted as strenuously as drugs are before they hit the market.

Noble further states, “We expect that companies shouldn’t be allowed to pollute the air and water in ways that might hurt us. We should also expect a high-quality media environment not polluted with disinformation, lies, propaganda. We need for democracy to work. Those are fair things for people to expect and require policymakers to start talking about.” These companies cannot police themselves, nor should we expect them to. If social media companies do not change their ways, then our elected officials in Washington should change the rules for them.

Todd McMurtry is a nationally recognized attorney whose practice focuses on defamation, social media law, cyberbullying, and professional malpractice. You can follow him on Twitter @ToddMcMurtry.


So Someone Called You a Racist or Bigot . . .

As many of you know, for the past few years my law practice has become more and more focused on reputational issues. Nearly every day, someone who has been called a racist or bigot contacts me to seek guidance. Businesspeople, professionals, professors, college students, and even high school students are targeted for condemnation and cancellation. It is routine for people to file website petitions (on sites like change.org), calling for another person’s firing due to a comment perceived to be racist or bigoted. Today, a person who competes too aggressively on the playing field can be called a racist. Raising the slightest objection to a corporate policy geared toward the LGBTQ community earns you the “bigot” title. The smallest transgression can result in the immediate loss of a job, removal from an office, and even scrubbing from employer records. I am not exaggerating when I say that cancellation represents an existential threat to your future.

So, what is a person to do? The first and best policy is to avoid statements that lend themselves to misinterpretation. As part of this strategy, you should get off social media entirely. Close your Facebook account and go anonymous on Twitter. Even the most carefully cultivated social media posts, reinterpreted five years from now, can be condemned for saying the “wrong thing.”

At work, you must learn that you do not have any friends whom you can trust. You can never let down your guard. You can never trust that your coworkers, partners, or those whom you teach will not misinterpret something you said. The risks are too high.

However, if someone calls you a racist or bigot despite your best efforts, I have learned that the only effective response is to fight back with everything you have. Before you do anything, though, hire an attorney to advise you about your circumstances. Each state has different laws on these issues, so you need a competent professional familiar with your state’s laws to help.

If a coworker calls you a bigot and will not retract the statement after you confront them, have your attorney contact them. If a person starts a change.org petition calling you a racist, send that person a letter demanding that they take the petition down immediately or face the consequences. In today’s electronic society, these things do not go away. When you apply for college, for graduate school, for your first job, for your second job, and on and on, the record of the false allegation will live forever.

As our society moves further in the direction of canceling people who fail to abide by prevailing opinions, those people run a severe risk of losing their economic livelihood or suffering hundreds of thousands of dollars in damage to their ability to earn an income. Again, I am not exaggerating when I say that people call me and tell me how they were fired or how their businesses were destroyed over allegations of racism and bigotry.

So, is there hope? I have always believed that truth wins out in the end. Unfortunately, right now, we are at a point in our history that does not tolerate dissent. I am sure in time, as has happened in the past, things will balance out. Until then, be very careful. A parting thought is that you personally do not need to fight this fight. There are many people out there who have already been canceled and can speak truth to power. Let them do their job, and maybe in time, things will get better.
Be safe.



Adios Mr. President, You Are Banned from Twitter!


So how did this happen? Twitter, Facebook, Instagram, and others unilaterally decided to kick the president of the United States off their platforms and terminate his social media relationship with as many as 80 million Americans. To most Americans, this seems like a violation of former Pres. Trump’s right to freedom of speech as protected by the First Amendment of the United States Constitution. Unfortunately, our Constitution offers no such protection. Generally, your speech is only protected in public spaces, not on private property.

As you have likely heard, many people point out that social media companies are private corporations that can do whatever they want on their platforms. This is true. It is also true that in 1996 Congress passed a law called the Communications Decency Act, or the CDA. Section 230 of that act says: “No provider or user of an interactive computer service shall be held liable on account of — (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” This provision gives companies like Twitter broad authority to police any content that any person, including the president of the United States, posts on their platforms. The courts in the United States have interpreted the law to give these companies maximum discretion to decide what their users can post.

When you sign up for Twitter, you must press a button agreeing to the terms of service (“TOS”). Social media platforms constantly update their TOS to decide exactly what you or former Pres. Trump can or cannot say on social media. First you can talk about election interference; then they decide that you cannot. First you can complain about the origins of the Covid-19 virus; next, if you mention it on social media, they shut you down. Examples like this go on and on. Will this end? Likely not until Congress or the courts change the law. Right now, several lawsuits are challenging how social media companies use their TOS to interpret their authority under the CDA. We can hope that a more conservative court will temper the way social media companies use the law. Time will tell.

In the interim, I think we can rest assured that none of the currently existing major social media companies will allow former Pres. Trump to reengage on their platforms. The beautiful thing is that eventually another platform, be it Parler, Gab, or another, will emerge and provide social media interaction that allows former presidents and other prominent people to post their views on politics and current events without censorship. It may take a few months or a year to get there, but the fact that large social media companies have censored a former U.S. president will eventually result in greater competition. We have already seen companies such as Twitter and Facebook lose value on the stock market. Once a competing company establishes a large presence, there will be real competition. We can then regain much of our right to hear what our leaders have to say.

Todd McMurtry is a nationally known attorney. His practice focuses on defamation, social media law, professional malpractice, and business disputes. You can follow him on Twitter @ToddMcMurtry.

