
Meta’s moderation changes mean more bad content will get through – Hypergrid Business

(Image credit: Lawrence Pierce, courtesy of Adobe Firefly)

As a moderator myself, nothing sounds more disturbing than the idea of a revised social media moderation policy presented with the caveat that more bad content will get through.

Recently, Mark Zuckerberg announced that Meta, the company that heralded the metaverse, would roll back content moderation across its various platforms. He stated it explicitly: “…we’re going to catch less bad stuff…”

You can see his presentation here.

This is especially alarming because the bad stuff Zuckerberg identifies includes drugs, terrorism, and child exploitation. He also said specifically that Meta would remove restrictions on topics such as immigration and gender, and would dial back its filters to reduce censorship. Oh, and he said Meta would end fact-checking.

This is a mess.

Moderation is difficult. The challenges are manifold and tied to the zeitgeist, the social character of the times, which has grown quite complex. It also depends on the platform. The scope of Facebook’s moderation challenges is far greater than Hypergrid Business’s, but the core problem is the same: good moderation maintains the online well-being of contributors and readers while respecting genuine alternative viewpoints.

At Hypergrid Business, we have discussion guidelines that direct our moderation. We primarily apply moderation to content that has the potential to cause personal harm, such as malicious ridicule or hate speech directed at a specific group or individual.

At Hypergrid Business, malicious ridicule, one kind of bad content, was driving away contributors. Allowing more of it would not have improved the debate. We know this because more contributors posted more comments once discussion guidelines were enacted to remove abusive ridicule. So when Zuckerberg says Meta plans to do away with moderation restrictions on topics like gender and immigration, we know from experience that the “bad stuff” will be malicious ridicule and hate speech aimed at vulnerable and controversial groups, and that it won’t improve the debate.

An unfortunate ploy in Meta’s new moderation policy is the use of the phrase “innocent contributors” in the introductory video presentation. Zuckerberg says Meta’s moderation policies have blocked “innocent contributors.” Although the word “innocent” generally conveys a neutral purity of disposition, intention, and action, Zuckerberg applies it to contributors indiscriminately, whether they are victims or perpetrators of malicious comments. This confusing use of “innocent” is strategic linguistic misdirection: Zuckerberg gets to appear concerned while pandering to all sides.

But Zuckerberg’s emphasis isn’t limited to moderation filters. Rather, he focuses on how Meta will end third-party fact-checking entirely. Zuckerberg rationalizes his position by claiming that fact-checking is too biased and makes too many mistakes. He gives no examples of the alleged shortcomings. Still, he puts numbers to his concerns, saying that if Meta incorrectly censors just 1% of posts, that could amount to millions of posts.
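For scale, here is the back-of-the-envelope arithmetic that claim rests on, as a minimal Python sketch. The daily post volume is an illustrative assumption of my own, not a figure Meta has published:

```python
# Back-of-the-envelope arithmetic behind the "millions" claim.
# The post volume is an assumed, illustrative number, not a Meta statistic.
daily_posts = 1_000_000_000   # assume ~1 billion posts/day across Meta's apps
error_rate = 0.01             # Zuckerberg's hypothetical 1% moderation error rate

wrongly_censored = daily_posts * error_rate
print(f"{wrongly_censored:,.0f} posts wrongly actioned per day")
# -> 10,000,000 posts wrongly actioned per day
```

At that volume, even a tiny error rate produces an enormous absolute count, which is exactly why quoting the raw number without the denominator is misleading.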

Zuckerberg also claims that fact-checkers have destroyed more trust than they have built. Really? Again, no actual examples are given. But as a thought experiment, wouldn’t a 99% success rate actually be reassuring to readers and contributors? Of course, his 1% statement is a misleading hypothetical built on an arbitrary percentage, so he’s simply being disingenuous on the matter.

Facts are essential to gathering and sharing information. If you are not confident that you know the facts, you are entering territory filled with lies, exaggerations, assumptions, and wishful thinking. There are many ways to distort reality.

It’s fair to say that fact-checking may not live up to expectations. Facts are not always organized and ready to support ideas or beliefs. Fact-checking takes work, which means it costs the fact-checker money. Facts used in a misleading context raise doubts about their reliability. New facts may replace old facts. These are all fair points, but understanding reality is not easy. If it were, civilization would have advanced much further by now.

But Zuckerberg has his own obvious biases about all this. Meta does not exist to ensure the best information. Meta exists to monetize engagement on products like Facebook. Compare this to Wikipedia, which relies on donations and exists to provide information.

Zuckerberg argues against the idea that Meta should be an arbiter of truth. However, Meta’s products are designed to attract global attention and draw contributors from all over the world. The content of discussions on Meta’s platforms simultaneously influences the core beliefs and behaviors of millions of people. Treating fact-checking as a discardable feature is absurd. Individuals cannot easily vet information at global scale on their own. Fact-checking is not only a transparent approach to verifying news and information at scale; it is an implicit responsibility for any person or entity that enables global sharing.

The facts themselves are not biased. So what Zuckerberg is really reacting to is that fact-checking has been shown to favor some political positions over others. And this is exactly what we should expect from ethical discourse. In politics, as in life, not all viewpoints are equally valid. In fact, some perspectives are simply wish lists of ideological will. If Zuckerberg wants to address bias, he needs to start with himself.

As previously mentioned, Zuckerberg appears clearly uncomfortable with Meta drawing attention over fact-checking disputes. Well, here’s a thought: Meta should not decide whether something is true. That’s what fact-checking services handle, and it puts the burden of legitimacy on external sources. All Meta has to mediate is a contract with a fact-checking agency for the work. When Zuckerberg scoffs at and discontinues third-party fact-checking, he isn’t simply insulating Meta from potential controversy. He is cutting Meta’s contributors off from that grounding and accountability. As a result, in his own words, “…we’re going to catch less bad stuff…”

What Zuckerberg proposes instead of fact-checking completely undermines the essential power of facts and relies on negotiation. Modeled on X’s Community Notes system, Meta will only allow “approved” contributors to post challenges to a post. The notes they write are published only if other “approved” contributors vote on whether they are helpful. An algorithm then weighs the ideological spectrum of all the voting contributors to determine whether the note is ultimately published. Not surprisingly, it has been widely reported that the majority of users never see the notes that correct content, regardless of the validity of the contributors’ findings. Zuckerberg advocates free speech, but Community Notes acts as effective censorship by suppressing challenges to misinformation.
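To make that gatekeeping concrete, here is a minimal Python sketch of a Community-Notes-style “bridging” rule. It is an illustration under simplified assumptions: the real algorithm X has open-sourced uses matrix factorization over rating histories, and the `Rating` type, thresholds, and leaning scores below are hypothetical:

```python
# Illustrative sketch of a Community-Notes-style "bridging" gate.
# The real algorithm uses matrix factorization over rating history;
# this simplified version only checks that a note is rated helpful
# by raters on *both* sides of an inferred ideological axis.

from dataclasses import dataclass

@dataclass
class Rating:
    helpful: bool    # did this rater find the note helpful?
    leaning: float   # rater's inferred ideological position, -1.0 .. +1.0

def note_is_published(ratings: list[Rating],
                      min_ratings: int = 5,
                      min_helpful_share: float = 0.6) -> bool:
    """Show a note only if enough raters weigh in, a clear majority
    call it helpful, AND that majority spans both ends of the spectrum."""
    if len(ratings) < min_ratings:
        return False  # most notes stall here and are never shown

    helpful = [r for r in ratings if r.helpful]
    if len(helpful) / len(ratings) < min_helpful_share:
        return False

    # Bridging requirement: helpful votes must come from both "sides".
    left = any(r.leaning < -0.3 for r in helpful)
    right = any(r.leaning > 0.3 for r in helpful)
    return left and right

# Example: 4 of 5 raters find the note helpful, but they all lean one
# way, so the note is never published -- regardless of its accuracy.
ratings = [Rating(True, -0.8), Rating(True, -0.6), Rating(True, -0.5),
           Rating(True, -0.7), Rating(False, 0.4)]
print(note_is_published(ratings))  # False
```

Notice the design choice: the note’s accuracy never enters the decision, only the consensus profile of its raters does, which is precisely how a valid correction can sit unpublished indefinitely.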

Clearly, it is increasingly up to us as individuals to uncover the facts that support our understanding of the world, and that takes effort and time. If our sources of information do not attempt to verify the validity of that information, our understanding of the world will inevitably become more biased. So the next time Zuckerberg trumpets his hands-off role in support of the First Amendment and unbiased sharing, remember that what he is actually campaigning for is letting the ocean of misinformation expand exponentially, at the expense of the inevitable targets of malicious ridicule. Zuckerberg’s bias is necessarily toward more engagement, a goal served by less moderation on a platform with global reach. At that scale, the moderation that protects you is weakening. Remember what Zuckerberg himself said: “…we’re going to catch less bad stuff…”

lawrence.pierce@hypergridbusiness.com