Last week, Australia dropped its revised Combatting Misinformation and Disinformation Bill 2024, and it’s about two sandwiches short of a picnic. The bill appears to draw some of its inspiration from the EU’s Digital Services Act in the significant responsibilities and regulatory obligations it places on online services. And if past is prologue, what happens in Australia doesn’t stay in Australia—such as when Australia passed its “link tax” bill, which taxed social media companies when users shared links to news articles. That link tax spread to Canada and has been actively considered in the United States. Canada is also considering replicating Australia’s eSafety Commissioner, despite various cases of significant censorial overreach.
But whatever the rationale or history here, this bill will not only restrict Australians’ free speech and access to different online services; its influence may also spread and threaten American speech.
Let’s first look at how the bill defines misinformation and disinformation, a challenge for any bill or organization in this field. The bill defines misinformation as content that
- is “reasonably verifiable as false, misleading, or deceptive”; and
- “is likely to cause or contribute to serious harm.”
The definition also carves out some space for satire and parody, professional news content, and “reasonable” dissemination of content for “academic, artistic, scientific, or religious” reasons. Disinformation uses the same definition and adds that there must be grounds to suspect that the content was shared with the intent to deceive others or otherwise involves inauthentic behavior.
There is enough to unpack in these definitions alone to fill multiple blog posts, but I’ll focus on three points:
- What is verifiably misleading? There are claims that are objectively true or false because we can verify them through evidence and logic that are available for anyone to interrogate. But how does one objectively verify that something is misleading? Misleading content involves leaving out certain context, cherry-picking data, predicting future outcomes from limited evidence, intermingling opinion and fact, and so on. Misleadingness, then, is all about the ways we debate and discuss issues, often working from incomplete information and opinion and weighing various factors and arguments against one another. In other words, what is or is not considered misleading is often highly subjective.
For example, some view CNN as hard-hitting and fair journalism, while others believe it to be highly biased and misleading. The same is true for Fox News, the Washington Post, the New York Post, Joe Rogan, the Daily Wire, the Young Turks, and every other expressive organization. These opinions rest on subjective assessments of how well each organization interrogates the evidence, highlights alternative viewpoints, and provides the right context and framing in its coverage of current events.
For the government to claim that misinformation based on subjective misleadingness can be objectively verified is a fundamental contradiction. What the government is actually asserting here is that its view of what is misleading is correct, guaranteeing significant bias in how misinformation will be policed.
- What is harm? The government says that it is limiting its regulation of misinformation to only the most harmful content. But the list of harms is expansive. Some of the most abusable categories include any harm to the “efficacy of preventative health measures,” “vilification” of a group of people (i.e., hate speech), and harm to the Australian economy or public confidence in the markets. Importantly, the bill only requires that the speech in question “contribute” to harm; it doesn’t need to actually cause it.
So, it’s not hard to imagine many types of political and social discussion that could contribute to these categories of harm. For example, even a relatively careful claim about the potential weakness of a newly developed vaccine could contribute to vaccine hesitancy that harms the efficacy of preventative health measures. Or citing crime statistics about different groups of people could be accurate yet viewed by some as out of context and contributing to the vilification of a group. Again, the current government’s view of what is harmful is all that will matter, inserting yet more bias into the process.
- How do we determine intent without full information or due process? Like many other definitions of disinformation, the bill differentiates disinformation from misinformation by its intent to deceive. While this is common, it elides the difficulty of determining the intent behind online speech without full information. Indeed, the bill doesn’t require a high standard of proof, only that there be “grounds to suspect” deceptive intent. This makes it remarkably easy to label content as disinformation.
For example, the widely cited counter-disinformation dashboard Hamilton 68 claimed that Russian bots were spreading large amounts of disinformation on Twitter. However, Twitter itself determined that the accounts accused of being Russian bots were mostly just average right-leaning or populist users. A definition this loose weaponizes such sensational and false claims of disinformation, forcing tech companies to act on the purported disinformation or rebut it for fear of being penalized for noncompliance.
There are other concerning elements to these definitions (e.g., how the government intends to define and police inauthentic behavior any better than companies already do, or whether exempting professional news media from being considered misinformation favors certain elite speech over that of online activists or independent journalists), but that’s for a different blog post. It’s also worth noting that the government at least tried to learn some lessons from the highly problematic first version of this bill, such as by removing a provision that would have protected government speech from being considered misinformation while the opposition’s speech could still be targeted. But the bill’s broad definitions of misinformation and disinformation remain highly problematic and are likely to be enforced in a biased manner depending on the party in charge of the government.
Indeed, going beyond just the definitions, the way the bill intends to police misinformation and disinformation should also give Australians pause. In the name of stopping harm, it grants broad powers to the Australian Communications and Media Authority to regulate how various tech companies moderate what is true, false, misleading, or harmful online. The bill applies well beyond just social media to companies of all sizes that provide search engines, connective media services, content aggregation, and media sharing services—though not internet service, text, or direct message providers. This list is still incredibly broad, ranging from video games to Google search and from Facebook to porn sites.
These companies will be required to create or uphold certain codes to combat misinformation, which in theory they have some control over. These codes should address how to moderate content, including
- removal of content by human reviewers or artificial intelligence (AI) tools;
- stopping ads or monetization of misinformation;
- allowing users to report misinformation;
- transparency into the source of a political ad;
- supporting fact-checking; and
- providing users with authoritative information.
Many tech companies already have some set of policies and practices in this area, but they may vary significantly. For example, Meta uses third-party fact-checkers while X uses Community Notes. Reddit provides the users of its many subreddits with upvotes and downvotes. More decentralized systems like Bluesky or Nostr may have few centralized tools but give users control over how they source their newsfeeds. Similarly, search engines, video game platforms, and other companies covered by this law may vary significantly in the tools and policies they have.
Under this bill, though, the government has the authority to step in and enforce its own misinformation standards if it feels the companies aren’t doing enough. And, of course, failure to sufficiently implement these regulations can cost a company up to 5 percent of its annual turnover. So, companies don’t really have flexibility; they must moderate enough misinformation to satisfy the government. And what satisfies the government could change as governments, politics, and current events change, leaving companies with little assurance that they have ever done enough. Since the government can deem any particular approach to addressing misinformation insufficient, companies may ultimately be limited to offering only government-approved tools. For example, if the government doesn’t like Community Notes, it could effectively demand X adopt the government’s preferred solution, perhaps a fact-checking regime like Meta’s, even though some Australian fact-checkers have been embroiled in a row over bias. Given the regulatory sword hanging over their heads, one can expect social media companies to over-moderate, favor the government’s view of misinformation, and be limited in what misinformation tools they can provide to Australians.
Together with other regulatory requirements around risk management, transparency, and beyond, the bill will not only inject government bias into the moderation of misinformation but also put companies in a no-win position. Companies can try to comply with these rules but apply them only in Australia. That means an increasingly fragmented set of policies and procedures to manage Australia’s regulations alongside the growing number of similar laws in other jurisdictions, such as the EU’s Digital Services Act. Alternatively, companies could take a least-common-denominator approach, changing their policies and procedures to comply more uniformly with the many laws being passed around the world. This approach is simpler, but it means that Americans using the services of mostly American tech companies are effectively subject to misinformation rules set by foreign governments, a phenomenon known as the Brussels effect. Finally, these companies could simply exit or limit their presence in a market if compliance becomes too costly. We can already see this happening: Facebook and Instagram no longer allow links to news articles in Canada after the country forced Meta to pay each time a user linked to one. Video-sharing site Rumble has left Brazil, X is being blocked there, and X is also under pressure in the EU. Apple and Meta aren’t rolling out some AI products in the EU due to onerous regulations.
The proposed misinformation bill will harm Australians’ speech, and its precedent may spread to other nations looking to restrict speech in the name of stopping misinformation. Together with growing regulatory pressure around the world, such policies may increasingly affect Americans’ speech as well. We need a broad coalition of civil society to rise to the challenge of government speech suppression, pointing out the harms such actions are inflicting on societies worldwide. More than that, the US government must be more active and vocal in defense of free expression, not merely to stand up for the principle but also for the sake of American speech and American companies.