
Providing a safe experience is key for Meta

By Mia Garlick, Director of Public Policy for Meta ANZ

Digital transformation and the rapid evolution of the internet have brought many advantages and advancements, but they have also introduced new challenges for users, heightening the need for new rules.

Legislation has, unfortunately, not kept up. As millions of Australians continue to enjoy the benefits of a free and open internet, we’ve witnessed the good that comes with connecting people and, at times, the harm that can be caused when people misuse online platforms.

There is no simple answer, and this conversation has raised important questions about how to hold digital platforms accountable for keeping their users safe while protecting freedom of expression.

At Meta, we believe that businesses like ours should not be making these decisions on their own, which is why we advocate for democratic governments to set new rules for the internet in areas like harmful content, privacy, data, and elections.

In recent years, we have tripled our safety and security team to 40,000 people, removed billions of fake accounts (often created to cause harm), and built better tools to give people control over their experiences and interactions on social platforms.

As part of this work, we continue to evolve our policies and create new products so they reflect emerging digital and societal trends. We don't make these policy and product decisions alone; we draw on insights and feedback from more than 800 safety partners globally.

Recently, we also announced changes to our bullying and harassment policy to help protect people from mass harassment and intimidation, and now offer more protections to public figures, particularly women, people of colour and members of the LGBTQI community, who we know can face increased scrutiny. 

Accountability at the fore

We believe that online platforms, like any company with annual reports or financial results, should be held accountable. To ensure we're held to account, we publish figures on how we deal with harmful content, including how much of it people actually see.

As a result, we’ve more than halved the prevalence of hate speech on Facebook over the last four quarters - down to 0.03 per cent of content views, or around 3 views per every 10,000. We reduce the prevalence of violating content in a number of ways, including improvements in detection and enforcement and reducing problematic content in News Feeds. 

Of the hate speech we removed, we found 96 per cent before anyone reported it – up from just 23 per cent a few years ago. While we have further to go, the enforcement reports show that progress is being made. 

We recognise there can be value in requiring users to authentically represent who they are, and we already have various methods of finding and removing accounts used by people who misrepresent their age.

We also invest significantly in detecting and removing fake accounts on Facebook; we removed 1.8 billion fake accounts in the last quarter alone. We believe that, on platforms like Facebook and Instagram, where people make direct connections with each other, the community is safer when people stand behind their opinions and actions.

Recognising there is much more to do, and aligning with the theme of the Australian eSafety Commissioner's Safer Internet Day, “Play it fair online”, we continue to engage with external experts, policymakers, parents and caregivers, and the broader industry to develop effective solutions to combat online abuse. Providing a safe experience for our community, in particular young people, continues to be our top priority. With new rules for the whole industry, we can make the internet safer while maintaining the social and economic benefits of these platforms for all Australians.

To learn more, visit https://australia.fb.com/onlinesafety/ or https://www.facebook.com/safety