Putting Equity First in the Regulation of Digital Platforms

For Black History Month, the Institute hosts a policy series highlighting bold policy solutions to tackle anti-Black racism, with a focus on the need for intergovernmental action. Each submission proposes a plan for governments to work together to tackle a problem, while serving as a guide for advocates working towards what should be our collective effort to eradicate anti-Black racism.

With almost an entire year of social lockdown under our belts, it’s no surprise that social media and other digital platforms have become our primary sources of entertainment, entrepreneurship, and solidarity. Yet, as the past few weeks have highlighted, social media and other digital platforms can be used for both good and bad.

The unprecedented banning of Trump from digital platforms such as Twitter and Facebook, following his incitement of the violence witnessed on Capitol Hill, has recently put platform regulation at centre stage. Pundits and subject matter experts in Canada and abroad have been offering their take on a range of platform regulation issues, from tax reform to antitrust legislation to content regulation. However, a key element underrepresented in many of these discussions is equity – how platform regulation may both protect and further discriminate against communities of colour, including Black communities, depending on whether and how it’s done.

Two issues stand out when looking at platform regulation with an equity lens: hate speech and algorithmic bias.

Hate Speech Goes Digital

In Canada, and much of the world, instances of online hate speech have surged over the past decade, yet little has been done to regulate it. A recent poll by the Canadian Race Relations Foundation and Abacus Data found that 55% of Canadians have either experienced or witnessed racism online, with racialized Canadians reporting far higher rates of online racism than their non-racialized counterparts. Just last month, Changich Baboth, a Black high school student at Vancouver’s Lord Byng Secondary School, reached a settlement in her complaint against the local school board over how it dealt with a 2018 incident in which another student at the school created and uploaded a video online calling for the murder of Black people.

While there is a clear need to protect people – particularly racialized communities – from online violence, a primary argument used in opposition to content regulation is its encroachment on free speech. As with our neighbours down south, freedom of expression in Canada is protected under Section 2 of the Canadian Charter of Rights and Freedoms. Yet, for communities at the receiving end of hate speech, there is often more than poetic licence at stake.

Several major violent attacks on minority communities over the past few years have been linked to extremist rhetoric and hate speech circulated online. Prior to broadcasting the attacks on Facebook Live in 2019, New Zealand’s Christchurch shooter had a history of spewing racist views on digital platforms. Following the vicious Toronto van attack in April 2018, which left 10 dead and 18 injured, police identified posts made by the perpetrator on Facebook, 4chan and Reddit, in which he claimed to be an incel and expressed violent and misogynistic views. 

In fact, Section 1 of the Charter, along with sections of the Criminal Code of Canada, permits “reasonable” limits on free speech, which have been used in the past to restrict obscenity and hate speech. Though freedom of expression is a fundamental right in our democracy, so is the equal protection and equal benefit of the law for every individual. If one right dangerously encroaches on another, inequity will undoubtedly prevail.

Moving Beyond Self-Regulation to Responsible Regulation

By banning Trump from their platforms, Big Tech firms demonstrated the power they hold to rapidly moderate, edit and censor content. In Canada, these firms may soon be required by legislation to remove illegal or harmful content within 24 hours or face hefty fines. Jurisdictions such as the European Union have already put similar policies into action.

Though public policy is slowly but surely catching up to the pace of the digital world by regulating the speed of content removal, the authority to determine what kind of content is removed has largely remained in the hands of the platforms themselves. Thus far, the bulk of content moderation on these platforms is carried out by algorithms developed by data scientists. But how can we ensure that the designs and tools put in place to regulate online discrimination don’t end up perpetuating it?

Consider this: a 2019 U.S. study found that Black people were 1.5 times more likely than their white counterparts to have their content flagged on Twitter. Their posts were more than twice as likely to be flagged if they were written in African American Vernacular (AAV). In her Medium article, Black Lives Matter activist Didi Delgado starkly illustrates the racial biases ingrained in these algorithms, describing the challenges she and other racial justice advocates have faced: repeatedly being locked out of their Facebook accounts because algorithms deemed their non-violent content hate speech rather than political expression.

Algorithmic bias – systematic errors in the coding, collection, or selection of data that produce unintended or unanticipated discriminatory results – is a true cause for concern, one that may be discreetly undermining our civil rights, one biased line of code at a time. If Big Tech hopes to be part of the solution, firms must first prioritize equity in the programming and design of these systems, and actively and authentically improve the diversity of tech spaces, before regulatory changes can truly make headway.
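To make the mechanism concrete, here is a minimal, purely hypothetical sketch (in Python) of how a keyword-based moderation rule built without attention to dialect or context can flag a benign post written in one register while allowing an equivalent post written in another. The blocklist, placeholder terms and sample posts are invented for illustration only; they do not describe any real platform’s moderation system.

    # Hypothetical illustration only: a toy, keyword-based moderation rule.
    # In this invented scenario, the blocklist was assembled from posts written
    # mostly in one dialect, so a reclaimed in-group term common in African
    # American Vernacular is treated as hate speech regardless of context.
    # That is a data-selection error of the kind described above.

    FLAGGED_TERMS = {"placeholder_slur", "reclaimed_term"}  # invented tokens

    def is_flagged(post: str) -> bool:
        """Flag a post if any word matches the blocklist, ignoring context."""
        words = {w.strip(".,!?").lower() for w in post.split()}
        return bool(words & FLAGGED_TERMS)

    # Two benign posts expressing the same sentiment in different registers.
    posts = {
        "standard register": "My friends and I are proud of our community.",
        "AAV register": "Me and my folks out here proud, reclaimed_term, we good.",
    }

    for register, text in posts.items():
        print(register, "->", "flagged" if is_flagged(text) else "allowed")

    # Only the AAV-style post is flagged, even though neither post is hateful.

The point of the sketch is not the specific rule but the pattern: when the data and assumptions behind a moderation system reflect only one community’s language, the system’s errors fall disproportionately on everyone else.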

Let’s Keep Talking

Platform regulation is a complex challenge, and meeting it will require support from governments, the private sector, independent organizations, academics, and activists. As suggested in the Public Policy Forum’s latest report, Harms Reduction: A Six-Step Program to Protect Democratic Expression Online, a third-party regulator is needed to ensure that digital platform companies adhere to these laws. Who will make up this third-party regulator is up for discussion; however, it’s imperative that the federal government prioritize equity in the establishment of such a regulator, as well as in the development of legislation aimed at regulating content online.

More importantly, any response to inappropriate and illegal online behaviour – whether from private companies or public policy – must put citizens first to achieve a balanced, equitable and multi-pronged approach. Community associations, like the Black Professionals in Tech Network and the Canadian Black Policy Network, can be engaged to consult communities and offer opportunities for everyday citizens to take part in solutions that counter the tangible harms of online hate speech.

As we continue to navigate this pandemic and maintain our social connections online, it is crucial that we probe deeper into the dark sides of the platforms we regularly visit. We must continue to apply pressure on governments, platform actors and regulators alike to take action before the harms of online hate rhetoric become our permanent reality.

Anna-Kay Russell is the Manager of Public Affairs at WoodGreen Community Services and co-founder of the Canadian Black Policy Network. Passionate about pathways to tangible equity, sustainability and governance, Anna-Kay is on a mission to collectively engage Black communities and allies in the public policy process to improve socioeconomic outcomes for Black Canadians.