Social media regulation

We have an ethical responsibility to regulate the Internet

Photo courtesy of The New York Times

Extremists, neo-Nazis and members of the alt-right hold a demonstration in Charlottesville, Virginia. Participants had used Facebook to organize the event and spread racist messages. According to a survey by the Anti-Defamation League, one-third of social media users have been the targets of online hate speech.

Morgan Weir, Staff Reporter

Social media offers unprecedented opportunities to express ourselves freely, encounter new perspectives and share information with wide audiences. It also provides a breeding ground for terrorist recruitment, propaganda, extremism and hate speech. These risks, along with the monumental role that social media plays in our daily lives and in our society, call for an in-depth discussion of our ethical responsibility to regulate the Internet in order to protect its users. That discussion is complex and encompasses many nuanced questions. To examine them fully and engage in productive debate, it's necessary to define what regulation actually means and what responsibility we have to follow through with it. Within this context, regulation can be defined as policies, laws or rules set in place by an authority. The responsibility comes in when we recognize that the duty of authority figures is to protect people, which can only be done with strict and comprehensive legal codes.

We need to regulate the Internet; it is now such a large part of our culture, global politics, communication and the spread of information that it has become both a necessity and a right in the modern world. However, given that it is used by billions of people from many different countries, cultures and backgrounds, our approach to regulation must be tactful and comprehensive. To accomplish this effectively, as many agencies and groups as possible need to be involved: national governments, global organizations, local communities and corporations must all play their roles. Arguably the most important actors in making these regulations truly effective are the Internet giants themselves, which must work to create a safe and moral social commons rather than focus on turning a profit.

To tackle this effectively, it's necessary to clearly lay out what is illegal and punishable on social media when it comes to Internet terrorism, extremism and hate speech. As it stands, there is no clear worldwide definition of online terrorism, which is a major obstacle for social media sites that operate globally. This affirms the importance of involving an intergovernmental panel of experts and international organizations like the United Nations in setting regulations rather than depending on individual nations to find their own solutions.

Currently, much of the existing protection comes from self-regulation by social media companies. All of the top platforms offer some type of reporting feature for posts, available to all users. Many of them, like Twitter and YouTube, rely on algorithms to find and delete sensitive content. In theory, this could be an effective and inexpensive way to keep people safe, but in practice it has created more problems than it has solved. In an article by the Daily Beast, for example, YouTube was criticized for pointing vulnerable young people toward neo-Nazi propaganda with its algorithm. A study by the Brookings Institution found that despite Twitter's efforts to curb terrorism on its platform, ISIS has managed to maintain thousands of active accounts that spread propaganda and recruit vulnerable people. Facebook was criticized by the United Nations for its failed efforts to stop hate speech on its platform that contributed to genocide in Myanmar. These cases highlight the importance of including both governments and trained, third-party content moderators in regulation efforts.

Implementing these changes is so important because social media shapes global narratives and decides how cultural conversations develop. With that in mind, it's necessary to question how regulation might impact the social movements, both productive and harmful, that have come with the rise of the digital age. With the #MeToo movement, for example, women tweeted their stories of sexual harassment. Within 24 hours of the initial tweet, over 4 million people had used the hashtag, according to Facebook, which shows the power of social media to serve as a medium for global discourse and for confronting societal issues. It has also become valuable in organizing protests and revolutions, like those in Sudan, where the hashtag #BlueforSudan went viral, or the Black Lives Matter movement and the Hong Kong protests, where organizers have used the Internet as a tool to spread their message, organize demonstrations and keep participants informed and engaged.

As much as social media has made it easier for people to grow movements in favor of democracy and equality, it has also given white supremacists and extremists a platform to quickly and efficiently spread their ideology. During the 2017 white supremacist demonstrations in Charlottesville, Virginia, participants used Facebook to spread the word about the event. During the mass shootings at two mosques in Christchurch, the attack was live-streamed to Facebook, and even after the original video was removed, copies were being re-uploaded every second for hours afterward, according to YouTube. In these cases, regulations would have to strike a delicate balance: fostering social media's potential to start movements for justice while confronting the harmful extremist ideas that can spread alongside them. This can be done through clear legal definitions of extremism and supremacy and an objective set of boundaries for hate speech.

One recent issue that has sparked discussions of Internet regulation is the rise of “fake news” and propaganda online. This phenomenon will continue to influence politics as it has over the last few years, particularly during the 2016 election. The Senate Intelligence Committee launched an investigation in response to reports of Russian meddling in the U.S. election, with Facebook caught in the middle. It found that Facebook, in an attempt to rise to the top of the digital marketing industry, used its users' personal data so that marketers could target people, and that Russia exploited this technology to influence voters during the 2016 election. The committee also found that Russia's intelligence agency reached tens of millions of Americans, many of whom shared or reposted the Russian agency's posts. The Pew Research Center found that four out of 10 Americans often get their news online, especially those between the ages of 18 and 29. With new generations becoming progressively more reliant on social media to stay informed, regulations are necessary to keep propaganda from infiltrating our feeds.

The influence of social media sites on the culture of the next generation is powerful and demands that everyone be held accountable. With this comes the critical question of who exactly we are holding accountable. When it comes to social media in particular, there is debate over whether the individuals who post or share harmful content should face the legal ramifications alone, or whether the Internet giants who host that content should face consequences as well. As private companies, social media sites have a right to regulate hateful or violent content. This also means they have a duty to accept the consequences when they don't do that job effectively. For any type of regulation to work, companies need to take responsibility. This doesn't mean individuals should be off the hook if there is evidence that they violated regulations, the law or company policy. At this point, the question isn't whether we should regulate; it's how.

Local communities can play a huge role, particularly when it comes to teaching young kids. Schools should build social media and Internet responsibility education into classes across subjects and grade levels, such as social studies and health, rather than confining it to a single mandatory class taken once. For their part, companies should outsource content moderation to experts on major challenges like extremism and terrorism to ensure objectivity. They also need to add safety features like content warnings, efficient reporting methods and age restrictions that are actually enforced. These companies should cooperate with international organizations like the UN, which could start a task force, with different countries represented, made up of experts on Internet policy, extremism, terrorism and hate speech who could conduct research, create policy recommendations, establish globally applicable definitions and come up with a comprehensive plan for regulation. Lastly, local authorities should work with social media companies and policymakers to enforce all of these regulations consistently.

The discussion around Internet regulation is complicated because, ultimately, it's not just a debate about whether we should respond to extremism, terrorism and other issues. It forces us to question whom we trust, how we consume information and what free speech looks like in our ever-changing technological world. Regardless of where one lands on these questions, it is clear that there is an urgent ethical need for Internet regulation.