Why Facebook — and all other social media platforms — can be dangerous

Anna Broderick Sinclair
Nov 30, 2019 · 6 min read

Everyone knows Facebook. Online, it is home to free speech, the latest news, political rants, and debates over privacy in the internet era.

Mark Zuckerberg at a recent congressional hearing.

So, it’s no surprise that Facebook’s founder and chief executive officer, Mark Zuckerberg, is currently weathering controversy. This isn’t new, however. In particular, after Russians attempted to interfere in the 2016 United States presidential election by exploiting Facebook, Zuckerberg began to face backlash from both the public and the government.

It was also no coincidence that user mistrust of Facebook spread so rapidly. Since its founding in February 2004, Facebook quickly went from being “an online network of college students to a global gathering place for billions of users to connect with friends and share information” (Hoffman 1). Remember, Zuckerberg initially built Facebook with the mission of helping people connect and share. “He did not anticipate the company becoming an arbiter of free speech” (Hoffman 1). In other words, Zuckerberg never expected Facebook to become the judge of what speech is acceptable.

What was Facebook’s role in this? What was its next move going to be? Were new rules going to be put into place?

Let’s go back to history, shall we?

In February 2004, as a Harvard undergraduate, Zuckerberg launched Facebook with the help of a few classmates: Eduardo Saverin, Dustin Moskovitz, and Chris Hughes. “When first created, Facebook was open only to Harvard students but it quickly expanded to include students from other top tier American universities, beginning with Stanford, Columbia, and Yale. At the end of its first year, the site had reached one million active users and by the end of 2005 it had six million. By 2006, Facebook opened up membership to anyone over the age of 13” (Hoffman 2). Today, Facebook has over 2 billion monthly active users.

Two years ago, Facebook updated its mission statement to emphasize positive connection: “Give people the power to build community and bring the world closer together” (Hoffman 2). But has that really happened? Has Facebook successfully served its mission? Is it doing so in 2019, and will it continue to? If it’s doing its job, then why is Zuckerberg under such direct scrutiny?

From my own observations, Facebook is still bringing the world closer together. However, I see that happening mostly among the older demographic (people over the age of 40), and not so much among younger users. Millennials and people in their 30s may still use Facebook actively, but not nearly as much as more popular platforms such as Instagram and Twitter.

Though useful, Facebook, like almost anything else, has its pros and cons. It can be, and has been, used to ignite political movements and spread awareness of important cultural issues. For example, “In 2011, Facebook was used by activists in Tunisia and Egypt to spark political revolution and topple dictators. Around the globe, Facebook became a vital tool on small and large scales for grassroots organizing and political campaigning” (Hoffman 2). That’s definitely a good thing, but what about users’ safety? Users can certainly be harmed, through hacking and severe cyberbullying.

Back in 2016, hacking and cyberbullying seemed to grow more widespread on Facebook, along with frustration over the most turbulent U.S. presidential election in a generation, which led many Facebook members to temporarily deactivate their accounts or even permanently delete them. But something bigger was hurting Facebook: fake news. “During the 2016 presidential campaigns, fake news became a rampant epidemic on Facebook and other leading social media platforms” (Hoffman 3), and that led to a series of dangerous situations. In one incident, a false story spread on Facebook claiming that Democratic candidate Hillary Clinton and her campaign chairman, John Podesta, were running a child sex ring out of the basement of a Washington, D.C., pizza parlor. “The story incited a gunman to take matters into his own hands to ‘self-investigate.’ While no one was hurt in the incident, the story was indicative of the real-world implications of politically charged fake news stories. Less extreme but equally false stories impacted the way the voting population perceived candidates” (Hoffman 3).

During the rise of fake news, the failure to secure user data also became a major issue. Why were users losing control of their privacy? Doesn’t Facebook condemn that? This all surfaced with the Cambridge Analytica scandal: “Cambridge Analytica, a British political consulting firm that specialized in data mining and data analysis, had harvested private information from more than 50 million Facebook users to target ads for the 2016 U.S. presidential election” (Hoffman 3). But Cambridge Analytica was not the only party to blame. That’s not to say it wasn’t at fault, but it wasn’t the only one harvesting users’ private information. Facebook itself had a history of sharing users’ personal data. “For example, Facebook allowed Netflix and Spotify to read users’ private messages without their consent” (Hoffman 4).

Facebook is now perhaps best known for the infamous Cambridge Analytica scandal.

In 2016, Russian agents tried to sow discord among voters using fabricated Facebook accounts, making the run-up to the election even more troublesome for Facebook. This was propaganda on a massive scale: 150 million Americans were reached through social media. What was Facebook going to do? What about the other social media platforms? Would proper action be taken, and was it?

Remember: fake news is not a new phenomenon. Since the advent of print, there have been countless attempts to spread false information, from newspapers and magazines all the way to social media. Fake news has also played a role in American politics since the nation’s founding. In particular, “The founding fathers were known to spread propaganda in order to muster support for the revolution and to get people to enlist” (Hoffman 4). The rise of the Internet, however, brought immense new challenges to journalism. Although the Internet allowed greater access to information, it also “enabled fake news and poor-quality information to flourish” (Hoffman 5). Unfortunately, this still happens today, and at this point it seems almost inevitable.

Other popular social media platforms, including Twitter, Instagram, and YouTube, have faced the same issues of user privacy and fake news. Facebook, however, drew additional attention for other reasons: spreading hate speech, serving as a platform for online bullying, facilitating human rights violations, and spreading terrorist propaganda. “To counter the fake news epidemic, Facebook decided to implement stricter content monitoring practices. However, this willingness and ability to monitor content could cause Facebook to assume greater legal risks” (Hoffman 7). Because Facebook was capable of monitoring all of the content shared on its site, it could find itself subject to lawsuits over that content. Challenges therefore remain and, unsurprisingly, few people trust Facebook.

To move forward, Facebook needs a plan of action. Given the risks it poses, it must be incredibly careful. In my experience, hate speech and bullying still persist on the platform. From this point forward, the only way the company can fully respond and recover is to take full responsibility for its actions; otherwise, lawsuits will multiply and grow more costly. Most importantly, it needs to ensure that its users are safe and that their wellbeing won’t be harmed by the platform. So, further self-regulation isn’t a bad move, as long as no one gets hurt. And Facebook must respect its users’ privacy.
