Social Media and the Florida School Shooting
Should Social Media Platforms Have Any Responsibility for the Content Posted on Their Sites?
As details have surfaced following last week’s tragic mass shooting in Parkland, Florida, we’ve learned that the perpetrator posted a number of messages on various social media outlets, including a warning on YouTube that simply said “Im (sic) going to be a professional school shooter,” posted on a video submitted by a Mississippi bail bondsman. Investigators say the gunman, Nikolas Cruz, also used Instagram extensively, uploading pictures of guns and bullets. Broward County Sheriff Scott Israel told reporters that the shooter had an extensive history of making “disturbing” posts on a variety of social media sites. Israel asked lawmakers to consider legislation that would empower law enforcement officers to question individuals who use hateful speech or make threats on social media, and to allow officers to seek mental examinations for those persons.
How Social Media Outlets Currently Deal with Problematic Content
Until recently, social media companies have mostly taken a hands-off approach to the content posted on their platforms. Though Google, Facebook and similar websites typically claim they don’t want to inhibit free speech, and cite the First Amendment to suggest that they can’t limit what goes on their sites, constitutional experts are unanimous in disagreeing, pointing out that the restrictions set forth in the U.S. Constitution limit only the actions of the government, not those of private individuals or private companies. The U.S. Supreme Court has long recognized the right of non-governmental entities and individuals to place limits on speech, and has even allowed government restriction of speech where there is a “clear and present danger” to the public.
According to authorities, though most social media outlets have had content restriction policies in place for some time, the reality is that they have been remiss in enforcing those policies, often understaffing the units that monitor content. Facebook policy prohibits the posting of content that “is hate speech, threatening or pornographic; incites violence; or contains nudity or graphic or gratuitous violence.” YouTube also bans hate speech, as well as certain other content it deems inappropriate, such as instructions for making bombs or videos depicting the mistreatment of animals.
In recent months, though, many social media giants have instituted new policies and procedures and have beefed up staff to address the proliferation of hate speech and violent content on their platforms. They have also increased development activity on software that would aid in recognizing and blocking forbidden content. YouTube and Google both employ algorithms to detect inappropriate content, and rely on user input to identify posts that violate content guidelines. According to the Wall Street Journal, YouTube has enlisted employees to screen thousands of hours of posted videos to determine whether subject matter is objectionable. In addition, Google has renewed efforts to improve its algorithm to better catch bogus or questionable posts.
Legal experts say that a big part of the challenge for social media outlets is balancing an open forum with concern for public safety. That issue is complicated by the fact that many posts that have a harmful impact on others, or that suggest potentially dangerous activity, are not illegal in and of themselves. For example, the pictures of bullets and guns that the killer posted on Instagram are not illegal, and not necessarily suggestive of improper activity. Furthermore, it can be almost impossible to determine whether a post or picture is tongue-in-cheek or indicative of a serious threat. It’s no secret that the Internet is considered by many to be a viable forum for expressing disagreement, often vehement discord. Social media companies also contend that their platforms are often manipulated, allowing unknown users to post content that circumvents their controls.
At the present time, while social media companies are bound by law to report any situation on their platform involving child pornography, there’s no similar requirement regarding threats or evidence of violence. The social media sites customarily leave that to website users, who can report any activity they perceive to be suspicious or problematic.
Many industry analysts believe that the social media companies have the technology and expertise to monitor all content on their sites, and should move forward as quickly as possible to implement measures that provide more protection to the public. Forensic psychologist Reid Meloy, who has consulted for the FBI, says that such a tool is “not only conceivable, but doable.” Meloy has argued that social media companies should not only monitor their content for potential threats, but should also be required by law to notify authorities of anything that suggests the potential for violence or threats to public safety. Meloy says his experience shows that mass killers frequently disclose their plans in advance, and that social media offers the perfect platform, as it gives them access to a vast public without attracting too much attention. In his experience, people who in the past would have told a close associate or family member are now inclined to use social media outlets to brag of their plans in advance.
In the Florida case, the bail bondsman who posted the video on which Nikolas Cruz foretold his rampage reported the comment to YouTube as spam and had it removed. He also notified the FBI, though FBI officials admit they did not follow up on his tip. Facebook officials say they removed the Instagram and Facebook pages under the shooter’s name, but not until after the incident.