After both the Paris terrorist attacks and the mass shooting in San Bernardino, California, many lawmakers and individuals around the world are pushing Facebook (FB), Twitter (TWTR), and Google (GOOG), as well as other tech companies, to ramp up their efforts to report potential terrorists. Some of that pressure stems from legislation recently introduced in the United States Congress, while the rest comes from individuals criticizing many social media sites' current policies for reporting and deleting profiles that advocate for or sympathize with terrorist acts. Currently, both Facebook and Twitter rely on users to report content that violates their community standards, but each company's response time varies greatly. Facebook is largely seen as stricter than Twitter, mainly due to its swift deletion of many ISIS-related accounts and pages. By contrast, ISIS supporters maintained access to over 46,000 Twitter accounts, some active, others less so, between September and December 2014, according to a study by the Brookings Institution.

While Twitter's CEO, Jack Dorsey, has made no public comment on terrorist groups' ability to use the platform to communicate their agendas, a company spokesperson wrote, "Twitter continues to strongly support freedom of expression and diverse perspectives... but it also has clear rules governing what is permissible. Users may not make threats of violence or promote violence, including threatening or promoting terrorism." But, like many tech companies, Twitter keeps its specific review policies vague, saying that disclosing its proprietary technology would make it easier for hackers to infiltrate encrypted data.

For Facebook, the first step toward deleting an account is receiving a report from a current user, which prompts employees to review the content and make a decision. While the exact nature of Facebook's review process has not been made public, Monika Bickert, Facebook's head of global product policy, responded to criticism on Change.org by writing, "There is no place on Facebook for terrorists, terrorist propaganda or the praising of terror. This is not an easy job and we know we can make mistakes and are always working to improve our responsiveness and accuracy."

Under these policies, it can take several days for reported profiles to be deleted, but in her statement, Bickert adds, "If Facebook blocked all upsetting content, we would inevitably block the media, charities and others from reporting on what is happening in the world." Striking a balance between censorship and combating terrorism is difficult, especially for social media sites that attempt to appeal to users around the world.

Currently, Facebook, Google, and Twitter insist they treat government complaints the same as citizen complaints unless the government provides a court order. In fact, each company publishes regular reports on how many inquiries it receives from government officials in an effort to increase transparency. But lawmakers question the effectiveness of the current process, and the Senate recently introduced the "Requiring Reporting of Online Terrorist Activity Act," which would force tech companies to tell law enforcement if they "become aware of terrorist activity" on their sites.

But the recent proposal has come under fire over two concerns: the bill's vague wording and the precedent it would set for foreign governments. How would the government define a company being "aware" of terrorist activity, and how long would a company have to report it? Would companies have to continually scan content, or could they still rely on community reports? These questions remain unanswered, as the bill is modeled on an existing law under which email providers like Google scan their services for child pornography and report it.
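The distinction matters because scanning for known illegal images and scanning for "terrorist activity" are very different technical problems. Image scanning generally works by matching fingerprints ("hashes") of uploaded content against a database of known illegal material, rather than by interpreting meaning. The sketch below is a deliberately simplified, hypothetical illustration of that hash-matching idea; production systems use perceptual hashes such as Microsoft's PhotoDNA, which tolerate resizing and re-encoding, not the exact-match cryptographic hash shown here.

```python
import hashlib

# Hypothetical database of fingerprints of known flagged content.
KNOWN_BAD_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",  # placeholder: md5 of b"hello"
}

def fingerprint(content: bytes) -> str:
    """Compute a fingerprint of uploaded content.

    An exact cryptographic hash is used here only to keep the sketch
    simple; real scanners use perceptual hashes that survive minor
    changes to the file.
    """
    return hashlib.md5(content).hexdigest()

def scan_upload(content: bytes) -> bool:
    """Return True if the content matches a known-bad fingerprint."""
    return fingerprint(content) in KNOWN_BAD_HASHES

if __name__ == "__main__":
    sample = b"hello"  # matches the placeholder entry above
    if scan_upload(sample):
        print("Match found: flag for human review and reporting.")
    else:
        print("No match: content passes automated screening.")
```

No comparable database of "terrorist activity" exists, which is why critics argue the scanning model behind the existing law does not transfer cleanly to speech.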

Further, the precedent that an American law would set troubles many. If companies have to hand over data to law enforcement agencies in the United States, what could stop other countries from demanding the same? This could lead to hundreds of different laws requiring sites to report "terrorism" as each country defines it. After all, one person's hateful propaganda could be another's free speech.

Increased legal requirements would also burden these companies, requiring additional staff, technology, and training. While those costs may briefly push profits down as social media sites restructure their policies to comply, the long-run effect may be to drive more users toward sites that value free speech over almost anything other than a direct threat of violence, even if some view those sites as unfriendly or triggering.

And beyond laws that would require firms to report suspicious activity to law enforcement, the U.S. Department of Homeland Security is attempting to revamp its visa application program to include social media background checks. Only three pilot programs currently advise the use of social media in approving or denying visa requests, but the department is developing procedures to integrate social media checks into every visa application.

Taken together, government and public pressure on tech companies to help identify and prevent terrorist acts is converging, and 2016 will likely bring some sort of reform to how companies work with law enforcement agencies to report potential suspects.