Facebook's WhatsApp (FB) has long been applauded for its end-to-end encryption, which ensures user privacy. In the wake of the past year's data breaches and privacy scandals at Facebook and other tech companies, both governments and consumers have demanded more transparency and content curation from social media sites and the tech industry generally. Yet criminals can also take advantage of security measures designed to protect ordinary users.

Child pornographers, for example, use encryption services for their own nefarious ends. Two Israeli NGOs have just released detailed investigations revealing that many use WhatsApp to distribute child pornography precisely because its content is encrypted and unscreened. The illicit photos and videos are primarily distributed via WhatsApp groups whose names openly advertise the chats' pornographic content.

In response to allegations of inadequate screening, WhatsApp issued a statement: "WhatsApp has a zero-tolerance policy around child sexual abuse. We deploy our most advanced technology, including artificial intelligence, to scan profile photos and images in reported content, and actively ban accounts suspected of sharing this vile content. We also respond to law enforcement requests around the world and immediately report abuse to the National Center for Missing and Exploited Children. Sadly, because both app stores and communications services are being misused to spread abusive content, technology companies must work together to stop it."

Since the introduction of the group invite link feature in 2016, it has become much easier to join groups in which one knows no other members, fueling the growth of child pornography and drug-dealing clusters.

What's more, moderation is entirely automated, so safeguards such as the 256-member group cap have loopholes that are continually exploited. In fact, WhatsApp has only 300 employees staffed to monitor illegal activity.

Though owned by Facebook, WhatsApp operates almost as a separate entity, meaning Facebook has not devoted the same resources to monitoring activity on WhatsApp that it devotes to policing violations on its own platform. Indeed, even though Facebook recently deployed a new machine learning tool that removed around 8.7 million photos featuring child nudity, WhatsApp's end-to-end encryption makes the same server-side filtering impossible: the service never sees message content in a form its software could scan.
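To see why, consider a minimal sketch in Python, using the cryptography package's Fernet cipher as a simplified stand-in for the far more elaborate Signal protocol WhatsApp actually uses. The point is structural: the relay server only ever handles opaque ciphertext, so a server-side image classifier has nothing legible to analyze.

    from cryptography.fernet import Fernet

    # A symmetric key shared only by the two endpoints, a simplified
    # stand-in for the per-session keys WhatsApp derives via the
    # Signal protocol.
    key = Fernet.generate_key()
    sender = Fernet(key)
    recipient = Fernet(key)

    image_bytes = b"\x89PNG...raw photo data..."

    # The sender encrypts on-device, before anything leaves the phone.
    ciphertext = sender.encrypt(image_bytes)

    # The relay server sees only this opaque blob. Feeding it to an
    # image classifier (like the tool Facebook runs on its own
    # platform) would yield nothing useful.
    assert image_bytes not in ciphertext

    # Only the recipient, who holds the key, can recover the photo.
    assert recipient.decrypt(ciphertext) == image_bytes

Any scanning would therefore have to happen on the user's device, before encryption or after decryption, which is why WhatsApp's automated checks are limited to unencrypted signals such as profile photos, group names, and images in reported content.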

WhatsApp adopted group invite links as a growth-hacking measure, but it did not build the safety apparatus needed to monitor the flood of harmful content that such openness invites.

WhatsApp's lax attitude toward content moderation may arise because its user base is largely situated in India, where regulatory scrutiny is relatively light and there is less perceived need to screen content. That lack of oversight has contributed greatly to the spread of "fake news" and illicit content in the country.