Introduction
Social media platforms like Facebook, Google, and X (formerly Twitter) are global giants that connect billions of people. However, with their expansive reach and complex financial models comes an uncomfortable question: are these platforms knowingly allowing bad actors, ranging from organized crime networks to child exploitation rings, to operate within their systems? While there is no conclusive evidence proving intentional malfeasance, numerous factors suggest that these platforms may, at times, turn a blind eye to illegal or unethical activities, including the exploitation of children.
Financial Incentives and Growth Priorities
Social media platforms are fundamentally businesses driven by user engagement and advertising revenue. The larger their user base, the more data they collect, and the more profitable their advertising operations become. This focus on growth over safety raises an essential question: are platforms intentionally lax when it comes to regulating accounts that might be linked to criminal organizations or illicit activities, simply because these accounts still generate revenue?
For example, Facebook has long been criticized for its delayed actions in dealing with accounts that spread misinformation or incite violence. Whistleblower Frances Haugen testified that Facebook had consistently chosen profits over the safety of its users, allowing harmful activities to persist on the platform longer than they should have. This logic extends to criminal activities—if suspicious accounts increase engagement and generate ad revenue, platforms may hesitate to remove them for financial reasons.
Exploitation of Children on Platforms Like Instagram
The issue of child exploitation is even more disturbing. Instagram, owned by Meta, was recently revealed to have an algorithm that inadvertently promoted child sexual abuse material (CSAM). A report from the Stanford Internet Observatory showed how Instagram's recommendation system actively connected users to these illegal networks, making it easier for predators to find and share content. Despite Instagram removing over 490,000 accounts that violated child safety policies, the problem persisted for a significant time before Meta launched an internal investigation and task force.
This raises questions about whether the platform, despite its vast resources, turned a blind eye to such activity, whether because of the sheer complexity of moderating billions of accounts or because of a reluctance to disrupt its ad-driven revenue model. Although Meta has since taken action, the fact that Instagram's own algorithm facilitated child exploitation highlights how vulnerable these platforms are to being used for illegal activities.
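To make the mechanism concrete, here is a minimal, hypothetical sketch of the kind of follow-graph recommendation logic at issue. This is not Meta's actual system; the data and the `recommend_accounts` function are invented for illustration. The point is that a naive "accounts followed by people who share your follows" rule scores only connectivity, never content, so it will surface every member of a tightly interlinked network, benign or criminal alike.

```python
from collections import Counter

# Toy follow graph: follower -> set of accounts they follow. All data invented.
FOLLOWS = {
    "user_a": {"acct_1", "acct_2"},
    "user_b": {"acct_1", "acct_2", "acct_3"},
    "user_c": {"acct_2", "acct_3"},
    "user_d": {"acct_3", "acct_9"},
}

def recommend_accounts(user, follows, top_n=3):
    """Naive collaborative filtering: suggest accounts followed by users
    who share follows with `user`. Nothing here inspects what the
    recommended accounts actually post -- only how connected they are."""
    mine = follows.get(user, set())
    scores = Counter()
    for other, theirs in follows.items():
        if other == user or not (mine & theirs):
            continue  # no shared follows, no signal
        for acct in theirs - mine:
            scores[acct] += 1  # one vote per co-follower
    return [acct for acct, _ in scores.most_common(top_n)]

print(recommend_accounts("user_a", FOLLOWS))  # -> ['acct_3']
```

If the co-followed accounts happen to belong to a single illicit network, a recommender like this becomes a guided tour of it, which is essentially the dynamic the Stanford Internet Observatory report described.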
Turning a Blind Eye to Criminal Activity
While platforms like X and Google have taken steps to cooperate with law enforcement and combat illicit activities such as terrorism and human trafficking, their responses have in many instances been slow. This delay has fueled speculation that platforms may knowingly allow criminal networks to operate to some degree, especially when removing those accounts would require significant resources or could disrupt the platform's revenue model.
For instance, money laundering through social media platforms, including the use of shell companies to funnel illicit funds, is not unheard of. Cartels and organized crime syndicates are increasingly using these platforms to recruit individuals for money laundering operations. A platform may not actively participate in the laundering itself, but by failing to enact stricter policies and oversight, it may indirectly enable these activities.
Whistleblower Revelations and Content Moderation Failures
In addition to Haugen's revelations about Facebook, other insiders have come forward from various platforms over the years, revealing weaknesses in content moderation and account monitoring systems. They often point to the pressure placed on platforms to grow their user bases and minimize friction for users. In some cases, this leads to delays in investigating flagged accounts, allowing them to continue operating even after they have been connected to illicit activities.
These revelations show that, while platforms may have policies on paper designed to combat illegal behavior, the reality is far more complicated. Algorithms used to flag suspicious activities are not foolproof, and human moderation teams are often overwhelmed by the sheer scale of content and user activity.
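A small back-of-the-envelope sketch shows why "policies on paper" break down at scale. Every number below is invented for illustration, not drawn from any platform's real figures; the structural problem is what matters: even when automation settles the vast majority of flagged items, the residue requiring human judgment can grow faster than any review team can clear it.

```python
# Hypothetical figures, invented for illustration only.
REPORTS_PER_DAY = 1_000_000   # flagged items arriving daily (assumed)
HUMAN_CAPACITY = 50_000       # reviews a moderation team finishes daily (assumed)
AUTO_RESOLVE_RATE = 0.90      # share the automated filter settles alone (assumed)

backlog = 0
for day in range(1, 8):
    needs_human = int(REPORTS_PER_DAY * (1 - AUTO_RESOLVE_RATE))
    backlog = max(0, backlog + needs_human - HUMAN_CAPACITY)
    print(f"day {day}: unreviewed human-queue backlog = {backlog:,}")
# By day 7 the backlog is 350,000 -- and flagged accounts stay live while they wait.
```

Under these assumed numbers, automation handling 90 percent of reports still leaves twice as many items as the human team can process each day, so the queue compounds indefinitely. That is the gap in which flagged accounts keep operating.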
Government Investigations, Zuckerberg’s Testimonies, and the Exploitation of Vulnerable Users
Mark Zuckerberg, the CEO of Meta (formerly Facebook), has been called before Congress several times to answer critical questions about the platform's failures to protect vulnerable users, including children and adolescents. Congress has scrutinized Facebook's and Instagram's role in facilitating harmful behaviors, such as the promotion of eating disorders and child exploitation.
In 2021, internal Facebook documents revealed by whistleblower Frances Haugen showed that Instagram had a particularly harmful effect on teenage girls, surfacing content that worsened body image issues and exacerbated eating disorders, especially for girls already struggling with anorexia and other mental health conditions. Facebook's own research showed that Instagram made body image issues worse for 1 in 3 teenage girls, yet the platform continued to prioritize user engagement over safety.
Furthermore, Instagram was found to have facilitated the spread of child sexual abuse material (CSAM) through its recommendation algorithms, as discussed in a report by the Stanford Internet Observatory. The platform inadvertently connected users to networks trading in illegal content, leading to heightened scrutiny during Zuckerberg’s multiple congressional testimonies. Meta’s response has been criticized as reactive rather than proactive, with slow enforcement of safety measures and algorithm adjustments coming only after external pressure.
In his testimony before Congress, Zuckerberg has repeatedly defended Facebook's practices, arguing that the company invests significant resources in safety measures and has removed millions of harmful accounts. However, lawmakers remain skeptical, pointing to the recurring cycle in which Meta takes action only after reports from watchdogs or whistleblowers.
Congressional Concerns Over Child Exploitation and Eating Disorders
The issue of child exploitation has been a persistent focus of these congressional hearings. Lawmakers have questioned whether platforms like Instagram and Facebook prioritize profit over safety, particularly when it comes to vulnerable groups like children. The combination of CSAM networks and content promoting self-harm and eating disorders continues to be a troubling area of concern, with reports showing that platforms may unintentionally facilitate these dangerous behaviors through their recommendation algorithms.
In response to the growing concern, Zuckerberg has been pressed to implement stricter content moderation practices and invest more heavily in the prevention of harmful content, especially when it comes to protecting children from exploitation and mental health dangers. Despite the resulting changes, critics argue that such actions often come too late, after significant damage has already been done to vulnerable populations.
Government Investigations and Partnerships
While platforms are legally required to cooperate with law enforcement when it comes to terrorism or trafficking, there is a glaring question about how much they prioritize enforcement. Despite ongoing collaborations between government agencies and tech companies to detect terrorist activity, platforms have been criticized for not doing enough to proactively remove bad actors. The recent Instagram scandal has intensified calls for industry-wide initiatives to combat child exploitation on social media, particularly when platforms profit from ad-driven models tied to user engagement.
The Delicate Balance of Regulation
To be clear, operating a platform with billions of users across various jurisdictions is a complex task. Each country has its own laws regarding content, financial oversight, and privacy. This complexity creates potential loopholes that bad actors can exploit. Platforms like Facebook and Google often cite the difficulty of policing all users, especially given the sheer scale and global nature of their operations. However, the slow pace at which these companies act to remove bad actors raises questions about whether they could be doing more.
Conclusion
So, are social media platforms knowingly sheltering the accounts of bad actors? The answer is not straightforward. While there is no direct evidence of intentional complicity, the combination of financial incentives, regulatory delays, and whistleblower revelations suggests that platforms may be turning a blind eye. In a world where user growth and engagement often take precedence, it is not hard to imagine that platforms are less aggressive than they could be in rooting out bad actors. Whether intentional or a consequence of their business model, the reality is that social media platforms are far from foolproof at stopping illegal activities operating within their systems.
HEARING BEFORE THE UNITED STATES SENATE COMMITTEE ON THE JUDICIARY


I watched recently as Mr. Zuckerberg made another delayed apology for FB and IG indiscretions or problems. When I saw the crackdown on the Telegram owner, which followed mere days afterward, I saw the connection.
It's difficult to prove complicity by these large monopolies, but they have a horrible record with regard to policing their own business, and their secret, often compromised relations with intelligence agencies. It is no secret that the bottom line will trump moral issues every time with big business.
This was a very engaging report you shared, John. A long read, but very interesting.
Thank you very much for your insightful comment! You’re absolutely right—the connections between these big tech companies and the actions taken afterward are hard to ignore, and it’s troubling to see how closely tied their interests are to intelligence agencies and other entities. It’s unfortunate that, time and time again, profits seem to outweigh the moral responsibility they should uphold.
We’ve shared more articles on our blog that expose the questionable practices of these big tech companies, and I believe it’s crucial to stay informed. I’m glad you found the report engaging, even with its length! It’s important to dig deep into these issues to fully understand what’s going on behind the scenes. Your thoughts are greatly appreciated—let’s keep shedding light on these matters. 😎
Always happy to support 🙏
AI can never be human. Human interaction brings a considerate approach to solving human problems. Aren't humans hilarious?
You’re absolutely right—AI can never replicate the full range of human experience. While it can be a useful tool, it lacks the empathy, creativity, and nuance that make human interaction so powerful. And yes, humans certainly have a unique way of finding humor in the most unexpected ways! 😎
It wouldn’t surprise me. I’ve reported posts which encourage misogyny and other views which aren’t illegal per se but which are antisocial and undesirable. I’ve had an auto-response come back within seconds saying that it doesn’t breach their community guidelines. I’ve taken it further, and got the same answer, then taken it even further and got no response. Misogyny doesn’t go against their community guidelines? So unsurprising that the activities you’ve mentioned don’t either. 🤨
Thank you for your thoughtful comment! Unfortunately, your experience with reporting posts and receiving a dismissive or inadequate response seems to echo what many users have experienced. Platforms like Facebook and Instagram often use automated systems for moderation, which can result in quick dismissals of reports, even when the content clearly breaches community standards in a broader, ethical sense. Misogynistic content and other harmful, yet legally “borderline” material often fall into a gray area, where platforms claim it doesn’t violate their guidelines, even though it promotes toxic behaviors.
This reflects the core issue we’re discussing in these articles: platforms might not prioritize user safety when it conflicts with engagement and profit. The fact that harmful behaviors like misogyny, child exploitation, or content that worsens mental health (such as promoting eating disorders) are overlooked or allowed to persist, underscores the need for greater accountability.
Your experience further emphasizes how inadequate content moderation can be when platforms rely too heavily on automated systems or prioritize engagement over ethical concerns. These platforms must do more to ensure that their spaces are safe and that harmful content doesn’t get swept under the rug due to a focus on profit.
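To make that concrete, here is a deliberately crude sketch of how an automated first-pass triage can produce those instant dismissals. The blocklist and the canned reply below are entirely invented, not any platform's real code; the failure mode is the point: a report that does not match an explicit pattern never reaches a human at all.

```python
# Hypothetical auto-triage for user reports. Blocklist and policy are invented;
# the failure mode is the point: no pattern match means no human ever looks.
BANNED_PATTERNS = ["explicit slur", "direct threat"]  # toy blocklist

def triage_report(reported_text: str) -> str:
    text = reported_text.lower()
    if any(pattern in text for pattern in BANNED_PATTERNS):
        return "escalated to human review"
    # Everything else gets the instant canned reply users describe.
    return "this content does not breach our community guidelines"

# Misogyny phrased without a blocklisted pattern sails straight through:
print(triage_report("Women don't belong in the workplace."))
# -> this content does not breach our community guidelines
```

A system built this way can answer within seconds, exactly as you experienced, because no judgment is ever exercised: the report either trips a string match or it is closed.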
Quite right. I knew it was down to AI, machines that don’t understand the subtleties of human behaviour, but when you escalate a matter you expect a human to get involved. Either it’s left with the robots, or the humans aren’t being impartial, as they should be. You really do wonder what is the point. The world’s going to hell – correction: I think we’re there already. 🤨
Whether or not these companies realize it, AI lacks the ability to fully grasp the nuances of human behavior, and it’s concerning when issues that clearly require human judgment are left in the hands of machines. When we escalate a matter, it’s reasonable to expect a human to step in and handle it with the appropriate level of care and impartiality.
Unfortunately, it seems that in many cases either the AI-driven systems are still making the calls, or the humans who are supposed to intervene aren’t applying the necessary impartiality, leading to a lot of frustration. This disconnect makes users wonder whether these platforms are truly committed to solving the issue or simply doing the bare minimum to keep operations running smoothly.
It’s worrying how often people report feeling like the world’s ethical standards are slipping, especially in the tech industry. The reliance on algorithms and AI to handle complex human interactions is a sign of where things are headed—and as you’ve pointed out, it often feels like we’ve already arrived at that place where impartiality and fairness are losing ground.
Everything you say makes sense, unlike the AI to which your post refers. John Marrs has written a great book entitled 'The Marriage Act', in which he addresses a system used in a dystopian, speculative future UK where AI is allowed to judge the health or otherwise of people's marriages, with catastrophic results. I reviewed it on my website recently. If only those who allow AI into areas where it has no business being would get the message!
Thank you very much, Laura! I completely agree—while AI can be a powerful tool, there are areas where it simply cannot replace human understanding and empathy, especially when it comes to relationships and personal decisions. It’s crucial that we remain vigilant about where and how we allow AI to intervene, ensuring it supports us without overstepping into realms that require the nuance only human beings can provide.
Agreed – crucial. 🤔