human trafficking Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/human-trafficking/
Thomson Reuters Institute is a blog from Thomson Reuters, offering the intelligence, technology, and human expertise you need to find trusted answers.

Tackling human trafficking at the 2026 FIFA World Cup
/en-us/posts/human-rights-crimes/human-trafficking-2026-fifa-world-cup/
Thu, 16 Apr 2026

Key insights:

      • Big sporting events create perfect cover for sex trafficking — The World Cup’s massive crowds, temporary workers, and stretched local infrastructure make it easier for traffickers to blend in and exploit vulnerable people while staying largely out of sight.

      • Money trails and online ads are where traffickers slip up — Trafficking often leaves patterns, such as payments tied to commercial sex ads, round‑dollar peer‑to‑peer transactions, and repeat phone numbers or language across online ads. Banks and investigators can spot these red flags, if they know what to look for.

      • Early, cross‑sector collaboration is what actually makes a difference — The strongest prevention efforts happen before kickoff, when law enforcement, financial institutions, and nonprofits share intelligence, use formal information‑sharing tools, and build trusted local networks to respond quickly and protect victims.


As millions of soccer fans descend upon stadiums across North America for the 2026 FIFA World Cup in June and July, perpetrators of human rights crimes also are getting ready to operate in the shadows of host cities. Criminal networks are preparing to exploit the crowds, traffic, and chaos during the event by trafficking vulnerable individuals for commercial sex.

Human traffickers and organized crime groups often exploit major sporting events as opportunities to make quick money because the massive influx of visitors, temporary workers, and strained infrastructure creates perfect conditions for traffickers to operate while being largely undetected. At the same time, the stakeholders involved in countering this illegal activity — including law enforcement, civil society organizations, and financial institutions — stand ready to detect it, disrupt it, and protect vulnerable individuals who are exploited by criminal actors.

Indeed, close coordination and collaboration among these entities in advance of the games is key. To that end, the Association of Certified Anti-Money Laundering Specialists (ACAMS) and its partners are collaborating on a virtual and live event series to support counter-trafficking planning among stakeholders in several host cities this spring.

Why major sporting events attract human trafficking activity

Not surprisingly, large crowds draw business opportunities, whether legitimate or illicit, and collaboration between public and private entities has underscored spikes in human trafficking activity around such gatherings. For example, during a large sporting event in 2025, Special Services partnered with federal law enforcement and other partners to identify nine adult encounters and service offerings, an effort that led to the recovery of two juveniles from sex trafficking and three state arrests.

Common industries that involve the exploitation of vulnerable individuals include hospitality, construction, illicit massage businesses, escort services, and adult content production. The chaos of events and large influx of people mask the reality that exploitation is happening and makes detection significantly more challenging during these high-traffic periods.




Critically, understanding human trafficking as a business model depends on the recruitment of vulnerable people and access to money flows. These aspects of the business are also where detection can occur. Financial institutions and money service businesses can identify suspicious transactions related to human trafficking by understanding and recognizing specific transactional patterns, including payments to commercial sex advertisement websites, round-dollar peer-to-peer transactions, and merchant services linked to illicit massage businesses.
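The transactional patterns described above can be illustrated with a small rule-based screen. The sketch below is purely illustrative: the record layout, field names, and merchant watchlist are hypothetical, and real transaction-monitoring systems are far more sophisticated.

```python
# Illustrative sketch only: flags two of the transaction patterns described
# above, round-dollar peer-to-peer transfers and payments to merchants on a
# watchlist of commercial-sex advertising sites. All field names and the
# watchlist entries are hypothetical, not drawn from any real system.

AD_SITE_MERCHANTS = {"adsite-example-1", "adsite-example-2"}  # hypothetical watchlist

def flag_transaction(txn: dict) -> list[str]:
    """Return a list of red-flag labels for a single transaction record."""
    flags = []
    # Round-dollar amounts on peer-to-peer channels are a documented indicator.
    if txn["channel"] == "p2p" and txn["amount"] == int(txn["amount"]):
        flags.append("round-dollar P2P transfer")
    # Payments routed to known commercial-sex advertising merchants.
    if txn.get("merchant") in AD_SITE_MERCHANTS:
        flags.append("payment to commercial-sex ad site")
    return flags

txns = [
    {"channel": "p2p", "amount": 300.00},
    {"channel": "card", "amount": 49.37, "merchant": "adsite-example-1"},
    {"channel": "p2p", "amount": 212.84},
]
for t in txns:
    print(t["amount"], flag_transaction(t))
```

In practice, rules like these would feed a case-management queue for human review and possible SAR filing, not trigger any automatic action on their own.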

This online footprint left by traffickers proves invaluable for detection. Investigators track advertisements across adult services websites, identifying criminal networks through repeated phone numbers, distinctive emojis, and similar wording that may appear across multiple cities. However, smaller-scale operations present significant challenges as well. When the trafficker is an intimate partner or family member with limited transaction volumes, detection becomes exponentially more difficult without external intelligence.
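The ad-linkage technique described above can be sketched in a few lines: group ads by a normalized phone number and surface numbers that recur across cities. The ad records and the normalization rule below are hypothetical examples, not real data.

```python
# A minimal sketch of the linkage idea described above: grouping online ads by
# a normalized phone number and surfacing numbers that appear in more than one
# city. The records and the 10-digit normalization rule are hypothetical.
from collections import defaultdict
import re

def normalize_phone(raw: str) -> str:
    """Strip non-digits and keep the last 10 digits."""
    return re.sub(r"\D", "", raw)[-10:]

def cross_city_numbers(ads: list[dict]) -> dict[str, set[str]]:
    """Map each phone number to its set of cities; keep only multi-city ones."""
    cities = defaultdict(set)
    for ad in ads:
        cities[normalize_phone(ad["phone"])].add(ad["city"])
    return {num: c for num, c in cities.items() if len(c) > 1}

ads = [
    {"phone": "(555) 010-1234", "city": "Dallas"},
    {"phone": "555.010.1234", "city": "Houston"},
    {"phone": "555-010-9999", "city": "Dallas"},
]
print(cross_city_numbers(ads))  # the same number surfaces in two cities
```

Real investigative tooling would add fuzzy matching on ad text, emojis, and images, but the core idea of pivoting on a shared identifier is the same.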

Collaboration is key for prevention and detection

The most critical element for combating human trafficking at major sporting events is collaboration among anti-trafficking experts and employers of these professionals. Effective prevention requires building strong partnerships before these major events occur. Specific actions that can be taken include:

Establishing multi-sector task forces — The most successful anti-trafficking efforts involve joint task forces that combine federal, state, and local law enforcement with trusted private sector partners and supportive nonprofits or non-governmental organizations (NGOs) that offer victim services. Toolkits for large-scale public events and other anti-trafficking toolkits are excellent resources that local host cities can use to execute these partnerships. These collaborative mechanisms allow different entities to share information in a timely manner.

Leveraging information sharing mechanisms — Financial institutions can use Section 314(b) authority for peer-to-peer information sharing between banks. This allows financial institutions to piece together fragments of suspicious activity that individually might seem insignificant but collectively reveal trafficking networks. Large federal agencies are consumed by multiple priorities and benefit from information sharing through Section 314(a) and from assistance by financial sector partners during special operations, which acts as a force multiplier. Law enforcement also can benefit from detailed Suspicious Activity Reports (SARs) that contain specific dollar amounts, clear timelines, behavioral observations, and explicit keywords such as "human trafficking."

Preparing host cities by building networks and outreach in advance — Some World Cup host cities have already established human rights plans with robust collaborative systems within local task forces, government awareness campaigns, QR codes that link to support services, and multidisciplinary safety plans.

In addition, anti-trafficking professionals across all sectors are accessible and willing to help. Resources include national hotlines, referral directories, and dedicated reporting channels for cases involving minors. The most important step is simply reaching out to establish connections before crises occur.

Preparing for a safer event

The 2026 World Cup presents a pivotal moment to strengthen collaborative efforts against human trafficking across North America’s host cities. By establishing robust information-sharing networks between financial institutions, law enforcement, NGOs, and host communities before the tournament begins, stakeholders can transform heightened awareness into meaningful action that protects vulnerable individuals.

While traffickers will undoubtedly attempt to exploit the inevitable chaos surrounding a major event like the World Cup, a coordinated, multi-sector response grounded in shared intelligence, victim-centered approaches, and proactive preparation can disrupt their operations and ensure that the world’s celebration of soccer doesn’t come at the cost of human dignity and freedom.


You can find out more about how organizations are fighting human rights crimes here

How financial institutions can recognize human trafficking during the 2026 FIFA World Cup
/en-us/posts/human-rights-crimes/recognizing-human-trafficking-world-cup/
Mon, 06 Apr 2026

Key takeaways:

      • Human trafficking is a financial crime — Without the financial system, human trafficking networks cannot operate at scale. Banks, compliance officers, money transmitters, and casinos are uniquely positioned to detect suspicious patterns.

      • The 2026 World Cup amplifies existing risks — With 5.5 million additional visitors expected in Mexico City alone, criminal networks will exploit the surge in cash flows, new customers, and cross-border movement.

      • Red flags are observable in financial behavior — Human trafficking networks often leave detectable financial footprints, which is why financial institutions must update monitoring systems and stay alert to unusual transaction spikes during the tournament.


MEXICO CITY — As the 2026 FIFA World Cup prepares to hold its tournament in June and July across three North American countries, anti-human trafficking experts are meeting to address the challenges facing the three hosts of the largest World Cup in history.

To that end, the Association of Certified Anti-Money Laundering Specialists (ACAMS), working with a partner organization, organized one such event, focused on the scourge of human trafficking that often surrounds large sporting events like the World Cup.

One speaker at the event noted an important clarification in the difference between human trafficking and human smuggling — two terms that are frequently confused yet carry vastly different legal and humanitarian implications. The key distinction lies in consent and the nature of the crime. In human smuggling, the individual being transported across borders consents to the movement, typically driven by socioeconomic necessity, and the offense is considered a crime against the state. Human trafficking, by contrast, is a crime committed directly against the victim, often involving exploitation through force, coercion, threats, or deception, and does not require the crossing of any international border.

The ACAMS event challenged the common belief that human trafficking is exclusively sexual in nature. In fact, there are 10 additional forms of exploitation beyond sexual abuse: slavery, forced labor or services, use of minors in criminal activities, forced marriage, servitude, labor exploitation, forced begging, illegal adoption of minors, organ trafficking, and illicit biomedical experimentation on human beings.




Still, sexual exploitation remains the dominant form of human trafficking. Indeed, it is the second most lucrative illicit business in the world after drug trafficking, with every 15 minutes of sexual abuse of a trafficking victim generating approximately $30.

Of course, without clients, there is no demand, said one speaker from the ÁGAPE Foundation, an organization that works to raise awareness against gender-based violence and human trafficking.

Financial sector as a key line of defense

When identifying human trafficking, it’s wisest to examine it from a financial perspective to find important indicators, according to several speakers. Indeed, the financial sector plays a critical role given its capacity to detect suspicious accounts and payments, shell companies, cash movements, digital platforms, and commercial operations.

For example, when a customer opens an account or conducts a transaction, certain red flags can be visible, such as a customer who needs to consult notes to answer basic questions about their address or occupation, or whose responses are not spontaneous or natural. Another indicator is a customer profile that is inconsistent with the type or volume of transactions being conducted.

For financial institutions, there are other patterns that have triggered alerts for illicit activity in the past, including near-immediate deposits and withdrawals with no clear justification for the cash flow, or multiple individuals registered at the same address or linked to the same account.

Another red flag is a high number of accounts opened from the same state or municipality with similar patterns, particularly in areas identified as origin points for trafficking networks, or payment of multiple short-term rentals or payments abroad to unverifiable recruiters or employment agencies.

Financial institutions should be on the lookout for companies that file no tax returns or invoice simulated transactions, or that use front men to open accounts or conduct operations.

Also, new businesses whose declared activity does not correspond to their financial operations should be flagged, as well as any frequent, large-volume purchases of condoms, lingerie, or women’s clothing inconsistent with the declared business activity.
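Two of the account-level red flags above, near-immediate deposit-and-withdrawal cycles and many individuals registered at a single address, lend themselves to simple rule-based checks. The sketch below uses hypothetical thresholds, field names, and record layouts, and is not regulatory guidance.

```python
# Illustrative sketch of two account-level red flags described above:
# near-immediate deposit-and-withdrawal cycles, and many individuals
# registered at one address. All thresholds and layouts are hypothetical.
from collections import Counter
from datetime import datetime, timedelta

def rapid_in_out(events, max_gap=timedelta(hours=24)):
    """Pair each deposit with any withdrawal that follows within max_gap."""
    deposits = [e for e in events if e["type"] == "deposit"]
    withdrawals = [e for e in events if e["type"] == "withdrawal"]
    return [
        (d, w)
        for d in deposits
        for w in withdrawals
        if timedelta(0) <= w["time"] - d["time"] <= max_gap
    ]

def crowded_addresses(customers, threshold=4):
    """Return addresses registered by at least `threshold` customer records."""
    counts = Counter(c["address"] for c in customers)
    return [addr for addr, n in counts.items() if n >= threshold]

events = [
    {"type": "deposit", "time": datetime(2026, 6, 1, 9, 0), "amount": 900},
    {"type": "withdrawal", "time": datetime(2026, 6, 1, 14, 0), "amount": 900},
]
print(len(rapid_in_out(events)))  # 1 rapid in-and-out pair
```

As with any such rule, output like this would be one input to an analyst's review, alongside customer context and the other indicators described above.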

Indicators at the 2026 World Cup

In the context of major sporting events such as the World Cup, existing risks are significantly amplified, several speakers pointed out. Sexual tourism, including the commercial sexual exploitation of children and adolescents, is a known and serious threat. Indicators that are relevant not only for the financial and banking sectors but also for the real estate, tourism, transportation, hospitality, and restaurant industries include unusual accommodation requests, such as deactivating security cameras, delivering keys through third parties, or inquiring about the presence of neighboring guests.




These industries should also be on the lookout for any adult or group of adults traveling with an unusually large number of minors, or individuals who travel in silence and are accompanied by someone who appears to exercise visible control over them.

As the World Cup approaches, financial institutions’ compliance teams must recognize that the same operational conditions that make major sporting events exciting — high transaction volumes, new customers, cross-border flows, and institutional attention diverted toward the event itself — are precisely the conditions that money launderers and traffickers seek to exploit.

For these compliance teams, monitoring systems must be updated, know-your-customer processes must go beyond documentation and reflect a genuine understanding of the client’s activity and context, and on-site verification visits must be conducted by personnel who know exactly what they are looking for.

The financial sector does not need to become an investigative body; however, it does need to remain alert, informed, and willing to report. Indeed, this is exactly what the compliance function exists for, and in the context of human trafficking, the cost of silence is measured not in fines or reputational damage, but in human lives.



The child exploitation crisis online: Gaps in digital privacy protection
/en-us/posts/human-rights-crimes/children-digital-privacy-gaps/
Wed, 04 Feb 2026

Key highlights:

      • Fragmented protection creates vulnerability — Current US privacy laws operate as a patchwork system without comprehensive national standards, leaving children and other users exposed to data exploitation across state lines and international borders.

      • Body data collection opens future manipulation potential — Virtual reality platforms collect granular biometric information through sensors that can reveal deeply sensitive information about users.

      • Use-based regulations outlast technology changes — Restricting harmful applications of data provides more durable protection than the current regulatory approach, which relies on categorizing rapidly evolving data types.


Virtual reality (VR), social media, and gaming companies have long avoided robust content moderation, largely out of concern over implementation costs and the risk of alienating users. This reluctance stems from platforms wanting the widest possible pool of users. Yet the shortsightedness of this decision has consequences, including insufficient protection of children and long-term costs to companies' bottom lines.

The child exploitation crisis in digital spaces requires better laws and a reimagining of how VR, gaming, and social media companies balance privacy, safety, and accountability across diverse platform architectures, according to Mariana Olaizola Rosenblat, an expert in child exploitation methods in digital spaces and Policy Advisor at the NYU Stern Center for Business and Human Rights.

Limitations of existing regulatory frameworks

The current regulatory landscape is insufficient to protect children online. The lack of a comprehensive national privacy law in the United States, the use of consent mechanisms, and the haphazard rollout of age verification all expose protection gaps and come with economic and psychological costs, according to Olaizola Rosenblat. For example, some of the dangers include:

Gaps in patchwork of regulations leave children vulnerable — Regulatory demands for child safety often collide with privacy protections, creating contradictory obligations that platforms cannot realistically satisfy. In the absence of unified standards, however, companies operate in a jurisdictional maze that leaves most users, including children, exposed to data exploitation across borders.

America’s regulatory landscape remains especially fragmented, with no comprehensive national privacy law to provide consistent protection. One state-level framework comes close to establishing meaningful safeguards, according to Olaizola Rosenblat, yet it still permits companies to collect data even after users opt out of the sale or sharing of their data.

Mariana Olaizola Rosenblat, of the NYU Stern Center for Business and Human Rights

Federal reform attempts collapsed amid conflicts between states demanding stronger protections and tech lobbyists aligned with conservative representatives seeking weaker standards. In addition, child-specific laws provide protection only for those under 13, which leaves older minors and adults vulnerable.

“Once users turn 13, they fall off a regulatory cliff,” says Olaizola Rosenblat. “There is no federal child-specific data protection regime, and existing state-level safeguards are patchy and largely ineffective for teens.”

Internationally, the European Union’s General Data Protection Regulation (GDPR), although considered the gold standard for regulation, suffers from a persistent gap between its ambitious text and its uneven enforcement.

Age verification tensions — These regulatory shortcomings also are evident in debates over age verification. Protecting children requires collecting data to determine user age, yet privacy advocates frequently oppose such measures. Without pragmatic guidance acknowledging these inherent trade-offs, platforms often face contradictory obligations they cannot simultaneously fulfill.

Current consent frameworks offer little protection — Current consent mechanisms offer users an illusory choice that fails to protect children from data exploitation. Even relatively robust frameworks like the GDPR rely on consent models in which refusal means exclusion from digital spaces essential to modern life. This approach proves particularly inadequate for younger users. Indeed, one survey found that about one-third of Gen Z respondents expressed indifference to online tracking.

VR data collections may allow future exploitation

VR platforms differ fundamentally from traditional gaming spaces and social media platforms. Users with VR headsets embody avatars that move through thousands of interconnected experiences. While no actual touching occurs, the experiences feel visceral. Indeed, the psychological and physiological responses can mirror aspects of real-world experiences, which include sexual exploitation, even though no physical contact occurs.

Olaizola Rosenblat explains that the data collected from the sensors can open up the potential for future exploitation. “The inferences that can be drawn from your body-based data collected by these sensors is granular and often intimate,” she explains. “The power that gives to companies is pretty remarkable in terms of knowing things about you that you might not even know yourself.”

Recommended actions to address challenges

Addressing the child exploitation crisis in digital spaces requires coordinated action, according to Olaizola Rosenblat, and that needs to include:

Universal protection standards — Corporate action in partnership with legislators is necessary for effective reform that protects all users rather than fragmenting safeguards by age or vulnerability status. Current approaches that shield only younger children create dangerous gaps and leave adolescents and adults exposed once they age out of protected categories.

Enforce existing regulations — Even well-crafted legislation proves meaningless without robust enforcement mechanisms. Commitment by government agencies along with the appropriate levels of funding is the most meaningful approach to achieve desired outcomes.

Technology-agnostic use regulation — Rather than attempting to categorize rapidly evolving data types, companies in the VR, gaming, and social media sectors must work with legislators to restrict harmful uses of data such as manipulation, exploitation, and unauthorized surveillance, regardless of technical collection methods. Regulating data use — rather than the current method of regulation based on categories of data, which include personally identifiable information — is the right approach.

Public mobilization is essential — Citizens must understand that the stakes of data exploitation extend beyond corporate collection to include hacking vulnerabilities and manipulative deployment. Without consumer demand for better protection and the willingness of legislators to pass laws, regulation will not happen.

The path forward

The digital exploitation of children demands immediate action that transcends partisan divides and corporate interests. Only through coordinated regulatory reform, meaningful enforcement, and sustained public pressure can we create digital spaces in which innovation thrives without sacrificing our privacy and safety. The cost of continued inaction grows steeper each day we delay.


You can find out more on how organizations and agencies are fighting child exploitation here

Human Layer of AI: How to hardwire human rights into the AI product lifecycle
/en-us/posts/human-rights-crimes/human-layer-of-ai-hardwire-human-rights/
Tue, 27 Jan 2026

Key highlights:

      • Principles need a repeatable process — Responsible AI commitments become real only when companies systematize human rights due diligence to guide decisions from concept through deployment.

      • Policy and engineering teams should co-own safeguards — Ongoing collaboration between policy and technical teams can help translate ideals like fairness into concrete requirements, risk-based approaches, and other critical decisions.

      • Engage, anticipate, document, and improve continuously — Involving impacted communities, running regular foresight exercises (such as scenario workshops), and building strong documentation and feedback loops make human rights accountability durable, instead of a one-time check-the-box exercise.


More and more companies are adopting responsible AI principles that promise fairness, transparency, and respect for human rights, but these commitments are difficult to put into practice when it comes to writing code and making product decisions.

Faris Natour, a human rights and responsible AI advisor at Article One Advisors, works with companies to help turn human rights commitments into concrete steps that are followed across the AI product lifecycle. He says that the key to bridging the gap between principles and practice is embedding human rights due diligence into the framework that guides product development from concept to deployment.

Operationalizing human rights

Human rights due diligence involves a structured process that begins with immersion in the process of building the product and identifying its potential use cases, whether it is an early concept, prototype, or an existing product. This is followed by an exercise to map the stakeholders who could be impacted by the product, along with the salient human rights risks associated with its use.

From there, the internal teams collectively create a human rights impact assessment, which examines any unintended consequences and potential misuse. They then test existing safeguards in design, development, and how and to whom the product is sold. “Typically, a new product will have many positive use cases,” explains Natour. “The purpose of a human rights impact assessment is to find the ways in which the product can be used or misused to cause harm.” In Natour’s experience, the outcome is rarely a simple go or no-go decision. Instead, the range of decisions often includes options such as go with safeguards or go but be prepared to pull back.

Faris Natour, of Article One Advisors

The use of human rights due diligence in the AI product lifecycle is relatively new (less than a decade old) and as Natour explains, there are five essential actions that can work together as a system:

1. Encourage collaboration between policy and engineering teams

Inside most companies, responsible AI is split between policy teams, which may own the principles, and the engineering teams, which own the systems that bring those principles to life. Working with companies, Natour brings these two functions together through a series of workshops to create structured, ongoing collaboration between human rights and responsible AI experts and the technical teams to better co-develop responsible AI requirements.

In the early stages of the collective teams’ work, the challenges of turning principles into practice emerge quickly. For example, the scale of applications and use cases for an AI product can make it difficult to zero in on those uses that pose the greatest risk of harm. Not all products or use cases need to be treated equally, says Natour, and companies should identify those that could potentially cause the most harm. Indeed, these most-harmful uses may involve a “consequential decision” such as in the legal, employment, or criminal justice fields, he says, adding that those products should be selected for deeper due diligence.

2. Consider the principles at each stage of the development process

Broad principles and values, such as fairness and human rights, should be considered at each stage of the lifecycle. For the principle of fairness, for example, teams may assess which communities will use this product and who will be impacted by those use cases. Then, teams should consider whether these communities are represented on the design and development teams working on the product, and if not, they need to develop a plan for ensuring their input.

3. Engage with impacted communities and rightsholders

Natour advocates for companies to actively engage with impacted communities and stakeholders, including those who are potential users or who may be affected by the product’s use. This could be the company’s own employees, for example, especially if the company is developing productivity tools to use internally in their workplace. Special consideration should be given to vulnerable and marginalized groups whose human rights might be at greatest risk.

External experts, such as Natour and his colleagues, hold focus groups with these stakeholders. The feedback from focus groups can then be used to influence model design and product development, as well as risk mitigation and remediation measures. “In the end, knowing how users and others are impacted by your products usually helps you make a better product,” he states.

4. Establish responsible foresight mechanisms

To prevent responsible AI from becoming a one-time check-the-box exercise, Natour says he uses responsible foresight workshops and other mechanisms as a “way to create space for developers to pause, identify, and consider potential risks, and collaborate on risk mitigations.”

The workshops use personas and hypothetical scenarios to help teams identify and prioritize risks, then design concrete mitigations with follow-on sessions to review progress. Another approach includes developing simple, structured question sets that push product teams to pause and think about harm. For example, Natour explains how one of his clients includes the question: What would a super villain do with this product? in order to help product teams identify and safeguard against potential misuse.

5. Create documentation and feedback loops for accountability

As expectations around assurance rise from regulators, customers, and civil society, strong documentation and meaningful, accessible transparency are essential, says Natour. Clear, succinct, and accessible user-facing information about what a model does and does not do, about data privacy, and about other key aspects can help users understand “what happens with their data, as well as the capabilities and the limitations of the tool they are using,” he adds.

Further, transparency should enable two-way communication, and companies should set up feedback loops to enable continuous improvement in the ways they seek to mitigate potential human rights risks.

The hardwired future

Effectively embedding human rights into the AI product lifecycle starts with a shared governance model between a company’s policy and engineering teams. Together they can collectively hardwire human rights into the way AI systems are imagined, built, and brought to market.


You can find out more about human rights considerations around AI here

Human rights due diligence and mega sporting events
/en-us/posts/human-rights-crimes/mega-sporting-events/
Thu, 22 Jan 2026

Key insights:

      • Effective human rights due diligence — Human rights can be hardwired into procurement by setting standards that include clear documentation thresholds, a code of conduct that bans forced labor and trafficking, a supplier assessment questionnaire, a locally informed worker safeguards addendum, and a risk-based vendor-grading rubric.

      • Procurement should feature enforceable human rights obligations — Further, human rights can be hardwired into commitments, such as requests for proposals, vendor evaluation, and contract clauses.

      • Engaging unions and community groups early can lead to strong execution — Effective implementation relies on early stakeholder structures (unions, community groups, etc.), robust worker grievance mechanisms, and independent interviewers, complemented by AI-driven monitoring and continuous, rapid risk response.


Mega sporting events can have a significant impact on local economies, but they also pose substantial human rights risks, including labor exploitation, forced displacement, and sex trafficking. With the Super Bowl and Winter Olympics coming up next month, and the World Cup in summer, it’s crucial that organizations, communities, and governments prepare now to mitigate any human rights problems with these events.

As an advisor to host cities on human rights, with more than a decade of experience and now as the chief executive of , I have seen firsthand how the right commitments and responsible contracting practices can help mitigate these risks. By prioritizing human rights and adopting robust contracting practices, the cities that host these mega sporting events can ensure a positive legacy that extends beyond the event itself.

This was a recent topic at an event hosted by and the International Labour Organization as part of its , in which representatives from host cities, civil society organizations, and governments came together to discuss best practices for turning human rights commitments into action during the FIFA World Cup games later this year. As a participant in this event, Henekom shared our approach to translating high‑level human rights commitments into context‑specific safeguards in order to create the social architecture that aligns organizational practice with community needs.


January is National Human Trafficking Prevention Month in the United States. Check out our Human Rights Crimes resource center to learn how to stop and prevent human trafficking


Centering human rights by using rigorous contracting standards starts with local jurisdictions working with multidisciplinary stakeholders to embed strong and comprehensive policies and protocols at all stages of event planning. In my experience, an all-inclusive approach typically shares five elements:

      1. Clear thresholds in human rights documentation that are designed for speed of business.
      2. Code of conduct with essential ingredients, which include explicit bans on forced labor, trafficking, and other exploitation.
      3. Supplier assessment questionnaire (SAQ) that flags geographic and sector risk, such as temporary labor in food service.
      4. Worker safeguards addendum (WSA) built with input from local labor stakeholders whose lived experience helps translate the United Nations Guiding Principles on Business and Human Rights (UNGPs) into local realities.
      5. Risk-based grading rubric for vendors that weights SAQ and WSA responses and turns them into a contracting risk rating.

In my experience, implementing these policies and tools deeply within the organization means embedding requirements at three critical junctures: i) request for proposals (RFPs); ii) vendor evaluation as part of the selection process; and iii) contract clauses. First, when subject-matter experts draft RFPs, the workflow should force-check human rights and sustainability language (or auto-insert standard clauses). Second, during vendor evaluation, the human rights team grades each SAQ/WSA and assigns a risk-based score. Third, contracts must lock in enforceability with particular emphasis on audit rights, corrective action plans, termination for cause, access to remedy, and accountability mechanisms, such as payment withholding.
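The risk-based grading described above can be sketched in code. The weights, thresholds, rating bands, and follow-up actions below are purely illustrative assumptions, not an actual rubric:

```python
# Hypothetical sketch of a risk-based vendor grading rubric: SAQ and WSA
# responses are weighted into a single score, then banded into a contracting
# risk rating. All weights and thresholds are illustrative assumptions.

SAQ_WEIGHT = 0.6  # supplier assessment questionnaire (geographic/sector risk)
WSA_WEIGHT = 0.4  # worker safeguards addendum (local labor protections)

def grade_vendor(saq_risk: float, wsa_risk: float) -> str:
    """Combine normalized risk inputs (0.0 = low risk, 1.0 = high risk)
    into a contracting risk rating that can drive monitoring intensity."""
    score = SAQ_WEIGHT * saq_risk + WSA_WEIGHT * wsa_risk
    if score >= 0.66:
        return "high"      # e.g., unannounced site visits, monthly reporting
    if score >= 0.33:
        return "medium"    # e.g., scheduled audits, quarterly reporting
    return "low"           # e.g., annual self-certification

# Example: high geographic/sector risk partly offset by strong safeguards
print(grade_vendor(saq_risk=0.8, wsa_risk=0.2))  # medium
```

In practice the rating would feed directly into the monitoring intensity, site-visit frequency, and reporting cadence described later in this piece.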

Vendor contract agreements between the host cities and primary contractors are the best vehicle to incorporate enforcement of these rights. Likewise, provisions for these rights should also be incorporated into contracts between primary contractors and any subcontractors.


Centering human rights by using rigorous contracting standards starts with local jurisdictions working with multidisciplinary stakeholders to embed strong and comprehensive policies and protocols at all stages of event planning.


Temporary labor at mega sporting events — which includes individuals working in private security, souvenir sales, construction, janitorial services, and food service — adds complexity but does not have to stifle efforts to honor decent work and other human rights. With a solid sourcing policy, vendors get practical tools and technical assistance to implement requirements quickly.

Common examples include building a checks-and-balances loop with worker centers to receive complaints and establishing data reporting to track hours, wages, recruitment fees, and grievance outcomes. The risk-based grading rubric for vendors should ideally determine the monitoring intensity, frequency of site visits, and reporting cadence.

Effective approaches for implementation

Beyond contract language, the following three actions and tools are recommended to help instill accountability in human rights commitments:

Working with stakeholders from day one — To effectively safeguard human rights, it’s crucial to establish standing stakeholder structures, such as advisory councils and labor roundtables, in order to co-create standards and monitor progress with unions and community groups. By doing so, organizations can ensure workers’ voices are heard, issues are escalated, and commitments are translated into tangible results through collective action and remediation advice.

Centering workers and ensuring access to grievance mechanisms — Establishing on-site, back-of-house centers for workers with confidential and multilingual intake processes, along with clear resolution pathways, is an effective way to drive accountability and reinforce human rights commitments. Using trained, independent worker interviewers with unannounced access to ensure compliance across venues, shifts, and subcontractor tiers further adds to this accountability.

Together, these approaches provide a means for workers to report concerns, verify compliance with policy requirements, and ensure that human rights are respected throughout the supply chain.

Using AI to fortify accountability — AI offers powerful tools for detecting and preventing labor exploitation in supply chains through automated monitoring and pattern recognition. Likewise, natural language processing may be able to analyze hotline transcripts and grievance logs to identify trends.
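As a rough illustration of this kind of pattern recognition, the sketch below tallies grievance reports into hypothetical categories and flags any category that spikes period over period. The categories, keywords, and threshold are assumptions, and a real system would use proper natural language processing rather than keyword matching:

```python
# Illustrative trend detection over grievance logs. Categories and keywords
# are hypothetical; this is a sketch, not a production monitoring system.
from collections import Counter

CATEGORIES = {
    "wages": ("unpaid", "wage", "overtime"),
    "recruitment_fees": ("fee", "deposit", "recruiter"),
    "safety": ("injury", "heat", "unsafe"),
}

def categorize(grievances: list[str]) -> Counter:
    """Tally grievances by category based on simple keyword matches."""
    counts = Counter()
    for text in grievances:
        lowered = text.lower()
        for category, keywords in CATEGORIES.items():
            if any(word in lowered for word in keywords):
                counts[category] += 1
    return counts

def flag_spikes(previous: Counter, current: Counter, ratio: float = 2.0) -> list[str]:
    """Flag categories whose volume at least doubled period over period."""
    return [c for c in current if current[c] >= ratio * max(previous[c], 1)]

last_week = categorize(["wage dispute"])
this_week = categorize(["unpaid overtime", "overtime again", "unsafe scaffolding"])
print(flag_spikes(last_week, this_week))  # ['wages']
```

A flagged category would then be routed to human reviewers, since automated pattern detection supports, rather than replaces, worker-centered grievance mechanisms.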

Even with the best policies and accountability tools, however, risks still persist because operating and business conditions are dynamic. New suppliers are added late, or a hot day turns into potentially harmful working conditions. This makes human rights due diligence a continuous requirement with ongoing risk monitoring, fast incident response, and a humble posture to make it right quickly, transparently, and fairly.

If host cities want a legacy that lasts beyond the mega sporting events’ closing ceremony, it is critical to ensure that the people who made the spectacle possible were seen, protected, paid, and heard. Doing the right thing is strategy — contracts and worker-centered approaches are how it shows up on the ground.


You can find out more about how organizations are trying to fight against human rights crimes here

]]>
The global economy of “sextortion” /en-us/posts/government/global-economy-of-sextortion/ Fri, 09 Jan 2026 16:48:49 +0000 https://blogs.thomsonreuters.com/en-us/?p=69008

Key insights:

      • Sextortion has evolved into a global industry — This crime is being fueled by organized crime networks and human trafficking.

      • Victims exist on both sides — Often, vulnerable workers, who operate as forced labor in scam compounds abroad, are as much victims as the people being scammed online and extorted for financial gain.

      • Digital literacy and cross-border cooperation are strong tools — Governments and law enforcement need to better educate the public about these scams and seek better collaboration to prevent exploitation and to dismantle organized crime networks.


A 17-year-old Michigan high school student died by suicide after inadvertently sharing explicit photos with a Nigerian sextortion scammer who had posed as a teenage girl on a fraudulent Instagram account. Also, a 16-year-old Kentucky high school student died by suicide after he was blackmailed with an AI-generated nude image.

Sadly, these two families are among the victims behind the more than 100,000 sextortion reports filed with the National Center for Missing and Exploited Children (NCMEC) since 2020, many now involving AI-generated imagery. These reports are part of the larger increase in financial sextortion, which typically targets males ages 14 to 17 and which has been on the rise since 2020. These tragic cases are part of a vast network of scams that stretches from the United States to criminal compounds in Asia and Africa.

Sextortion in the modern age

The FBI defines sextortion as a criminal act in which an offender blackmails a victim for payment under the threat of releasing sexually explicit material, such as a photo or video. The material may have been solicited through a romance scam or may be the product of generative AI (GenAI). Sextortion is the latest in a series of scams that generate billions of dollars for international criminal syndicates on the backs of forced labor in parts of the world with unstable governance and oversight. An average of 800 CyberTipline reports submitted to NCMEC from 2022 to 2023 related to the sextortion of minors.

NCMEC notes that victims of sextortion scams should report to the CyberTipline and make use of the Take It Down service. Take It Down allows for anonymous requests to remove explicit images from participating platforms and social media companies. NCMEC also encourages changing passwords after scam activity and not responding to any requests for payment, even if threats are made.

Organized crime syndicates and cyber-scams

are the “definitive market leaders” in cyber-enabled fraud and online scams, which have been rapidly expanding since the COVID-19 pandemic, according to the United Nations’ Office on Drugs and Crime. In areas of Asia with weak governance, scam centers and fraud gangs run sophisticated operations that often front as industrial parks or casinos and hotels. and coerced into defrauding other victims online. The trafficked individuals often are lured with false promises of high-paying jobs and the ability to maximize their language skills.


Broad enforcement efforts have been relatively ineffective as scamming operations simply move within the country or offshore.


Once there, victims are forced into labor to commit financial fraud usually by enticing smartphone users to invest in cryptocurrency scams or engaging in sextortion (which sometimes includes forced sex trafficking to produce sexual content). It is unclear if teenagers are being targeted explicitly or if they are inadvertently targeted through broader, population-wide cyber-scams.

The Myanmar town of Laukkaing (also spelled Laukkai), the capital of the Kokang Self-Administered Zone, is considered the engine room of forced-labor scamming. In Myanmar’s Kokang region, have turned from narcotics to online scamming, operating casinos and scam compounds, possibly because these crimes are more lucrative and easier to run at scale.

In October 2023, a deadly crackdown at Myanmar’s Crouching Tiger Villa (referred to as the 1020 Incident) marked the beginning of the crumbling of mafia-led control in Laukkaing. The Chinese government launched coordinated attacks, which resulted in . The leader of the Ming family (which operated Crouching Tiger Villa) took his own life after being captured, but members of his extended family with ties to organized crime and illegal activities in Myanmar were sentenced in Chinese courts in September 2025, including 11 who were sentenced to death.

An estimated US $1.4 billion was generated by the Ming family over 10 years through telecommunications fraud, illegal casinos, drug trafficking, and prostitution.

Inside offshore scam compounds

Beyond Southeast Asia, forced-scam operations have grown rapidly across the Mekong region. The , funded in part by , notes that its study of CyberTipline reports and IP addresses points to a strong presence of scam compounds in Myanmar and Cambodia.

The financial impact of scam compounds is no small factor — ruling elites in these countries have a financial motivation to look the other way because of the operations’ high profitability. The estimated proceeds of scam operations in Cambodia are more than US $12.5 billion annually, or about half of the country’s formal GDP. Across Mekong countries (China, Myanmar, Laos, Thailand, Cambodia, and Vietnam), cyber-scam operations generate an estimated US $43.8 billion annually.


The financial impact of scam compounds is no small factor — ruling elites in these countries have a financial motivation to look the other way because of the operations’ high profitability.


Broad enforcement efforts have been relatively ineffective as scamming operations simply move within the country or offshore, and there are reports that these complex money laundering operations help move funds into the formal economies of countries with weak governance.

Despite the challenges in enforcement, some high-profile enforcement cases have helped to generate international coordination against cyber-scams and sextortion. A California teen’s death by suicide resulting from sextortion led to three years later. Interpol’s (July and August 2025) resulted in 260 arrests and more than 1,200 electronic device seizures in 14 African countries. The Association of Southeast Asian Nations (ASEAN) announced that as the main regional security concern last month. Domestically, the U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC) has issued sanctions on nine targets involved in scam operations in , and against (who is also associated with online scam centers).

Digital literacy as a solution

To truly begin to crack scam networks that operate in parts of the world with weak governance, governments must improve the digital literacy of their citizens and support stronger cross-border investigation strategies. Stronger anti-money laundering frameworks can disrupt scam compounds more effectively than sting operations that simply force a scam operation to move elsewhere.

It is critical that digital literacy is emphasized both for online users who fall prey to sextortion and for job seekers lured into forced labor in scam compounds by fraudulent job advertisements. Cross-border collaboration among authorities, along with stronger enforcement and shared digital literacy, is the best defense against this evolving threat.


You can find out more about our coverage of human trafficking, child exploitation, and forced labor at our Human Rights Crimes Resource Center here

]]>
Human Layer of AI: The crosswinds of AI, sustainability, and human rights enter the mainstream in 2026 /en-us/posts/sustainability/human-rights-enter-the-mainstream/ Thu, 08 Jan 2026 16:40:46 +0000 https://blogs.thomsonreuters.com/en-us/?p=68962

Key takeaways:

      • Clean energy takes center stage in corporate AI initiatives — Access to cheap, low‑carbon power will become a core driver of AI competitiveness, especially in the US, where electricity costs are on the rise.

      • Corporate buyers of AI will exert new leverage over suppliers — Corporate buyers will increasingly use their purchasing power to push data center operators to align AI build‑outs with local climate, water, and community expectations — not just to supply more metrics.

      • AI’s human labor layer enters mainstream due diligence — AI labor supply chains will be brought into mainstream supplier management and subjected to human rights due diligence.


As we enter 2026, there are three main themes that many corporations will need to manage around issues of renewable energy, AI supplier behavior, and labor.

Theme 1: Renewables move to the center of corporate AI strategies

In 2026, AI competitiveness and energy policy will be tightly fused. With AI workloads driving up electricity demand amid data center buildouts, particularly in the United States, access to abundant, cheap, low‑carbon power becomes a decisive factor in AI pricing and availability. Countries and companies that lock in this advantage early will shape AI deployment patterns for the rest of the decade.

“The economics of renewable energy are what is causing it to accelerate, even in the US,” says , an expert in sustainability and business. “Despite the political winds, the fact is that wind and solar are growing faster… because it is cheaper, better energy.”

In addition, countries and firms with large, subsidized renewable energy capabilities and flexible grids, such as China with its massive solar, wind, and hydro infrastructure, will have a low-cost advantage. (However, countries’ push for AI sovereignty may counteract this by prompting governments to prioritize domestic AI stacks over purely cost‑optimized ones.) Yet, combining this advantage with China’s homegrown AI models, such as Kimi K2 and DeepSeek, it is not outside the realm of possibility that the country could emerge in the top spot in AI development and innovation.

Corporate pressure to increase AI adoption for efficiency, combined with stakeholder expectations of investing in a low-carbon future, will make renewables the center of corporate AI strategies. Increasingly, companies will be asked where their compute runs, what energy mix powers it, how cost effective that energy mix is, and whether they are effectively endorsing environmentally and socially harmful projects in host communities.

Theme 2: Local backlash forces suppliers and companies to confront AI’s impact

Over the last few years, big names among AI infrastructure providers have raced to take advantage of the AI revolution, investing in AI-related data centers, cloud systems, and other infrastructure with no end in sight over the next few years.

Despite the demand, local communities in which large data center construction projects are planned are pushing back. According to , $64 billion of data center projects in the US have been blocked or delayed amid local opposition since 2025. This opposition comes in part because of concerns regarding , strains on local water and natural resources, and the reduction of working farmland from data center rezoning attempts in rural communities.

In fact, AI data centers are pushing up electricity demand and fueling higher electricity prices for many US households. And, as retail electricity price increases over the next couple of years are likely to continue, it will be in part because of data centers consuming more electricity.

As a result, the demand from stakeholders — in particular, those from local communities including local and state politicians — for increased transparency on the environmental and social impacts of corporate AI services is likely to surge. In turn, corporate buyers of AI services will press the big AI suppliers to be more precise about the locations of such data systems and to disclose more associated sustainability data, such as energy sources, grid impacts, and their level of community engagement where large AI infrastructure is based.

To deal with these competing priorities, boards of companies using AI services will need to reconcile AI cost‑cutting with their transition commitments by ensuring that cost advantages are not built on externalizing environmental and social harms.

Not surprisingly, in 2026, more boards will be drawn into explicit debates about whether AI‑driven cost savings justify exposure to higher community, political, and regulatory risk. This turns questions about data center locations and power contracts into mainstream agenda items.

Theme 3: The human layer of AI emerges as a centerpiece of the supply chain

The idea that AI is automating everything will sit uncomfortably alongside a growing recognition that large‑scale AI depends on a largely invisible workforce. Across the full AI product life cycle — data collection, curation, annotation, labeling, evaluation, and content moderation — thousands of workers perform the tasks that make models safe, accurate, and usable.

As AI systems scale across sectors, demand for this human labor increases in volume and complexity, according to , a human rights expert at Article One Advisory. Indeed, much of it remains outsourced, precarious, or gig‑based (often in the Global South), with low pay, weak protections, and rampant exposure to psychologically harmful content. Civil society, unions, and regulators are beginning to connect AI innovation with labor rights and occupational health, and this reality makes the human layer of AI a frontline human rights issue rather than a technical detail.

The for AI‑related labor is likely to move from a niche concern to a mainstream pillar of corporate human rights due diligence. Companies will be under pressure to know what subcontractors and suppliers are doing to ensure human rights for individuals doing AI data enrichment and moderation work, under what conditions, and through which intermediaries.

Just as conflict minerals and modern slavery were folded into supplier management, corporate procurement, legal, product management, and sustainability teams will develop a shared view of AI labor supply chains.

Forward into 2026

As AI becomes embedded in the infrastructure of daily life, companies will face mounting pressure to demonstrate that their AI strategies align with human rights and environmental commitments, not just efficiency gains. The convergence of these three themes signals that transparency in AI governance in 2026 will be inseparable from broader corporate governance and responsibility. And those organizations that treat these themes as compliance checkboxes rather than fundamental design principles will risk both reputational damage and operational disruption in an increasingly scrutinized landscape.

“Companies that fear the exaggerated risk of attracting the ire of activists are underestimating the greater risk of losing the goodwill of customers, investors, and employees that they need,” Friedman adds.


You can find out more about how companies are managing issues of sustainability here

]]>
Strange intersections: The state of 21st century financial crime /en-us/posts/corporates/state-of-financial-crime/ Tue, 06 Jan 2026 16:01:04 +0000 https://blogs.thomsonreuters.com/en-us/?p=68951

Key insights:

      • Old laundering patterns have modern wrappers — Nefarious actors now cooperate to move value through mirror-trade commodity flows and sometimes crypto, blending legal transactions with illicit proceeds.

      • FinTech expands laundering options — Peer-to-peer apps, reloadable cards, kiosks, and virtual assets allow for the execution of many small conversion transactions that break up funds and blur clean-to-dirty movement.

      • Fraud scales cheaply in an AI era — As cash use drops, scams and extortion become lower-risk and easier to industrialize — sometimes through forced-labor scam operations — making verification and policy adaptation urgent.


When incentives align, strangers can become business partners. In the 21st century, traditional finance, banking, and cash payments have been disrupted by a wave of technological advances for which we are all unprepared. This time of crisis and opportunity has created an unexpected alliance between FinTech firms and traditional banking institutions.

To fight financial crime, however, it is important to deal with the ever-evolving ways for currency to change forms and change hands across vast distances. This new way of moving money mirrors ancient systems of debt ledgers and interpersonal trust, often known as Hawala or Fei Chien. Criminals continue to innovate with both methods, creating unsettling partnerships.

The cartel-business partnership

Cartels, underground banking networks, and legitimate businesses now collaborate — sometimes unwittingly — to launder money by moving value through mirror-trade commodity flows and cryptocurrency, merging legal trade with illegal profits. Near-cash-style FinTech methods — such as peer-to-peer apps, reloadable cards, kiosks, and virtual assets — can expand laundering opportunities by enabling numerous small conversion transactions that fragment funds and obscure the movement of illicit money. As cash use declines, fraud, including scams and extortion (sometimes executed through forced-labor scam operations) becomes less risky and easier to scale in the AI era, underscoring the urgent need for verification and policy adaptation.

The flow of illicit cash also extends to digital assets. Some of the cash that gets stuffed into bitcoin ATM-style kiosks is from the drug trade. Indeed, the U.S. Treasury Department’s Financial Crimes Enforcement Network (FinCEN) has issued an alert on this topic as well and, while the two schemes seem distinct, we can speculate that some of the resulting Bitcoin, crypto, or other virtual assets went to underground bankers facilitating a mirror trade for a countryman.

What is old is new again

In the world of finance, the dawning of a new era of digital, on-demand, borderless transactions provides access to an exciting frontier of possibility. New coins, new blockchain tokenization uses, and new FinTech tools with cool names are all rising and falling faster than the price of bitcoin.

The players in this intersection have figured out that trade is profitable, and legal trade leading to illicit substance trade is even more profitable. Underground shipping, sanctions evasion, and dark web services for money laundering are all profitable by themselves, and when combined, they represent an illicit economic blitzkrieg.


Cartels, underground banking networks, and legitimate businesses now collaborate — sometimes unwittingly — to launder money by moving value through mirror-trade commodity flows and cryptocurrency.


Crypto is the new Hawala or Fei Chien because, with no bank or government involved, people can keep common copies of a ledger instead of relying on a hawaladar or Chinese underground banker to keep records. Virtual assets could facilitate the currency side of mirror trades, refilling a person’s coffers via digital transfer which can then be moved to an exchange and on to a local bank.

Commodities are the new cash because mirror trades are physically settled in commodities. For example, investment in source chemicals for drugs, negotiated at a discount, helps expand the illicit cartel business. Similarly, one-off items can be used for large-cash replacement transactions.

FinTech is the new money service business (MSB). FinTechs and MSBs are regulated similarly but often serve different market segments, and many now exchange government fiat currency for one or more forms of cryptocurrency. Money laundering thrives on breaking funds into smaller amounts to avoid reporting; a multitude of near-cash options, such as peer-to-peer payment apps, reloadable cards, and virtual assets, helps the launderer do exactly that.

One might imagine that lower-tier street dealers could have several peer-to-peer payment app accounts for ease of use because, although the criminal is running an illicit business, it’s a business nonetheless. Industry experts call these small payments conversion transactions because they usually come from a clean, legitimate payroll source but are converted to dirty funds when spent on an illicit substance or activity.
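As a hedged illustration of how such small conversion transactions might be flagged, the sketch below checks an account for repeated round-dollar payments just under a hypothetical review threshold. The threshold, trigger count, and data layout are assumptions for illustration, not a compliance standard:

```python
# Sketch of a structuring red-flag check: many small, round-dollar
# peer-to-peer payments just under a reporting threshold. All values
# here are illustrative assumptions, not regulatory figures.
THRESHOLD = 1000.00   # hypothetical per-transaction review threshold
MIN_COUNT = 5         # hypothetical count that triggers a review

def is_round_dollar(amount: float) -> bool:
    """True for whole-dollar amounts in round $50 increments."""
    return amount == int(amount) and amount % 50 == 0

def flag_structuring(payments: list[float]) -> bool:
    """Flag an account showing repeated round-dollar payments just
    under the threshold -- a classic conversion-transaction pattern."""
    suspicious = [p for p in payments if is_round_dollar(p) and p < THRESHOLD]
    return len(suspicious) >= MIN_COUNT

account = [950.0, 900.0, 950.0, 800.0, 950.0, 12.37]
print(flag_structuring(account))  # True
```

Real transaction-monitoring systems combine many such signals (velocity, counterparties, linked phone numbers in online ads) rather than any single rule.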

Fraud is low risk and AI fuels the fire

In this rapid-fire digital transaction world, fraud is the new mugging, complete with racketeering and slave labor farms. The profit margin on physical intimidation has gone down because people use cash less often, and many seldom carry it at all.

Due to digital innovation, communication technology, and AI, however, the barrier to entry for fraudulent theft, extortion, or scamming has dropped dramatically. Presumably, margins are high because these tech advances have exponentially expanded the ability to communicate fraudulently at scale. Fraud and scams are now so ubiquitous that they impede legitimate businesses’ ability to communicate with customers effectively.


The players in this intersection have figured out that trade is profitable, and legal trade leading to illicit substance trade is even more profitable.


Further, slave labor has reared its ugly head in yet another strange intersection. Fraudsters in Southeast Asia build warehouses filled with tech and then force local people to operate scams and fraud schemes at scale. Aggregated funds from these efforts are sometimes moved via commodity or artifact, but more often they are gathered from kiosks or peer-to-peer apps and then moved through cryptocurrency transactions until they become increasingly difficult to track.

Looking to the new dawn

It seems every few minutes brings us a new tool, a new opportunity, a new way to move money, and a new way to get scammed out of it all. This expanding capability is fueled by GenAI and even more advanced forms of AI. Business expands, productivity expands, and resources are consumed faster. Fraud is enabled, scaled, and seems to hang in the very air.

With the proliferation of digital, borderless, and AI-enabled everything, the human touch is more important than ever. Business owners note that requests for memorabilia and other tokens of physical value continue to rise. Cash will not go away, but its share of transactions is already diminished with the advent of crypto, new intersections in commodity exchange, and other person-to-person ways to settle accounts.

For the financial institutions, government agencies, and FinTech firms that populate this world, creating informed best practices and sensible policy documents is critical at this phase of innovation. Without a proactive approach, we cannot hope to stay ahead of criminals and keep legitimate markets secure.


You can find out more about how organizations are using new methods to detect and prevent financial fraud here

]]>
Human Layer of AI: Protecting human rights in AI data enrichment work /en-us/posts/human-rights-crimes/ai-protecting-human-rights/ Fri, 19 Dec 2025 15:43:10 +0000 https://blogs.thomsonreuters.com/en-us/?p=68877

Key highlights:

      • Human rights risks are elevated for data enrichment workers — Data enrichment workers can face low and unstable pay, overtime pressure driven by buyer timelines, harmful content exposure with weak safeguards, limited grievance access, and uneven legal protections that hinder workers’ collective voice.

      • Human rights due diligence is essential for companies — Companies as buyers of these services must map subcontracting tiers, assess risk by employment model, document worker protections down to Tier-2 and Tier-3 suppliers, and audit and monitor their own rates, timelines, and payment terms to avoid reinforcing harm to workers.

      • Responsible contracting and remedy are a necessity — Contracts should embed shared responsibility and include fair rates, predictable volumes, realistic deadlines, funded health and safety and mental‑health supports, effective grievance channels, and remediation.


Demand for data enrichment work has surged dramatically with the rapid development and expansion of AI technology. This work encompasses collecting, curating, annotating, and labeling data, as well as providing model training and evaluation — all of which are critical activities that improve how data functions in technological systems.

However, the workers performing these tasks currently operate under different employment models, according to from Article One Advisors, a corporate human rights advisory firm. Some workers are in-house employees at major AI developers, others work for business process outsourcing (BPO) companies, and many are independent contractors on gig platforms on which they bid for tasks and get paid per piece.

Human rights issues in data enrichment work

Data enrichment workers sit at the sharp end of the AI economy, yet many struggle to earn a stable, decent income. In particular, pay for gig workers often falls short of a living wage because tasks are sporadic, payments can be delayed, and compensation is frequently piece‑rate. Because work flows through layers of platforms and intermediaries, fees and margins get skimmed at each layer and shrink take‑home pay — another area of exploitation for today’s digital labor workforce.

In addition, excessive hours and overtime pressure can infringe workers’ right to rest, leisure, and family life and, in some places, even breach guidance from the International Labour Organization (ILO) or local labor laws. Buyer purchasing practices with aggressive deadlines are a significant upstream driver of this overtime pressure.


National labor protections vary widely, and platform workers in particular often fall through regulatory gaps.


For many, the work itself carries health risks. Labeling and moderation can require repeated exposure to violent or graphic content, with well‑documented mental‑health impacts. Yet safeguards are uneven. Indeed, workers may lack protected breaks, task rotation, mental‑health support, adequate insurance, or the option to switch assignments. Even when content is not graphic, strain shows up as ergonomic problems, stress, and disrupted sleep.

When harm occurs, remedy can be hard to access. Platform-based work setups often provide no clear, trusted point of contact, and reports of retaliation deter complaints. Effective operational grievance mechanisms are frequently missing, leaving workers without credible paths to redress.

Finally, because work is individualized and online, forming unions or works councils is harder. This weakens workers’ collective voice just where and when it is most needed to identify risks, negotiate improvements, and secure remedies.

Due diligence for companies buying data enrichment services is essential

When companies procure data enrichment services, they must recognize that respecting human rights extends throughout the entire value chain, not just to themselves and their direct suppliers. Building trusted partnerships with suppliers helps identify issues before they become harmful and creates mutual accountability for the humans behind the algorithms.

Article One Advisors’ Lloyd explains that human rights due diligence is the mandatory baseline, starting in areas such as:

      • Risk identification and assessment — The first step for companies is to identify and assess risks by understanding their suppliers’ employment models. This means knowing which groups of workers are full-time employees, contracted workers, or platform-based gig workers, as each model carries a different risk profile.
      • Subcontractor ecosystem mapping — Tracing the subcontracting chain to see how many layers exist between the supplier and the workers is essential. Fees and pressures compound at each tier of the value chain, says Lloyd.
      • Documentation of worker protections at Tier-2 and Tier-3 suppliers — Assessing and promoting worker protections at every layer of the value chain is a baseline element of human rights due diligence. This includes ensuring that wage structures are clearly defined and equitable, health and safety measures are adequate, protections exist for exposure to harmful content, and effective grievance mechanisms are in place.
      • Examination of company’s own practices — Finally, it is necessary for companies to ensure that their own procurement standards and contracts are not reinforcing human rights harms. This includes companies confirming that their contract terms, timelines, and payment schedules are not inadvertently forcing suppliers to cut corners.

Responsible contracting and remedy mechanisms

Companies that buy data enrichment services also must instill shared responsibility for worker outcomes among themselves, BPOs, platforms, and model developers. Comprehensive, clear human-rights standards, living-income benchmarks, and shared responsibility are essential elements of good purchasing practices. More specifically, these require fair rates for work, predictable volume expectations, and realistic timelines so that suppliers do not push excessive hours. In addition, budgets should include cost-sharing for audits, key risk-management measures (such as mental health support), and occupational health and safety controls.

Smart remediation turns harmful situations into improved conditions by providing back-pay for underpayment, medical and psychosocial care after exposure to harmful content, contract adjustments to remove perverse incentives, and time-bound corrective action plans co-designed with worker input. As a last resort when buyer and supplier need to part ways, a responsible exit is planned with notice, transition support, and no sudden contract termination that strands workers.

Similarly, grievance mechanisms for platform workers — who are often dispersed across geographies, classified as independent contractors, and lacking line managers or union channels — need to be contractually documented. Effective grievance redress requires confidential reporting and remediation processes, in-platform dispute tools, independent investigators for complaints, multilingual facilitation, and joint buyer-supplier escalation paths to bridge gaps in labor-law protection and deliver credible remedies at scale, Lloyd notes.

Promoting quality through worker well-being

Protecting data enrichment workers is not only an ethical imperative but also essential for AI quality itself. When workers face excessive hours, inadequate pay, or harmful content exposure without proper support, the resulting stress and burnout directly degrade data quality. Companies must recognize that responsibility for worker well-being and quality data outcomes extends throughout the entire value chain and does not rest with BPO providers alone.


You can find more about the challenges companies and their workers face from forced labor in their supply chain here

How forced scamming compounds could be fueling child sextortion /en-us/posts/human-rights-crimes/forced-scamming-child-sextortion/ Thu, 23 Oct 2025 14:39:23 +0000 https://blogs.thomsonreuters.com/en-us/?p=68148

3 key takeaways:

      • The connection is detectable but requires massive data analysis — IJM analyzed more than 1 million CyberTipline reports and matched them with mobile device data, ultimately linking sextortion reports to forced scamming sites in Cambodia, Myanmar, and Laos.
      • Forced scamming compounds exploit trafficking victims to commit crimes — Human trafficking victims are lured by fake job ads, then confined in guarded compounds where they’re coerced into running various online scams.
      • Coordinated multi-stakeholder action is urgently needed — Electronic service providers must improve account creation safeguards and detection methods, while law enforcement needs to better coordinate cross-border investigations.

New research by the International Justice Mission (IJM) links hundreds of financially-motivated child sextortion reports to scam compounds in Cambodia, Myanmar, and Laos. Detecting these linkages took significant effort: IJM analyzed more than 1 million CyberTipline reports in the “Online Enticement” category from the National Center for Missing and Exploited Children (NCMEC).

“Our research provides the first clear evidence of this likely link, but to understand the true scale of the problem, there needs to be further urgent investigation into this troubling nexus by law enforcement, tech companies, and global governments,” says Eric Heintz, Senior Criminal Analyst at IJM.

Because forced scamming compounds now blend labor trafficking, high-volume online fraud, and financially-motivated child sextortion, it is critical that electronic service providers (ESPs) harden account creation and improve detection of signals indicative of online fraud and sextortion. In addition, law enforcement must better coordinate cross-border investigations and distinguish trafficked workers from criminal organizers.

Links between compound scamming and child sextortion

To see how these two fast-moving crime waves are converging online, start with how forced scamming works: victims are trafficked into guarded compounds across Cambodia, Myanmar, and Laos after responding to fake job ads. Once confined, these victims are coerced into defrauding targets online, using deceit and trickery — often scripted approaches, fake personas, or impersonation — to elicit money or sensitive information.


You can read the IJM report here


The types of scams include romance, investment, crypto, fake loans, and impersonation scams, all of which are carried out from inside guarded compounds. Many times, trafficking victims endure confinement and abuse as part of being forced to perpetrate these scams on others.

These human trafficking victims are also trained on psychological manipulation tactics to lure in potential victims, including children in some instances, although it is not evident that children are being intentionally targeted.

Within some of the scam operations, if trafficking victims fail to elicit the desired outcome, such as an investment in a cryptocurrency fraud scheme, they are forced to pivot to sexualized chat and a request for images or a video call. The forced labor victims then use the collected sexual images to blackmail the scam target for money under threat of exposure. Since 2022, reports of such financially-motivated sextortion have surged globally and have disproportionately affected boys and young men, with devastating psychological harm including documented suicides.


Forced scamming is not just a fraud trend; rather, it is a human rights crisis that collides with child protection, cybercrime, and organized criminal groups across Southeast Asia and beyond.


Researchers from IJM combined large-scale ESP platform reports with mobile ad-tech telemetry to trace overlap between child sextortion and forced scamming. They analyzed nearly 1.2 million NCMEC reports covering 3.17 million IP addresses and paired them with more than 300 million advertiser ID rows, each containing a mobile device identifier, the device’s latitude and longitude, and the IP address, date, and timestamp (UTC) of an internet connection by that device. The telemetry was collected from 44 confirmed scam sites in Cambodia, Myanmar, and Laos, ultimately tying 493 reports to devices at 40 of those sites.
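In essence, the matching step is a join between report records and telemetry records on IP address within a time window. The sketch below illustrates that idea only; the record layouts, field names, two-hour window, and pre-resolved site labels are invented for illustration and are not IJM’s actual pipeline.

```python
from datetime import datetime, timedelta

# Hypothetical, simplified records; field names are illustrative only.
# A CyberTipline-style report: the IP address and UTC time of the activity.
reports = [
    {"report_id": "R1", "ip": "203.0.113.7", "ts": datetime(2024, 5, 1, 10, 15)},
    {"report_id": "R2", "ip": "198.51.100.4", "ts": datetime(2024, 5, 2, 8, 0)},
]

# Ad-tech telemetry rows: device ID, IP, UTC timestamp, and the scam-site
# geofence the device was observed inside (pre-resolved from lat/lon).
telemetry = [
    {"device": "D9", "ip": "203.0.113.7",
     "ts": datetime(2024, 5, 1, 10, 40), "site": "Compound-A"},
    {"device": "D3", "ip": "192.0.2.55",
     "ts": datetime(2024, 5, 1, 11, 0), "site": "Compound-B"},
]

def link_reports_to_sites(reports, telemetry, window=timedelta(hours=2)):
    """Link a report to a scam site when a geofenced device used the same
    IP address within the time window around the reported activity."""
    links = []
    for r in reports:
        for t in telemetry:
            if r["ip"] == t["ip"] and abs(r["ts"] - t["ts"]) <= window:
                links.append((r["report_id"], t["device"], t["site"]))
    return links

print(link_reports_to_sites(reports, telemetry))
# → [('R1', 'D9', 'Compound-A')]
```

At real scale this naive nested loop would be replaced by an indexed or time-sorted join, but the matching logic — same IP, close in time, device inside a known geofence — is the same.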

The strongest links centered on hotspots in Cambodia and Myanmar, while some IP addresses traced back to internet service providers in Thailand, reflecting cross-border routing and service reliance when activity originated in neighboring countries or special economic zones.

Required actions to protect children

Coordinated action by platforms and law enforcement is essential to expose, disrupt, and prosecute the intertwined machinery of forced scamming and financially-motivated child sextortion. ESPs, such as social media networks, messaging apps, email providers, cloud services, and dating platforms, submit CyberTipline reports to NCMEC when they detect suspected child sexual exploitation. While this reporting is helpful, more effort is required, including:

      • Cross-referencing account creation and activity with known scam hotspots and scripted patterns
      • Including precise timestamps, IP addresses, and geolocation context in CyberTipline submissions
      • Flagging and disrupting account creation that originates from suspicious infrastructure, beyond simple VPN indicators
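One simple form of the infrastructure check described above is screening a new account’s signup IP against network blocks previously tied to compound infrastructure. This is a hedged sketch, not any platform’s actual system: the example CIDR blocks are RFC 5737 documentation ranges, and a real deployment would combine many more signals than a single IP lookup.

```python
import ipaddress

# Illustrative placeholder blocks (RFC 5737 documentation ranges), standing
# in for infrastructure previously associated with known scam compounds.
SUSPICIOUS_NETWORKS = [
    ipaddress.ip_network(cidr)
    for cidr in ("203.0.113.0/24", "198.51.100.0/25")
]

def flag_signup(ip: str) -> bool:
    """Return True when a signup IP falls inside a network block linked
    to known scam-compound infrastructure."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in SUSPICIOUS_NETWORKS)

print(flag_signup("203.0.113.9"))  # inside the first block → True
print(flag_signup("192.0.2.1"))    # outside both blocks → False
```

A flagged signup would not be blocked outright; it would feed into further review alongside behavioral signals such as scripted message patterns or bulk account creation.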

At the same time, further law enforcement action is needed to improve disruption and prosecution of these networks, including:

      • Examining sextortion cases for signs of forced scamming, which may include scripts, crypto addresses, or investment lures
      • Studying evidence for indicators of sextortion as a tactic, such as the use of sexually explicit scripts or imagery
      • Considering that some suspects are themselves trafficked victims who have been coerced into scam operations
      • Using advertiser ID data and timestamp matching to pinpoint devices and compounds
      • Devising ways to coordinate cross-border law enforcement actions in hotspot countries, known scam regions, and local jurisdictions


The nexus between forced scamming and financially-motivated sextortion of children is detectable — as demonstrated by IJM’s new research. Now is the time for action among ESP platforms, law enforcement, and NGOs to align data and coordinate cross-border responses to better identify devices, compounds, and networks in real time.


You can learn more about how organizations can reduce and mitigate child exploitation in the TR Institute’s human rights crime resource center
