Human Capital Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/human-capital/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers.

Housing affordability in Mexico City: How the 2026 FIFA World Cup exposes a deeper urban crisis /en-us/posts/sustainability/housing-affordability-crisis-mexico/ Fri, 17 Apr 2026 06:04:56 +0000

Key takeaways:

      • The FIFA World Cup is a catalyst, not the root cause — Mexico City’s housing affordability crisis predates the coming tournament. Rental prices have been rising uncontrollably for years, displacing thousands of families annually. The World Cup will accelerate and amplify an already existing problem.

      • The 2024 rental reform is a step in the right direction, but it has significant limitations — Capping rent increases at the annual inflation rate was a necessary measure, but its impact has been limited by grey areas in the law.

      • The real battle is formalization — No housing regulation can be fully effective if a large portion of the market operates outside of it. Until authorities find ways to make formal rental agreements genuinely attractive and accessible for both landlords and tenants, any reform will remain only partially effective.


On the eve of the 23rd edition of the FIFA World Cup, Mexico stands as one of three host countries for one of the most significant sporting events in the world, co-hosting alongside the United States and Canada, with matches in Mexico City, Guadalajara, and Monterrey.

Organizing such an event carries notable financial benefits, including a surge in tourism, job creation, and substantial foreign investment — all of which generate a local economic spillover that strengthens the national marketplace. At the same time, Mexico’s major cities — especially its World Cup host cities — have been undergoing a level of urban transformation that has significantly altered the daily lives of their residents. Chief among these changes is the sharp rise in rental costs, which has been pushing residents toward the cities’ outskirts. According to government figures, more than 20,000 households are displaced each year due to the uncontrolled increase in housing prices in Mexico City alone.

Mexican authorities had to get to work

Legal changes to real estate regulation in Mexico City are not isolated, and what is implemented in the capital often sets a precedent for the rest of the country. Time and again, Mexico City has served as a laboratory for new policies, and when these are proven effective, they become models for nationwide reform.


According to government figures, more than 20,000 households are displaced each year due to the uncontrolled increase in housing prices in Mexico City alone.


That said, in August 2024 — after the city’s head of government noted that rental costs in none of the boroughs of Mexico City fall below the city’s minimum wage, and that in 9 out of 13 boroughs average rents exceeded twice the minimum wage — the Official Gazette of Mexico City published a decree amending Articles 2448-D and 2448-F of the Civil Code for the Federal District, imposing limits on rent increases for residential properties. Previously, the monthly rent increase could not exceed 10% of the agreed-upon rent. That paragraph was amended to establish that rent increases shall never exceed the inflation rate reported by the Bank of Mexico for the previous year.

It is worth noting that the prior 10% cap was nearly three times the general annual inflation rate calculated by the Bank of Mexico in 2025, which stood at 3.69%.
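To make the arithmetic concrete, a minimal sketch can compare the maximum rent a landlord could charge after one year under the old 10% cap versus the new inflation-linked cap. The 15,000-peso starting rent below is a hypothetical figure for illustration only; the 10% and 3.69% rates are the caps discussed above.

```python
def max_rent_after_increase(rent: float, cap_rate: float) -> float:
    """Rent after applying the maximum allowed annual increase."""
    return rent * (1 + cap_rate)

OLD_CAP = 0.10           # prior rule: increases of up to 10% per year
INFLATION_2025 = 0.0369  # Bank of Mexico annual inflation rate for 2025

rent = 15_000.0  # hypothetical monthly rent in pesos
old_max = max_rent_after_increase(rent, OLD_CAP)
new_max = max_rent_after_increase(rent, INFLATION_2025)
# Under the old cap the rent could rise to 16,500 pesos; under the
# inflation-linked cap, only to about 15,553.50 pesos.
```

The ratio of the two caps (0.10 / 0.0369 ≈ 2.7) is where the article’s “nearly three times” comparison comes from.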

More than a year after these reforms took effect, however, 2025 closed with a further average increase in rental prices. With the FIFA World Cup approaching, prices are expected to continue rising uncontrollably due to the influx of tourists drawn by the event. This concern is well-founded: Ahead of the 2022 World Cup in Qatar, landlords were empowered to raise rents by more than 40%.

Mexico City’s rental reform also introduced additional measures. For example, a digital registry for lease agreements was established, to be immediately authorized and managed by the Government of Mexico City. Landlords are now required to register lease agreements within 30 days of their execution. Furthermore, landlords are prohibited from refusing to rent to tenants on the grounds that they have children or pets.

The registration requirement carries real consequences: Should a landlord fail to register a contract within the stipulated period, their ability to invoke legal protection mechanisms in the event of a dispute with a tenant becomes significantly more complicated.

Regardless of the efforts, it’s not all smooth sailing

That said, the reform contains certain grey areas that limit its scope. For instance, it only applies under specific conditions — most notably when a lease has been in place for three years or more. A landlord can effectively circumvent the cap by choosing not to renew an existing contract and instead requiring the tenant to sign a new one at a higher price.

A separate but equally significant obstacle to the reform’s effectiveness is the rapid growth of short-term rental platforms. In recent years, the proliferation of temporary accommodation services has steadily reduced the supply of traditional long-term rentals, as more properties are listed on platforms such as Airbnb, Vrbo, and others. Indeed, every 48 hours, three housing units in Mexico City are converted into Airbnb listings. And from a national perspective, the Tourism Gross Product reached approximately US$151.5 billion, equivalent to 8.7% of Mexico’s GDP.


Every 48 hours, three housing units in Mexico City are converted into Airbnb listings.


This problem is further compounded by the scale of informal rental arrangements. According to the National Housing Survey conducted by Mexico’s National Institute of Statistics and Geography (INEGI), there are more than 200,000 informal rental agreements in Mexico City — none of which involve formal contracts.

Forcing the real estate market into formalization

This brings us to the central challenge facing city authorities with regard to housing: The need to incentivize the formalization of the real estate market. This is complicated by the country’s weak tax-compliance culture and by the requirement that landlords enter a specific tax regime that raises their tax burden. Additionally, rental contracts are not only essential for protecting tenants’ rights, but they are equally important for landlords — because without a legally binding agreement, there is no guarantee that the terms of any arrangement will be honored.

Paradoxically, the recent reform may actually push the informal market further underground. By requiring landlords to formally declare their rental income, the regulation inevitably creates a sense of heightened oversight — one that informal landlords may seek to evade rather than comply with.

To the authorities of Mexico City, the message is clear — punitive measures alone will not bring the informal market into the fold. Tax benefits for landlords who register their contracts, streamlined and accessible digital registration processes, and legal protections that make formal agreements genuinely advantageous for both parties could go a long way toward building trust in the system.

The 2026 FIFA World Cup will come and go, of course, but the people of Mexico City will remain. They deserve a housing market that works for them — not one that treats their homes as a commodity to be priced beyond their reach every time the world turns its attention to their city.


Tackling human trafficking at the 2026 FIFA World Cup /en-us/posts/human-rights-crimes/human-trafficking-2026-fifa-world-cup/ Thu, 16 Apr 2026 14:01:56 +0000

Key insights:

      • Big sporting events create perfect cover for sex trafficking — The World Cup’s massive crowds, temporary workers, and stretched local infrastructure make it easier for traffickers to blend in and exploit vulnerable people while staying largely out of sight.

      • Money trails and online ads are where traffickers slip up — Trafficking often leaves patterns, such as payments tied to commercial sex ads, round‑dollar peer‑to‑peer transactions, and repeat phone numbers or language across online ads. Banks and investigators can spot these red flags, if they know what to look for.

      • Early, cross‑sector collaboration is what actually makes a difference — The strongest prevention efforts happen before kickoff, when law enforcement, financial institutions, and nonprofits share intelligence, use formal information‑sharing tools, and build trusted local networks to respond quickly and protect victims.


As millions of soccer fans descend upon stadiums across North America for the 2026 FIFA World Cup in June and July, perpetrators of human rights crimes also are getting ready to operate in the shadows of host cities. Criminal networks are preparing to exploit the crowds, traffic, and chaos during the event by trafficking vulnerable individuals for commercial sex.

Human traffickers and organized crime groups often exploit major sporting events as opportunities to make quick money because the massive influx of visitors, temporary workers, and strained infrastructure creates perfect conditions for traffickers to operate while being largely undetected. At the same time, the stakeholders involved in countering this illegal activity — including law enforcement, civil society organizations, and financial institutions — stand ready to detect it, disrupt it, and protect vulnerable individuals who are exploited by criminal actors.

Indeed, close coordination and collaboration among these entities in advance of the games is key. To that end, the Association of Certified Anti-Money Laundering Specialists (ACAMS) and Thomson Reuters are collaborating on a virtual and live event series to support counter-trafficking planning efforts among stakeholders in several host cities this spring.

Why major sporting events attract human trafficking activity

Not surprisingly, large crowds draw business opportunities, whether legitimate or illicit. Collaboration between public and private entities underscores spikes in human trafficking activity. For example, during a recent large sporting event in 2025, Thomson Reuters Special Services partnered with federal law enforcement and other partners to identify nine “adult encounters and services” advertisements, which led to the recovery of two juveniles from sex trafficking and three state arrests.

Common industries that involve the exploitation of vulnerable individuals include hospitality, construction, illicit massage businesses, escort services, and adult content production. The chaos of these events and the large influx of people mask the reality that exploitation is happening and make detection significantly more challenging during these high-traffic periods.


Human traffickers and organized crime groups often exploit major sporting events as opportunities to make quick money because the massive influx of visitors, temporary workers, and strained infrastructure creates perfect conditions for traffickers to operate while being largely undetected.


Critically, understanding human trafficking as a business model depends on the recruitment of vulnerable people and access to money flows. These aspects of the business are also where detection can occur. Financial institutions and money service businesses can identify suspicious transactions related to human trafficking by understanding and recognizing specific transactional patterns, including payments to commercial sex advertisement websites, round-dollar peer-to-peer transactions, and merchant services linked to illicit massage businesses.
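As a rough illustration of the rule-based screening described above, the sketch below flags two of the named patterns: round-dollar peer-to-peer transfers and payments to merchants associated with commercial sex advertisement websites. The merchant watchlist, field names, and thresholds are all hypothetical; real transaction-monitoring systems combine far more signals and vetted data.

```python
from dataclasses import dataclass

# Hypothetical watchlist -- real programs maintain vetted merchant lists.
AD_SITE_MERCHANTS = {"adsite-payments-llc", "classifieds-billing-co"}

@dataclass
class Txn:
    payee: str
    amount: float
    channel: str  # e.g. "p2p" or "card"

def red_flags(txn: Txn) -> list[str]:
    """Return trafficking-related red-flag labels raised by one transaction."""
    flags = []
    # Round-dollar P2P transfers (e.g. exactly $200.00) are a cited pattern.
    if txn.channel == "p2p" and txn.amount == int(txn.amount) and txn.amount % 50 == 0:
        flags.append("round-dollar-p2p")
    # Payments tied to commercial sex advertisement sites.
    if txn.payee in AD_SITE_MERCHANTS:
        flags.append("commercial-sex-ad-payment")
    return flags
```

A single flagged transaction is rarely conclusive on its own; in practice these labels feed into broader case reviews alongside other intelligence.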

This online footprint left by traffickers proves invaluable for detection. Investigators track advertisements across adult services websites, identifying criminal networks through repeated phone numbers, distinctive emojis, and similar wording that may appear across multiple cities. However, smaller-scale operations present significant challenges as well. When the trafficker is an intimate partner or family member with limited transaction volumes, detection becomes exponentially more difficult without external intelligence.
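The ad-linkage idea can be sketched as a simple grouping problem: collect the phone numbers appearing in online ads and surface any number posted in more than one city, which may indicate a network moving victims between locations. The field names and sample data here are invented for illustration.

```python
from collections import defaultdict

def numbers_spanning_cities(ads: list[dict]) -> dict[str, set[str]]:
    """Map each phone number to the set of cities it advertises in,
    keeping only numbers that appear in more than one city."""
    cities_by_number: defaultdict[str, set[str]] = defaultdict(set)
    for ad in ads:
        cities_by_number[ad["phone"]].add(ad["city"])
    return {num: cities for num, cities in cities_by_number.items() if len(cities) > 1}
```

The same grouping approach extends to the other identifiers investigators mention, such as distinctive emojis or repeated wording across listings.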

Collaboration is key for prevention and detection

The most critical element for combating human trafficking at major sporting events is collaboration among anti-trafficking experts and employers of these professionals. Effective prevention requires building strong partnerships before these major events occur. Specific actions that can be taken include:

Establishing multi-sector task forces — The most successful anti-trafficking efforts involve joint task forces that combine federal, state, and local law enforcement with trusted private sector partners and supportive nonprofits or non-government organizations (NGOs) that offer victim services. This toolkit for large-scale public events and other anti-trafficking toolkits are excellent resources for local host cities to use to execute these partnerships. These collaborative mechanisms allow different entities to share information in a timely manner.

Leveraging information-sharing mechanisms — Financial institutions can use Section 314(b) authority for peer-to-peer information sharing between banks. This allows financial institutions to piece together fragments of suspicious activity that individually might seem insignificant but collectively reveal trafficking networks. Large federal agencies are consumed by multiple priorities and benefit from information sharing through Section 314(a) and from assistance by financial sector partners during special operations, which act as a force multiplier. Law enforcement also can benefit from detailed Suspicious Activity Reports (SARs) that contain specific dollar amounts, clear timelines, behavioral observations, and explicit keywords such as “human trafficking.”

Preparing host cities by building networks and outreach in advance — Some World Cup host cities have already established human rights plans with robust collaborative systems within local task forces, government awareness campaigns, QR codes that link to support services, and multidisciplinary safety plans.

In addition, anti-trafficking professionals across all sectors are accessible and willing to help. Resources include national hotlines, referral directories, and dedicated reporting channels for cases involving minors. The most important step is simply reaching out to establish connections before crises occur.

Preparing for a safer event

The 2026 World Cup presents a pivotal moment to strengthen collaborative efforts against human trafficking across North America’s host cities. By establishing robust information-sharing networks between financial institutions, law enforcement, NGOs, and host communities before the tournament begins, stakeholders can transform heightened awareness into meaningful action that protects vulnerable individuals.

While traffickers will undoubtedly attempt to exploit the inevitable chaos surrounding a major event like the World Cup, a coordinated, multi-sector response grounded in shared intelligence, victim-centered approaches, and proactive preparation can disrupt their operations and ensure that the world’s celebration of soccer doesn’t come at the cost of human dignity and freedom.


You can find out more about how organizations are trying to fight against human rights crimes here

Human layer of AI: How to build human-centered AI safety to mitigate harm and misuse /en-us/posts/human-rights-crimes/human-layer-of-ai-building-safety/ Mon, 09 Mar 2026 17:33:34 +0000

Key highlights:

      • Map risks before building — Distinguish between foreseeable harms that may be embedded in your product’s design and potential misuse by bad actors.

      • Safety processes need real authority — An AI safety framework is only credible if it has the power to delay launches, halt deployments, or mandate redesigns when human rights risks outweigh business incentives.

      • Triggers enable proactive intervention — Define clear, automatic review triggers, such as product updates, geographic expansion, or emerging patterns in user reports, to ensure your safety processes adapt as risks evolve rather than reacting after harm occurs.


In recent months, the human cost of AI has become impossible to ignore. People have suffered tragic harm after interacting with AI chatbots, while generative AI (GenAI) tools have been weaponized to create imagery that digitally undresses women and children. These tragedies underscore that the gap between stated values around AI and actual safeguards remains wide, despite major tech companies publishing responsible AI principles.

Richard-Carvajal, a senior associate who works at the intersection of technology and human rights, argues that closing this gap requires companies to: i) systematically assess both foreseeable harms from intended AI use and plausible misuse by bad actors; and ii) build safety processes powerful enough to actually stop launches when risks to people outweigh commercial incentives.

Detailing the two-step framework for anticipating and addressing AI risks

To build effective AI safety processes, companies must first understand what they’re protecting against, then establish credible mechanisms to act on that knowledge.

Step 1: Mapping foreseeable harms and intentional misuse

When mapping AI risks during “responsible foresight workshops” with clients, Richard-Carvajal says she takes them through a process that identifies:

    • foreseeable harms that emerge from a product’s design itself. For example, algorithm-driven recommender systems — which often are used by social media platforms to keep users on the site — are designed to drive engagement through personalized content and are well-documented to amplify sensationalist, polarizing, and emotionally harmful content, according to Richard-Carvajal.
    • intentional misuse that involves bad actors who may weaponize technology beyond its intended purpose. Richard-Carvajal points to the example of Bluetooth tracking devices, which initially were designed to help people find lost items but were quickly exploited by stalkers, who placed them in victims’ handbags in order to track their movements and, in some cases, to follow them home.

Tactically, Richard-Carvajal and her colleagues role-play “bad actor personas” to help clients imagine misuse scenarios, which helps ensure companies anticipate harm before it occurs rather than responding after people have been hurt.

Step 2: Building a credible AI safety process

Once risks are identified, Richard-Carvajal says she advises companies to establish mechanisms to address them. The components of a legitimate AI safety framework mirror the structure of robust human rights due diligence by centering on the risks to people.

Indeed, Richard-Carvajal identifies core components of this framework, which include: i) hazard analysis to anticipate both foreseeable harms and potential misuse; ii) incident response mechanisms that allow users to report problems; and iii) ongoing review protocols that adapt as risks evolve.
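The “ongoing review protocols” component can be made concrete with automatic review triggers like those named in the key highlights (product updates, geographic expansion, emerging patterns in user reports). The sketch below is a hypothetical illustration; the field names and the 25-report threshold are invented, and a real program would calibrate triggers per product.

```python
from dataclasses import dataclass, field

# Hypothetical threshold for a spike in user harm reports.
REPORT_THRESHOLD = 25

@dataclass
class DeploymentState:
    product_updated: bool = False
    new_regions: list[str] = field(default_factory=list)
    user_reports_last_30d: int = 0

def review_triggers(state: DeploymentState) -> list[str]:
    """Return which automatic safety-review triggers have fired."""
    fired = []
    if state.product_updated:
        fired.append("product-update")
    if state.new_regions:
        fired.append("geographic-expansion")
    if state.user_reports_last_30d >= REPORT_THRESHOLD:
        fired.append("user-report-spike")
    return fired
```

The point of encoding triggers this way is that a fired trigger can block a release pipeline automatically, giving the safety process the real authority the framework calls for.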

Continual evaluation of new emerging risks is needed

As AI capabilities advance and deployment contexts expand, companies must continuously reassess whether their existing safeguards remain adequate against evolving threats to privacy, vulnerable populations, human autonomy, and explainability. Richard-Carvajal discusses each one of these factors in depth.

Privacy — Traditional privacy mitigations, such as removing information that leads to identifying specific individuals, are no longer sufficient as AI systems can now re-identify individuals by linking supposedly anonymized data back to specific people or using synthetic training data that still enables re-identification. The rise of personalized AI — in which sensitive information from emails, calendars, and health data aggregates into comprehensive profiles shared across third-party providers — can create new privacy vulnerabilities.

Children — Companies must apply a heightened risk lens for vulnerable populations, such as children, because young users lack the same capacity as adults to critically assess AI outputs. Indeed, the growing concerns around AI usage and children are warranted because AI-generated deepfakes involving real children are being created without their consent. In fact, Richard-Carvajal says that current guidance calls for specific child rights impact assessments and emphasizes the need to engage children, caregivers, educators, and communities.

Cognitive decay — A growing concern is that too much AI usage can harm human autonomy and contribute to a decline in critical thinking. This can occur when people offload too much of their thinking to AI systems, and it has the potential to undermine their human rights in regard to work, education, and informed civic participation.

Meaningful explainability — Companies’ commitment to explainability as a core tenet of their responsible AI programs has always been a challenge. As synthetic AI-generated data increasingly trains new models, explainability becomes even more critical because engineers may struggle to trace decision-making through these layered systems. To make explainability meaningful in these contexts, companies must disclose AI limitations and appropriate use contexts, while maintaining human-in-the-loop oversight for consequential decisions. Likewise, testing explanations should require engagement with actual rights holders instead of just relying on internal reviews.

Moving forward safely

While no universal checklist exists for AI safety, the systematic approach itself is non-negotiable. Success means empowering engineers to identify and address human-centered risks early, maintaining ongoing stakeholder engagement, and building safety processes that have genuine authority to delay launches, halt deployments, or mandate redesigns when human rights risks outweigh commercial pressures to ship products.

If your company builds or deploys AI, take action now: Give your engineers and risk teams the authority and resources to identify harms early, maintain continuous engagement with affected people and independent stakeholders, and create governance structures that have the power to keep harm from happening.

Indeed, companies need to make sure these steps go beyond simple best practices on paper and make these protective processes operational, measurable, and enforceable before their next product release.


You can find more about human rights considerations around AI in our ongoing Human Layer of AI series here

Human Layer of AI: How to hardwire human rights into the AI product lifecycle /en-us/posts/human-rights-crimes/human-layer-of-ai-hardwire-human-rights/ Tue, 27 Jan 2026 16:50:00 +0000

Key highlights:

      • Principles need a repeatable process — Responsible AI commitments become real only when companies systematize human rights due diligence to guide decisions from concept through deployment.

      • Policy and engineering teams should co-own safeguards — Ongoing collaboration between policy and technical teams can help translate ideals like fairness into concrete requirements, risk-based approaches, and other critical decisions.

      • Engage, anticipate, document, and improve continuously — Involving impacted communities, running regular foresight exercises (such as scenario workshops), and building strong documentation and feedback loops make human rights accountability durable, instead of a one-time check-the-box exercise.


More and more companies are adopting responsible AI principles that promise fairness, transparency, and respect for human rights, but these commitments are difficult to put into practice when it comes to writing code and making product decisions.

Faris Natour, a human rights and responsible AI advisor at Article One Advisors, works with companies to help turn human rights commitments into concrete steps that are followed across the AI product lifecycle. He says that the key to bridging the gap between principles and practice is embedding human rights due diligence into the framework that guides product development from concept to deployment.

Operationalizing human rights

Human rights due diligence involves a structured process that begins with immersion in the process of building the product and identifying its potential use cases, whether it is an early concept, prototype, or an existing product. This is followed by an exercise to map the stakeholders who could be impacted by the product, along with the salient human rights risks associated with its use.

From there, the internal teams collectively create a human rights impact assessment, which examines any unintended consequences and potential misuse. They then test existing safeguards in design, development, and how and to whom the product is sold. “Typically, a new product will have many positive use cases,” explains Natour. “The purpose of a human rights impact assessment is to find the ways in which the product can be used or misused to cause harm.” In Natour’s experience, the outcome is rarely a simple go or no-go decision. Instead, the range of decisions often includes options such as go with safeguards or go but be prepared to pull back.

Faris Natour, of Article One Advisors

The use of human rights due diligence in the AI product lifecycle is relatively new (less than a decade old), and as Natour explains, there are five essential actions that can work together as a system:

1. Encourage collaboration between policy and engineering teams

Inside most companies, responsible AI is split between policy teams, which may own the principles, and the engineering teams, which own the systems that bring those principles to life. Working with companies, Natour brings these two functions together through a series of workshops to create structured, ongoing collaboration between human rights and responsible AI experts and the technical teams to better co-develop responsible AI requirements.

In the early stages of the collective teams’ work, the challenges of turning principles into practice emerge quickly. For example, the scale of applications and use cases for an AI product can make it difficult to zero in on those uses that pose the greatest risk. Not all products or use cases need to be treated equally, says Natour, and companies should identify those that could potentially cause the most harm. Indeed, these most-harmful uses may involve a “consequential decision” such as in the legal, employment, or criminal justice fields, he says, adding that those products should be selected for deeper due diligence.

2. Consider the principles at each stage of the development process

Broad principles and values, such as fairness and human rights, should be considered at each stage of the lifecycle. For the principle of fairness, for example, teams may assess which communities will use this product and who will be impacted by those use cases. Then, teams should consider whether these communities are represented on the design and development teams working on the product, and if not, they need to develop a plan for ensuring their input.

3. Engage with impacted communities and rightsholders

Natour advocates for companies to actively engage with impacted communities and stakeholders, including those who are potential users or who may be affected by the product’s use. This could be the company’s own employees, for example, especially if the company is developing productivity tools to use internally in their workplace. Special consideration should be given to vulnerable and marginalized groups whose human rights might be at greatest risk.

External experts, such as Natour and his colleagues, hold focus groups with these stakeholders. The feedback from focus groups can then be used to influence model design and product development, as well as risk mitigation and remediation measures. “In the end, knowing how users and others are impacted by your products usually helps you make a better product,” he states.

4. Establish responsible foresight mechanisms

To prevent responsible AI from becoming a one-time check-the-box exercise, Natour says he uses responsible foresight workshops and other mechanisms as a “way to create space for developers to pause, identify, and consider potential risks, and collaborate on risk mitigations.”

The workshops use personas and hypothetical scenarios to help teams identify and prioritize risks, then design concrete mitigations with follow-on sessions to review progress. Another approach includes developing simple, structured question sets that push product teams to pause and think about harm. For example, Natour explains how one of his clients includes the question, “What would a supervillain do with this product?” in order to help product teams identify and safeguard against potential misuse.

5. Create documentation and feedback loops for accountability

As expectations around assurance rise from regulators, customers, and civil society, strong documentation and meaningful, accessible transparency are essential, says Natour. Clear, succinct, and accessible user-facing information about what a model does and does not do, about data privacy, and about other key aspects can help users understand “what happens with their data, as well as the capabilities and the limitations of the tool they are using,” he adds.

Further, transparency should enable two-way communication, and companies should set up feedback loops to enable continuous improvement in the ways they seek to mitigate potential human rights risks.

The hardwired future

Effectively embedding human rights into the AI product lifecycle starts with a shared governance model between a company’s policy and engineering teams. Together they can collectively hardwire human rights into the way AI systems are imagined, built, and brought to market.


You can find more about human rights considerations around AI in our ongoing Human Layer of AI series here

Human rights due diligence and mega sporting events /en-us/posts/human-rights-crimes/mega-sporting-events/ Thu, 22 Jan 2026 11:42:50 +0000

Key insights:

      • Effective human rights due diligence — Human rights can be hardwired into procurement by setting standards that include clear documentation thresholds, a code of conduct that bans forced labor and trafficking, a supplier assessment questionnaire, a locally informed worker safeguards addendum, and a risk-based vendor-grading rubric.

      • Procurement should feature enforceable human rights obligations — Further, human rights can be hardwired into commitments, such as requests for proposals, vendor evaluations, and contract clauses.

      • Engaging unions and community groups early can lead to strong execution — Effective implementation relies on early stakeholder structures (unions, community groups, etc.), robust worker grievance mechanisms, and independent interviewers, complemented by AI-driven monitoring and continuous, rapid risk response.


Mega sporting events can have a significant impact on local economies, but they also pose substantial human rights risks, including labor exploitation, forced displacement, and sex trafficking. With the Super Bowl and Winter Olympics coming up next month, and the World Cup in summer, it’s crucial that organizations, communities, and governments prepare now to mitigate any human rights problems with these events.

As an advisor to host cities on human rights with more than a decade of experience, I have seen firsthand how the right commitments and responsible contracting practices can help mitigate these risks. By prioritizing human rights and adopting robust contracting practices, the cities that host these mega sporting events can ensure a positive legacy that extends beyond the event itself.

This was a recent topic at an event hosted by Thomson Reuters and the International Labour Organization, in which representatives from host cities, civil society organizations, and governments came together to discuss best practices for turning human rights commitments into action during the FIFA World Cup games later this year. As a participant in this event, Henekom shared our approach to translating high‑level human rights commitments into context‑specific safeguards in order to create the social architecture that aligns organizational practice with community needs.


January is National Human Trafficking Prevention Month in the United States. Check out our Human Rights Crimes resource center to learn how to stop and prevent human trafficking


Centering human rights by using rigorous contracting standards starts with local jurisdictions working with multidisciplinary stakeholders to embed strong and comprehensive policies and protocols at all stages of event planning. In my experience, an all-inclusive approach typically shares five elements:

      1. Clear thresholds in human rights documentation that are designed for speed of business.
      2. Code of conduct with essential ingredients, which include explicit bans on forced labor, trafficking, and other exploitation.
      3. Supplier assessment questionnaire (SAQ) that flags geographic and sector risk, such as the use of temporary labor among food service employees.
      4. Worker safeguards addendum (WSA) built with local labor stakeholders whose lived experience helps translate the United Nations Guiding Principles on Business and Human Rights (UNGPs) into local realities.
      5. Risk-based grading rubric for vendors that weights SAQ and WSA responses and turns them into a contracting risk rating.

In my experience, implementing these policies and tools deeply within the organization means embedding requirements at three critical junctures: i) request for proposals (RFPs); ii) vendor evaluation as part of the selection process; and iii) contract clauses. First, when subject-matter experts draft RFPs, the workflow should force-check human rights and sustainability language (or auto-insert standard clauses). Second, during vendor evaluation, the human rights team grades each SAQ/WSA and assigns a risk-based score. Third, contracts must lock in enforceability with particular emphasis on audit rights, corrective action plans, termination for cause, access to remedy, and accountability mechanisms, such as payment withholding.
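As a rough illustration, the risk-based grading rubric described above can be sketched as a weighted score over SAQ and WSA responses. The weights, question names, and tier cutoffs below are hypothetical assumptions, not a published methodology.

```python
# Hypothetical sketch of a risk-based vendor grading rubric: the weights,
# question names, and tier cutoffs are illustrative assumptions only.
SAQ_WEIGHT = 0.6  # supplier assessment questionnaire
WSA_WEIGHT = 0.4  # worker safeguards addendum

def grade_vendor(saq_scores, wsa_scores):
    """Combine SAQ and WSA answers (each 0-5, higher = riskier)
    into a single contracting risk score and tier."""
    saq_avg = sum(saq_scores.values()) / len(saq_scores)
    wsa_avg = sum(wsa_scores.values()) / len(wsa_scores)
    score = SAQ_WEIGHT * saq_avg + WSA_WEIGHT * wsa_avg
    if score >= 3.5:
        tier = "high"    # e.g., intensive monitoring, frequent site visits
    elif score >= 2.0:
        tier = "medium"  # e.g., periodic audits and reporting
    else:
        tier = "low"     # e.g., standard reporting cadence
    return round(score, 2), tier

# Example: a vendor in a higher-risk sector with partial safeguards
saq = {"geographic_risk": 4, "sector_risk": 5, "temp_labor_share": 3}
wsa = {"grievance_channel": 2, "wage_documentation": 3}
print(grade_vendor(saq, wsa))
```

In this sketch, the resulting tier would drive the monitoring intensity, frequency of site visits, and reporting cadence mentioned later in the article.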

Vendor contract agreements between the host cities and primary contractors are the best vehicle to incorporate enforcement of these rights. Likewise, provisions for these rights should also be incorporated into contracts between primary contractors and any subcontractors.


Centering human rights by using rigorous contracting standards starts with local jurisdictions working with multidisciplinary stakeholders to embed strong and comprehensive policies and protocols at all stages of event planning.


Temporary labor at mega sporting events — which includes individuals working in private security, souvenir sales, construction, janitorial services, and food service — adds complexity but does not have to stifle efforts to honor decent work and other human rights. With a solid sourcing policy, vendors get practical tools and technical assistance to implement requirements quickly.

Common examples include building a checks-and-balances loop with worker centers to receive complaints, and data reporting to track hours, wages, recruitment fees, and grievance outcomes. The risk-based grading rubric for vendors ideally determines the monitoring intensity, frequency of site visits, and reporting cadence.

Effective approaches for implementation

Beyond contract language, the following three actions and tools are recommended to help instill accountability in human rights commitments:

Working with stakeholders from day one — To effectively safeguard human rights, it’s crucial to establish standing stakeholder structures, such as advisory councils and labor roundtables, in order to co-create standards and monitor progress with unions and community groups. By doing so, organizations can ensure workers’ voices are heard, issues are escalated, and commitments are translated into tangible results through collective action and remediation advice.

Centering workers and ensuring access to grievance mechanisms — Establishing on-site, back-of-house centers for workers with confidential and multilingual intake processes, along with clear resolution pathways, is an effective way to drive accountability and reinforce human rights commitments. Using trained, independent worker interviewers with unannounced access to ensure compliance across venues, shifts, and subcontractor tiers further adds to this accountability.

Together, these approaches provide a means for workers to report concerns, verify compliance with policy requirements, and ensure that human rights are respected throughout the supply chain.

Using AI to fortify accountability — AI offers powerful tools for detecting and preventing labor exploitation in supply chains through automated monitoring and pattern recognition. Likewise, natural language processing may be able to analyze hotline transcripts and grievance logs to identify trends.
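One minimal illustration of the kind of trend detection described above, assuming a simple keyword approach rather than any particular NLP product: count mentions of risk-related terms in free-text grievance logs and flag terms that spike against a baseline period. The term list and threshold are hypothetical.

```python
# Illustrative sketch of pattern recognition over grievance logs:
# flag risk-related terms whose frequency jumps well above a baseline.
# The term list and spike threshold are assumptions for this example.
from collections import Counter

RISK_TERMS = {"wages", "overtime", "recruitment", "fees", "injury", "threats"}

def term_counts(log_entries):
    """Count risk-term mentions across a batch of free-text grievances."""
    counts = Counter()
    for entry in log_entries:
        for word in entry.lower().split():
            word = word.strip(".,!?")
            if word in RISK_TERMS:
                counts[word] += 1
    return counts

def flag_spikes(baseline, current, ratio=2.0):
    """Return terms whose count at least doubled versus the baseline."""
    return sorted(t for t, n in current.items()
                  if n >= ratio * max(baseline.get(t, 0), 1))

last_week = term_counts(["Unpaid overtime again.", "Overtime on Sunday."])
this_week = term_counts(["Overtime every night.", "Forced overtime, no wages.",
                         "Overtime and recruitment fees deducted.", "Overtime!"])
print(flag_spikes(last_week, this_week))
```

A real deployment would need proper multilingual NLP rather than keyword matching, but the underlying pattern — baselining normal complaint volume and alerting on deviations — is the same.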

Even with the best policies and accountability tools, however, risks still persist because operating and business conditions are dynamic. New suppliers are added late, or a hot day turns into potentially harmful working conditions. This makes human rights due diligence a continuous requirement with ongoing risk monitoring, fast incident response, and a humble posture to make it right quickly, transparently, and fairly.

If host cities want a legacy that lasts beyond the mega sporting events’ closing ceremony, it is critical to ensure that the people who made the spectacle possible were seen, protected, paid, and heard. Doing the right thing is strategy — contracts and worker-centered approaches are how it shows up on the ground.


You can find out more about how organizations are trying to fight against human rights crimes here

]]>
Human Layer of AI: The crosswinds of AI, sustainability, and human rights enter the mainstream in 2026 /en-us/posts/sustainability/human-rights-enter-the-mainstream/ Thu, 08 Jan 2026 16:40:46 +0000 https://blogs.thomsonreuters.com/en-us/?p=68962

Key takeaways:

      • Clean energy takes center stage in corporate AI initiatives — Access to cheap, low‑carbon power will become a core driver of AI competitiveness, especially in the US, where electricity costs are on the rise.

      • Corporate buyers of AI will exert new leverage over suppliers — Corporate buyers will increasingly use their purchasing power to push data center operators to align AI build‑outs with local climate, water, and community expectations — not just to supply more metrics.

      • AI’s human labor layer enters mainstream due diligence — AI labor supply chains will be brought into the mainstream supply chain and require human rights due diligence.


As we enter 2026, there are three main themes that many corporations will need to manage around issues of renewable energy, AI supplier behavior, and labor.

Theme 1: Renewables move to the center of corporate AI strategies

In 2026, AI competitiveness and energy policy will be tightly fused. With AI workloads driving up electricity demand amid datacenter buildouts, particularly in the United States, access to renewable energy sources in the form of abundant, cheap, low‑carbon power becomes a decisive factor in AI pricing and availability. Countries and companies that lock in this advantage early will shape AI deployment patterns for the rest of the decade.

“The economics of renewable energy are what is causing it to accelerate, even in the US,” says Friedman, an expert in sustainability and business. “Despite the political winds, the fact is that wind and solar are growing faster… because it is cheaper, better energy.”

In addition, countries and firms with large, subsidized renewable energy capabilities and flexible grids, such as China with its massive solar, wind, and hydro infrastructure, will have a low-cost advantage. (However, the push for AI sovereignty may counteract this by prompting governments to prioritize domestic AI stacks over purely cost‑optimized ones.) Yet, combining this energy advantage with competitive homegrown models, such as Kimi K2 and DeepSeek, it is not outside the realm of possibility that China could emerge in the top spot in AI development and innovation.

Corporate pressure to increase AI adoption for efficiency, combined with stakeholder expectations of investing in a low-carbon future, will make renewables the center of corporate AI strategies. Increasingly, companies will be asked where their compute runs, what energy mix powers it, how cost-effective that energy mix is, and whether they are effectively endorsing environmentally and socially harmful projects in host communities.

Theme 2: Local backlash forces suppliers and companies to confront AI’s impact

Over the last few years, big names among AI infrastructure providers have raced to capitalize on the AI revolution, pouring investment into AI-related data centers, cloud systems, and other infrastructure, with no end in sight over the next few years.

Despite the demand, local communities in which large data center construction projects are planned are pushing back. By one account, $64 billion of data center projects in the US have been blocked or delayed amid local opposition since 2025. This opposition stems in part from concerns over rising electricity costs, strains on local water and natural resources, and the loss of working farmland to data center rezoning attempts in rural communities.

In fact, AI data centers are pushing up electricity demand and fueling higher electricity prices for many US households. And, as retail electricity price increases continue over the next couple of years, it will be in part because of data centers consuming more electricity.

As a result, the demand from stakeholders — in particular, those from local communities, including local and state politicians — for increased transparency on the environmental and social impacts of corporate AI services is likely to surge. In turn, corporate buyers of AI services will press the big AI service suppliers to be more precise about where their data centers are located and to disclose more associated sustainability data, such as energy sources, grid impacts, and their level of community engagement where large AI infrastructure is based.

To deal with these competing priorities, boards of companies using AI services will need to reconcile AI cost‑cutting with their transition commitments by ensuring that cost advantages are not built on externalizing environmental and social harms.

Not surprisingly, in 2026, more boards will be drawn into explicit debates about whether AI‑driven cost savings justify exposure to higher community, political, and regulatory risk. This turns questions about data center locations and power contracts into mainstream agenda items.

Theme 3: The human layer of AI emerges as a centerpiece of the supply chain

The idea that AI is automating everything will sit uncomfortably alongside a growing recognition that large‑scale AI depends on a largely invisible workforce. Across the full AI life cycle of products — some of which rely on models that utilize labor in data collection, curation, annotation, labeling, evaluation, and content moderation — there are thousands of workers performing the tasks that make models safe, accurate, and usable.

As AI systems scale across sectors, demand for this human labor increases in volume and complexity, according to a human rights expert at Article One Advisory. Indeed, much of it remains outsourced, precarious, or gig‑based (often in the Global South), with low pay, weak protections, and rampant exposure to psychologically harmful content. Civil society, unions, and regulators are beginning to connect AI innovation with labor rights and occupational health; this reality makes the human layer of AI a frontline human rights issue rather than a technical detail.

Scrutiny of AI‑related labor is likely to move from a niche concern to a mainstream pillar of corporate human rights due diligence. Companies will be under pressure to know what subcontractors and suppliers are doing to ensure human rights for individuals doing AI data enrichment and moderation work, under what conditions, and through which intermediaries.

Following the evolution of how conflict minerals or modern slavery have been integrated into supplier management, a shared view of AI labor supply chains by corporate procurement, legal, product management, and sustainability teams will materialize.

Forward into 2026

As AI becomes embedded in the infrastructure of daily life, companies will face mounting pressure to demonstrate that their AI strategies align with human rights and environmental commitments, not just efficiency gains. The convergence of these three themes signals that transparency in AI governance in 2026 will be inseparable from broader corporate governance and responsibility. And those organizations that treat these themes as compliance checkboxes rather than fundamental design principles will risk both reputational damage and operational disruption in an increasingly scrutinized landscape.

“Companies that fear the exaggerated risk of attracting the ire of activists are underestimating the greater risk of losing the goodwill of customers, investors, and employees that they need,” Friedman adds.


You can find out more about how companies are managing issues of sustainability here

]]>
Human Layer of AI: Protecting human rights in AI data enrichment work /en-us/posts/human-rights-crimes/ai-protecting-human-rights/ Fri, 19 Dec 2025 15:43:10 +0000 https://blogs.thomsonreuters.com/en-us/?p=68877

Key highlights:

      • Human rights risks are elevated for data enrichment workers — Data enrichment workers can face low and unstable pay, overtime pressure driven by buyer timelines, harmful content exposure with weak safeguards, limited grievance access, and uneven legal protections that hinder workers’ collective voice.

      • Human rights due diligence is essential for companies — Companies as buyers of these services must map subcontracting tiers, assess risk by employment model, document worker protections down to Tier-2 and Tier-3 suppliers, and audit and monitor their own rates, timelines, and payment terms to avoid reinforcing harm to workers.

      • Responsible contracting and remedy are a necessity — Contracts should embed shared responsibility, and include fair rates, predictable volumes, realistic deadlines, funded health & safety and mental‑health supports, effective grievance channels, and remediation.


Demand for data enrichment work has surged dramatically with the rapid development and expansion of AI technology. This work encompasses collecting, curating, annotating, and labeling data, as well as providing model training and evaluation — all of which are critical activities that improve how data functions in technological systems.

However, the workers performing these tasks currently operate under different employment models, according to Lloyd from Article One Advisors, a corporate human rights advisory firm. Some workers are in-house employees at major AI developers, others work for business process outsourcing (BPO) companies, and many are independent contractors on gig platforms, where they bid for tasks and get paid per piece.

Human rights issues in data enrichment work

Data enrichment workers sit at the sharp end of the AI economy, yet many struggle to earn a stable, decent income. In particular, pay for gig workers often falls short of a living wage because tasks are sporadic, payments can be delayed, and compensation is frequently piece‑rate. Because work flows through layers of intermediaries, fees and margins get skimmed at each layer and shrink take‑home pay — another area of exploitation for today’s digital labor workforce.

In addition, excessive overtime threatens workers’ right to rest, leisure, and family life — in some places even breaching guidance from the International Labour Organization (ILO) or local labor laws. Buyer purchasing practices with aggressive deadlines are a significant upstream driver of this overtime pressure.


National labor protections vary widely, and platform workers in particular often fall through regulatory gaps.


For many, the work itself carries health risks. Labeling and moderation can require repeated exposure to violent or graphic content, with well‑documented mental‑health impacts. Yet safeguards are uneven. Indeed, workers may lack protected breaks, task rotation, mental‑health support, adequate insurance, or the option to switch assignments. Even when content is not graphic, strain shows up as ergonomic problems, stress, and disrupted sleep.

When harm occurs, remedy can be hard to access. Platform-based work setups often provide no clear, trusted point of contact, and reports of retaliation deter complaints. Effective operational grievance mechanisms can be missing, and this leaves workers without credible paths to redress.

Finally, national labor protections vary widely, and platform workers in particular often fall through regulatory gaps. Because work is individualized and online, forming unions or works councils is harder. This weakens workers’ collective voice just where and when it is most needed to identify risks, negotiate improvements, and secure remedies.

Due diligence for companies buying data enrichment services is essential

When companies procure data enrichment services, they must recognize that respecting human rights extends throughout the entire value chain, not just to themselves and their direct suppliers. Creating trusted partnerships with suppliers helps companies identify issues before they become harmful and creates mutual accountability for the humans behind the algorithms.

Article One Advisors’ Lloyd explains that the mandatory baseline starts with human rights due diligence in areas such as:

      • Risk identification and assessment — The first step for companies is to identify and assess risks by understanding their suppliers’ model. This means knowing which groups of workers are full-time employees, contracted workers, or platform-based gig workers. Each model carries different risk profiles.
      • Subcontractor ecosystem mapping — Tracing the subcontracting chain to see how many layers exist between the supplier and the workers is essential. Fees and pressures compound at each tier of the value chain, says Lloyd.
      • Documentation of worker protections in Tier 2 and Tier 3 suppliers — Assessing and promoting worker protections for every layer of the value chain — which includes making sure the wage structures are clearly defined and equitable, health and safety measures are adequate, and protections for exposure to harmful content and effective grievance mechanisms exist — are baseline elements of human rights due diligence.
      • Examination of company’s own practices — Finally, it is necessary for companies to ensure that their own procurement standards and contracts are not reinforcing human rights harms. This includes companies confirming that their contract terms, timelines, and payment schedules are not inadvertently forcing suppliers to cut corners.

Responsible contracting and remedy mechanisms

Companies as buyers of data enrichment services also must instill shared responsibility in owning worker outcomes among themselves, BPOs, platforms, and model developers. Comprehensive, clear human-rights standards, living-income benchmarks, and shared responsibility are essential elements of good purchasing practices. More specifically, these require fair rates for work, predictable volume expectations, and realistic timelines to make sure suppliers do not push excessive hours. In addition, budgets should include cost-sharing for audits, key risk management measures (such as mental health support), and occupational health and safety controls.

Smart remediation turns harmful situations into improved conditions by providing back-pay for underpayment, medical and psychosocial care after exposure to harmful content, contract adjustments to remove perverse incentives, and time-bound corrective action plans co-designed with worker input. As a last resort when buyer and supplier need to part ways, a responsible exit is planned with notice, transition support, and no sudden contract termination that strands workers.

Similarly, grievance mechanisms for platform workers — who are often dispersed across geographies, classified as independent contractors, and lack line managers or union channels — need to be contractually documented. Effective grievance redressal needs to include confidential mechanisms and remediation processes, in-platform dispute tools, independent individuals to investigate complaints, multilingual facilitation, and joint buyer-supplier escalation paths to bridge gaps in labor-law protection and deliver credible remedies at scale, Lloyd notes.

Promoting quality through worker well-being

Protecting data enrichment workers is not only an ethical imperative but also essential for AI quality itself. When workers face excessive hours, inadequate pay, or harmful content exposure without proper support, the resulting stress and burnout directly degrade data quality. Companies must recognize that responsibility for worker well-being and quality data outcomes extends throughout the entire value chain and does not rest with BPO providers alone.


You can find more about the challenges companies and their workers face from forced labor in their supply chain here

]]>
The Human Layer of AI: How to build human rights into the AI lifecycle /en-us/posts/sustainability/ai-human-layer-building-rights/ Mon, 24 Nov 2025 16:33:36 +0000 https://blogs.thomsonreuters.com/en-us/?p=68546

Key takeaways:

      • Build due diligence into the process — Make human rights due diligence routine from the decision to build or buy through deployment by mapping uses to standards, assessing severity and likelihood, and closing control gaps to prevent costly pullbacks and reputational damage.

      • Identify risks early on — Use practical methods to identify risks early by engaging end users and running responsible-foresight workshops and bad-headlines exercises.

      • Use due diligence to build trust — Treat due diligence as an asset and not a compliance box to tick by using it to de‑risk launches, uncover user needs, and build durable trust that accelerates growth and differentiates the product with safety-by-design features that matter to buyers, regulators, and end users.


AI is reshaping how we work, govern, and care for one another. Indeed, individuals are turning to cutting-edge large language models (LLMs) to ask for emotional help and support in grieving and coping during difficult times. “Users are turning to chatbots for therapy, crisis support, and reassurance, and this exposes design choices that now touch the right to information, privacy, and life itself,” says Poynton, co-founder and principal at Article One, a management consulting firm that specializes in human rights and responsible technology use.

These unexpected uses of AI are reframing risk because in these instances, safeguards cannot be an afterthought. Analyzing who might misuse AI alongside determining who will benefit from its use must be built into the design process.

To put this requirement into practice, a human rights lens must be applied across the entire AI lifecycle from the decision to build or buy to deployment and use, to help companies anticipate harms, prioritize safeguards, and earn durable trust without hampering innovation.

Understanding human rights risks in the AI lifecycle

Human rights risks can surface at every phase of the AI lifecycle. They first emerged in efforts to train frontier LLMs for content moderation and are now showing up elsewhere. For example, data enrichment workers, who refine training data, and data center staff, who power these systems, are most likely to face labor risks. Often located in lower‑income markets with weaker protections, they face low wages, unsafe conditions, and limits on other freedoms.

During the development phase, biased training sets and the probabilistic nature of models can generate misinformation or hallucinations, and these can further undermine rights to health and political participation. Likewise, design choices often can translate into discriminatory outcomes.

Unfortunately, the use of AI-enabled tools also can compound these harms. Powerful models can be misused for fraud or human trafficking. In addition, deeper integration with sensitive data can heighten privacy and security risks.

A surprising usage pattern exacerbates the risk: people increasingly turn to AI for therapy‑like support and disclose emotional crises and self‑harm. This intimacy widens product and policy obligations, which include age‑aware safeguards and clear limits on overriding protections.

Why human rights due diligence is urgent

That’s why human rights due diligence must start with people, not the enterprise. By embedding human rights due diligence into the AI lifecycle, development teams can begin to understand the technology and its intended uses, then map those uses to international standards. Next, a cross-functional team gathers to weigh benefits alongside harms and to consider unintended uses. Primarily, they need to answer the question, “What happens if this technology gets into the hands of a bad actor?”

From there, the process demands an analysis of severity — assessing the scale, scope, and remediability of each use — along with its likelihood. The final step involves evaluating current controls across supply chains, model design, deployment, and use phases to identify gaps.
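As a hedged sketch of how a team might operationalize the severity and likelihood analysis described above, the example below scores each identified harm on scale, scope, and irremediability, then ranks harms by severity times likelihood. The 1–5 scale and the example risks are illustrative assumptions, not an official framework.

```python
# Hedged sketch of severity/likelihood prioritization. The 1-5 scale
# and the example harms are illustrative assumptions for this article.

def severity(scale, scope, irremediability):
    """Average the three severity factors: how grave the harm is,
    how widespread, and how hard it is to put right (each 1-5)."""
    return (scale + scope + irremediability) / 3

def prioritize(risks):
    """Rank identified uses/harms by severity x likelihood, worst first."""
    return sorted(risks,
                  key=lambda r: severity(*r["severity"]) * r["likelihood"],
                  reverse=True)

risks = [
    {"name": "chatbot gives harmful crisis advice",
     "severity": (5, 3, 5), "likelihood": 2},
    {"name": "biased outputs in hiring recommendations",
     "severity": (4, 4, 3), "likelihood": 4},
    {"name": "misuse by bad actor for fraud",
     "severity": (4, 2, 4), "likelihood": 3},
]

for r in prioritize(risks):
    print(r["name"])
```

The ranked list would then feed the control-gap evaluation: the highest-priority harms get safeguards designed first.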

The biggest barrier to layering a human rights lens into AI is the need for speed to market. The race to ship minimum viable products amid competitive pressure can eclipse robust governance, yet early due diligence may prevent costly pullbacks and bad headlines. Article One’s Poynton notes that no one wants to see their product on the front page for enabling stalking or spreading disinformation. Building safeguards early “ensures that when it does launch, it has the trust of its users,” she adds.

How to embed safeguards without slowing teams

The most efficient path in translating human rights into the AI product lifecycle is to turn policy principles, goals, and ambitions into actionable steps for the engineers and the product teams. This requires the “engineers to analyze how they do their work differently to ensure these principles live and breathe in AI-enabled products,” Poynton explains. More specifically, this includes:

Identifying unexpected harms — One of the most critical, yet difficult components of the human rights impact assessment is brainstorming potential harms. Poynton recommends two ways to make this happen: First, engage with end users to help identify potential harms by asking, “What are some issues that we may not be considering from the perspectives of accessibility, trust, safety and privacy?” Second, run responsible foresight workshops at which individuals play the parts of bad actors to better identify harms and uncover mitigation strategies quickly. Pair that with a bad headlines exercise that can be used to anticipate front‑page failures. Then, ship with these protections in place, pre‑launch.

Implementing concrete controls — Embedding safety-by-design should cover both content and contact, a lesson from gaming in which grooming risks require more than just filters. Build age‑aware and self‑harm protocols, including parental controls and principled policies on overrides. Govern sales and access with customer vetting, usage restrictions, and clear abuse‑response pathways. In the supply chain, set supplier standards for enrichment and data center work that include fair wages, safe conditions, freedom of association, and grievance channels.

Treating due diligence as value-creating, not box-checking — Crucially, frame due diligence as an asset rather than a liability. “Make your product better and ensure that when it does launch, it has the trust of its users,” Poynton adds.

Additional considerations

Addressing equity must be front and center. Responsible strategies include diversifying training sets without exploiting communities and giving buyers clear provenance statements on data scope and limits.

Bridging the digital divide is equally urgent. Bandwidth and device gaps risk amplifying inequality if design and deployment assume privileged contexts. In the workplace, Poynton stresses that these impacts will be compounded, from entry-level to expert roles.

Finally, remember that AI’s environmental footprint is a human rights issue. “There is a human right to a clean and healthy environment,” Poynton notes, adding that energy and water demands must be measured, reduced, and sited with respect for local communities, even as AI helps accelerate the clean energy transition. This is a proactive mandate.


You can find out more about the ethical issues facing AI use and adoption here

]]>
Supply chain risk: New developments in corporate human rights responsibility /en-us/posts/human-rights-crimes/supply-chain-risk/ Thu, 04 Sep 2025 17:03:05 +0000 https://blogs.thomsonreuters.com/en-us/?p=67477

Key insights:

      • An uneven regulatory environment — While some nations (like Chile, Thailand, and South Korea) are implementing ambitious new laws for corporate human rights and environmental due diligence, the EU’s CSDDD, which is facing significant delays and potential weakening, creates a fragmented regulatory environment.

      • Litigation risk is increasing — Companies are facing growing litigation risk from both greenwashing claims and climate-related human rights cases, and this highlights the legal and reputational dangers of failing to meet stakeholder demands.

      • Disclosure risk is an issue — Companies are facing a dilemma in that while proactive compliance and transparent reporting can build trust and avoid disputes, voluntary disclosures can also become legally binding and expose them to liability.


The year 2025 has brought changes to the global landscape of supply chain risk management and corporate responsibility for human rights. Countries such as Chile, South Korea, and Thailand are drafting or introducing ambitious new rules that raise the bar for corporate accountability. At the same time, the European Union’s Corporate Sustainability Due Diligence Directive (CSDDD), once seen as a benchmark for responsible supply chain management, has faced delays and significant pushback.

In the meantime, companies face challenges in how they should move forward without harmonization and specificity in the regulatory and legal landscape.

Positive and negative legislative steps

So far this year, several countries have advanced strong new laws to hold corporations accountable for human rights and environmental impacts, says Berry, a senior policy associate at a global non-governmental organization that helps communities defend their environmental and human rights.

For example, the National Congress of Chile is considering bills that would require corporations of a certain size to implement and report on due diligence efforts with respect to human rights, the environment, and climate change; and Thailand’s Ministry of Justice is drafting a mandatory due diligence law to ensure products are free from exploitative labor and environmental harm, Berry explains.


South Korea’s new legislation establishes corporate requirements through mandates for comprehensive due diligence, a Victim Support Fund, and robust grievance procedures. “The legislation is progressive because of its broad applicability, even to financial sector actors,” Berry says. “It requires accountability for human rights due diligence at every stage of a supply chain, and it mandates that business enterprises of a certain size are equipped to proactively respond to grievances and facilitate remedy where harm is found.”

This legislation is significant because it requires actors across global supply chains to engage with human rights and environmental abuses regardless of whether those impacts are deemed financially material.

Other regions may be moving the other way on corporate accountability, however. Recent developments may have weakened the CSDDD’s impact. The first, in early 2025, was the so-called “stop the clock” directive, which delayed implementation by a year. In addition, a proposal was made to raise company size thresholds, meaning that fewer companies would be required to report under the CSDDD if the change takes effect.

The final requirements are unlikely to be defined until early 2026 “because of the lengthy legislative process in the European Union,” says , Partner at Gibson Dunn. “The directive must be negotiated and agreed upon by three EU bodies, which are the European Commission, the European Parliament, and the Council (representing Member States). Each body needs to develop and present its own proposal, and only after all proposals are on the table, which is expected by the end of October 2025, will the trilateral negotiations begin.”

Companies in a tough spot

Litigation risk is growing for corporations as civil society organizations become more active in bringing claims over misleading environmental statements (a practice referred to in sustainability circles as greenwashing) and human rights abuses. Consumers, especially younger generations such as Gen Z, increasingly expect higher standards and greater transparency from businesses.

Community participants are also active in bringing climate-related litigation. Advocates in human rights climate cases contend that the adverse effects of climate change undermine fundamental human rights, including the rights to life, health, food, water, and liberty. Indeed, high-profile lawsuits, such as , illustrate the expanding global threat of legal action for companies that fail to meet stakeholder expectations.

“There was recently a case in Germany where a Peruvian farmer was trying to get damages from a German utility provider,” Fromholzer explains. The farmer argued that the utility provider’s greenhouse gas emissions contributed to the melting of glaciers in Peru and that this threatened the farmer’s hometown with flooding. While the claim was not successful, climate change groups hailed it as a win because the judges stated that energy companies could be held responsible for the costs caused by their carbon emissions.

Today, many corporations find themselves in a difficult position as they navigate mounting risks from both proactive and reactive approaches to sustainability and human rights reporting and compliance. On one hand, adopting proactive compliance strategies and robust grievance mechanisms can help companies avoid costly disputes and build stakeholder trust. On the other, public disclosures of this information, even voluntary ones, can later become legally binding and expose companies to liability.


Yet, adopting proactive compliance strategies could offer advantages to those companies facing evolving regulatory requirements and social expectations. By implementing robust grievance mechanisms and addressing risks early, businesses can avoid costly litigation, reputational damage, and regulatory penalties.

“Accountability mechanisms and grievance mechanisms aren’t scary,” Berry says. “They help to harmonize relationship with these communities… rather than approach these issues defensively [and] litigiously, why not approach them proactively?” Indeed, early action can often future-proof operations and build trust with stakeholders.

At the same time, publishing corporate statements on a voluntary basis without the specifics of final legal requirements in legislation holds risk as well. Fromholzer cautions that in the case of CSDDD, voluntary disclosures made today may become legally binding statements required by the EU’s Corporate Sustainability Reporting Directive (CSRD) that could be used in future litigation.

A published statement based on the requirements of CSRD “is now a legally binding statement which you really must be able to defend at the risk of liability,” Fromholzer says. “It is no longer marketing but now is part of the annual accounts with all the liability attached to it… [companies] are cornered from both sides. Again, one is the greenwashing approach, and the other one is the legally binding nature of the statements they are now forced to make.”

Recommended steps for companies

Either way, implementing robust grievance mechanisms and publishing accurate statements backed by auditable, assured data offers companies a pathway through this complex risk terrain. To effectively address human rights and environmental risks, Berry suggests considering the expectations outlined by for lawmakers advancing human rights and environmental due diligence laws.

To begin, companies should conduct a comprehensive mapping of their entire supply chain, including subsidiaries and business partners, to identify and evaluate potential risks. Then, companies must create and publicly release comprehensive due diligence policies that align with international standards, such as the and . Finally, companies must implement effective grievance systems that provide accessible, safe, and responsive channels for stakeholders to raise concerns and seek redress. Maintaining ongoing dialogue with affected communities and rights-holders cultivates trust and guarantees their substantive involvement in business decision-making.

Once this implementation phase is complete, companies should regularly monitor their operations and publicly report on both adverse impacts and the effectiveness of remediation efforts. They should also consider assurance by a third party for risk mitigation.

Despite ongoing changes and uncertainties in global legislation, the movement toward greater corporate accountability continues to gain momentum. By aligning their practices and obtaining assurance for corporate reporting, companies can stay ahead of regulatory developments, build trust with stakeholders, and reduce their risk exposure.


You can find out more about how companies are navigating disclosure and reporting rules here

New study reveals Gen Z purchasing power could be a force for ethical labor /en-us/posts/human-rights-crimes/gen-z-purchasing-power/ Fri, 11 Jul 2025 13:53:25 +0000 https://blogs.thomsonreuters.com/en-us/?p=66538

Key insights:

      • Gen Z’s purchasing power — By 2030, Gen Z will represent 17% of retail spending in the US, significantly influencing industries such as apparel, tea, and coffee to adopt ethical labor practices.

      • Ethical consumerism — Fully 81% of Gen Z consumers have changed their purchasing decisions based on brand actions or reputation, with 53% participating in economic boycotts.

      • Willingness to pay more — More than half of Gen Z consumers are willing to pay more for products made without forced labor, despite financial and accessibility constraints.


The United States accounted for more than one-fifth of the world’s imports of goods at risk of being made with forced labor, according to research from earlier this year. In addition, the U.S. Department of Labor has identified 478 instances of forced and child labor across different goods and nations, including among makers and purveyors of coffee, tea, footwear, and some components of apparel.

With the apparel and footwear industry , and the coffee and tea industry valued at in 2024, any change in demand because of fluctuating economic factors or product attributes, including concerns over the use of modern slavery in companies’ supply lines, could impact these industries.

And one important economic factor that could influence more ethical practices is the growing purchasing power of Gen Z individuals (those born between 1995 and 2012). Indeed, by 2030, Gen Zers will represent 17% of retail spending in the US.

Now, research produced by the Dynamic Sustainability Lab in collaboration with Thomson Reuters indicates that this shift is already underway. A large majority (81%) of Gen Z individuals, who currently comprise about one-quarter of the US population, have changed their decision to buy a product based on brand actions or overall reputation, according to the research. Likewise, many stated in March that they have participated, will participate, or are participating in a current economic boycott — the most of any generation in the US.

More evidence suggests this trend is not going away any time soon. In fact, Gen Z is leading the way, showing a preference for sustainable brands (63%) and a higher willingness to pay more (73%) when compared with other generations. And the numbers for the apparel industry demonstrate this as well. A report found that more than one-quarter of Gen Z consumers’ wardrobes are second-hand, which is more than double the rate of the general consumer population.

Gen Zers will change their habits to protect workers

Another study from the Dynamic Sustainability Lab, which examined Gen Z’s purchasing habits related to products made with ethical labor, reveals several key insights into the growing power of Gen Z buyers.

For example, Gen Z consumers value purchasing apparel, tea, and coffee produced without forced labor, yet consumers in this group face financial and accessibility constraints when doing so. Indeed, they rank cost, affordability, and product quality as the top factors influencing their purchasing decisions.

Further, 80% of participants who ranked cost and affordability as the top factor influencing their purchasing decisions also said they are willing to pay more for products with ethical considerations. And when it came to the awareness of modern slavery as a problem in the production of apparel, tea, and coffee, 91% said they were at least somewhat aware.

Specifically, more than 6 out of 10 survey respondents indicated that forced labor was a problem in tea and coffee production, and 8 out of 10 Gen Z consumers indicated that forced labor was an issue for apparel production. In addition, 81% have changed their purchasing decision because of a brand action or decision, while almost 70% said the purchase decision change was entirely or partly because of ethical labor considerations.

Have you changed a purchasing decision because of brand action or reputation?

At the same time, only 43% of Gen Z respondents can name a brand that’s using forced labor. This suggests the need for greater transparency of supply chain operations on the part of makers and suppliers of consumer goods.

Recommended actions for companies

Almost all (96%) of Gen Z survey respondents said they believe their generation can drive corporate change through consumer power. Companies can leverage this knowledge by doubling down on increasing transparency and building awareness of their efforts. Some steps companies can take toward that include:

Make detailed policies on ethical sourcing available publicly — Companies should begin by conducting a comprehensive review of their current sourcing practices to identify areas for improvement. Once a thorough understanding is established, they can draft clear and detailed ethical sourcing policies that reflect their commitment to eliminating forced labor and promoting fair practices throughout their supply chain. These policies should then be translated into accessible language and made available on the company website.

Publish an independent audit or conduct a human rights impact assessment — To demonstrate accountability and transparency, companies can commission an independent third-party audit of their supply chain operations. This audit should assess the company’s compliance with ethical labor standards and identify any instances of forced labor. The results of the audit should be made publicly available, accompanied by an action plan outlining steps the company will take to address any problems uncovered.

Additionally, conducting a human rights impact assessment will help companies understand the broader social implications of their business practices and identify areas for improvement. This process involves engaging stakeholders, including workers, in order to gather insights and ensure stakeholders’ rights are prioritized.

Obtain a forced labor-free certification — Success in pursuing and achieving certification will require companies to undergo rigorous evaluations and demonstrate their commitment to maintaining forced labor-free operations. Companies should initiate the process by aligning their practices with the standards set by recognized certifying bodies. This may involve revising supplier contracts, implementing robust monitoring systems, and providing training for staff and suppliers on ethical labor practices.

Gen Z is emerging as a strong force in driving ethical consumerism, with their increasing purchasing power influencing industries to adopt more transparent and fair labor practices. As they prioritize ethical considerations, Gen Z’s demand for transparency and accountability from brands offers a significant opportunity for companies to align with these values and foster consumer trust.

This will be of increasing importance as current US tariff policies may result in institutional buyers, such as retailers and brands, having to source from global producers and manufacturers in new and emerging geographies.


About the Dynamic Sustainability Lab

The Dynamic Sustainability Lab is a non-partisan think tank and research organization that examines the opportunities, as well as the risks and unintended consequences, resulting from the adoption of new technologies, strategies, and policies, and from our growing dependence on foreign-sourced resources and supply chains used in energy, climate, and sustainability transitions.

Directed by , the Pontarelli Professor in the Maxwell School of Citizenship and Public Affairs at Syracuse University, the DSL focuses on providing interdisciplinary scientific approaches that support both governments and businesses through the lens of markets, policies, and national security — what they call, Dynamic Sustainability.


You can find out more about the challenges of fighting against forced labor in global supply chains here
