Social justice Archives - Thomson Reuters Institute https://blogs.thomsonreuters.com/en-us/topic/social-justice/ Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology and human expertise you need to find trusted answers. Fri, 17 Apr 2026 06:41:27 +0000 en-US hourly 1 https://wordpress.org/?v=6.8.3 Housing affordability in Mexico City: How the 2026 FIFA World Cup exposes a deeper urban crisis /en-us/posts/sustainability/housing-affordability-crisis-mexico/ Fri, 17 Apr 2026 06:04:56 +0000 https://blogs.thomsonreuters.com/en-us/?p=70429

Key takeaways:

      • The FIFA World Cup is a catalyst, not the root cause — Mexico City’s housing affordability crisis predates the coming tournament. Rental prices have been rising uncontrollably for years, displacing thousands of families annually. The World Cup will accelerate and amplify an already existing problem.

      • The 2024 rental reform is a step in the right direction, but it has significant limitations — Capping rent increases at the annual inflation rate was a necessary measure, but its impact has been limited by grey areas in the law.

      • The real battle is formalization — No housing regulation can be fully effective if a large portion of the market operates outside of it. Authorities must find ways to make formal rental agreements genuinely attractive and accessible for both landlords and tenants.


On the eve of the 23rd edition of the FIFA World Cup, Mexico stands as one of three host countries for one of the most significant sporting events in the world. The tournament will feature matches in Mexico City, Guadalajara, and Monterrey, and it will be co-hosted alongside the United States and Canada.

Organizing such an event carries notable financial benefits, including a surge in tourism, job creation, and substantial foreign investment — all of which generate a local economic spillover that strengthens the national marketplace. At the same time, Mexico’s major capitals — especially its World Cup host cities — have been undergoing a level of urban transformation that has significantly altered the daily lives of their residents. Chief among these changes is the sharp rise in rental costs, which has been pushing residents toward the cities’ outskirts. According to government figures, more than 20,000 households are displaced each year due to the uncontrolled increase in housing prices in Mexico City alone.

Mexican authorities had to get to work

Legal changes to real estate regulation in Mexico City are not isolated, and what is implemented in the capital often sets a precedent for the rest of the country. Time and again, Mexico City has served as a laboratory for new policies, and when these are proven effective, they become models for nationwide reform.


According to government figures, more than 20,000 households are displaced each year due to the uncontrolled increase in housing prices in Mexico City alone.


That said, in August 2024 — after the city’s head of government noted that average rental costs in none of Mexico City’s boroughs fall below the city’s minimum wage, and that in 9 out of 13 boroughs average rents exceeded twice the minimum wage — the Official Gazette of Mexico City published a decree amending Articles 2448-D and 2448-F of the Civil Code for the Federal District, imposing limits on rent increases for residential properties. Previously, the monthly rent increase could not exceed 10% of the agreed-upon rent. That paragraph was amended to establish that rent increases shall never exceed the inflation rate reported by the Bank of Mexico for the previous year.

It is worth noting that the prior 10% cap was nearly three times the general annual inflation rate calculated by the Bank of Mexico in 2025, which stood at 3.69%.
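To make that comparison concrete, here is a minimal sketch of the two caps applied to a hypothetical monthly rent. The 10% cap and the 3.69% inflation rate come from the article; the rent amount and the helper function are invented for illustration.

```python
# Illustrative comparison of the old and new residential rent caps in
# Mexico City. OLD_CAP_RATE and INFLATION_2025 are figures cited in the
# article; the sample rent is hypothetical.

OLD_CAP_RATE = 0.10      # pre-2024 rule: increases capped at 10% of the rent
INFLATION_2025 = 0.0369  # Bank of Mexico general annual inflation rate, 2025

def max_increase(monthly_rent: float, cap_rate: float) -> float:
    """Largest lawful rent increase under a simple percentage cap."""
    return round(monthly_rent * cap_rate, 2)

rent = 15_000.00  # hypothetical monthly rent in MXN
old_cap = max_increase(rent, OLD_CAP_RATE)      # 1500.00 MXN
new_cap = max_increase(rent, INFLATION_2025)    # 553.50 MXN
print(f"Old 10% cap:      up to {old_cap:,.2f} MXN")
print(f"Inflation-linked: up to {new_cap:,.2f} MXN")
print(f"Ratio: {old_cap / new_cap:.1f}x")
```

On these assumed numbers, the old cap allows roughly 2.7 times the increase permitted by the inflation-linked rule, which matches the article’s “nearly three times” characterization.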

More than a year after these reforms took effect, however, 2025 closed with an average increase in rental prices of . With the FIFA World Cup approaching, prices are expected to continue rising uncontrollably due to the influx of tourists drawn by the event. This concern is well-founded: Ahead of the 2022 World Cup in Qatar, empowered landlords to raise rents by more than 40%.

Mexico City’s rental reform also introduced additional measures. For example, a digital registry for lease agreements was established, to be immediately authorized and managed by the Government of Mexico City. Landlords now are required to register lease agreements within 30 days of their execution. Furthermore, landlords are prohibited from refusing to rent to tenants on the grounds that they have children or pets.

The registration requirement carries real consequences: Should a landlord fail to register a contract within the stipulated period, their ability to invoke legal protection mechanisms in the event of a dispute with a tenant becomes significantly more complicated.

Regardless of the efforts, it’s not all smooth sailing

That said, the reform contains certain grey areas that limit its scope. For instance, it only applies under specific conditions — most notably when a lease has been in place for three years or more. A landlord can effectively circumvent the cap by choosing not to renew an existing contract and instead requiring the tenant to sign a new one at a higher price.

A separate but equally significant obstacle to the reform’s effectiveness is the rapid growth of short-term rental platforms. In recent years, the proliferation of temporary accommodation services has steadily reduced the supply of traditional long-term rentals, as more properties are listed on platforms such as Airbnb, Vrbo, and others. Indeed, every 48 hours, three housing units in Mexico City are converted into Airbnb listings. And from a national perspective, the Tourism Gross Product reached approximately US $151.5 billion, equivalent to 8.7% of Mexico’s GDP.


Every 48 hours, three housing units in Mexico City are converted into Airbnb listings.


This problem is further compounded by the scale of informal rental arrangements. According to the National Housing Survey conducted by Mexico’s National Institute of Statistics and Geography (INEGI), more than 200,000 rental arrangements in Mexico City are informal — conducted without any written contract.
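A quick back-of-the-envelope sketch, using only the figures cited above, helps put these statistics on a common annual scale (the variable names are mine):

```python
# Annualize the conversion rate cited in the article: three housing units
# become short-term listings every 48 hours.
CONVERSIONS_PER_48H = 3
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

conversions_per_year = CONVERSIONS_PER_48H * HOURS_PER_YEAR / 48
print(f"~{conversions_per_year:.0f} units converted to listings per year")

# Compare with the INEGI figure of 200,000+ informal rental arrangements.
informal_agreements = 200_000
ratio = informal_agreements / conversions_per_year
print(f"The informal market equals ~{ratio:.0f} years of conversions")
```

At the cited pace, roughly 550 units leave the long-term market each year, while the informal market is several hundred times larger — which underscores why the article treats formalization, not short-term rentals alone, as the central battle.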

Forcing the real estate market into formalization

This brings us to the central challenge facing city authorities with regard to housing: The need to incentivize the formalization of the real estate market. This is complicated by the country’s low tax culture and by the requirement that landlords enter a specific tax regime that raises their tax burden. Additionally, rental contracts are not only essential for protecting tenants’ rights; they are equally important for landlords, because without a legally binding agreement, there is no guarantee that the terms of any arrangement will be honored.

Paradoxically, the recent reform may actually push the informal market further underground. By requiring landlords to formally declare their rental income, the regulation inevitably creates a sense of heightened oversight — one that informal landlords may seek to evade rather than comply with.

To the authorities of Mexico City, the message is clear — punitive measures alone will not bring the informal market into the fold. Tax benefits for landlords who register their contracts, streamlined and accessible digital registration processes, and legal protections that make formal agreements genuinely advantageous for both parties could go a long way toward building trust in the system.

The 2026 FIFA World Cup will come and go, of course, but the people of Mexico City will remain. They deserve a housing market that works for them — not one that treats their homes as a commodity to be priced beyond their reach every time the world turns its attention to their city.


You can find out more about the

The shadow over the bench: Legalweek 2026’s most important session had nothing to do with AI /en-us/posts/government/legalweek-2026-judicial-threats/ Thu, 26 Mar 2026 17:12:25 +0000 https://blogs.thomsonreuters.com/en-us/?p=70142

Key takeaways:

      • Violence against judges is escalating — Targeted shootings, coordinated harassment campaigns, and threats that now routinely follow judges to their homes and families.

      • The rhetoric driving the escalation is coming from the highest levels of government — The absence of any public denunciation from the Department of Justice highlights the source of the problem.

      • Will the violence itself become part of judicial rulings? — The endgame of judicial intimidation isn’t that judges stop ruling, it’s that the threat of violence becomes a silent presence in the deliberation itself.


NEW YORK — Those attendees who came to the recent Legalweek 2026 conference to talk about AI, agentic workflows, and the business of legal technology also were treated to a session that will likely stay with them long after the event, and it had nothing to do with AI.

In that session, four federal judges took the stage; but they were not there to talk about pricing models or AI adoption. They were there to talk about staying alive.

Setting the stage

Jason Wareham, CEO of IPSA Intelligent Systems and a former U.S. Marine Corps judge advocate, introduced the session — a panel of four sitting United States District Court judges — by speaking of how the rule of law once seemed resolute, yet how faith in it has been shaken, year after year. He worked hard to frame his observations as nonpartisan, a matter of institutional fragility rather than political allegiance. It was a generous framing, but one that would not survive the weight of the ensuing discussion.

The Honorable Esther Salas of the District of New Jersey said that the reason she was there has a name. On July 19, 2020, a disgruntled, extremist attorney who had a case before her court arrived at her home during a birthday celebration. He shot and killed her twenty-year-old son, Daniel Anderl. He shot and critically wounded her husband. She has spent the years since on a mission to protect her judicial colleagues from the same fate.

The new normal

Next, the Honorable Kenly Kiya Kato of the Central District of California described what has changed. Judges’ rulings are still based on the Constitution, on precedent, and on the facts; what’s different is the small voice in the back of a judge’s head. That voice, one a judge now has to fight against after issuing a decision, asks: What will happen after this? It is now expected, Judge Kato explained, that a high-profile order will bring threats. When two colleagues in her district issued prominent decisions, her first thought was for their safety. That is not how it has been historically.

The Honorable Mia Roberts Perez of the Eastern District of Pennsylvania asked how we got here, pointing to language from the highest levels of government: judges called monsters, a U.S. Department of Justice declaring war on rogue judges, and, most recently, politicians bringing judges’ families into the conversation.

Judge Salas pushed even further. She acknowledged the instinct to frame the problem as bipartisan, but said the current moment is not apples to apples. It is apples to watermelons. The spike in threats since 2015, she argued, traces directly to rhetoric from political leaders using language never before deployed against the bench.


The federal judiciary is looking to break annual records for threats [against judges], and there is an absence of any public denunciation from the Attorney General or the DOJ.


The evidence is not abstract, nor are the victims, and the panel walked through it. Judge John Roemer of Wisconsin, zip-tied to a chair and assassinated in his home. Associate Judge Andrew Wilkinson of Maryland, shot dead in his driveway while his family was inside. Judge Steven Meyer of Indiana and his wife Kimberly, shot through their own front door after attackers first posed as a food delivery driver, then returned days later claiming to have found the couple’s dog. Judge Meyer has just undergone his fifth surgery since the attack.

All of these incidents happened at the judges’ homes.

Judge Salas then played a voicemail, one of thousands that federal judges receive. It was less than 30 seconds long, but it did not need to be longer. While names had been redacted, what remained was a torrent of threats and obscenities, graphic, sexual and violent, delivered with the confidence of someone who does not expect consequences. Some judges receive hundreds of these after a single ruling, often from people with no case before them at all.

The shadow over the courts

Throughout the session, there was a presence the panelists circled but rarely named directly. A shadow that shaped every observation about escalating threats, every reference to rhetoric from the top down, every mention of language never before used by political leaders, of action or inaction the likes of which would have been unthinkable just several years ago. The specifics were spoken. The name, largely, was not.

It didn’t have to be.

Judge Kato said that perhaps the most disheartening aspect of all this is that these threats are getting worse, and the people who know better are not doing better. Indeed, she said her children think about these problems every day. What will happen to mom today? Will someone come to the house? These are questions children should not have to carry. They did not sign up for this, and neither did the judges.

In 2026, Judge Salas noted, the federal judiciary is looking to break annual records for threats. She also noted the absence of any public denunciation from the Attorney General or the DOJ. The silence, she said, says a lot.

Not surprisingly, the implications extend beyond the judges themselves. As Judge Salas noted, if judges have to weigh their safety alongside the law, ordinary people don’t stand a chance. If one party is stronger, better funded, or more willing to threaten, then the scales tip.

That is the endgame of judicial intimidation. It’s not that judges stop ruling, but that the violent and the powerful — indeed, the people least fit to hold the scales — can tilt them at will.

That concern echoed an earlier warning from Judge Karoline Mehalchick of the Middle District of Pennsylvania. Judge Mehalchick said that judicial intimidation feeds on misunderstanding. When the public no longer grasps why judges must be insulated from pressure or conversely, mistakes independence for partisanship, the threat environment becomes easier to justify, easier to ignore, and harder to reverse.


What is perhaps the most disheartening aspect of all this is that these threats are getting worse, and the people who know better are not doing better.


In his 2024 year-end report, U.S. Supreme Court Chief Justice John Roberts identified four threats to judicial independence: violence, intimidation, disinformation, and threats to defy lawfully entered judgments. The panel discussed this report as prophecy fulfilled. Public confidence in the judiciary has plummeted since 2021, and the reasons are complex. The judges insisted they are still doing their jobs the right way, but the violence is spreading anyway.

What survives

Judge Salas asked the audience to watch their thoughts. Are they negative and destructive, or positive and uplifting? Can we start loving more? She ended by sending love and light to everyone in the room.

The judges were visibly emotional on the stage.

The words were beautiful. They were also, in the context of everything that had just been described — the killings, the voicemails, the zip ties, the pizza deliveries masking a threat under a murdered son’s name — resting in a shadow that no amount of love and light could fully dispel.

The room responded with a standing ovation.

Thousands of people came to Legalweek 2026 to talk about the future of legal technology. For one morning, four judges reminded them that none of it matters if the people charged with administering justice cannot do so safely.

So, while the billable hour may survive and the associate will adapt, the harder question, the one that should keep the legal industry awake at night, is whether the bench will hold.


You can find more of our coverage of Legalweek events here

Scaling Justice: Unlocking the $3.3 trillion ethical capital market /en-us/posts/ai-in-courts/scaling-justice-ethical-capital/ Mon, 23 Mar 2026 17:12:28 +0000 https://blogs.thomsonreuters.com/en-us/?p=70042

Key takeaways:

      • An additional funding stream, not a replacement — Ethical capital has the potential to supplement existing access to justice infrastructure by introducing a justice finance mechanism that can fund cases with measurable social and environmental impact.

      • Technology as trust infrastructure — AI and smart technologies can provide the governance scaffolding required for ethical capital to flow at scale, including standardizing assessment, impact measurement, and oversight.

      • Capital is not scarce; allocation is — The true bottleneck is not the availability of funds; rather, it’s the disciplined, investment-grade legal judgment required to evaluate risk, ensure compliance, and measure impact in a way that makes justice outcomes investable.


Kayee Cheung & Melina Gisler, Co-Founders of justice finance platform Edenreach, are co-authors of this blog post

Access to justice is typically framed as a resource problem — the idea that there are too few legal aid lawyers, too little philanthropic funding, and too many people navigating civil disputes alone. As a result, the majority of individuals who face civil legal challenges do so without representation, often because they cannot afford it.

Yet this crisis exists alongside a striking paradox. While 5.1 billion people worldwide face unmet justice needs, an estimated $3.3 trillion in mission-aligned capital — held in donor-advised funds, philanthropic portfolios, private foundations, and impact investment vehicles — remains largely disconnected from solutions.

Unlocking even a fraction of this capital could introduce a meaningful parallel funding stream — one capable of supporting cases with potential impacts that currently fall outside traditional funding models. Rather than depending on charity or contingency, what if justice also attracted disciplined, impact-aligned investment in cases themselves, alongside funding that supports technology?

Recent efforts have expanded investor awareness of justice-related innovation. Programs like Village Capital’s have helped demystify the sector and catalyze funding for the technology serving justice-impacted communities. Justice tech, or impact-driven direct-to-consumer legal tech, has grown exponentially in the last few years along with increased investor interest and user awareness.

Litigation finance has also grown, but its structure is narrowly optimized for high-value commercial claims with a strong financial upside. Traditional funders typically seek 5- to 10-times returns, prioritizing large corporate disputes and excluding cases with significant social value but lower monetary recovery, such as consumer protection claims, housing code enforcement, environmental accountability, or systemic health negligence.

Justice finance offers a different approach. By channeling capital from the impact investment market toward the justice system and aligning legal case funding with established impact measurement frameworks like the , it reframes certain categories of legal action as dual-return opportunities that deliver both financial and social returns.

This is not philanthropy repackaged. It’s the idea that measurable justice outcomes can form the basis of an investable asset class, if they’re properly structured, governed, and evaluated.

Technology as trust infrastructure

While mission-aligned capital is widely available, the ability to evaluate legal matters with the necessary rigor remains limited. Responsibly allocating funds to legal matters requires complex expertise, including legal merit assessment, financial risk modeling, regulatory compliance, and impact evaluation. Cases must be considered not only for their likelihood of success and recovery potential, but also for measurable social or environmental outcomes.

Today, that assessment is largely manual and capacity-bound by small teams. The result is a structural bottleneck as capital waits on scalable, trusted evaluation and allocation.

Without a way to standardize and responsibly scale analysis of the double bottom line, however, justice funding remains bespoke, even when resources are available.

AI-enabled systems can play a transformative role by standardizing assessment frameworks and supporting disciplined capital allocation at scale. By encoding assessment criteria, decision pathways, and compliance safeguards and then mapping case characteristics to impact metrics, technology can enable consistency and allow legal and financial experts to evaluate exponentially more matters without lowering their standards.

And by integrating legal assessment, financial modeling, and impact alignment within a governed tech framework, justice finance platforms like Edenreach can function as the connective tissue. Through such a platform, impact metrics are applied consistently while human experts remain responsible for final determinations, thereby reducing friction, increasing transparency, and supporting auditability.
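As a purely hypothetical sketch of what such a governed framework might encode, consider a toy dual-return screening score. The criteria, weights, and compliance gate below are my own illustrations, not Edenreach’s or any platform’s actual methodology; in the model described above, a score like this would only inform the human experts who make the final call.

```python
from dataclasses import dataclass

@dataclass
class CaseAssessment:
    legal_merit: float         # 0-1: likelihood of success on the merits
    recovery_potential: float  # 0-1: expected recovery relative to cost
    impact_alignment: float    # 0-1: fit with the chosen impact framework
    compliance_ok: bool        # passed regulatory/compliance screening

def dual_return_score(case: CaseAssessment) -> float:
    """Blend financial and social dimensions into one screening score.

    Compliance is a hard gate: a case that fails screening scores 0.0
    regardless of merit, mirroring a safeguards-first design.
    """
    if not case.compliance_ok:
        return 0.0
    financial = 0.5 * case.legal_merit + 0.5 * case.recovery_potential
    return round(0.6 * financial + 0.4 * case.impact_alignment, 3)

# A housing-code enforcement claim: strong merit and impact, modest recovery.
claim = CaseAssessment(legal_merit=0.8, recovery_potential=0.4,
                       impact_alignment=0.9, compliance_ok=True)
print(dual_return_score(claim))
```

The hard compliance gate reflects the article’s point that encoded safeguards let experts evaluate many more matters without lowering standards: a case can score well on merit and impact yet still be excluded outright.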

When incentives align

It’s no coincidence that many of the leaders exploring justice finance models are women. Globally, women experience legal problems at disproportionately higher rates than men yet are less likely to obtain formal assistance. Women also control significant pools of global wealth and are more likely to . Indeed, 75% of women believe investing responsibly is more important than returns alone, and female investors are almost twice as likely as male counterparts to prioritize environmental, social and corporate governance (ESG) factors when making investment decisions, .

When those most affected by systemic barriers also shape capital allocation decisions, structural change becomes more feasible. Despite facing steep barriers in legal tech funding (just 2% goes to female founders), women represent in access-to-justice legal tech, compared to just 13.8% across legal tech overall.

This alignment between lived experience, innovation leadership, and capital stewardship creates an opportunity to reconfigure incentives in favor of meaningful change.

Expanding funding and impact

Justice financing will not resolve the justice gap on its own. Mission-focused tools for self-represented parties, legal aid, and court reform remain essential components of a functioning justice ecosystem. However, ethical capital represents an additional structural layer that can expand the range of cases and remedies that receive financial support.

Impact orientation can accommodate longer time horizons, alternative dispute resolution pathways, and remedies that extend beyond monetary damages. In certain matters, particularly those involving environmental harm, systemic consumer violations, or community-wide injustice, capital structured around impact metrics may identify and enable solutions that traditional litigation finance models do not prioritize.

For example, capital aligned with defined impact frameworks may support outcomes that include remediation programs, compliance reforms, or community investments alongside financial recovery. These approaches can create durable benefits that outlast a single judgment or settlement.

Of course, solving deep-rooted inequities and legal system complexity requires more than new tools and new investors. It requires designing capital pathways that are repeatable, accountable, and aligned with measurable public benefit.

Although justice finance may not be a fit for every case and has yet to see widespread uptake, it does have the potential to reach cases that currently fall through the cracks — cases that have merit, despite falling outside traditional litigation finance models and legal aid or impact litigation eligibility criteria.


You can find other installments of our Scaling Justice blog series here

How AI-powered access to justice is impacting unauthorized practice of law regulations /en-us/posts/government/ai-impacts-unauthorized-practice-of-law/ Mon, 02 Feb 2026 17:55:20 +0000 https://blogs.thomsonreuters.com/en-us/?p=69263

Key insights:

      • Courts and the legal profession need to show leadership — Given their specialized knowledge of the needs of litigants and of courts, courts need to take the lead in determining definitions of the unauthorized practice of law.

      • 3 paths forward to workable regulatory solutions — Recent discussions and research around this subject offered three paths toward modernizing UPL definitions.

      • Uncertainty harms users and innovation — Fear of UPL can drive self-censorship and market exits, even as litigants continue to use publicly available GenAI tools.


Today, many Americans experience legal issues but lack proper access to legal representation. At the same time, AI tools capable of providing legal information are rapidly evolving and already in widespread use. Between these two facts lies a critical definitional problem that courts and state bars must urgently address: How to define the unauthorized practice of law (UPL) in a way that doesn’t further curtail access to justice.

This discussion is not theoretical. It directly determines whether AI-based legal services can operate, how they should be regulated, and ultimately whether AI can help unrepresented or self-represented litigants gain meaningful access to justice. This issue was explored in more depth during a recent webinar from the , a joint effort by the National Center for State Courts (NCSC) and the Thomson Reuters Institute (TRI).

The need for clear definitions

During the webinar, Alaska Supreme Court Administrative Director Stacey Marz noted that “there is no uniform definition of the practice of law” and that UPL regulations represent “a real varied continuum of scope and clarity.” This variation makes compliance challenging for technology providers, especially as they navigate 50 different state standards.

UPL generally occurs when someone “not licensed as an attorney attempts to represent or perform legal work on behalf of another person,” explained Cathy Cunningham, Senior Specialist Legal Editor at ¶¶ŇőłÉÄę Practical Law.

Marz added that such legal advice typically involves “applying the law, rules, principles, and processes to specific facts and circumstances of that individual client — and then recommending a course of action.”

The challenge, however, is that AI can appear to do exactly this, yet the regulatory framework remains unclear about whether and how this should be permitted and how consumers can be protected.

3 paths forward

During the recent webinar, panelists discussed several different approaches to UPL regulation, noting that recent research outlined three approaches that state courts could take, including:

Path 1: Explicitly enabling tools with regulatory framework — UPL statutes can be revisited to explicitly allow purpose-built AI legal tools to operate without threat of UPL enforcement, provided they meet certain requirements. Prof. Dyane O’Leary, Director of Legal Innovation & Technology at Suffolk University, emphasized that consumer-facing AI legal tools are already being used for tailored legal advice, arguing that some oversight is better than “just letting these tools continue to operate and hoping consumers aren’t harmed by them.”

Path 2: Creating regulatory sandboxes — Courts could establish temporary experimental zones in which AI legal service providers can operate under controlled conditions while regulators gather data about efficacy and safety through feedback and research, with an eye toward informing future regulation reform.

Path 3: Narrowing UPL to human conduct — Clarifying that existing UPL rules apply only to humans who hold themselves out as attorneys in tribunals or courtrooms, or who create legal documents under the guise of being a human attorney, would effectively leave AI-powered legal tools clearly outside UPL restrictions and open up a “new pocket of the free market” for consumers.

Utah Courts Self-Help Center Director Nathanael Player referenced Utah Supreme Court Standing Order Number 15, which established their regulatory sandbox using a fundamentally different standard: Not whether services match what lawyers provide, but rather “is this better than the absolute nothing that people currently have available to them?”

Prof. O’Leary reframed the comparison itself, suggesting that instead of comparing consumers who use AI tools to consumers with an attorney, the framework should be “consumers that use legal AI tools, and maybe consumers that otherwise have no support whatsoever.”

The personhood puzzle

“AI, at this time, does not have legal personhood status,” said Practical Law’s Cunningham. “So, AI can’t commit unauthorized practice of law because AI is not a person.”

However, Player pushed back on this reasoning, clarifying that “AI does have a corporate personhood. There is a corporation that made the AI, [and] the corporation providing that does have corporate personhood.” He added, however, that “it’s not clear, I don’t think we know whether or not there is… some sort of consequence for the provision of ChatGPT providing legal services.”


You can view the full webinar here


This ambiguity creates what might be called the personhood gap, a zone of legal uncertainty with serious consequences for both innovation and access to justice.

Colin Rule, CEO at online dispute resolution platform ODR.com, explained that “one of the major impacts of UPL is actually self-censorship.” After receiving a UPL letter from a state bar years ago, he immediately exited that market. This pattern repeats across the legal tech landscape, leaving companies hesitant to innovate.

Rule’s bottom line resonates with anyone trying to build solutions in this space. “As a solution provider, what I want is guidance,” Rule explained. “Clarity is what I need most… that’s my number one priority.”

Moving forward: Clarity over perfection

The legal profession needs to lead on this issue, and that means state bars and state supreme courts must take action now. The tools are already in use, and the question is not whether AI will play a role in legal services, but rather whether that role will be defined by thoughtful regulation or by default.

The solution is for the judiciary to provide clear guidance on what services can be offered, by whom, and under what conditions. To do that, courts must first acknowledge that for most people, the choice is not between an AI tool and a lawyer but between an AI tool and nothing. Given that, states must walk a path that will both encourage innovation and protect consumers.

To this end, legal professionals and courts should experiment with these tools, understand their trajectory as well as their current limitations, and work collaboratively with developers to create frameworks that prioritize consumer protection without stifling innovation that could genuinely expand access to justice.


You can find out more about how courts and legal professionals are dealing with the unauthorized practice of law here

Human Layer of AI: How to hardwire human rights into the AI product lifecycle /en-us/posts/human-rights-crimes/human-layer-of-ai-hardwire-human-rights/ Tue, 27 Jan 2026 16:50:00 +0000 https://blogs.thomsonreuters.com/en-us/?p=69143

Key highlights:

      • Principles need a repeatable process — Responsible AI commitments become real only when companies systematize human rights due diligence to guide decisions from concept through deployment.

      • Policy and engineering teams should co-own safeguards — Ongoing collaboration between policy and technical teams can help translate ideals like fairness into concrete requirements, risk-based approaches, and other critical decisions.

      • Engage, anticipate, document, and improve continuously — Involving impacted communities, running regular foresight exercises (such as scenario workshops), and building strong documentation and feedback loops make human rights accountability durable, instead of a one-time check-the-box exercise.


More and more companies are adopting responsible AI principles that promise fairness, transparency, and respect for human rights, but these commitments are difficult to put into practice when it comes to writing code and making product decisions.

Faris Natour, a human rights and responsible AI advisor at Article One Advisors, works with companies to help turn human rights commitments into concrete steps that are followed across the AI product lifecycle. He says that the key to bridging the gap between principles and practice is embedding human rights due diligence into the framework that guides product development from concept to deployment.

Operationalizing human rights

Human rights due diligence involves a structured process that begins with immersion in the process of building the product and identifying its potential use cases, whether it is an early concept, prototype, or an existing product. This is followed by an exercise to map the stakeholders who could be impacted by the product, along with the salient human rights risks associated with its use.

From there, the internal teams collectively create a human rights impact assessment, which examines any unintended consequences and potential misuse. They then test existing safeguards in design, development, and how and to whom the product is sold. “Typically, a new product will have many positive use cases,” explains Natour. “The purpose of a human rights impact assessment is to find the ways in which the product can be used or misused to cause harm.” In Natour’s experience, the outcome is rarely a simple go or no-go decision. Instead, the range of decisions often includes options such as go with safeguards or go but be prepared to pull back.

Faris Natour, of Article One Advisors

The use of human rights due diligence in the AI product lifecycle is relatively new (less than a decade old), and as Natour explains, there are five essential actions that can work together as a system:

1. Encourage collaboration between policy and engineering teams

Inside most companies, responsible AI is split between policy teams, which may own the principles, and the engineering teams, which own the systems that bring those principles to life. Working with companies, Natour brings these two functions together through a series of workshops to create structured, ongoing collaboration between human rights and responsible AI experts and the technical teams to better co-develop responsible AI requirements.

In the early stages of the collective teams’ work, the challenges of turning principles into practice emerge quickly. For example, the scale of applications and use cases for an AI product can make it difficult to zero in on those uses that pose the greatest risk of harm. Not all products or use cases need to be treated equally, says Natour, and companies should identify those that could potentially cause the most harm. Indeed, these most-harmful uses may involve a “consequential decision” such as in the legal, employment, or criminal justice fields, he says, adding that those products should be selected for deeper due diligence.

2. Consider the principles at each stage of the development process

Broad principles and values, such as fairness and human rights, should be considered at each stage of the lifecycle. For the principle of fairness, for example, teams may assess which communities will use this product and who will be impacted by those use cases. Then, teams should consider whether these communities are represented on the design and development teams working on the product, and if not, they need to develop a plan for ensuring their input.

3. Engage with impacted communities and rightsholders

Natour advocates for companies to actively engage with impacted communities and stakeholders, including those who are potential users or who may be affected by the product’s use. This could be the company’s own employees, for example, especially if the company is developing productivity tools to use internally in their workplace. Special consideration should be given to vulnerable and marginalized groups whose human rights might be at greatest risk.

External experts, such as Natour and his colleagues, hold focus groups with these stakeholders. The feedback from focus groups can then be used to influence model design, product development, as well as risk mitigation and remediation measures. “In the end, knowing how users and others are impacted by your products usually helps you make a better product,” he states.

4. Establish responsible foresight mechanisms

To prevent responsible AI from becoming a one-time check-the-box exercise, Natour says he uses responsible foresight workshops and other mechanisms as a “way to create space for developers to pause, identify, and consider potential risks, and collaborate on risk mitigations.”

The workshops use personas and hypothetical scenarios to help teams identify and prioritize risks, then design concrete mitigations with follow-on sessions to review progress. Another approach includes developing simple, structured question sets that push product teams to pause and think about harm. For example, Natour explains how one of his clients includes the question “What would a super villain do with this product?” to help product teams identify and safeguard against potential misuse.

5. Create documentation and feedback loops for accountability

As expectations around assurance rise from regulators, customers, and civil society, strong documentation and meaningful, accessible transparency are essential, says Natour. Clear, succinct, and accessible user-facing information about what a model does and does not do, about data privacy, and other key aspects can help users understand “what happens with their data, as well as the capabilities and the limitations of the tool they are using,” he adds.

Further, transparency should enable two-way communication, and companies should set up feedback loops to enable continuous improvement in the ways they seek to mitigate potential human rights risks.

The hardwired future

Effectively embedding human rights into the AI product lifecycle starts with a shared governance model between a company’s policy and engineering teams. Together they can collectively hardwire human rights into the way AI systems are imagined, built, and brought to market.


You can find more about human rights considerations around AI here

]]>
Human Layer of AI: The crosswinds of AI, sustainability, and human rights enter the mainstream in 2026 /en-us/posts/sustainability/human-rights-enter-the-mainstream/ Thu, 08 Jan 2026 16:40:46 +0000 https://blogs.thomsonreuters.com/en-us/?p=68962

Key takeaways:

      • Clean energy takes center stage in corporate AI initiatives — Access to cheap, low‑carbon power will become a core driver of AI competitiveness, especially in the US, where electricity costs are on the rise.

      • Corporate buyers of AI will exert new leverage over suppliers — Corporate buyers will increasingly use their purchasing power to push data center operators to align AI build‑outs with local climate, water, and community expectations — not just to supply more metrics.

      • AI’s human labor layer enters mainstream due diligence — AI labor supply chains will be brought into mainstream supplier management and subjected to human rights due diligence.


As we enter 2026, there are three main themes that many corporations will need to manage around issues of renewable energy, AI supplier behavior, and labor.

Theme 1: Renewables move to the center of corporate AI strategies

In 2026, AI competitiveness and energy policy will be tightly fused. With AI workloads driving up electricity demand amid datacenter buildouts, particularly in the United States, access to renewable energy sources in the form of abundant, cheap, low‑carbon power becomes a decisive factor in AI pricing and availability. Countries and companies that lock in this advantage early will shape AI deployment patterns for the rest of the decade.

“The economics of renewable energy are what is causing it to accelerate, even in the US,” says Friedman, an expert in sustainability and business. “Despite the political winds, the fact is that wind and solar are growing faster… because it is cheaper, better energy.”

In addition, countries and firms with large, subsidized renewable energy capabilities and flexible grids, such as China’s massive solar, wind, and hydro infrastructure, will have a low-cost advantage. (However, countries’ push for AI may counteract this by prompting governments to prioritize domestic AI stacks over purely cost‑optimized ones.) Yet, if China combines this energy asset with its homegrown AI models, such as Kimi K2 and DeepSeek, it is not outside the realm of possibility that the country could emerge in the top spot in AI development and innovation.

Corporate pressure to increase AI adoption for efficiency combined with stakeholder expectations of investing in a low-carbon future will make renewables the center of corporate AI strategies. Increasingly, companies will be asked where their compute runs, what energy mix powers it, how cost effective that energy mix is, and whether companies are effectively endorsing environmentally and socially harmful projects in host communities.

Theme 2: Local backlash forces suppliers and companies to confront AI’s impact

Over the last few years, big names among AI infrastructure providers have tried to take advantage of the AI revolution, investing heavily in AI-related data centers, cloud systems, and other infrastructure, with no end in sight over the next few years.

Despite the demand, local communities in which large data center construction projects are planned are pushing back. By some estimates, $64 billion of data center projects in the US have been blocked or delayed amid local opposition since 2025. This opposition comes in part because of concerns regarding rising electricity costs, strains on local water and natural resources, and the reduction of working farmland from data center rezoning attempts in rural communities.

In fact, AI data centers are pushing up electricity demand and fueling higher electricity prices for many US households. And, as retail electricity price increases over the next couple of years are likely to continue, it will be in part because of data centers consuming more electricity.

As a result, the demand from stakeholders — in particular, those from local communities including local and state politicians — for increased transparency on the environmental and social impacts of corporate AI services is likely to surge. In turn, corporate buyers of AI services will put pressure on the big AI service suppliers to provide more precision in the locations of such data systems as well as disclose more associated sustainability data, such as energy sources, grid impacts, and their level of community engagement where large AI infrastructure is based.

To deal with these competing priorities, boards of companies using AI services will need to reconcile AI cost‑cutting with their transition commitments by ensuring that cost advantages are not built on externalizing environmental and social harms.

Not surprisingly, in 2026, more boards will be drawn into explicit debates about whether AI‑driven cost savings justify exposure to higher community, political, and regulatory risk. This turns questions about data center locations and power contracts into mainstream agenda items.

Theme 3: The human layer of AI emerges as a centerpiece of the supply chain

The idea that AI is automating everything will sit uncomfortably alongside a growing recognition that large‑scale AI depends on a largely invisible workforce. Across the full AI life cycle of products — some of which rely on models that utilize labor in data collection, curation, annotation, labeling, evaluation, and content moderation — there are thousands of workers performing the tasks that make models safe, accurate, and usable.

As AI systems scale across sectors, demand for this human labor increases in volume and complexity, according to a human rights expert at Article One Advisors. Indeed, much of it remains outsourced, precarious, or gig‑based (often in the Global South), with low pay, weak protections, and exposure to psychologically harmful content rampant. Civil society, unions, and regulators are beginning to connect AI innovation with labor rights and occupational health; and this reality makes the human layer of AI a frontline human rights issue rather than a technical detail.

Scrutiny of AI‑related labor is likely to move from a niche concern to a mainstream pillar of corporate human rights due diligence. Companies will be under pressure to know what subcontractors and suppliers are doing to ensure human rights for individuals doing AI data enrichment and moderation work, under what conditions, and through which intermediaries.

Following the evolution of how conflict minerals or modern slavery have been integrated into supplier management, a shared view of AI labor supply chains by corporate procurement, legal, product management, and sustainability teams will materialize.

Forward into 2026

As AI becomes embedded in the infrastructure of daily life, companies will face mounting pressure to demonstrate that their AI strategies align with human rights and environmental commitments, not just efficiency gains. The convergence of these three themes signals that transparency in AI governance in 2026 will be inseparable from broader corporate governance and responsibility. And those organizations that treat these themes as compliance checkboxes rather than fundamental design principles will risk both reputational damage and operational disruption in an increasingly scrutinized landscape.

“Companies that fear the exaggerated risk of attracting the ire of activists are underestimating the greater risk of losing the goodwill of customers, investors, and employees that they need,” Friedman adds.


You can find out more about how companies are managing issues of sustainability here

]]>
Human Layer of AI: Protecting human rights in AI data enrichment work /en-us/posts/human-rights-crimes/ai-protecting-human-rights/ Fri, 19 Dec 2025 15:43:10 +0000 https://blogs.thomsonreuters.com/en-us/?p=68877

Key highlights:

      • Human rights risks are elevated for data enrichment workers — Data enrichment workers can face low and unstable pay, overtime pressure driven by buyer timelines, harmful content exposure with weak safeguards, limited grievance access, and uneven legal protections that hinder workers’ collective voice.

      • Human rights due diligence is essential for companies — Companies as buyers of these services must map subcontracting tiers, assess risk by employment model, document worker protections down to Tier-2 and Tier-3 suppliers, and audit and monitor their own rates, timelines, and payment terms to avoid reinforcing harm to workers.

      • Responsible contracting and remedy are a necessity — Contracts should embed shared responsibility, and include fair rates, predictable volumes, realistic deadlines, funded health & safety and mental‑health supports, effective grievance channels, and remediation.


Demand for data enrichment work has surged dramatically with the rapid development and expansion of AI technology. This work encompasses collecting, curating, annotating, and labeling data, as well as providing model training and evaluation — all of which are critical activities that improve how data functions in technological systems.

However, the workers performing these tasks currently operate under different employment models, according to Lloyd of Article One Advisors, a corporate human rights advisory firm. Some workers are in-house employees at major AI developers, others work for business process outsourcing (BPO) companies, and many are independent contractors on gig platforms on which they bid for tasks and get paid per piece.

Human rights issues in data enrichment work

Data enrichment workers sit at the sharp end of the AI economy, yet many struggle to earn a stable, decent income. In particular, pay for gig workers often falls short of a living wage because tasks are sporadic, payments can be delayed, and compensation is frequently piece‑rate. Because work flows through multiple layers of intermediaries, fees and margins get skimmed at each layer and shrink take‑home pay — another area of exploitation for today’s digital labor workforce.

In addition, excessive working hours infringe on workers’ right to rest, leisure, and family life and, in some places, even breach guidance from the International Labour Organization (ILO) or local labor laws. Buyer purchasing practices with aggressive deadlines are a significant upstream driver of this overtime pressure.


For many, the work itself carries health risks. Labeling and moderation can require repeated exposure to violent or graphic content, with well‑documented mental‑health impacts. Yet safeguards are uneven. Indeed, workers may lack protected breaks, task rotation, mental‑health support, adequate insurance, or the option to switch assignments. Even when content is not graphic, strain shows up as ergonomic problems, stress, and disrupted sleep.

When harm occurs, remedy can be hard to access. Platform-based work setups often provide no clear, trusted point of contact, and reports of retaliation deter complaints. Effective operational grievance mechanisms can be missing, and this leaves workers without credible paths to redress.

Finally, national labor protections vary widely, and platform workers in particular often fall through regulatory gaps. Because work is individualized and online, forming unions or works councils is harder. This weakens workers’ collective voice just where and when it is most needed to identify risks, negotiate improvements, and secure remedies.

Due diligence for companies buying data enrichment services is essential

When companies procure data enrichment services, they must recognize that respecting human rights extends throughout the entire value chain, not just to themselves and their direct suppliers. By creating trusted partnerships with their suppliers, companies can identify issues before they become harmful and build mutual accountability for the humans behind the algorithms.

Article One Advisors’ Lloyd explains that the mandatory baseline starts with human rights due diligence, which spans areas such as:

      • Risk identification and assessment — The first step for companies is to identify and assess risks by understanding their suppliers’ model. This means knowing which groups of workers are full-time employees, contracted workers, or platform-based gig workers. Each model carries different risk profiles.
      • Subcontractor ecosystem mapping — Tracing the subcontracting chain to see how many layers exist between the supplier and the workers is essential. Fees and pressures compound at each tier of the value chain, says Lloyd.
      • Documentation of worker protections in Tier 2 and Tier 3 suppliers — Assessing and promoting worker protections for every layer of the value chain — which includes making sure the wage structures are clearly defined and equitable, health and safety measures are adequate, and protections for exposure to harmful content and effective grievance mechanisms exist — are baseline elements of human rights due diligence.
      • Examination of company’s own practices — Finally, it is necessary for companies to ensure that their own procurement standards and contracts are not reinforcing human rights harms. This includes companies confirming that their contract terms, timelines, and payment schedules are not inadvertently forcing suppliers to cut corners.

Responsible contracting and remedy mechanisms

Companies as buyers of data enrichment services also must instill shared responsibility for worker outcomes among themselves, BPOs, platforms, and model developers. Comprehensive, clear human-rights standards, living-income benchmarks, and shared responsibility are essential elements of good purchasing practices. More specifically, these require fair rates for work, predictable volume expectations, and realistic timelines to make sure suppliers do not push excessive hours. In addition, budgets should include cost-sharing for audits, key risk management measures (such as mental health support), and occupational health and safety controls.

Smart remediation turns harmful situations into improved conditions by providing back-pay for underpayment, medical and psychosocial care after exposure to harmful content, contract adjustments to remove perverse incentives, and time-bound corrective action plans co-designed with worker input. As a last resort when buyer and supplier need to part ways, a responsible exit is planned with notice, transition support, and no sudden contract termination that strands workers.

Similarly, grievance mechanisms for platform workers — who are often dispersed across geographies, classified as independent contractors, and lack line managers or union channels — need to be contractually documented. Effective grievance redressal needs to include confidential mechanisms and remediation processes, in-platform dispute tools, independent individuals to investigate complaints, multilingual facilitation, and joint buyer-supplier escalation paths to bridge gaps in labor-law protection and deliver credible remedies at scale, Lloyd notes.

Promoting quality through worker well-being

Protecting data enrichment workers is not only an ethical imperative but also essential for AI quality itself. When workers face excessive hours, inadequate pay, or harmful content exposure without proper support, the resulting stress and burnout directly impact data quality outcomes. Companies must recognize that responsibility for worker well-being and quality data outcomes extends throughout the entire value chain and does not rest with BPO providers alone.


You can find more about the challenges companies and their workers face from forced labor in their supply chain here

]]>
Innovation in action: How one Arizona city is redefining public safety for a growing community /en-us/posts/government/redefining-public-safety/ Fri, 06 Jun 2025 15:03:13 +0000 https://blogs.thomsonreuters.com/en-us/?p=66190 Once a small farming community known as the “hay shipping capital of the world,” Gilbert, Arizona, is now the state’s fifth-largest municipality and a thriving hub for the aerospace, defense, and biotech industries. To support its rapid growth, this high-growth community has made strategic investments in its public safety infrastructure, training, and workplace culture.

Purpose-built public safety training facility

The Town of Gilbert’s $86 million public safety training center prepares current and future first responders in their local environment, blending element-specific training and immersive technology. The training prepares first responders for a wide range of potential scenarios they may encounter across the nearly 200,000 calls for service received by Gilbert each year. Prior to the facility’s implementation, the town’s first responders spent more than 4,000 hours off post each year, traveling to receive training in different communities. Not only can these first responders now train locally, but these facilities are also a hub for training volunteers and aspiring first responders from throughout the state and region.


For example, the Gilbert Fire Department hosts a three-week internship program as a part of its firefighter recruitment program, in addition to co-hosting with the Gilbert Police Department a four-day program for high school-aged girls interested in the public safety field. A Cadet Program trains adults who may be interested in becoming professional firefighters, and a volunteer program trains those interested in assisting with post-incident recovery. The Gilbert Police Department created the Gilbert Police Regional Academy following the implementation of this space, a multi-community collaborative recruit training program that serves Phoenix-area partner agencies. The police department anticipates bringing more than 200 new recruits into the department by 2030.

Cutting-edge training tools and technology

The purpose-built nature of the space allows police and firefighters to train in realistic environments. The facility includes a 46,000-square-foot pair of indoor shooting ranges that support low-light and vehicle-based training in a lead-free, soundproof environment. The facility utilizes a pressure system to eliminate smoke within sixty seconds of rounds being fired and even recycles spent munitions materials – with nearly 12,000 pounds of brass bullet casings and 6,000 pounds of frangible powder having been recycled over the past two years. Access to a high-quality indoor training facility in the harsh desert climate has led to a high demand for this space from surrounding law enforcement agencies, as well.

A simulated rail incident on the public safety training center campus allows the fire department to train with the specific type of equipment that transports nearly 70% of hazardous materials in the US. Also, a serpentine driving track provides public safety personnel with a safe environment to learn and refine emergency vehicle driving techniques and pursuit tactics, such as deploying grapplers.


In addition to providing enhanced facilities for on-the-scene first responders, the Town of Gilbert has invested in facilities for those professionals who provide crucial support off-scene in its Emergency Operations Center (EOC) and 911 Dispatch center, ensuring that they receive the appropriate benefits and mental health resources to manage the high stress and trauma of triaging a wide variety of emergencies. From 2023 to 2024, Gilbert’s EOC was remodeled at a cost of $7.7 million and the 911 Dispatch center was remodeled, doubled in size, and equipped with cutting-edge technology at a cost of $11 million.

The remodeled Dispatch center now emphasizes wellness in design by incorporating art and function. Functionally, the center more than doubled in size to 10,500 square feet, with 19 dispatch consoles and room for five more; technology-wise, the center also features Cloud CAD, Next Gen 911, updated radio consoles, and flex-use training spaces with the actual consoles that future dispatchers will utilize.

Further, elements promoting art and wellness such as circadian rhythm lighting, solar tubes that bring in natural lighting, biophilia through plants and green walls, and a high-tech air purification system make a stressful environment more physically inviting.

The Town of Gilbert has demonstrated significant care for the well-being of its employees, too. Off the Dispatch floor are wellness rooms with red-light and massage therapy machines and a fitness center. The town also allows new parents to bring their infants to work for the first six months of their lives, with specialized baby rooms and nursing suites available just steps away from parents. This not only relieves stress for new parents, but also helps them better connect as a team.

Immersive and inclusive public safety training

Gilbert’s Police, Fire, and Parks & Recreation Departments hold a certification that ensures 80% or more of the employees within these departments are trained to communicate with and respond to community members on the autism spectrum. The Police Department also utilizes simulation training to prepare first responders for real-time decision making, including a specialized Autism Awareness V-VICTA program developed in partnership with the Southwest Autism Research and Resource Center.


The Police Department also offers a confidential program by which family members or caregivers can provide law enforcement with information to build a profile for individuals on the autism spectrum, which may include a photo and information about sensory triggers, fears, interests, and communication preferences. Further, the Gilbert Police and Fire Departments jointly host an event each spring, offering family members an opportunity to introduce children on the autism spectrum to first responders in a low-stakes, non-emergency environment. Individuals can ride along in a fire truck, experience a police traffic stop, see fire gear up close, and more.

The goal of these educational efforts is to elevate the level of service which first responders offer to all community members and to create opportunities for individuals on the autism spectrum to bond with first responders before a crisis occurs.

Indeed, the Town of Gilbert’s mission of serving its community is evident in its thoughtful and intentional investments in public safety infrastructure and community-facing partnerships, and the forthcoming Advocacy Center is a living embodiment of this mission. Slated to open in 2026, the Advocacy Center will be a place for crime victims to recover from trauma and navigate the justice process. The facility will feature trauma-informed architectural design and consider user experience first and foremost. The space will offer forensic interview rooms, private counseling rooms, group therapy spaces, victim advocate space, and more.

Through this type of intentional investment and innovation, the Town of Gilbert is not only preparing for the future needs of its own population, but it’s setting a high standard for the future of public safety.

Photos courtesy of the Town of Gilbert


You can find out more about how law enforcement is using advanced technology in their efforts to fight crime and serve the public here

]]>
Preserving ethical business: What should corporations do during this period of perceived human rights de-prioritization? /en-us/posts/human-rights-crimes/preserving-ethical-business-human-rights-de-prioritization/ Tue, 29 Apr 2025 14:48:55 +0000 https://blogs.thomsonreuters.com/en-us/?p=65722 In the first quarter of 2025, the administration of new President Donald J. Trump cut US foreign aid sharply; and in late February, the Trump administration paused enforcement of the Foreign Corrupt Practices Act (FCPA) for 180 days while the new US Attorney General reviews existing FCPA actions and issues new guidance.

Both of these moves reinforce the perception of a global rollback in human rights, underscored by the European Union’s moves to reduce corporate accountability in human rights due diligence.

“Corruption is an enabler of human rights violations, [and] the rollbacks reduce accountability for bribery,” according to human rights experts Wong and Cobb of FTI Consulting. Indeed, a reduction in accountability could embolden companies and potentially increase human rights abuses, they explain.

Risks of relaxing FCPA compliance

Over the years, many multinational companies have invested significantly in developing robust internal compliance programs to adhere to FCPA requirements. Weakening these frameworks could lead companies to divert resources away from maintaining compliance, which could allow bad actors to exploit the reduced scrutiny and result in increased fraud, misconduct, and human rights abuses.


“While these rollbacks in the US may indicate a temporary decrease in regulatory pressure within, it is essential for companies to recognize that global regulatory trends are moving towards greater corporate accountability,” not less, say Wong and Cobb. US companies operating internationally must adhere to these emerging standards, and the pause on domestic FCPA enforcement does not eliminate companies’ legal and reputational risks.

Wong and Cobb point out that FCPA enforcement has historically been cyclical, and companies reducing compliance efforts now might find themselves unprepared when enforcement resumes. Indeed, the statute of limitations for FCPA violations is five years for anti-bribery offenses and six years for accounting violations.

Recommendations for companies to navigate uncertainty

As businesses face a shifting regulatory landscape, navigating the path forward requires both immediate action and strategic foresight. The following guidance from Wong and Cobb offers a framework for maintaining ethical business practices and stakeholder confidence while adapting to evolving global standards.

In the short term, for instance, companies must adopt proactive strategies to prepare for the shifting landscape created by these rollbacks, including:

      • Monitoring global regulatory trends — Companies should actively track global regulatory developments to stay ahead of compliance requirements, even if these do not originate from the United States.
      • Engaging with stakeholders — It is crucial to maintain open communication with investors and stakeholders regarding ongoing anti-corruption and human rights commitments. This engagement ensures transparency and reinforces the company’s dedication to ethical practices.

In addition, companies should ensure that they maintain a zero-tolerance policy for bribery and corruption, and should keep open anonymous hotlines for reporting potential ethics violations in order to prevent the erosion of a culture of ethics, which often takes years of effort to build. Likewise, companies need to continue monitoring their third-party vendors, consultants, and suppliers, because over the past decade about 90% of FCPA enforcement resolutions have involved third-party representatives or consultants engaged in corruption.

Meanwhile, Cobb and Wong also suggest that companies focus on aligning with international standards and best practices. Adhering to well-recognized international frameworks is crucial to remaining competitive. For example, the UN Guiding Principles on Business and Human Rights offer a flexible approach to maintaining ethical practices around human rights, according to Wong. Likewise, Cobb suggests that companies voluntarily embrace the EU’s Corporate Sustainability Due Diligence Directive and its Corporate Sustainability Reporting Directive, once the amendments are finalized, as robust options for compliance reporting.

Regardless of these rollbacks, the overarching recommendation is for companies to maintain robust corporate compliance and human rights risk management programs. This proactive approach not only prepares companies for potential regulatory changes but also positions them as leaders in ethical business practices on the global stage.

By continuing to prioritize compliance and human rights, companies can navigate the evolving regulatory landscape effectively, ensuring long-term business success and sustainability.


You can find more information on how organizations are managing their regulatory obligations here

Fighting fraud in nonprofits: How best to protect donor dollars /en-us/posts/government/fighting-fraud-nonprofits/ Thu, 13 Feb 2025 02:45:54 +0000 https://blogs.thomsonreuters.com/en-us/?p=64802 Nonprofit organizations play a vital role in meeting societal needs, from disaster relief to education and healthcare. However, the very attributes that make nonprofits’ missions so critical — such as their mission-driven focus and their reliance on public trust — also make them vulnerable to fraud.

Worse yet, fraud within the nonprofit sector can divert resources from those in need, damage donor confidence, and tarnish the reputation of charitable organizations that have been victimized by fraud. Understanding common fraud schemes and how to protect yourself as a donor is essential to ensuring that contributions are used effectively.

Common types of fraud in nonprofits

There are several common types of fraud that nonprofits encounter, and understanding how these schemes can prevent nonprofits from fulfilling their missions is important. Often, nonprofit fraud involves more complex motives, as these organizations face unique challenges in fraud prevention. With limited resources for oversight and a heavy reliance on trust, nonprofits are particularly vulnerable to fraud, including risks similar to those seen inside corporations.

While insider corporate fraud is typically driven by personal financial gain — such as falsifying reports for bonuses or stock manipulation — nonprofit fraud tends to involve issues like mismanagement, misallocation of funds, or misleading claims about an organization’s impact. The plethora of reasons behind fraud in this sector makes it more difficult to target, making it all the more necessary that donors be vigilant. Some common types of fraud that affect nonprofits include:

      • Cybercrime and phishing scams — Nonprofits manage sensitive donor and volunteer data, making them vulnerable to cyberattacks. Fraudsters may employ phishing tactics through email or other online channels to steal personal information, putting both the organization and its supporters at risk.
      • Embezzlement — Employees or volunteers may divert funds intended for the nonprofit’s mission for their own personal use. Embezzlement can undermine an organization’s ability to provide services and harm its reputation.
      • Fundraising scams — Fraudsters frequently take advantage of crises, such as natural disasters, by impersonating legitimate nonprofits to solicit donations. These scams prey on the generosity of donors and divert funds from legitimate causes.
      • Vendor fraud — Fraudulent billing schemes are common, with vendors overcharging for services or delivering sub-par products while invoicing for higher-quality ones. These fraudulent practices drain funds that should be used for charitable purposes.

Real-world cases of nonprofit fraud

To understand the real-world impact of nonprofit fraud, it’s helpful to examine some high-profile cases in which organizations encountered major challenges. These examples highlight how fraud can undermine a nonprofit’s finances, mission, and public trust.

The Red Cross and Haiti — The Red Cross faced significant scrutiny following the 2010 Haiti earthquake after reports revealed that a large portion of the funds raised for disaster relief never reached the victims directly. This case highlights how even well-known nonprofits can face fraud allegations, especially when large sums of money are involved.

United Way phishing scam — In 2018, the United Way fell victim to a phishing scam in which cybercriminals tricked staff into revealing login credentials. While no donor data was stolen, the breach highlighted the vulnerability of nonprofits to such scams and emphasized the importance of training staff to recognize phishing attempts and better secure sensitive information.

City of Chicago vendor fraud — In 2018, a nonprofit providing services to Chicago’s homeless population was caught in a vendor fraud scheme. The vendor overbilled the city for services it didn’t deliver, amounting to millions of dollars in overpayment losses. This case underscores the risks of vendor fraud in nonprofits and highlights the need for strong oversight and auditing to protect donor funds.

Feeding Our Future fraud — In 2022, Feeding Our Future, a nonprofit dedicated to providing meals to children and families, was caught up in a massive fraud scheme. The organization allegedly inflated the number of meals served and diverted millions of donor dollars intended for child nutrition programs into personal accounts. Additionally, the scheme involved submitting false documentation to federal agencies.

How donors can protect themselves

These cases highlight how nonprofits can become entangled in fraud. And while nonprofits are vulnerable, donors also play a crucial role in ensuring their contributions are used responsibly. By taking proactive steps, donors can help ensure that their funds are directed to legitimate causes and used effectively.

By staying informed and vigilant, donors can help ensure that their donations make a meaningful impact and are not lost to fraud. Some key actions you, as a donor, can take include researching the charity. By utilizing trusted charity-evaluation platforms to assess an organization’s financial health, transparency, and track record, you can determine a charity’s legitimacy. You also should verify a nonprofit’s tax-exempt status through the IRS database and review its financial reports. In fact, you should request and review the nonprofit’s Form 990, an annual financial disclosure required by the IRS that offers valuable information on how funds are being allocated within the nonprofit.

Further, reputable nonprofits offer regular updates on how donations are spent and the impact of their programs. Be cautious of organizations that aggressively solicit donations or fail to provide clear information about their activities. And avoid sharing personal information via email, phone calls, or text messages.

Finally, if you suspect fraudulent activity, report it to the appropriate authorities, such as the Federal Trade Commission or your state’s Attorney General’s office.

While these precautions are essential in safeguarding your contributions, it’s equally important to recognize potential warning signs of fraud that could compromise your donation. By staying alert to red flags, you can ensure your funds are being directed to legitimate organizations and used as intended.

Watch out for red flags

Recognizing key signs can not only help protect your contribution from being misused but also ensure your dollars are supporting legitimate causes. One important red flag is the use of high-pressure tactics. Reputable nonprofits respect a donor’s time and decision-making process; if an organization pressures you to donate quickly, that’s a cause for concern.

Also, if an organization offers inconsistent information or exhibits a lack of transparency, that could be another red flag. You should always verify claims about an organization’s success or financial needs through independent sources. Any organizations that refuse to share financial documents or program details should be avoided.

Further, be cautious about unsolicited emails or text messages, and never click on a link without verifying it. Always manually type web addresses and make sure they end with .org, as this is the most common web domain suffix for nonprofits and charities. Also, donate by check or credit card, and be very cautious of donation requests asking you to pay with cash, gift cards, wire transfers, or virtual currencies.

Recognizing these warning signs helps protect your contributions and ensures your funds support legitimate causes. For donors, staying vigilant is crucial to safeguarding their donations and making a real impact in the nonprofit sector.

Conclusion

Fraud in the nonprofit sector can have devastating consequences for both the organizations it affects and the donors who place their trust in them. However, by taking due diligence seriously — researching charities, monitoring transparency, and being vigilant for red flags — donors can help ensure their contributions make a meaningful impact. Trustworthy nonprofits prioritize accountability, enabling them to fulfill their missions while safeguarding donor dollars.


You can find more ways to detect and prevent fraud here
