Housing affordability in Mexico City: How the 2026 FIFA World Cup exposes a deeper urban crisis

Key takeaways:

      • The FIFA World Cup is a catalyst, not the root cause — Mexico City’s housing affordability crisis predates the coming tournament. Rental prices have been rising uncontrollably for years, displacing thousands of families annually. The World Cup will accelerate and amplify an already existing problem.

      • The 2024 rental reform is a step in the right direction, but it has significant limitations — Capping rent increases at the annual inflation rate was a necessary measure, but its impact has been limited by grey areas in the law.

      • The real battle is formalization — No housing regulation can be fully effective if a large portion of the market operates outside of it. Until authorities find ways to make formal rental agreements genuinely attractive and accessible for both landlords and tenants, any reform will fall short.


On the eve of the 23rd edition of the FIFA World Cup, Mexico stands as one of three host countries, alongside the United States and Canada, for one of the most significant sporting events in the world. Matches will be played in Mexico City, Guadalajara, and Monterrey.

Organizing such an event carries notable financial benefits, including a surge in tourism, job creation, and substantial foreign investment — all of which generate a local economic spillover that strengthens the national marketplace. At the same time, Mexico’s major cities — especially its World Cup host cities — have been undergoing a level of urban transformation that has significantly altered the daily lives of their residents. Chief among these changes is the sharp rise in rental costs, which has been pushing residents toward the cities’ outskirts. According to government figures, more than 20,000 households are displaced each year due to the uncontrolled increase in housing prices in Mexico City alone.

Mexican authorities had to get to work

Legal changes to real estate regulation in Mexico City are not isolated, and what is implemented in the capital often sets a precedent for the rest of the country. Time and again, Mexico City has served as a laboratory for new policies, and when these are proven effective, they become models for nationwide reform.


According to government figures, more than 20,000 households are displaced each year due to the uncontrolled increase in housing prices in Mexico City alone.


That said, in August 2024 — after the city’s head of government noted that rental costs in none of Mexico City’s boroughs fall below the city’s minimum wage, and that in 9 out of 13 boroughs average rents exceeded twice the minimum wage — the Official Gazette of Mexico City published a decree amending Articles 2448-D and 2448-F of the Civil Code for the Federal District, imposing limits on rent increases for residential properties. Previously, the increase in monthly rent could not exceed 10% of the agreed-upon rent. That paragraph was amended to establish that rent increases shall never exceed the inflation rate reported by the Bank of Mexico for the previous year.

It is worth noting that the prior 10% cap was nearly three times the general annual inflation rate calculated by the Bank of Mexico in 2025, which stood at 3.69%.
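To make the arithmetic concrete, here is a minimal sketch comparing the two caps. The monthly rent figure is a hypothetical example; only the 10% ceiling and the 3.69% inflation rate come from the text above.

```python
# Illustrative comparison of the old 10% cap and the new inflation-linked cap.
# The rent amount is hypothetical; the rates are those cited in the article.
OLD_CAP = 0.10           # pre-2024 rule: increase of up to 10% of agreed rent
INFLATION_2025 = 0.0369  # Bank of Mexico annual inflation figure

monthly_rent = 15_000  # hypothetical rent in Mexican pesos (MXN)

max_increase_old = monthly_rent * OLD_CAP         # 1,500.00 MXN
max_increase_new = monthly_rent * INFLATION_2025  # 553.50 MXN

print(f"Old 10% cap:       up to {max_increase_old:,.2f} MXN")
print(f"Inflation-linked:  up to {max_increase_new:,.2f} MXN")
print(f"Old cap vs. new:   {OLD_CAP / INFLATION_2025:.1f}x")  # ~2.7x, i.e., nearly three times
```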

More than a year after these reforms took effect, however, 2025 closed with average rental prices still climbing. With the FIFA World Cup approaching, prices are expected to continue rising uncontrollably due to the influx of tourists drawn by the event. This concern is well-founded: Ahead of the 2022 World Cup in Qatar, landlords raised rents by more than 40%.

Mexico City’s rental reform also introduced additional measures. For example, a digital registry for lease agreements was established, authorized and managed by the Government of Mexico City. Landlords are now required to register lease agreements within 30 days of their execution. Furthermore, landlords are prohibited from refusing to rent to tenants on the grounds that they have children or pets.

The registration requirement carries real consequences: Should a landlord fail to register a contract within the stipulated period, their ability to invoke legal protection mechanisms in the event of a dispute with a tenant becomes significantly more complicated.

Despite these efforts, it’s not all smooth sailing

That said, the reform contains certain grey areas that limit its scope. For instance, it only applies under specific conditions — most notably when a lease has been in place for three years or more. A landlord can effectively circumvent the cap by choosing not to renew an existing contract and instead requiring the tenant to sign a new one at a higher price.

A separate but equally significant obstacle to the reform’s effectiveness is the rapid growth of short-term rental platforms. In recent years, the proliferation of temporary accommodation services has steadily reduced the supply of traditional long-term rentals, as more properties are listed on platforms such as Airbnb and Vrbo. Indeed, every 48 hours, three housing units in Mexico City are converted into Airbnb listings. And from a national perspective, the Tourism Gross Product reached approximately US$151.5 billion, equivalent to 8.7% of Mexico’s GDP.


Every 48 hours, three housing units in Mexico City are converted into Airbnb listings.


This problem is further compounded by the scale of informal rental arrangements. According to the National Housing Survey conducted by Mexico’s National Institute of Statistics and Geography (INEGI), more than 200,000 rental arrangements in Mexico City operate informally, without any written contract.

Forcing the real estate market into formalization

This brings us to the central challenge facing city authorities with regard to housing: The need to incentivize the formalization of the real estate market. This is already complicated by the country’s low tax culture and the requirement for landlords to enter a specific tax regime that raises their tax burden. Additionally, rental contracts are not only essential for protecting tenants’ rights, but they also are equally important for landlords — because without a legally binding agreement, there is no guarantee that the terms of any arrangement will be honored.

Paradoxically, the recent reform may actually push the informal market further underground. By requiring landlords to formally declare their rental income, the regulation inevitably creates a sense of heightened oversight — one that informal landlords may seek to evade rather than comply with.

To the authorities of Mexico City, the message is clear — punitive measures alone will not bring the informal market into the fold. Tax benefits for landlords who register their contracts, streamlined and accessible digital registration processes, and legal protections that make formal agreements genuinely advantageous for both parties could go a long way toward building trust in the system.

The 2026 FIFA World Cup will come and go, of course, but the people of Mexico City will remain. They deserve a housing market that works for them — not one that treats their homes as a commodity to be priced beyond their reach every time the world turns its attention to their city.



Scaling Justice: Unlocking the $3.3 trillion ethical capital market

Key takeaways:

      • An additional funding stream, not a replacement — Ethical capital has the potential to supplement existing access to justice infrastructure by introducing a justice finance mechanism that can fund cases with measurable social and environmental impact.

      • Technology as trust infrastructure — AI and smart technologies can provide the governance scaffolding required for ethical capital to flow at scale, including standardizing assessment, impact measurement, and oversight.

      • Capital is not scarce; allocation is — The true bottleneck is not the availability of funds; rather, it’s the disciplined, investment-grade legal judgment required to evaluate risk, ensure compliance, and measure impact in a way that makes justice outcomes investable.


Kayee Cheung & Melina Gisler, Co-Founders of justice finance platform Edenreach, are co-authors of this blog post

Access to justice is typically framed as a resource problem — the idea that there are too few legal aid lawyers, too little philanthropic funding, and too many people navigating civil disputes alone. As a result, the majority of individuals who face civil legal challenges do so without representation, often because they cannot afford it.

Yet this crisis exists alongside a striking paradox. While 5.1 billion people worldwide face unmet justice needs, an estimated $3.3 trillion in mission-aligned capital — held in donor-advised funds, philanthropic portfolios, private foundations, and impact investment vehicles — remains largely disconnected from solutions.

Unlocking even a fraction of this capital could introduce a meaningful parallel funding stream — one that’s capable of supporting cases with potential impacts that currently fall outside traditional funding models. Rather than depending on charity or contingency, what if justice also attracted disciplined, impact-aligned investment in cases themselves, alongside funding that supports technology?

Recent efforts have expanded investor awareness of justice-related innovation. Programs like Village Capital’s have helped demystify the sector and catalyze funding for the technology serving justice-impacted communities. Justice tech, or impact-driven direct-to-consumer legal tech, has grown exponentially in the last few years along with increased investor interest and user awareness.

Litigation finance has also grown, but its structure is narrowly optimized for high-value commercial claims with a strong financial upside. Traditional funders typically seek 5- to 10-times returns, prioritizing large corporate disputes and excluding cases with significant social value but lower monetary recovery, such as consumer protection claims, housing code enforcement, environmental accountability, or systemic health negligence.

Justice finance offers a different approach. By channeling capital from the impact investment market toward the justice system and aligning legal case funding with established impact measurement frameworks, it reframes certain categories of legal action as dual-return opportunities that deliver both financial and social returns.

This is not philanthropy repackaged. It’s the idea that measurable justice outcomes can form the basis of an investable asset class, if they’re properly structured, governed, and evaluated.

Technology as trust infrastructure

While mission-aligned capital is widely available, the ability to evaluate legal matters with the necessary rigor remains limited. Responsibly allocating funds to legal matters requires complex expertise, including legal merit assessment, financial risk modeling, regulatory compliance, and impact evaluation. Cases must be considered not only for their likelihood of success and recovery potential, but also for measurable social or environmental outcomes.

Today, that assessment is largely manual and capacity-bound by small teams. The result is a structural bottleneck as capital waits on scalable, trusted evaluation and allocation.

Without a way to standardize and responsibly scale analysis of the double bottom line, however, justice funding remains bespoke, even when resources are available.

AI-enabled systems can play a transformative role by standardizing assessment frameworks and supporting disciplined capital allocation at scale. By encoding assessment criteria, decision pathways, and compliance safeguards and then mapping case characteristics to impact metrics, technology can enable consistency and allow legal and financial experts to evaluate exponentially more matters without lowering their standards.
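As a rough illustration of what encoding assessment criteria and mapping case characteristics to impact metrics might look like, consider the sketch below. The field names, weights, and threshold are illustrative assumptions, not any platform’s actual model.

```python
from dataclasses import dataclass

@dataclass
class CaseAssessment:
    legal_merit: float         # 0-1: likelihood of success on the merits
    recovery_potential: float  # 0-1: expected recovery relative to cost
    impact_score: float        # 0-1: alignment with an impact framework
    compliance_ok: bool        # passed regulatory and conflict checks

def screen(case: CaseAssessment) -> str:
    """Apply the same encoded criteria to every case, then route
    qualifying matters to human experts for the final determination."""
    if not case.compliance_ok:
        return "reject"  # compliance safeguards are non-negotiable
    # Dual-return score: weigh financial prospects and impact equally.
    dual_return = (0.5 * (case.legal_merit * case.recovery_potential)
                   + 0.5 * case.impact_score)
    return "route to human review" if dual_return >= 0.6 else "decline"

print(screen(CaseAssessment(0.8, 0.7, 0.9, True)))  # -> route to human review
```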

And by integrating legal assessment, financial modeling, and impact alignment within a governed tech framework, justice finance platforms like Edenreach can function as the connective tissue. Through the platform, impact metrics are applied consistently while human experts remain responsible for final determinations, thereby reducing friction, increasing transparency, and supporting auditability.

When incentives align

It’s no coincidence that many of the leaders exploring justice finance models are women. Globally, women experience legal problems at disproportionately higher rates than men yet are less likely to obtain formal assistance. Women also control significant pools of global wealth and are more likely to invest in line with their values. Indeed, 75% of women believe investing responsibly is more important than returns alone, and female investors are almost twice as likely as male counterparts to prioritize environmental, social and corporate governance (ESG) factors when making investment decisions.

When those most affected by systemic barriers also shape capital allocation decisions, structural change becomes more feasible. Despite facing steep barriers in legal tech funding (just 2% goes to female founders), women represent a far larger share of founders in access-to-justice legal tech than the 13.8% they account for across legal tech overall.

This alignment between lived experience, innovation leadership, and capital stewardship creates an opportunity to reconfigure incentives in favor of meaningful change.

Expanding funding and impact

Justice financing will not resolve the justice gap on its own. Mission-focused tools for self-represented parties, legal aid, and court reform remain essential components of a functioning justice ecosystem. However, ethical capital represents an additional structural layer that can expand the range of cases and remedies that receive financial support.

Impact orientation can accommodate longer time horizons, alternative dispute resolution pathways, and remedies that extend beyond monetary damages. In certain matters, particularly those involving environmental harm, systemic consumer violations, or community-wide injustice, capital structured around impact metrics may identify and enable solutions that traditional litigation finance models do not prioritize.

For example, capital aligned with defined impact frameworks may support outcomes that include remediation programs, compliance reforms, or community investments alongside financial recovery. These approaches can create durable benefits that outlast a single judgment or settlement.

Of course, solving deep-rooted inequities and legal system complexity requires more than new tools and new investors. It requires designing capital pathways that are repeatable, accountable, and aligned with measurable public benefit.

Although justice finance may not be a fit for every case and has yet to see widespread uptake, it does have the potential to reach cases that currently fall through the cracks — cases that have merit, despite falling outside traditional litigation finance models and legal aid or impact litigation eligibility criteria.


You can find other installments of our Scaling Justice blog series here

Human layer of AI: How to build human-centered AI safety to mitigate harm and misuse

Key highlights:

      • Map risks before building — Distinguish between foreseeable harms that may be embedded in your product’s design and potential misuse by bad actors.

      • Safety processes need real authority — An AI safety framework is only credible if it has the power to delay launches, halt deployments, or mandate redesigns when human rights risks outweigh business incentives.

      • Triggers enable proactive intervention — Define clear, automatic review triggers such as product updates, geographic expansion, or emerging patterns in user reports to ensure your safety processes adapt as risks evolve rather than reacting after harm occurs.


In recent months, the human cost of AI has become impossible to ignore. Tragedies have followed users’ interactions with AI chatbots, while generative AI (GenAI) tools have been weaponized to create images that digitally undress women and children. These tragedies underscore that the gap between stated values around AI and actual safeguards remains wide, despite major tech companies publishing responsible AI principles.

Richard-Carvajal, a senior associate who works at the intersection of technology and human rights, argues that closing this gap requires companies to: i) systematically assess both foreseeable harms from intended AI use and plausible misuse by bad actors; and ii) build safety processes powerful enough to actually stop launches when risks to people outweigh commercial incentives.

Detailing the two-step framework for anticipating and addressing AI risks

To build effective AI safety processes, companies must first understand what they’re protecting against, then establish credible mechanisms to act on that knowledge.

Step 1: Mapping foreseeable harms and intentional misuse

When mapping AI risks during “responsible foresight workshops” with clients, Richard-Carvajal says she takes them through a process that identifies:

    • foreseeable harms that emerge from a product’s design itself. For example, algorithm-driven recommender systems — which often are used by social media platforms to keep users on the site — are designed to drive engagement through personalized content, and their amplification of sensationalist, polarizing, and emotionally harmful content is well documented, according to Richard-Carvajal.
    • intentional misuse that involves bad actors who may weaponize technology beyond its purpose. Richard-Carvajal points to the example of Bluetooth tracking devices, which initially were designed to help people find lost items but were quickly exploited by stalkers, who placed them in victims’ handbags in order to track their movements and, in some cases, to follow them home.

Tactically, Richard-Carvajal and her colleagues role-play “bad actor personas” to help clients imagine misuse scenarios, ensuring companies anticipate harm before it occurs rather than responding after people have been hurt.

Step 2: Building a credible AI safety process

Once risks are identified, Richard-Carvajal says she advises that companies identify mechanisms to address them. The components of a legitimate AI safety framework mirror the structure of robust human rights due diligence by centering on the risks to people.

Indeed, Richard-Carvajal identifies core components of this framework, which include: i) hazard analysis to anticipate both foreseeable harms and potential misuse; ii) incident response mechanisms that allow users to report problems; and iii) ongoing review protocols that adapt as risks evolve.
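As an illustration of component iii, the sketch below shows what automatic review triggers could look like in code. The trigger names and the report threshold are assumptions for demonstration, not a prescribed standard.

```python
# Hypothetical review triggers: each fires on events the article names
# (product updates, geographic expansion, patterns in user reports).
REVIEW_TRIGGERS = {
    "product_update": lambda e: e.get("type") == "release",
    "new_geography": lambda e: e.get("type") == "market_expansion",
    "report_spike": lambda e: e.get("type") == "user_reports"
                              and e.get("count", 0) > 50,  # assumed threshold
}

def fired_triggers(event: dict) -> list[str]:
    """Return the triggers an event sets off, so a safety review is
    scheduled proactively rather than after harm has occurred."""
    return [name for name, fires in REVIEW_TRIGGERS.items() if fires(event)]

print(fired_triggers({"type": "user_reports", "count": 120}))
# -> ['report_spike']
```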

Continual evaluation of emerging risks is needed

As AI capabilities advance and deployment contexts expand, companies must continuously reassess whether their existing safeguards remain adequate against evolving threats to privacy, vulnerable populations, human autonomy, and explainability. Richard-Carvajal discusses each one of these factors in depth.

Privacy — Traditional privacy mitigations, such as removing information that leads to identifying specific individuals, are no longer sufficient as AI systems can now re-identify individuals by linking supposedly anonymized data back to specific people or using synthetic training data that still enables re-identification. The rise of personalized AI — in which sensitive information from emails, calendars, and health data aggregates into comprehensive profiles shared across third-party providers — can create new privacy vulnerabilities.

Children — Companies must apply a heightened risk lens for vulnerable populations, such as children, because young users lack the same capacity as adults to critically assess AI outputs. Indeed, the growing concerns around AI usage and children are warranted because AI-generated deepfakes involving real children are being created without their consent. In fact, Richard-Carvajal says that current guidance calls for specific child rights impact assessments and emphasizes the need to engage children, caregivers, educators, and communities.

Cognitive decay — A growing concern is that too much AI usage can harm human autonomy and contribute to a decline in critical thinking. This occurs when people offload too much of their thinking to AI tools, and it has the potential to undermine their human rights in regard to work, education, and informed civic participation.

Meaningful explainability — Companies’ commitment to explainability as a core tenet of their responsible AI programs has always been a challenge. As synthetic AI-generated data increasingly trains new models, explainability becomes even more critical because engineers may struggle to trace decision-making through these layered systems. To make explainability meaningful in these contexts, companies must disclose AI limitations and appropriate use contexts, while maintaining human-in-the-loop oversight for consequential decisions. Likewise, testing explanations should require engagement with actual rights holders instead of just relying on internal reviews.

Moving forward safely

While no universal checklist exists for AI safety, the systematic approach itself is non-negotiable. Success means empowering engineers to identify and address human-centered risks early, maintaining ongoing stakeholder engagement, and building safety processes that have genuine authority to delay launches, halt deployments, or mandate redesigns when human rights risks outweigh commercial pressures to ship products.

If your company builds or deploys AI, take action now: Give your engineers and risk teams the authority and resources to identify harms early, keep continuous engagement with affected people and independent stakeholders, and create governance structures that have the power to keep harm from happening.

Indeed, companies need to make sure these steps go beyond simple best practices on paper and make these protective processes operational, measurable, and enforceable before their next product release.


You can find more about human rights considerations around AI in our ongoing Human Layer of AI series here

Inside the Shift: What happens in the professional workplace when AI does too much?

You can read TRI’s latest “Inside the Shift” feature, The human side of AI: The growing risks of ubiquitous use of AI on talent, here


It’s no exaggeration to say that AI is everywhere in our workplaces right now. It writes our emails, summarizes our meetings, generates slides, and even helps us think through problems. On the surface, this may sound like progress — and in many ways, it is.

However, our latest Inside the Shift feature, The human side of AI: The growing risks of ubiquitous use of AI on talent, by Natalie Runyon, Content Strategist for Sustainability and Human Rights Crimes for the Thomson Reuters Institute, makes a clear and timely point: When AI use becomes excessive and unchecked, it can quietly undermine the very people it’s meant to help.


One major consequence of cognitive decay is the weakening of the brain’s capacity to engage deeply, question systematically, and — somewhat ironically — resist the potential manipulation of AI.


As the article details, these harms caused by AI overuse can include a slow erosion of human connections, a loss of a professional’s sense of purpose, and a general sense of feeling overwhelmed in the workplace.

Of course, the solution isn’t to reject AI; it’s to use it better. To this end, the article makes a strong case for organizations to foster hybrid intelligence, a process by which human judgment and creativity work alongside AI capabilities.

In today’s workplace, AI can be a powerful advantage; however, that is only true if organizational leaders remember that technology should enhance the human experience, not replace the parts of professional life that workers value.


To examine this and many more situations, the Thomson Reuters Institute (TRI) has launched a new feature segment, Inside the Shift, that leverages our expert analysis and supporting data to tell some of the most compelling stories in professional services today

The child exploitation crisis online: Gaps in digital privacy protection

Key highlights:

      • Fragmented protection creates vulnerability — Current US privacy laws operate as a patchwork system without comprehensive national standards, leaving children and other users exposed to data exploitation across state lines and international borders.

      • Body data collection opens future manipulation potential — Virtual reality platforms collect granular biometric information through sensors that can reveal deeply sensitive information about users.

      • Use-based regulations outlast technology changes — Restricting harmful applications of data provides more durable protection than the current regulatory approach, which relies on categorizing rapidly evolving data types.


Virtual reality (VR), social media, and gaming companies have long avoided robust content moderation, largely out of concern over implementation costs and the risk of alienating users. This reluctance stems from platforms wanting the widest possible pool of users. Yet the shortsightedness of this decision has consequences, including insufficient protection of children and long-term costs to companies’ bottom lines.

The child exploitation crisis in digital spaces requires better laws and a reimagining of how VR, gaming, and social media companies balance privacy, safety, and accountability across diverse platform architectures, according to Mariana Olaizola Rosenblat, an expert in child exploitation methods in digital spaces and Policy Advisor at the NYU Stern Center for Business and Human Rights.

Limitations of existing regulatory frameworks

The current regulatory landscape is insufficient to protect children online. The lack of a comprehensive national privacy law in the United States, the use of consent mechanisms, and the haphazard rollout of age verification all expose protection gaps and come with economic and psychological costs, according to Olaizola Rosenblat. For example, some of the dangers include:

Gaps in patchwork of regulations leave children vulnerable — Regulatory demands for child safety often collide with privacy protections, creating contradictory obligations that platforms cannot realistically satisfy. In the absence of unified standards, however, companies operate in a jurisdictional maze that leaves most users, including children, exposed to data exploitation across borders.

America’s regulatory landscape remains especially fragmented, with no comprehensive national privacy law to provide consistent protection. Some state legislation comes close to establishing meaningful safeguards, according to Olaizola Rosenblat, yet it still permits companies to collect data even after users opt out of the sale or sharing of their data.

Mariana Olaizola Rosenblat, of the NYU Stern Center for Business and Human Rights

Federal reform attempts collapsed amid conflicts between states demanding stronger protections and tech lobbyists aligned with conservative representatives seeking weaker standards. In addition, child-specific laws, such as the Children’s Online Privacy Protection Act (COPPA), provide protection only for those under 13, which leaves older minors and adults vulnerable.

“Once users turn 13, they fall off a regulatory cliff,” says Olaizola Rosenblat. “There is no federal child-specific data protection regime, and existing state-level safeguards are patchy and largely ineffective for teens.”

Internationally, the European Union’s General Data Protection Regulation (GDPR), although considered the gold standard for regulation, suffers from a persistent gap between its ambitious text and its uneven enforcement.

Age verification tensions — These regulatory shortcomings also are evident in debates over age verification. Protecting children requires collecting data to determine user age, yet privacy advocates frequently oppose such measures. Without pragmatic guidance acknowledging these inherent trade-offs, platforms often face contradictory obligations they cannot simultaneously fulfill.

Current consent frameworks offer little protection — Current consent mechanisms offer users an illusory choice that fails to protect children from data exploitation. Even relatively robust frameworks like the GDPR rely on consent models in which refusal means exclusion from digital spaces essential to modern life. This approach proves particularly inadequate for younger users. Indeed, research has found that about one-third of Gen Z respondents expressed indifference to online tracking.

VR data collections may allow future exploitation

VR platforms differ fundamentally from traditional gaming spaces and social media platforms. Users with VR headsets embody avatars that move through thousands of interconnected experiences. While no physical contact occurs, the experiences feel visceral: the psychological and physiological responses can mirror aspects of real-world experiences, including sexual exploitation.

Olaizola Rosenblat explains that the data collected from the sensors can open up the potential for future exploitation. “The inferences that can be drawn from your body-based data collected by these sensors are granular and often intimate,” she explains. “The power that gives to companies is pretty remarkable in terms of knowing things about you that you might not even know yourself.”

Recommended actions to address challenges

Addressing the child exploitation crisis in digital spaces requires coordinated action, according to Olaizola Rosenblat, and that needs to include:

Universal protection standards — Corporate action in partnership with legislators is necessary for effective reform that protects all users rather than fragmenting safeguards by age or vulnerability status. Current approaches that shield only younger children create dangerous gaps and leave adolescents and adults exposed once they age out of protected categories.

Enforce existing regulations — Even well-crafted legislation proves meaningless without robust enforcement mechanisms. Commitment by government agencies along with the appropriate levels of funding is the most meaningful approach to achieve desired outcomes.

Technology-agnostic use regulation — Rather than attempting to categorize rapidly evolving data types, companies in the VR, gaming, and social media sectors must work with legislators to restrict harmful uses of data such as manipulation, exploitation, and unauthorized surveillance, regardless of technical collection methods. Regulating data use — rather than the current method of regulation based on categories of data, which include personally identifiable information — is the right approach.

Public mobilization is essential — Citizens must understand that the stakes of data exploitation extend beyond corporate collection to include hacking vulnerabilities and manipulative deployment. Without consumer demand for better protection and the willingness of legislators to pass laws, regulation will not happen.

The path forward

The digital exploitation of children demands immediate action that transcends partisan divides and corporate interests. Only through coordinated regulatory reform, meaningful enforcement, and sustained public pressure can we create digital spaces in which innovation thrives without sacrificing our privacy and safety. The cost of continued inaction grows steeper each day we delay.


You can find out more on how organizations and agencies are fighting child exploitation here

Human Layer of AI: The crosswinds of AI, sustainability, and human rights enter the mainstream in 2026

Key takeaways:

      • Clean energy takes center stage in corporate AI initiatives — Access to cheap, low‑carbon power will become a core driver of AI competitiveness, especially in the US, where electricity costs are on the rise.

      • Corporate buyers of AI will exert new leverage over suppliers — Corporate buyers will increasingly use their purchasing power to push data center operators to align AI build‑outs with local climate, water, and community expectations — not just to supply more metrics.

      • AI’s human labor layer enters mainstream due diligence — AI labor supply chains will be brought into the mainstream supply chain and require human rights due diligence.


As we enter 2026, there are three main themes that many corporations will need to manage around issues of renewable energy, AI supplier behavior, and labor.

Theme 1: Renewables move to the center of corporate AI strategies

In 2026, AI competitiveness and energy policy will be tightly fused. With AI workloads driving up electricity demand amid data center buildouts, particularly in the United States, access to renewable energy sources in the form of abundant, cheap, low‑carbon power becomes a decisive factor in AI pricing and availability. Countries and companies that lock in this advantage early will shape AI deployment patterns for the rest of the decade.

“The economics of renewable energy are what is causing it to accelerate, even in the US,” says Friedman, an expert in sustainability and business. “Despite the political winds, the fact is that wind and solar are growing faster… because it is cheaper, better energy.”

In addition, countries and firms with large, subsidized renewable energy capabilities and flexible grids, such as China’s massive solar, wind, and hydro infrastructure, will have a low-cost advantage. (However, countries’ push for AI may counteract this by prompting governments to prioritize domestic AI stacks over purely cost‑optimized ones.) Yet, combining this asset with China’s rapidly advancing AI models, such as Kimi K2 and DeepSeek, it is not outside the realm of possibility that the country could emerge in the top spot in AI development and innovation.

Corporate pressure to increase AI adoption for efficiency, combined with stakeholder expectations of investing in a low-carbon future, will make renewables the center of corporate AI strategies. Increasingly, companies will be asked where their compute runs, what energy mix powers it, how cost-effective that energy mix is, and whether they are effectively endorsing environmentally and socially harmful projects in host communities.

Theme 2: Local backlash forces suppliers and companies to confront AI’s impact

Over the last few years, big names among AI infrastructure providers have sought to capitalize on the AI revolution, investing heavily in AI-related data centers, cloud systems, and other infrastructure with no end in sight over the next few years.

Despite the demand, local communities in which large data center construction projects are planned are pushing back. By one estimate, $64 billion of data center projects in the US have been blocked or delayed amid local opposition since 2025. This opposition comes in part because of concerns regarding rising electricity prices, strains on local water and natural resources, and the reduction of working farmland from data center rezoning attempts in rural communities.

In fact, AI data centers are pushing up electricity demand and fueling higher electricity prices for many US households. And, as retail electricity price increases over the next couple of years are likely to continue, it will be in part because of data centers consuming more electricity.

As a result, the demand from stakeholders — in particular, those from local communities including local and state politicians — for increased transparency on the environment and social impacts of corporate AI services is likely to surge. In turn, corporate buyers of AI services will put pressure on the big AI service suppliers to provide more precision in the locations of such data systems as well as disclose more associated sustainability data, such as energy sources, grid impacts, and their level of community engagement where large AI infrastructure is based.

To deal with these competing priorities, boards of companies using AI services will need to reconcile AI cost‑cutting with their transition commitments by ensuring that cost advantages are not built on externalizing environmental and social harms.

Not surprisingly, in 2026, more boards will be drawn into explicit debates about whether AI‑driven cost savings justify exposure to higher community, political, and regulatory risk. This turns questions about data center locations and power contracts into mainstream agenda items.

Theme 3: The human layer of AI emerges as a centerpiece of the supply chain

The idea that AI is automating everything will sit uncomfortably alongside a growing recognition that large‑scale AI depends on a largely invisible workforce. Across the full AI life cycle of products — some of which rely on models that utilize labor in data collection, curation, annotation, labeling, evaluation, and content moderation — there are thousands of workers performing the tasks that make models safe, accurate, and usable.

As AI systems scale across sectors, demand for this human labor increases in volume and complexity, according to a human rights expert at Article One Advisory. Indeed, much of it remains outsourced, precarious, or gig‑based (often in the Global South), with low pay, weak protections, and exposure to psychologically harmful content rampant. Civil society, unions, and regulators are beginning to connect AI innovation with labor rights and occupational health; and this reality makes the human layer of AI a frontline human rights issue rather than a technical detail.

Due diligence for AI‑related labor is likely to move from a niche concern to a mainstream pillar of corporate human rights due diligence. Companies will be under pressure to know what subcontractors and suppliers are doing to ensure human rights for individuals doing AI data enrichment and moderation work, under what conditions, and through which intermediaries.

Following the evolution of how conflict minerals or modern slavery have been integrated into supplier management, a shared view of AI labor supply chains by corporate procurement, legal, product management, and sustainability teams will materialize.

Forward into 2026

As AI becomes embedded in the infrastructure of daily life, companies will face mounting pressure to demonstrate that their AI strategies align with human rights and environmental commitments, not just efficiency gains. The convergence of these three themes signals that transparency in AI governance in 2026 will be inseparable from broader corporate governance and responsibility. And those organizations that treat these themes as compliance checkboxes rather than fundamental design principles will risk both reputational damage and operational disruption in an increasingly scrutinized landscape.

“Companies that fear the exaggerated risk of attracting the ire of activists are underestimating the greater risk of losing the goodwill of customers, investors, and employees that they need,” Friedman adds.


You can find out more about how companies are managing issues of sustainability here

Impact of AI on critical thinking: Challenges and opportunities for lawyers

Key insights:

      • Cognitive offloading is a significant risk — The correlation between increased AI usage and decreased critical thinking, known as cognitive offloading, poses a threat to effective legal practice, especially with the rise of autonomous agentic AI.

      • Agentic AI risks and opportunities — The next generation of agentic AI poses significant challenges to lawyers’ critical thinking skills, but it also offers opportunities for lawyers to enhance their analytical rigor and human insight.

      • Agentic AI can enhance critical thinking when properly leveraged — When designed by lawyers, for lawyers and used to augment human judgment in legal workflow tasks — such as discovery, contract analysis, and drafting — agentic AI can improve efficiency, deepen analysis, and allow legal professionals to focus on higher-value critical thinking tasks.


The legal profession is at a critical juncture as AI becomes increasingly sophisticated. Recent research has uncovered a troubling correlation between the use of AI and the decline in critical thinking abilities among legal professionals. This phenomenon, known as cognitive offloading, threatens the very foundation of effective legal practice.

Studies have shown a clear pattern linking AI use, cognitive offloading, and critical thinking. According to recent research, there is a notable correlation between increased AI usage and diminished critical thinking performance among individuals. Moreover, as people offload more mental work to AI tools, their critical thinking scores tend to be lower. While correlation does not necessarily imply causation, this pattern is strong enough to warrant proactive measures to safeguard critical thinking skills.

The findings from the study have implications for lawyers. First, it is essential to design workflows that ensure attorneys retain ownership of problem framing, authority weighting, and strategic judgment. Human checkpoints should be inserted at key decisions, and transparent evidence trails should be maintained. For junior lawyers, it is crucial to preserve desirable difficulty reps — basically, the baseline skill-building experience — before they consult AI. By pairing these guardrails with outcome tracking, law firms can harness AI’s speed and scale while minimizing the risks associated with cognitive offloading.
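As a sketch of what such a human checkpoint with a transparent evidence trail could look like in a firm’s workflow tooling, consider the following. The stage names, roles, and log format are hypothetical.

```python
import datetime

AUDIT_LOG: list[dict] = []  # transparent evidence trail, reviewable later

def checkpoint(stage: str, ai_output: str, reviewer: str,
               approved: bool, rationale: str) -> str:
    """Pause the workflow at a key decision: an attorney reviews the AI's
    output, and the decision plus rationale are logged for later audit."""
    AUDIT_LOG.append({
        "stage": stage,
        "reviewer": reviewer,
        "approved": approved,
        "rationale": rationale,
        "timestamp": datetime.datetime.now().isoformat(),
    })
    return ai_output if approved else f"[{stage}] returned for attorney rework"

# Example: a junior associate verifies AI-suggested authorities herself
# before sign-off, preserving the skill-building "reps" noted above.
# (The citation is a placeholder, not a real case.)
result = checkpoint("authority_check", "Cited: Smith v. Jones (2019)",
                    reviewer="jr_associate", approved=True,
                    rationale="Citation verified against primary source")
print(result, "| audit entries:", len(AUDIT_LOG))
```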

Risks increase with agentic AI

The next wave of AI-powered legal tech involves agentic AI, which operates as autonomous agents. These agents can plan and execute complex workflows independently, make real-time decisions, and adapt strategies without constant human input. This autonomy intensifies cognitive offloading risks by enabling workflow automation beyond human oversight and strategic cognitive offloading, while magnifying the black box problem. (Basically, these are situations in which a system’s internal workings are hidden: users may know what goes in and what comes out, but not how the system arrives at its decisions.)


To mitigate the risks associated with cognitive offloading, legal professionals can leverage agentic AI tools designed to enhance critical thinking.


The autonomous nature of agentic AI creates unprecedented professional responsibility challenges, including supervision standards, competence requirements, and explaining AI-developed strategies to clients. The legal profession faces significant challenges that could accelerate skills atrophy, such as new attorneys missing opportunities to develop foundational analytical skills, lawyers becoming dependent on AI, and AI handling strategic planning.

To mitigate the risks associated with cognitive offloading, legal professionals can leverage agentic AI tools designed to enhance critical thinking. For instance, AI-driven legal research and analysis platforms can make every step of the legal workflow more transparent, testable, and adversarially robust. These tools use custom-trained, agentic AI to produce transparent, step-by-step research notes and comprehensive reports that present arguments on both sides.

Illuminating examples of critical thinking skills

Agentic AI is transforming legal practice by enhancing critical thinking skills through various applications, and these innovative uses of AI not only improve efficiency but also augment human judgment. This in turn enables lawyers to focus on higher-value tasks that require critical thinking, creativity, and nuanced understanding. Several examples illustrate how agentic AI can enhance critical thinking in legal practice, such as:

      • Discovery — Autonomous analysis engines have uncovered patterns that traditional keyword searches missed. In one commercial litigation case, an agent found subtle shifts in executive language precisely around the period of alleged misconduct. The agent was able to explain why those patterns mattered and then tied each inference to source documents.
      • Contract analysis — In M&A diligence, agentic AI examined hundreds of legacy agreements and flagged indemnification variants that created potential exposure issues. With about 94% accuracy, transparent AI reasoning supported a targeted remediation strategy that averted post-closing liability.
      • Drafting workflows — Expert-designed, multi-step workflows assemble relevant know-how, generate first drafts to specification, and require counterarguments and verification before stylistic polish is done. This approach has been shown to reduce review time by roughly 63% and legal know-how tasks by about 10%.

As we are learning, agentic AI strengthens core litigation work by preserving human judgment while expanding pattern detection, accelerating theory testing, and deepening client advocacy. By handling comprehensive case law analysis and factual pattern identification, agentic AI frees litigators to develop creative legal theories, anticipate opposing strategies, and craft nuanced arguments.

Thus, to better elevate critical thinking in legal work, it is essential to use AI that is designed by lawyers, for lawyers. Domain-specific AI legal assistants provide nuanced insights that inform sharper, more strategic decisions. And expert-guided analytical workflows support comprehensive analysis without encroaching on professional judgment, ensuring that attorneys can interrogate sources confidently and build arguments on solid ground.

By embracing agentic AI as a collaborative counterpart, legal professionals can heighten analytical rigor and human insight — the very qualities that make legal practice both powerful and purposeful. As opportunities expand, so does the potential for creating more positive impact for clients, engaging in complex problem-solving, and advancing access to justice for more people.


You can find out more about the impact of AI and other advanced technologies on the legal profession here

Human Layer of AI: Protecting human rights in AI data enrichment work

Key highlights:

      • Human rights risks are elevated for data enrichment workers — Data enrichment workers can face low and unstable pay, overtime pressure driven by buyer timelines, harmful content exposure with weak safeguards, limited grievance access, and uneven legal protections that hinder workers’ collective voice.

      • Human rights due diligence is essential for companies — Companies as buyers of these services must map subcontracting tiers, assess risk by employment model, document worker protections down to Tier-2 and Tier-3 suppliers, and audit and monitor their own rates, timelines, and payment terms to avoid reinforcing harm to workers.

      • Responsible contracting and remedy are a necessity — Contracts should embed shared responsibility, and include fair rates, predictable volumes, realistic deadlines, funded health & safety and mental‑health supports, effective grievance channels, and remediation.


Demand for data enrichment work has surged dramatically with the rapid development and expansion of AI technology. This work encompasses collecting, curating, annotating, and labeling data, as well as providing model training and evaluation — all of which are critical activities that improve how data functions in technological systems.

However, the workers performing these tasks currently operate under different employment models, according to Lloyd of Article One Advisors, a corporate human rights advisory firm. Some workers are in-house employees at major AI developers, others work for business process outsourcing (BPO) companies, and many are independent contractors on gig platforms on which they bid for tasks and get paid per piece.

Human rights issues in data enrichment work

Data enrichment workers sit at the sharp end of the AI economy, yet many struggle to earn a stable, decent income. In particular, pay for gig workers often falls short of a living wage because tasks are sporadic, payments can be delayed, and compensation is frequently piece‑rate. Because work flows through multiple subcontracting tiers, fees and margins get skimmed at each layer and shrink take‑home pay — another area of exploitation for today’s digital labor workforce.

In addition, excessive working hours infringe on workers’ right to rest, leisure, and family life and, in some places, even breach guidance from the International Labour Organization (ILO) or local labor laws. Buyer purchasing practices with aggressive deadlines are a significant upstream driver of this overtime pressure.


National labor protections vary widely, and platform workers in particular often fall through regulatory gaps.


For many, the work itself carries health risks. Labeling and moderation can require repeated exposure to violent or graphic content, with well‑documented mental‑health impacts. Yet safeguards are uneven. Indeed, workers may lack protected breaks, task rotation, mental‑health support, adequate insurance, or the option to switch assignments. Even when content is not graphic, strain shows up as ergonomic problems, stress, and disrupted sleep.

When harm occurs, remedy can be hard to access. Platform-based work setups often provide no clear, trusted point of contact, and reports of retaliation deter complaints. Effective operational grievance mechanisms can be missing, and this leaves workers without credible paths to redress.

Finally, national labor protections vary widely, and platform workers in particular often fall through regulatory gaps. Because work is individualized and online, forming unions or works councils is harder. This weakens workers’ collective voice just where and when it is most needed to identify risks, negotiate improvements, and secure remedies.

Due diligence for companies buying data enrichment services is essential

When companies procure data enrichment services, they must recognize that respecting human rights extends throughout the entire value chain, not just to themselves and their direct suppliers. When companies create trusted partnerships with their suppliers, they can identify issues before they become harmful and build mutual accountability for the humans behind the algorithms.

Article One Advisors’ Lloyd explains that the mandatory baseline starts with human rights due diligence, which spans areas such as:

      • Risk identification and assessment — The first step for companies is to identify and assess risks by understanding their suppliers’ model. This means knowing which groups of workers are full-time employees, contracted workers, or platform-based gig workers. Each model carries a different risk profile.
      • Subcontractor ecosystem mapping — Tracing the subcontracting chain to see how many layers exist between the supplier and the workers is essential. Fees and pressures compound at each tier of the value chain, says Lloyd.
      • Documentation of worker protections in Tier 2 and Tier 3 suppliers — Assessing and promoting worker protections for every layer of the value chain — which includes making sure that wage structures are clearly defined and equitable, health and safety measures are adequate, and protections for exposure to harmful content and effective grievance mechanisms exist — are baseline elements of human rights due diligence.
      • Examination of company’s own practices — Finally, it is necessary for companies to ensure that their own procurement standards and contracts are not reinforcing human rights harms. This includes companies confirming that their contract terms, timelines, and payment schedules are not inadvertently forcing suppliers to cut corners.

Responsible contracting and remedy mechanisms

Companies as buyers of data enrichment services also must instill shared responsibility for worker outcomes among themselves, BPOs, platforms, and model developers. Comprehensive, clear human-rights standards, living-income benchmarks, and shared responsibility are essential elements of good purchasing practices. More specifically, these require fair rates for work, predictable volume expectations, and realistic timelines to make sure suppliers do not push excessive hours. In addition, budgets should include cost-sharing for audits, key risk management measures (such as mental health support), and occupational health and safety controls.

Smart remediation turns harmful situations into improved conditions by providing back-pay for underpayment, medical and psychosocial care after exposure to harmful content, contract adjustments to remove perverse incentives, and time-bound corrective action plans co-designed with worker input. As a last resort when buyer and supplier need to part ways, a responsible exit is planned with notice, transition support, and no sudden contract termination that strands workers.

Similarly, grievance mechanisms for platform workers — who are often dispersed across geographies, classified as independent contractors, and lack line managers or union channels — need to be contractually documented. Effective grievance redressal needs to include confidential mechanisms and remediation processes, in-platform dispute tools, independent individuals to investigate complaints, multilingual facilitation, and joint buyer-supplier escalation paths to bridge gaps in labor-law protection and deliver credible remedies at scale, Lloyd notes.

Promoting quality through worker well-being

Protecting data enrichment workers is not only an ethical imperative but also essential for AI quality itself. When workers face excessive hours, inadequate pay, or harmful content exposure without proper support, the resulting stress and burnout directly impact data quality outcomes. Companies must recognize that responsibility for worker well-being and quality data outcomes extends throughout the entire value chain and does not rest with BPO providers alone.


You can find more about the challenges companies and their workers face from forced labor in their supply chain here

The Human Layer of AI: How to build human rights into the AI lifecycle

Key takeaways:

      • Build due diligence into the process — Make human-rights due diligence routine from the decision to build or buy through deployment by mapping uses to standards, assessing severity and likelihood, and closing control gaps to prevent costly pullbacks and reputational damage.

      • Identify risks early on — Use practical methods to identify risks early by engaging end users and running responsible foresight and “bad headlines” exercises.

      • Use due diligence to build trust — Treat due diligence as an asset and not a compliance box to tick by using it to de‑risk launches, uncover user needs, and build durable trust that accelerates growth and differentiates the product with safety-by-design features that matter to buyers, regulators, and end users.


AI is reshaping how we work, govern, and care for one another. Indeed, individuals are turning to cutting-edge large language models (LLMs) to ask for emotional help and support in grieving and coping during difficult times. “Users are turning to chatbots for therapy, crisis support, and reassurance, and this exposes design choices that now touch the right to information, privacy, and life itself,” says Poynton, co-founder and Principal at Article One, a management consulting firm that specializes in human rights and responsible technology use.

These unexpected uses of AI are reframing risk because in these instances, safeguards cannot be an afterthought. Analyzing who might misuse AI alongside determining who will benefit from its use must be built into the design process.

To put this requirement into practice, a human rights lens must be applied across the entire AI lifecycle, from the decision to build or buy through deployment and use. Doing so helps companies anticipate harms, prioritize safeguards, and earn durable trust without hampering innovation.

Understanding human rights risks in the AI lifecycle

Human rights risks can surface at every phase of the AI lifecycle. In fact, they first emerged in efforts to train frontier LLMs for content moderation functions and are now showing up elsewhere. For example, the data enrichment workers who refine training data and the data center staff who power these systems are the most likely to face labor risks. Often located in lower-income markets with weaker protections, they face low wages, unsafe conditions, and limits on other freedoms.

During the development phase, biased training sets and the probabilistic nature of models can generate misinformation or hallucinations, which can further undermine rights to health and political participation. Likewise, design choices can often translate into discriminatory outcomes.

Unfortunately, the use of AI-enabled tools also can compound these harms. Powerful models can be misused for fraud or human trafficking. In addition, deeper integration with sensitive data can heighten privacy and security risks.

A surprising pattern in the field exacerbates the risk: People increasingly use AI for therapy-like support and disclose emotional crises and thoughts of self-harm. This intimacy widens product and policy obligations, which include age-aware safeguards and clear limits on overriding protections.

Why human rights due diligence is urgent

That’s why human rights due diligence must start with people, not the enterprise. By embedding human rights due diligence into the lifecycle of AI, development teams can begin to understand the technology and its intended uses, then map those uses to international standards. Next, a cross-functional team gathers to weigh benefits alongside harms and to consider unintended uses. Above all, they need to answer the question, “What happens if this technology gets into the hands of a bad actor?”

From there, the process demands an analysis of severity, which assesses the scale, scope, and remediability of each potential harm, as well as the likelihood of each use. The final step involves evaluating current controls across supply chains, model design, deployment, and use phases to identify gaps.
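To make the severity-and-likelihood step concrete, here is a minimal sketch of how a team might encode such a risk register in Python. The 1-to-5 scales, field names, and weighting are illustrative assumptions for this article, not a scoring method prescribed by Article One or any particular standard.

```python
from dataclasses import dataclass

@dataclass
class AIUseRisk:
    """One intended or unintended use of an AI system (hypothetical schema)."""
    use_case: str
    scale: int              # how grave the harm would be, 1-5
    scope: int              # how many people could be affected, 1-5
    remediability: int      # how hard the harm is to reverse, 1-5
    likelihood: int         # how probable the harm is, 1-5
    controls_in_place: bool

    @property
    def severity(self) -> float:
        # Average the three severity dimensions named in the process above.
        return (self.scale + self.scope + self.remediability) / 3

    @property
    def priority(self) -> float:
        # Rank risks by severity weighted by likelihood.
        return self.severity * self.likelihood

risks = [
    AIUseRisk("misuse by a bad actor for fraud", 4, 4, 3, 3, controls_in_place=False),
    AIUseRisk("self-harm disclosures in chat", 5, 3, 5, 4, controls_in_place=True),
]

# Surface the highest-priority risks that still lack controls; these are the gaps.
for risk in sorted(risks, key=lambda r: r.priority, reverse=True):
    if not risk.controls_in_place:
        print(f"CONTROL GAP: {risk.use_case} (priority {risk.priority:.1f})")
```

Even a toy register like this forces the cross-functional team to make its severity and likelihood judgments explicit and comparable across use cases, which is the point of the exercise.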

The biggest barrier to layering a human rights lens into AI is the need for speed to market. The race to put out minimum viable products, accompanied by competitive pressure, can eclipse robust governance, yet early due diligence may prevent costly pullbacks and bad headlines. Article One’s Poynton notes that no one wants to see their product on the front page for enabling stalking or spreading disinformation. Building safeguards early “ensures that when it does launch, it has the trust of its users,” she adds.

How to embed safeguards without slowing teams

The most efficient path in translating human rights into the AI product lifecycle is to turn policy principles, goals, and ambitions into actionable steps for the engineers and the product teams. This requires the “engineers to analyze how they do their work differently to ensure these principles live and breathe in AI-enabled products,” Poynton explains. More specifically, this includes:

Identifying unexpected harms — One of the most critical, yet difficult, components of the human rights impact assessment is brainstorming potential harms. Poynton recommends two ways to make this happen: First, engage with end users to help identify potential harms by asking, “What are some issues that we may not be considering from the perspectives of accessibility, trust, safety, and privacy?” Second, run responsible foresight workshops at which individuals play the parts of bad actors to better identify harms and uncover mitigation strategies quickly. Pair that with a bad-headlines exercise to anticipate front-page failures. Then ship with these protections in place before launch.

Implementing concrete controls — Embedding safety-by-design should cover both content and contact, a lesson from gaming in which grooming risks require more than just filters. Build age‑aware and self‑harm protocols, including parental controls and principled policies on overrides. Govern sales and access with customer vetting, usage restrictions, and clear abuse‑response pathways. In the supply chain, set supplier standards for enrichment and data center work that include fair wages, safe conditions, freedom of association, and grievance channels.

Treating due diligence as value-creating, not box-checking — Crucially, frame due diligence as an asset rather than a liability. “Make your product better and ensure that when it does launch, it has the trust of its users,” Poynton adds.

Additional considerations

Addressing equity must be front and center. Responsible strategies include diversifying training sets without exploiting communities and giving buyers clear provenance statements on data scope and limits.

Bridging the digital divide is equally urgent. Bandwidth and device gaps risk amplifying inequality if design and deployment assume privileged contexts. In the workplace, Poynton stresses that these impacts will be compounded, from entry-level to expert roles.

Finally, remember that AI’s environmental footprint is a human rights issue. “There is a human right to a clean and healthy environment,” Poynton notes, adding that energy and water demands must be measured, reduced, and sited with respect for local communities, even as AI helps accelerate the clean energy transition. This is a proactive mandate.


You can find out more about the ethical issues facing AI use and adoption here

5 steps for fostering ethical corporate cultures /en-us/posts/corporates/5-steps-ethical-corporate-cultures/ Thu, 30 Oct 2025 13:47:53 +0000 https://blogs.thomsonreuters.com/en-us/?p=68229 This blog post was written by Max Beilby, an organizational psychologist specializing in applying behavioral science to enhance culture and risk management within financial services; Antoine Ferrere, the CEO of lumenx.ai and a recognized leader in applying behavioral science, data science, and AI for good; and Brian R. Spisak, PhD, a leading voice at the intersection of digital transformation and workforce management. The views expressed in this article are Max’s alone and do not reflect the views or opinions of his employer.

Key insights:

      • Ethics must be embedded, not bolted on — Corporate leaders should move beyond legal compliance to proactively weave ethics into decision-making and define their legacy by how results are achieved, not just what is achieved.

      • Misconduct is usually systemic, not individual — Unethical behavior often arises from environments in which there are misaligned incentives, pressure, and self-deception. Thus, redesigning rewards and evaluations to balance short- and long-term outcomes is pivotal.

      • A practical playbook exists — Use a catalytic anchor event, measure the ethical climate rigorously, empower local teams with data and AI-driven tools, align policies and incentives globally with core values, and run small, iterative experiments to refine what works.


We’re at a pivotal moment in history, one in which rapid societal change and mounting crises are intersecting with awe-inspiring technical advancements. This convergence — undoubtedly dangerous — is also an opportunity for leaders and their teams to turn the tide and demonstrate how principled decisions can lead to transformative outcomes for business and society.

As we navigate this critical juncture, it’s clear that the path forward requires more than mere compliance with laws and regulations. It calls for proactively embedding ethics into all aspects of organizational decision-making. Indeed, this is corporate leaders’ opportunity to redefine their legacies — to be remembered not just for what they achieved, but for how they achieved it.

The challenge

First, we acknowledge the trade-offs and ethical dilemmas that often seem insurmountable in the corporate world. The perceived tug-of-war between people and profit, speed and safety, or quality and quantity poses significant challenges. However, our belief is that, despite these hurdles, it is indeed possible to create ethical ecosystems that not only survive but thrive.

What is important to understand is that ethical lapses are often gradual. Employees don’t suddenly become dishonest; however, in the absence of an evidently ethical culture, rationalization and post-hoc justification can make misconduct feel like a complex ethical dilemma of balancing benefits against potential risks. While this dilemma is often real, it can also be fundamentally self-serving, cloaking the profit motive in the guise of societal benefits — be it patient care or financial well-being.

Further, misconduct in business arguably most often stems not from a couple of rogue actors but rather from the broader environment that either promotes, or fails to curb, unethical behavior. In other words, it’s less about a few bad apples and more about a work environment that allows ordinary people to succeed professionally through what they perceive to be acceptable compromises. Misconduct therefore doesn’t occur in a vacuum; it festers in conditions in which flawed incentives, unreasonable commercial pressures, and ethical blindness prevail. Similarly, hyper-competitive performance evaluations that incentivize individuals to compete in a zero-sum contest, rather than rewarding their contributions and their ethical conduct, can encourage unethical cultures to spread.

5 steps toward ethical cultures

Admittedly, redesigning these systems requires a paradigm shift in business. However, there are several practical steps that enlightened business leaders can take to foster ethical organizational cultures.

1: Identify an anchor

To initiate such a transformation, it’s crucial to identify an anchor — a notable event that can serve as a catalyst. For example, this could be prompted by a scandal or a change in leadership. The key is to use this event not just as a standalone occurrence, but to signal a shift in how seriously ethics is taken within the workplace.

Use your anchor event to announce your intention to enhance your organization’s ethical infrastructure. This should be the moment that captures people’s attention.

2: Establish a baseline

While data from engagement surveys may offer some insights, they often lack the rigor needed to assess the ethical climate of the typical workplace. To create a precise measure, consider other factors such as employees’ perceptions of fairness and trust. These perceptions can be evaluated using methods such as anonymous surveys and confidential interviews.

By establishing a robust analytical system, you will be able to produce a clear picture of the current ethical climate across your organization, while identifying those business areas that need intervention.
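As a purely illustrative sketch, such a baseline can start as simply as aggregating anonymized survey scores by business area and flagging outliers for intervention. The dimensions, the 1-to-7 scale, and the threshold below are invented for the example rather than drawn from any validated instrument.

```python
import statistics
from collections import defaultdict

# Hypothetical anonymized records: (business_area, dimension, score on a 1-7 scale).
responses = [
    ("retail", "fairness", 5), ("retail", "trust", 3),
    ("trading", "fairness", 2), ("trading", "trust", 2),
    ("retail", "fairness", 6), ("trading", "trust", 3),
]

by_area = defaultdict(list)
for area, dimension, score in responses:
    by_area[(area, dimension)].append(score)

# Flag any area-dimension pair whose average falls below a chosen threshold.
THRESHOLD = 4.0
for (area, dimension), scores in sorted(by_area.items()):
    mean = statistics.mean(scores)
    flag = "  <- candidate for intervention" if mean < THRESHOLD else ""
    print(f"{area:8s} {dimension:9s} mean={mean:.1f} (n={len(scores)}){flag}")
```

In practice, the survey design, sampling, and anonymity guarantees matter far more than the aggregation code, but even a simple roll-up like this turns perceptions into a comparable picture across business areas.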

3: Empower locally

Presenting data can ignite interest and spark meaningful conversations about ethics, which in turn can help shift the narrative and establish a common understanding.

Focus on creating interactive sessions in which leaders can digest data and discuss implications for their business areas. Empower HR, Risk & Compliance, Legal, Finance, and other corporate functions with the tools and training needed to analyze and interpret the data. This could involve training modules, workshops, and the integration of AI tools to provide nuanced insights.

4: Act globally

While empowering local teams to address ethical dilemmas is crucial, it is equally important to ensure their policies and processes are aligned with the organization’s overarching values. Although such alignment can be a challenge for large multinational organizations, it can be done by consistently emphasizing these universal values.

For example, revisit incentive systems and check that they promote desirable behavior, rather than solely focusing on financial performance. Also, ensure that these systems are transparent and communicated clearly and consistently to all employees.

5: Embrace experimentation

Finally, foster a mindset of experimentation. Run a series of small-scale pilots to test various interventions. Approach this with humility and scientific rigor, acknowledging that adjustments may be necessary, and that success is rarely straightforward.
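As one hedged illustration of what that scientific rigor can look like, the sketch below compares speak-up rates between a control group and a pilot group using a standard two-proportion z-test. The figures, the outcome metric, and the 5% significance threshold are all hypothetical choices for the example, not a recommended protocol.

```python
import math

# Hypothetical pilot: did an ethics intervention change how often employees speak up?
control_reports, control_n = 18, 400   # without the intervention
pilot_reports, pilot_n = 34, 390       # with the intervention

p1 = control_reports / control_n
p2 = pilot_reports / pilot_n
p_pooled = (control_reports + pilot_reports) / (control_n + pilot_n)
std_err = math.sqrt(p_pooled * (1 - p_pooled) * (1 / control_n + 1 / pilot_n))
z = (p2 - p1) / std_err

print(f"control {p1:.1%}, pilot {p2:.1%}, z = {z:.2f}")
# |z| > 1.96 suggests a real difference at the 5% level, but small pilots
# warrant humility: replicate the result before rolling the change out broadly.
```

The design choice worth noting is the control group itself; without one, any movement in the metric could reflect seasonality or attention effects rather than the intervention.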

This approach, while it may sound daunting, is actually quite manageable. By embracing the challenge with the curiosity and methodology of a scientist, you can pave the way for genuine and lasting improvements.

Moving forward to an ethical environment

Today’s business leaders are navigating a multitude of hazards, ranging from rising geopolitical tensions and rapidly evolving AI-driven technology to society’s shifting attitudes and expectations. Yet, in these challenges lies an unprecedented opportunity for leaders to redefine their legacy by embedding ethical principles deeply into the heart of their organizations. In this way, business leaders can turn these risks into sources of long-term competitive advantage.


You can find out more on how companies can foster ethical cultures within their workplaces here
