Fintechs Archives - Thomson Reuters Institute
https://blogs.thomsonreuters.com/en-us/topic/fintechs/
Thomson Reuters Institute is a blog from Thomson Reuters, the intelligence, technology, and human expertise you need to find trusted answers.

Legal aid leads on AI: How Lone Star Legal Aid built Juris to deliver faster, fairer results
/en-us/posts/ai-in-courts/legal-aid-ai-lone-star-juris/ | Mon, 10 Nov 2025 15:57:22 +0000

Key takeaways:

      • Legal aid is leading on AI adoption — Legal aid organizations are ahead of the broader legal field, with 74% already using AI in their work, driven by the need to serve millions of Americans who lack legal help.

      • Lone Star Legal Aid creates Juris — Juris, a new AI-powered tool from Lone Star Legal Aid, improves accuracy and trust through retrieval-augmented generation, source-cited answers, and a secure Azure-based architecture with an integrated citation viewer.

      • Keeping costs low — A phased, two-year build-and-test process kept costs low (about $2,000 a year in infrastructure, plus roughly 300 staff hours) and produced dependable results.


A recent study finds that under-resourced legal aid nonprofits are adopting AI at nearly twice the rate of the broader legal field because of the urgency of the need to serve millions of Americans who may lack legal help. The study shows that almost three-quarters (74%) of legal aid organizations already use AI in their work, compared with a 37% adoption rate for generative AI (GenAI) across the wider legal profession. Lone Star Legal Aid (LSLA), a legal aid nonprofit serving eastern Texas, is one of the early adopters of AI.

According to LSLA, its attorneys were spending too much time and money hunting for answers across pricey platforms and scattered PDFs. Key materials lived in research databases, internal drives, and static repositories, while documents vetted by individual staff members were not centrally accessible. Without a single, trusted hub, staff faced slower research, duplicated effort, and delays that ultimately affected clients.

These strains are not unique to LSLA. In fact, court help centers and self-help portals face the same fragmentation, licensing costs, and uneven access to authoritative guidance. A verifiable, consolidated knowledge hub that could stabilize quality while reducing spending is a sorely needed solution.

To solve this problem, LSLA turned to AI to create a legal tool called Juris built to return fast, source‑cited answers. Juris was designed to centralize high‑value legal materials, cut reliance on expensive third‑party platforms, and lay a flexible foundation that the organization could reuse beyond legal research for internal operations and future client tools.

Multifaceted approach to ensuring accuracy and reliability

Juris’s designers built several features into the tool to advance its mission of increasing access to justice, including:

Design methods fuel trustworthy output — Juris ensures accuracy through a number of methods, such as a retrieval-augmented generation (RAG) pipeline that delivers fact-based, source-cited answers. It also uses semantic chunking, a process that breaks a document into natural, meaning-based sections (for example, a heading plus the paragraphs that belong to it) so the original context stays together.

When a user asks a question, Juris retrieves only the most relevant of these sections. Limiting the AI to evidence from those passages improves accuracy and reduces hallucinations because the model is not guessing from memory. Instead, it is grounding answers in the text it just accessed.
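
The chunk-and-retrieve pattern described above can be illustrated with a small sketch. This is a toy illustration under assumed conventions, not LSLA's actual Juris code; a production pipeline like the one described would use embeddings and a vector store rather than naive word overlap, and the heading heuristic here is purely hypothetical.

```python
def semantic_chunks(document: str) -> list[str]:
    """Split a document into meaning-based sections: each chunk is a
    heading plus the paragraphs that belong to it."""
    chunks, current = [], []
    for line in document.splitlines():
        # Hypothetical heuristic: treat '#'-prefixed or ALL-CAPS lines as headings.
        is_heading = line.startswith("#") or (line.isupper() and line.strip())
        if is_heading and current:
            chunks.append("\n".join(current))  # close the previous section
            current = []
        if line.strip():
            current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks

def retrieve(chunks: list[str], query: str, k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the query (a real system would
    use embeddings and a vector store) and return the top k, which are
    then handed to the model as the only evidence it may answer from."""
    q = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: -len(q & set(c.lower().split())))
    return scored[:k]
```

Because the model is shown only the retrieved sections, its answer can cite exactly those passages, which is what makes the output verifiable.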

Solid technical architecture helps reliability — Juris’s technical architecture also supports reliable results. It combines Azure OpenAI, for secure, stateless access to AI models, with services for document ingestion, processing, and vector storage. Users interact through a custom internal web interface that places a PDF viewer alongside the chat experience, enabling seamless citation and document navigation. The platform is hosted on Azure App Service with continuous deployment orchestrated through GitHub, which provides reliable operations and streamlined updates.

Phased approach to building and testing yielded dependability — To further ensure trustworthy results, LSLA developed Juris through a structured, phased approach over two years. It began with a concept phase focused on clearly identifying the problem, followed by a platform evaluation that compared open-source and commercial solutions. A prototype was then created and demonstrated as proof of concept.

In addition, internal testing included adversarial exercises, hallucination detection, and rigorous validation of citation reliability. Based on these findings, the team implemented enhancements, such as moving from size-based to semantic chunking, improving the interface, and expanding the set of source materials. Juris is now in pilot preparation and undergoing final refinements before its release to a select group of subject matter experts.
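
Citation-reliability validation of the kind described can be approximated with a simple check that every citation marker in an answer maps back to a chunk the system actually retrieved. This is a hypothetical sketch; the `[doc-id]` marker format and the function name are assumptions, not Juris's implementation.

```python
import re

def validate_citations(answer: str, retrieved_ids: set[str]) -> list[str]:
    """Return citation ids that appear in the answer (as '[doc-id]'
    markers) but do not match any retrieved source chunk.
    A non-empty result signals a likely hallucinated citation."""
    cited = set(re.findall(r"\[([\w-]+)\]", answer))
    return sorted(cited - retrieved_ids)
```

A check like this can run on every response, flagging answers for human review whenever a cited source was never actually retrieved.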

Efficient resourcing and sharing learnings

LSLA’s phased approach to building and testing also ensured that sustainability was built in from the beginning. Indeed, ongoing maintenance is minimal, and Microsoft’s nonprofit Azure credits keep infrastructure costs around $2,000 per year.

The most significant cost was in staff time. Development so far totals roughly 300 staff hours (or about 0.5 full-time equivalent, plus 0.3 FTE over two years). Once Juris enters phase two, which has been funded by a Legal Services Corporation (LSC) technology initiative grant, expected benefits will include faster, more consistent research and reduced workload for frontline and administrative staff, plus a modular framework that others can adapt.

Other legal service organizations that face similar challenges can learn from the Juris development, testing, and implementation as well as other related case studies. These recurring lessons include:

      • beginning with a small, manageable scope
      • inviting end users in from the start, and
      • carving out protected time so staff can innovate alongside daily duties.

Looking ahead, the LSLA team will continue to roll Juris out in phases, while building sister tools. LSLA also plans to share lessons learned through LSC’s AI Peer Learning Labs to help other organizations replicate the model.

Real change at scale, such as this, will only come from collaborating across organizations to share playbooks, pool datasets, and co‑design tools that lift quality while lowering cost. It is only with such partnership and sharing lessons from early adopters of AI that peers can adapt the model and, together, scale solutions that narrow the justice gap.

Angela Tripp, Program Officer for Technology for the Legal Services Corporation contributed to this article.


You can learn more here about the ways legal aid organizations are using advanced technology to better serve individuals as they access the justice system.

The rise of autonomous AI: How intelligent agents are redefining strategy, risk & compliance
/en-us/posts/technology/autonomous-agentic-ai/ | Mon, 03 Mar 2025 12:45:27 +0000

The rise of generative AI (GenAI) has driven corporate and research expectations, investments, and innovation regardless of industry or discipline. However, in just two short years — and even without extensive scale and maturity in GenAI production systems — new variants and challenges are already altering deployment designs and operating strategies.

In fact, 92% of companies say they will invest more in GenAI over the next three years, yet only 1% state that their investments have reached maturity, according to research from McKinsey & Co.

For leaders currently struggling with the terminology and designs of GenAI — protocols, messaging, large and small language models, vector databases, algorithms, and more — a next generational shift is already underway. And it’s already building on top of early directions, while introducing a new set of requirements and governance demands. Will the AI momentum slow down? Will new AI innovations become mere extrapolations of early-stage data and intelligence advances? Or will something more profound happen?

Indeed, this pace of AI change is dwarfing anything previously experienced. What strikes me, however, is the question: How do you instill robust oversight for solutions that are temporal, self-learning, and adaptive based on the data they ingest? Everyone has their own understanding of what AI is from the daily blast of media articles, so baselining is necessary.

In 2023, the term pre-training took on new importance as ChatGPT permanently changed the discussion of systems and data — in addition to costs, cloud architectures, and skills needed. By mid-2024, enterprises were witnessing the rise of retrieval-augmented generation (RAG), which uses external data to improve the accuracy of GenAI and industry-specific large language models. Now, as 2025 emerges, corporate leaders are being blanketed with yet another evolution: agentic AI.

To understand the progression of the question What is AI?, you need to compare ideas of accountability and design under legacy priorities with the emerging questions surrounding accountability of data. It is this data that will simultaneously feed hundreds of layered AI components, not just the one or two that today are simplistically anticipated.

Legacy data brings next-gen complexity

Underneath these marvels of AI algorithms and chip technologies, the demand for usable data to improve the accuracy, longevity, and auditability of capabilities continues to strain internal departments and compliance personnel. However, as AI systems explode in their usage and deployment, the vast questions surrounding data complexity — its lineage, ingestion, storage, manipulation, and cross-domain usage — are often a black box.

As 2025 unfolds with macroeconomic and political uncertainties, what is certain is that given AI’s expansive trajectory, data can no longer be isolated or reviewed at a system level. When AI systems are pre-trained on separate data ecosystems, when AI systems begin to feed their outputs to downstream systems, and when AI results are materially different from common control criteria over time, then how will these systems be re-trained on event-driven data and at what cost?

AI discussions today are energetic and promising, especially when solving business demands for efficiency, customer service, profitability, and competitive distinction. Yet there are tradeoffs that must be made when it comes to scope, costs, and time (often referred to as the triple-constraint of program management and budgeting).

After three years, the development and adoption of GenAI solutions is becoming more common. The legal, compliance, and audit considerations are clearer, and investors from individuals to private equity now conduct due diligence on these solutions to ensure business rule conformity and valuation. Nonetheless, what we are experiencing is that the methods and techniques that guided early AI solutions under legacy priorities show steadily decreasing efficacy and relevance for next-gen AI solutions that may possess greater intelligence and shared data.

These shifts, as represented in Figure 2 below, when mapped against an organization’s triple constraints, illustrate distinctive requirements that are not currently accounted for within the enterprise and its cohesive governance designs. In short, the data controls, compliance, and auditability for a small number of emerging AI solutions will not provide the robustness and scalability demanded when agentic AI begins to migrate or replace early-stage AI capabilities.

The diagram elicits another question: Who has the roadmaps to migrate AI solutions to next-gen AI solutions? And when an AI system is re-trained or retired, what happens to all that data?

By establishing a baseline for the information already provided, we can see from the details in the diagrams several factors, including:

      • The controls and accountability for large numbers of AI systems change the discussion of data, its architecture, its reuse, and most importantly, its event-driven ingestion, which in turn alters AI outputs (model efficacy).
      • The mechanisms and oversight employed for traditional passive, sample-driven conformity will fail consistently due to interconnectivity, real-time adaptations, and speed of change.
      • The challenges of security, privacy, and ethical data take on new dimensions when factoring in agentic AI and its (likely) creation of synthetic data, which in turn is fed back into the system as part of event-driven feedback and continuous improvement.
      • Skills and transformative process guardrails will lag agentic AI capabilities. For example, the newest AI chipsets can perform in one second a volume of calculations that would have taken a human 125 million years.

Finding proper agentic AI governance

However, beyond the evolution of GenAI, beyond its expansion into RAG, agentic AI can leverage the positive designs underway to aid with its continual refinement and self-learning decision-making. In Figure 3 below, the comparison of these is presented against the new demands being placed on data.


It is in this final illustration that we see a fundamental and permanent shift of priority — data over system ideation. Legacy methods started with the process, and many AI controls today start with algorithms. For agentic AI, there is a phase shift that must start with the data because these thinking, adjusting systems are built not on rules but on goals. Indeed, agentic AI requires accurate, reusable, and auditable data sources.

Corporations and their innovative leaders are experiencing a technological and generational function shift. The traditional legacy control playbooks and prescriptive development approaches are poorly equipped to address the next-gen requirements. Data is the key for explosive algorithmic intelligence that will be increasingly segmented into reusable modular components that are stacked one upon the other.

Finally, the fundamental challenge for every organization and those overseeing AI automation lies in these questions: Can we adapt to meet the technological realities? Can we shift the prioritizations and governance to data before siloed, cascading AI risks result in unintended havoc? Will we, as humans in the AI loop, chase the AI algorithms, and will we repeat the same mistakes we made with the rapid adoption of financial and regulatory technologies a decade prior?

For agentic AI in 2025 and its impacts on business models and operational performance, oversight will represent a continual journey — not a destination.


You can find more blog posts here.

How financial institutions can best manage third-party fraud models and compliance complications
/en-us/posts/investigation-fraud-and-risk/third-party-fraud-models/ | Fri, 27 Dec 2024 12:44:15 +0000

Banks, credit unions, and all types of financial institutions must constantly adapt to new styles of fraudster attacks and techniques. Now, fraud is evolving more quickly, quietly, and efficiently than ever before.

To combat this, financial institutions must leverage both internally and externally developed fraud models. Ideally, these models aim to identify, prevent, and deter risky behavior on their platforms with minimal friction for their valid customers.

Internally developed fraud remediation models are proprietary systems created within the financial institutions themselves. These models are tailored to the specific needs and risk profiles of the institution, allowing for a highly customized approach to detecting and preventing fraudulent activities. By leveraging internal data, historical fraud patterns, and insights from their own customer base, financial institutions can develop models that are finely tuned to their unique environment.

Such internally developed models provide the advantage of direct control and flexibility to make adjustments as new fraud patterns emerge. They also allow institutions to build on their existing technological infrastructure and integrate fraud detection more seamlessly into their operations. However, developing these models can be resource-intensive, requiring significant time, expertise, and ongoing maintenance to ensure these solutions remain effective against increasingly sophisticated fraud tactics.

Third-party models drive fraud recovery

On the other hand, external or third-party models are attractive to financial institutions for multiple reasons.

First, third-party models may offer financial institutions their fastest go-to-market option. If a financial institution has an urgent, time-sensitive exposure to fraud, for example, it may choose to quickly implement a third-party model instead of taking months or years to develop one internally. In doing so, the institution can save significant amounts of exposed funds.




Second, a third-party model may be more technologically sophisticated or nuanced to measure risk variables that many financial institutions couldn’t otherwise. As fraudster techniques evolve, fraud modeling organizations may be more able to predict and react quickly to the latest fraudster developments. And for financial institutions that have competing priorities, effectively outsourcing fraud research, development, and management can offer a huge benefit.

Third, third-party models can leverage multiple clients’ data to benefit a single institution’s entire customer group. If one financial institution is hit with a fraud attack, for instance, the model could analyze the exposure, remediate it, and apply the remediation protections across the entirety of the model’s customer base. In doing so, other financial institutions may benefit from the third-party’s broader industry vision. This group benefit aligns all the financial institutions involved towards the common goal of improving fraud loss prevention.

Although third-party models can be valuable tools to mitigate fraud exposure, they often bring additional regulatory and compliance scrutiny. Over the past five years, regulators in the United States have increased their intensity and scope when reviewing fraud model use. Primarily, regulators — such as the Office of the Comptroller of the Currency (OCC), the Federal Reserve, and the Federal Deposit Insurance Corporation (FDIC) — use model risk governance and model risk management programs or frameworks to ensure that financial institution models are applied appropriately, effectively, as expected, and without bias.

Regulators hold model owners — those who ultimately implement and use the models (most commonly the financial institutions) — responsible for complying with regulators’ requirements. Even if the model was developed by a third party, the financial institution is still most often liable for compliance in the on-boarding, validation, and regularly cadenced monitoring of the model. Unfortunately, many financial institutions struggle to satisfy these regulatory requirements for their third-party models precisely because they do not own them.

Confidentiality can cause compliance complexities

For a third-party model developer, their ultimate value to customers is found within the model itself: how fast it can be implemented, what it uniquely measures, how it acts, and how well it performs. These characteristics — a developer’s secret sauce, if you will — are proprietary to each model, and if their unique blend is published or known outside of the company, the model could be replicated. This, of course, would cause the developer to lose any competitive advantage and value to the marketplace. Thus, even after selling the model to financial institutions, third-party fraud model developers are incentivized to keep their valuable model characteristics private.

However, this private nature presents difficulties: regulators want to know about a model’s risk variables, weights, and who and what they generally identify, but developers, not surprisingly, want to protect their data from dissemination. This places the model users, often financial institutions, in a precarious position between the regulators and developers.




Financial institutions want the fraud protection that third-party models can provide, while regulators want to ensure the models in market don’t adversely impact consumers. For all parties to align towards stopping fraud with minimal consumer impact, they may choose to meet in the middle. As model use proliferates, regulatory burdens may increase as well.

Thus, while financial institutions prepare for increasingly thorough documentation requirements from the OCC, FDIC, and other regulatory authorities, third-party fraud model developers would be prudent to similarly prepare and create sharable documents that present more information than historically given, while protecting the minute details. For their part, regulators might consider easing their requirement timelines, knowing that those parties they might question may not have immediately available answers.

In summary, a compliance headache can feel nearly as costly as fraud losses. These compliance difficulties and fraud losses can be remediated most quickly if developers, financial institutions, and regulators can align on reasonable documentation parameters and expectations to reduce the burden on all parties.


You can find more about the regulatory and compliance challenges faced by financial institutions here.

How AI will disrupt fraud prevention & detection technologies
/en-us/posts/corporates/technological-considerations-fraud-prevention/ | Mon, 23 Dec 2024 13:23:16 +0000

Digital channels are widely used today to create efficiency in the on-boarding process for various financial products, including commercial and retail checking accounts, credit cards, automotive loans, and commercial loans. However, these channels also present opportunities for fraudsters, particularly in committing new account fraud, often due to the remoteness and anonymity they offer.

With the rapid acceleration of AI in various business processes and workflows, AI-generated fraud is changing how banks and insurance companies must approach fraud prevention and detection.

After describing the technological considerations these institutions must manage and how they can identify the types of fraud they are up against, this final article in our series explores the implications of AI-generated fraud and how financial institutions and insurance companies are responding to these new challenges.

The impact of AI-generated fraud

One of the most notable examples of deceptive use of artificial intelligence involves a tech portal author who downloaded an AI voice-cloning tool and used it against a bank’s voice-verification system — with great success.

This easily accessible technology highlights the potential risks for banks that rely on biometric identifiers for security and verification, particularly voice recognition. If a bank uses the prompt, My voice is my password; please verify me, it can become vulnerable to voice-cloning attacks. To commit this fraud, illicit actors need only an audio file of the victim’s voice, which can be obtained from a phone’s automatic answer service or from social media content available online.

Bypassing voice recognition is just the beginning. Visual identifiers, such as facial verification and visual liveness checks, are also at risk due to the explosion of deepfake technology. The innovation in creating realistic-looking deepfakes is astonishing, with some being so authentic that they deceive even the most discerning viewers. For instance, a website featuring a deepfake of a well-known actor was so convincing that many fans believed it was the real actor.

In the value chain of a fraud operation, all other components needed for verification — such as the victim’s name, personal information, email account access, and bank details — must also be in place, especially if a bank relies on two-factor authentication.

With the continued acceleration of data breaches, however, these components can be at risk as well, and one can assume that personally identifiable information for all American citizens is available on the dark web and ready to be purchased. On platforms such as Telegram, fraud service providers create the necessary components to help fraudsters bypass know-your-customer (KYC) identity controls. For example, to open an account, a fraudster might use forged state-issued documents, fake identification, and even a cloned voice to impersonate a real person or existing client. One service, called Docs 4 You, enables the creation of a completely new identity, complete with a driver’s license, selfie videos, and a passport. The goal is to cultivate an identity for the long term and then establish a credit history that can later be maxed out. In one such advertisement, a seller claims to be able to bypass at least five of the largest .

The insurance industry is also affected by AI-generated images, which are used to simulate car accidents, for example. If it is easy to clone voices and faces, it is even easier to create fake accident images, leading to fraudulent claims that are difficult to detect without thorough investigation and personal inspection of the affected property or vehicle.

How financial institutions and insurance companies can respond

While machine learning, predictive analytics, and behavioral biometrics are effective for detecting ongoing account fraud, illicit actors seek to use AI-driven fraud to bypass security protocols such as liveness checks and voice verification during customer verification processes in both new and existing account fraud cases.

AI-generated fraud largely falls into three categories, involving the use of:

      • AI-generated videos and images to bypass liveness detection;
      • AI-generated voices to bypass voice verification; and
      • AI-generated documents and pictures to be used as supporting documentation (such as IDs, financial records, and insurance claims).

To combat these threats, financial institutions, insurance companies, and corporations must upgrade their detection and prevention capabilities. This includes implementing the latest technologies and introducing new measures during their customer on-boarding and claims management processes to counter AI-generated fraud.

Despite being a target of AI-generated fraud, biometric information remains a crucial component of any on-boarding or verification solution. However, its limitations as a standalone verifier mean it must be combined with existing customer data from robust public sources. For example, if the identity of a customer cannot be verified using public records, a visit to a branch or in-person verification process may be necessary, even if a liveness check is confirmed by a biometric provider.

If an organization relies solely on digital channels, remote verification may be the only option. In such cases, the location of the individual can offer additional insight. For instance, a US-based institution might block account openings or credit card limit expansion requests if the online session or call originates from outside the United States or a specific region within the country.

Combining data, technology & personal interactions

The field of AI detection and prevention technology is rapidly evolving, offering innovative capabilities. Advanced liveness detection now utilizes 3D depth sensing and multi-angle face scans with anti-spoofing algorithms. Deepfake detection AI analyzes frame-level inconsistencies and employs neural networks trained on datasets of authentic versus deepfake videos.

As voice verification becomes more common in the financial industry, anti-spoofing systems can detect audio spectrum inconsistencies and synthetic overtones, which are typical of AI-generated voices. These technologies are particularly effective in call center operations.
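
One low-level feature such systems can draw on is spectral flatness, which summarizes how tonal or noise-like an audio spectrum is; atypical values can hint at synthetic overtones. The sketch below is illustrative only: production anti-spoofing relies on trained models over many such features, and this single statistic is not by itself a spoofing detector.

```python
import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Ratio of the geometric to the arithmetic mean of the power
    spectrum: near 0 for tonal (voice-like) signals, closer to 1 for
    noise-like ones. Anti-spoofing systems track spectral statistics
    like this one among many other learned features."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12  # avoid log(0)
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))
```

In a call-center pipeline, features like this would be computed per frame and fed to a classifier rather than thresholded directly.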

For document validation, authentication solutions using optical character recognition and image forensics are essential for detecting fraud. Digital watermarking, for instance, adds invisible pixels or audio patterns to documents or files that computers can detect but humans cannot. Further innovations in this area can help uncover document and image alterations.

Document verification systems and deepfake detection tools are poised to become essential components of the anti-fraud arsenal in financial institutions and insurance companies. Combining the capacity and power of these tools is critical and is achieved through multimodal verification methods. Given the rapid pace of innovation in AI, it is essential to calculate returns on investment over shorter time spans.
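
Multimodal verification of the kind described ultimately reduces to combining per-check confidence scores into one decision. Below is a minimal sketch with hypothetical modality names, weights, and threshold; a production system would calibrate these values and likely use a trained model rather than a fixed weighted sum.

```python
def fuse_verification(scores: dict[str, float],
                      weights: dict[str, float],
                      threshold: float = 0.7) -> bool:
    """Combine per-modality confidence scores (0..1), e.g. liveness,
    voice, and document checks, into one weighted pass/fail decision.
    A modality that produced no score contributes zero confidence."""
    total = sum(weights.values())
    fused = sum(w * scores.get(m, 0.0) for m, w in weights.items()) / total
    return fused >= threshold
```

The zero-for-missing rule is a deliberately conservative choice: a verification that never ran counts against the applicant, pushing borderline cases toward manual review.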

Conclusion

Obviously, financial institutions and insurance companies should not rely solely on technology in their fight against AI-driven fraud.

The financial implications of this innovative type of fraud may necessitate additional steps in the account-opening process. For instance, live verification steps — such as face-to-face verification conducted by local branches or notaries — could serve as a deterrent to fraudsters.

By combining advanced technology with personal interactions and robust data analysis, financial institutions and insurance companies can better protect themselves against the evolving threat of AI-generated fraud. This multi-faceted approach ensures that while technology plays a crucial role, human oversight and interaction remain integral to the fraud prevention and detection process.


You can read our three-part blog series on the technological considerations financial institutions and insurance companies must manage in fraud detection and prevention here.

A brighter future for fintechs in 2024?
/en-us/posts/corporates/fintechs-future-2024/ | Mon, 30 Oct 2023 14:58:37 +0000

Heading into 2023, the outlook for financial technology (fintech) companies wasn’t especially bright. Rising interest rates and inflation had ended the pre-pandemic era of easy money, making it more difficult for fintech startups to raise capital.

Regulators were pressuring fintechs, particularly in the cryptocurrency space, to adhere to more conventional know-your-customer protocols. And a number of high-profile failures, including the spectacular implosion of Sam Bankman-Fried’s FTX crypto exchange, were making investors nervous. Then, in March 2023, the collapse of Silicon Valley Bank sent shivers of dread throughout the entire tech sector.

These converging factors generated a considerable amount of skepticism about the future of fintechs, particularly among under-capitalized startups. A brutal market shakeout was inevitable, the thinking went, and — in the great Darwinian tradition of entrepreneurial capitalism — only the strongest would survive.

Have these fears about fintech’s future materialized?

Looking back, it’s apparent that some skepticism about fintechs was warranted, but the situation heading into 2024 isn’t nearly as dire as many had predicted. In fact, some indicators are quite promising.

A funding rebound?

First, it is true that overall funding for fintechs has cratered, but signs of a turnaround are starting to emerge.

According to KPMG, overall global funding activity for fintechs — including venture capital (VC), private equity, and mergers & acquisitions (M&A) — fell to $17.9 billion in Q2 2023, from $34.5 billion in Q1 2023 and down from its peak of $103.2 billion at the beginning of 2022. Global M&A activity also dropped to a paltry $2.8 billion in Q2 2023 from $21.2 billion in Q1 2023. During the same period, however, VC funding appeared to be rebounding (from a low of $11.9 billion in Q4 2022 to $14.8 billion in Q2 2023) — so it’s not all bad news.
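Those quarter-over-quarter swings are easier to compare as percentages. A minimal sketch using the KPMG figures quoted above (the `pct_change` helper is purely illustrative):

```python
def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new, in percent."""
    return (new - old) / old * 100

# KPMG figures quoted above, in billions of USD
overall = pct_change(34.5, 17.9)     # Q1 2023 -> Q2 2023, overall funding
from_peak = pct_change(103.2, 17.9)  # early-2022 peak -> Q2 2023
ma = pct_change(21.2, 2.8)           # Q1 2023 -> Q2 2023, global M&A
vc = pct_change(11.9, 14.8)          # Q4 2022 low -> Q2 2023, VC funding

print(f"overall funding: {overall:.1f}%")  # roughly -48%
print(f"from 2022 peak:  {from_peak:.1f}%")  # roughly -83%
print(f"global M&A:      {ma:.1f}%")       # roughly -87%
print(f"VC funding:      {vc:.1f}%")       # roughly +24%
```

In other words, overall funding roughly halved in a single quarter and M&A fell almost 87%, while VC funding actually grew about 24% from its trough — which is the "not all bad news" the paragraph above points to.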

The fundamentals of failure

And yes, there have been a number of high-profile failures in the fintech space over the past couple of years. Among them:

      • CommonBond — A student-loan lending service that saw its core business crumble during the pandemic
      • Ribbon — An all-cash real-estate service hit by rising mortgage interest rates; acquired by EasyKnock
      • Fast — An online checkout provider that burned through its funding too quickly
      • Nuri — A German crypto-banking startup that fell along with the collapse of crypto
      • Bank North — A United Kingdom neobank that ran out of funding
      • LendUP — A loan service that was shut down for “repeatedly lying and illegally cheating customers,” according to regulators
      • Plastiq — A banking-as-a-service provider tied to the Silicon Valley Bank collapse
      • Rize — Another banking-as-a-service provider, acquired by regional bank Fifth Third
      • Daylight — A digital bank serving the LGBTQ community

Dozens of other fintechs have called it quits as well, but it’s worth exploring why so many have failed before making any grand generalizations about the overall health of the fintech market.

Indeed, failing fintechs tend to succumb to one or more of the following forces:

      • Financing issues — Capital drain due to investor pullback, higher interest rates, poor management, an inferior product, inability to grow, or some combination thereof
      • Regulatory hurdles — Lack of attention to compliance and other regulatory issues
      • Saturated markets — The product itself isn’t unique enough to differentiate it from competitors offering similar services
      • Poor data security — Operations compromised by inadequate security and control measures, eroding customer confidence and allowing possible breaches
      • Not enough customers — Projected customer base fails to materialize, calling all valuations and growth assumptions into question
      • No clear path to profitability — A good idea, perhaps, but a bad business plan
      • Staffing issues — Inability to hire enough people with the skills necessary for the product to succeed
      • Poor execution — The product itself does not work as advertised or expected
      • Pandemic fever — Many startups that counted on low-interest financing during the pandemic saw their business models crumble as interest rates rose

Half-empty or half-full?

Now, according to the Wall Street Journal, 75% of venture-backed fintechs eventually fail no matter what, so there is a certain amount of built-in churn in this space. Success also looks different for different companies. After all, there are many types of fintechs — such as neobanks, e-money services, digital wallets, banking-as-a-service providers, and stock-trading apps, as well as those serving sectors like insurance, regulatory compliance, lending, payments, wealth management, personal financial management, and more. Lumping them all together as fintechs does not do the overall sector much justice.

In the current economic environment, for example, many payment and insurance fintechs (such as PayPal, Kin Insurance, or Sure) are holding their own, while some involved in real estate and wealth management are indeed struggling. But even within the spaces that are struggling, pockets of resilience can be found. For example, Altruist, a California-based fintech startup in the wealth management area that provides analysis software to financial advisers, has seen nothing but solid growth since its founding in 2019.

The brighter side of fintech

Overall, what’s really happening to fintechs going into 2024 is that the return of higher interest rates, inflation, and greater investor scrutiny has re-introduced the overheated fintech sector to some uncomfortable market realities — ones that don’t support wild bets and hopeful speculation. Of course, this leaves the door open for fintech solutions that have a compelling product story, meaningful differentiation from competitors, an identifiable market segment, and, most important of all, a realistic path to profitability.

The market’s new normal has also spurred a great deal of re-positioning and innovation. Faced with consumer markets that are too competitive, for example, some fintechs are pivoting to enterprise or business-to-business services. Others are scrambling to figure out how they can incorporate generative artificial intelligence (AI) capabilities into their products, or inventing new products based around AI. And those fintechs that are partnering with more established financial institutions are successfully insulating themselves against the risks of going it alone, and even flailing fintechs that get acquired are finding new life under the umbrella of their parent companies.

So, the truth is that yes, some fintech froth has been skimmed off over the past couple of years, but those fintechs built on solid fundamentals — as with any other business model — are weathering the storm just fine and have plenty of reasons to be optimistic in 2024.
