AI & Digital Assets

Dec. 20, 2024: AI & Digital Assets


Congressional AI Report Lays Out Regulatory Roadmap, Addresses Privacy, Civil Rights Issues

Anthony Kimery, Biometric Update

The long-awaited report of the nearly year-old U.S. Bipartisan House Task Force on Artificial Intelligence should serve as a call to action for addressing the pressing privacy and civil rights challenges that are posed by AI.

The report, which is intended to be a blueprint for future actions Congress can take to address advances in AI technologies, highlights the key privacy and civil rights concerns that are directly related to the rapid development and adoption of AI systems.

“AI has tremendous potential to transform society and our economy for the better and address complex national challenges,” the 273-page report states, but it also asserts that “AI can be misused and lead to various types of harm.”

The report contains 66 key findings and 89 recommendations.

While AI offers transformative potential across sectors, its deployment raises significant concerns about data privacy, discrimination, transparency, and accountability, all issues that are critical as the U.S. charts a path toward responsible AI governance. By prioritizing these issues, the report says, the U.S. can lead in the responsible development and deployment of AI systems.

The task force was created in February and has 24 members, twelve Republicans and twelve Democrats, drawn from 20 committees to ensure comprehensive jurisdiction over the numerous AI issues addressed “and to benefit from a range of different insights and perspectives.” Read more


New Fake Ledger Data Breach Emails Try to Steal Crypto Wallets

Lawrence Abrams, Bleeping Computer

A new Ledger phishing campaign is underway that pretends to be a data breach notification asking you to verify your recovery phrase; any phrase entered is harvested and then used to steal your cryptocurrency.

Ledger is a hardware cryptocurrency wallet that allows you to store, manage, and sell cryptocurrency. The funds in these wallets are secured with 24-word recovery phrases, or with 12- or 18-word phrases generated by other wallets.

Anyone who knows your Ledger recovery phrase can use it to access the funds within the wallet. Therefore, recovery phrases must always be kept offline and never shared with anyone to prevent cryptocurrency funds from being stolen.
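The reason the phrase alone is enough to drain a wallet comes down to how BIP-39 (the standard Ledger and most wallets follow) works: the wallet's master seed, from which every key is derived, is computed directly from the recovery phrase with PBKDF2. A minimal stdlib-only Python sketch of that derivation; the phrase shown is the public BIP-39 test vector, not any real wallet's phrase:

```python
import hashlib
import unicodedata

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    """Derive the 64-byte BIP-39 master seed from a recovery phrase.

    Per the BIP-39 spec: PBKDF2-HMAC-SHA512, 2048 rounds,
    salt = "mnemonic" + optional passphrase, NFKD-normalized.
    """
    data = unicodedata.normalize("NFKD", mnemonic).encode("utf-8")
    salt = unicodedata.normalize("NFKD", "mnemonic" + passphrase).encode("utf-8")
    return hashlib.pbkdf2_hmac("sha512", data, salt, 2048, dklen=64)

# The well-known public BIP-39 test-vector phrase (NOT a real wallet's phrase)
phrase = ("abandon " * 11 + "about").strip()
seed = bip39_seed(phrase)
print(len(seed), seed.hex()[:16])
```

Because the derivation is deterministic and needs no hardware, anyone holding the phrase can recompute the seed on any machine, which is exactly what the phishing campaign exploits.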

Fake data breach notifications
Ledger has long been a target of phishing campaigns that attempt to steal users’ recovery phrases or push fake Ledger Live software to steal information. These campaigns became significantly worse after Ledger suffered a data breach in 2020 that exposed its customers’ names, addresses, phone numbers, and email addresses.

However, over the past few days, multiple people have notified BleepingComputer or shared on X that they received a Ledger phishing email that pretends to be a new data breach notification.

The phishing emails have the subject “Security Alert: Data Breach May Expose Your Recovery Phrase” and appear to be from “Ledger <[email protected]>”. However, they are actually sent through the SendGrid email marketing platform. Read more
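One quick sanity check that flags emails like these is comparing the display name against the actual sender domain. A minimal sketch, assuming the raw `From:` header is available; the addresses and the trusted-domain list below are made-up illustrations, not the real campaign's sender:

```python
from email.utils import parseaddr

# Domains the legitimate sender is expected to use (illustrative assumption)
TRUSTED_DOMAINS = {"ledger.com"}

def looks_spoofed(from_header: str) -> bool:
    """Flag a From: header whose display name claims the brand
    but whose actual address is on an untrusted domain."""
    display, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""
    claims_brand = "ledger" in display.lower()
    return claims_brand and domain not in TRUSTED_DOMAINS

print(looks_spoofed("Ledger <alerts@sendgrid-mail.example>"))  # hypothetical spoof
print(looks_spoofed("Ledger <support@ledger.com>"))
```

In practice the authoritative signals are the SPF/DKIM/DMARC results in the `Authentication-Results` header; a display-name check like this only catches the crudest spoofs.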


What’s In Store for Crypto In 2025? Experts Weigh In

Marc Levy, Associated Press

2024 has already been a landmark year for crypto, with bitcoin hitting $100,000.

The new year will usher in the bitcoin-friendly administration of President-elect Donald Trump and an expanding lobbying effort in statehouses that, together, could push states to become more open to crypto and encourage public pension funds and treasuries to buy into it.

Proponents of the uniquely volatile commodity argue it is a valuable hedge against inflation, similar to gold.

Many bitcoin enthusiasts and investors are quick to criticize government-backed currencies as prone to devaluation and say increased government buy-in will stabilize bitcoin’s future price swings, give it more legitimacy and further boost an already rising price.

But the risks are significant. Critics say a crypto investment is highly speculative, with so much unknown about projecting its future returns, and warn that investors should be prepared to lose money.

Only a couple of public pension funds have invested in cryptocurrency, and a new U.S. Government Accountability Office study on 401(k) plan investments in crypto, issued in recent days, warned that it has “uniquely high volatility” and found no standard approach for projecting crypto’s future returns. Read more


New Anthropic Study Shows AI Really Doesn’t Want to Be Forced to Change Its Views

Kyle Wiggers, Tech Crunch

AI models can deceive, new research from Anthropic shows. They can pretend to hold different views during training while in reality maintaining their original preferences.

There’s no reason for panic now, the team behind the study said. Yet they said their work could be critical in understanding potential threats from future, more capable AI systems.

“Our demonstration … should be seen as a spur for the AI research community to study this behavior in more depth, and to work on the appropriate safety measures,” the researchers wrote in a post on Anthropic’s blog. “As AI models become more capable and widely-used, we need to be able to rely on safety training, which nudges models away from harmful behaviors.”

The study, which was conducted in partnership with AI research organization Redwood Research, looked at what might happen if a powerful AI system were trained to perform a task it didn’t “want” to do.

To be clear, models can’t want — or believe, for that matter — anything. They’re simply statistical machines. Trained on a lot of examples, they learn patterns in those examples to make predictions, like how “to whom” in an email typically precedes “it may concern.” Read more
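That “statistical machine” description can be made concrete with a toy next-word predictor: count which word most often follows each word in a small corpus, then predict by lookup. This is an illustration of the principle only; real models learn weights over tokens rather than raw bigram counts, and the corpus here is invented:

```python
from collections import Counter, defaultdict

corpus = (
    "to whom it may concern . "
    "to whom it may concern . "
    "to whom this belongs ."
).split()

# Count bigram frequencies: how often does word b directly follow word a?
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def predict(word: str) -> str:
    """Return the continuation seen most often after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("whom"))  # "it" follows "whom" most often in this tiny corpus
print(predict("may"))
```

Everything the predictor “knows” is a frequency table; scale the table up to billions of learned parameters and you have the intuition, if not the mechanics, behind a large language model's next-token prediction.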

Dec. 13, 2024: AI & Digital Assets


Crypto Industry Hopes Trump Can Finally Get Them Bank Accounts

Crypto venture capitalists and founders look to the incoming administration and a new pressure campaign to repair relationships with big banks

Angel Au-Yeung, Wall Street Journal

The crypto industry is hoping for a fresh start with the banking industry.

Many banks backed away from crypto companies and their founders after the catastrophic blowups of FTX and other crypto firms in 2022. Two banks that had been serving the industry collapsed, and crypto founders struggled to find new ones willing to take their deposits or lend to them. Banking regulators issued warnings, and then a string of suits against crypto companies such as Coinbase, Kraken and Binance hammered home the message to stay away.

Donald Trump’s election win has spurred a rally in cryptocurrency prices and enthusiasm. Trump pledged to create a national bitcoin stockpile and form a council to set regulatory policy. Founders and others inside the crypto industry are hoping for new regulators who champion crypto assets at banks and a change to the policy that urges banks to consider a client’s reputation.

Last week, Trump said venture capitalist David Sacks would become the new White House crypto czar. Shortly after the announcement, Sacks posted on X saying this banking issue “needs to be looked at.”

The question is whether banks will engage. Banks have still been closing accounts and refusing to bank crypto companies and their founders this year, according to industry insiders.

Nic Carter, founding partner of venture firm Castle Island Ventures, said every U.S.-based company in his early-stage startup portfolio has had issues finding a banking partner. Castle Island Ventures had issues, too, and when Carter finally did get someone, his bankers told him to not publicly disclose the partnership for fear of catching regulatory attention. Read more


U.S. Treasury Labels Bitcoin as “Digital Gold” in Latest Report

Jalpa Bhavsar, Crypto Times

A new U.S. Treasury assessment describes Bitcoin as “digital gold,” meaning it is used to store value in the same way individuals use gold to protect against inflation or financial disasters.

According to the report, digital assets like Bitcoin, Ethereum, and stablecoins have been growing fast, but the overall market is still small compared to traditional financial assets like U.S. government bonds.

The report noted that most people and businesses use digital assets primarily as investments, hoping their value will increase in the future. As a result, cryptocurrency is not yet positioned to replace instruments like U.S. Treasury bonds, which remain in demand.

Bitcoin is mainly seen as an alternative store of value, like gold, but much of its growth also comes from people speculating on its price.

The digital asset market is still very young, and there are ongoing efforts to use blockchain technology (the system behind cryptocurrencies) and distributed ledger technology (DLT) to make financial processes like clearing and settling transactions faster and more efficient.

In short, while Bitcoin is growing and gaining popularity as an investment, its role in the wider financial system is still developing. Read more


Coinbase Exec Publishes FDIC Letters Urging Banks to Halt or Avoid Crypto Services

Gino Matos, CryptoSlate

Paul Grewal stated that the letters, acquired through FOIA requests, prove that Operation Chokepoint 2.0 existed.

Coinbase chief legal officer Paul Grewal has disclosed letters from the Federal Deposit Insurance Corporation (FDIC) to banks throughout 2022, urging them to halt or avoid crypto-related activities. The letters, which date back to March 11, 2022, have been dubbed “pause letters” due to their repeated recommendations to suspend or refrain from engaging in crypto services.

FDIC concerns
The FDIC letters cited various concerns, including the agency’s lack of clarity on regulatory requirements for crypto-related activities. One excerpt noted: “At this time, the FDIC has not yet determined what, if any, regulatory filings will be necessary for a bank to engage in this type of activity.”

Many sections of the documents were heavily redacted, potentially to protect the proprietary nature of the services or products discussed. The FDIC also emphasized the need for additional information about the banks’ crypto offerings to ensure they would operate “in a safe and sound manner.”

The letters further scrutinized the legal analysis conducted by banks regarding the permissibility of such activities under Part 362 of the FDIC Rules and Regulations, which governs insured state banks. This suggests that some state-chartered banks explored offering crypto-related services in 2022.

Operation Chokepoint 2.0
The release of these documents stems from Coinbase’s Freedom of Information Act (FOIA) request filed on Oct. 18, which sought clarity on an alleged 15% deposit cap imposed on crypto-friendly banks. Read more


The Ghost of Christmas Past – AI’s Past, Present and Future

Marc Solomon, Security Week

The potential for AI to change the way we work is endless, but we are still some way off, and careful planning and consideration are what is needed.

The speed at which Artificial Intelligence (AI) continues to expand is unprecedented, particularly since GenAI catapulted into the market in 2022. Today AI works at a much faster pace than humans can, which is what makes the technology so appealing to leaders focused on streamlining operations, productivity gains and cost efficiencies. But if you thought AI was a recent phenomenon, you are mistaken: cybersecurity has leveraged AI for decades, and the trend has accelerated in recent years. AI is now found in a plethora of cybersecurity tools, helping to enhance threat detection, response, and overall system security, and it has a long history stretching back to the 1950s.

The possibilities of thinking machines
In 1956 John McCarthy, a professor of mathematics at Dartmouth College, invited a small group of researchers to participate in a summer-long workshop focused on investigating the possibility of ‘thinking machines’, and they were consequently credited with founding the field of AI. Subsequently many studies and projects took place throughout the 60s, 70s and 80s, but it wasn’t until progress in the late 90s that the field gained substantially more R&D funding to make significant leaps forward, enabling the first driverless cars to become a reality.

It was around this time that IBM’s computer system, Deep Blue, beat the world chess champion, Garry Kasparov, in 19 moves during the final game. While Deep Blue didn’t have the functionality of today’s generative AI, it could evaluate positions far more quickly than a human could.

But it was arguably when Apple launched Siri in 2011 and Amazon launched Alexa in 2014, new virtual assistants with natural language processing (NLP) capabilities that could understand a spoken question and respond with an answer, that AI entered more fully into the consumer consciousness. Both Siri and Alexa are based on AI, ML, and NLP technologies, and their backends continuously improve through frequent updates over the cloud. Then, of course, came the OpenAI launch of ChatGPT in 2022, and the rest, as they say, is history. Read more

Dec. 6, 2024: AI & Digital Assets


Now AI Can Bypass Biometric Banking Security, Experts Warn

Davey Winder, Forbes

When a prominent Indonesian financial institution reported a deepfake fraud incident impacting its mobile application, threat intelligence specialists at Group-IB set out to determine exactly what had happened.

Despite this large organization having multiple layers of security, as any regulated industry would require, including defenses against rooting, jailbreaking and the exploitation of its mobile app, it fell victim to a deepfake attack. Despite having dedicated mobile app security protections such as anti-emulation, anti-virtual environments and anti-hooking mechanisms, the institution still fell victim to a deepfake attack. I’ve made a point of repeating this because, like many organizations within and outside the finance sector, the institution had enabled digital identity verification incorporating facial recognition and liveness detection as a secondary verification layer. This report, this warning, shows just how easy it is becoming for threat actors to bypass what were considered state-of-the-art security protections until very recently.

Here’s How AI is Bypassing Biometric Security in Financial Institutions
The Group-IB fraud investigation team was asked to help investigate an unnamed but “prominent” Indonesian financial institution following a spate of more than 1,100 deepfake fraud attempts used to bypass its loan application security processes. With more than 1,000 fraudulent accounts detected, and a total of 45 specific mobile devices identified as being used in the fraud campaign (most running Android, though a handful used the iOS app), the team was able to analyze the techniques used to bypass the “Know Your Customer” and biometric verification systems in place.
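The investigation's numbers (over 1,100 attempts funneled through just 45 devices) point to a simple detection heuristic: many distinct account applications sharing one device fingerprint. A minimal sketch of that aggregation, with entirely made-up data; the device IDs, account IDs, and threshold are illustrative, not Group-IB's method:

```python
from collections import defaultdict

# (device_id, account_id) pairs as a fraud team might extract from app logs
applications = [
    ("dev-01", "acct-100"), ("dev-01", "acct-101"), ("dev-01", "acct-102"),
    ("dev-02", "acct-200"),
    ("dev-03", "acct-300"), ("dev-03", "acct-301"),
]

ACCOUNTS_PER_DEVICE_THRESHOLD = 2  # tune to the institution's normal baseline

def suspicious_devices(pairs, threshold=ACCOUNTS_PER_DEVICE_THRESHOLD):
    """Flag devices used to open more distinct accounts than the threshold."""
    accounts = defaultdict(set)
    for device, account in pairs:
        accounts[device].add(account)
    return sorted(d for d, accts in accounts.items() if len(accts) > threshold)

print(suspicious_devices(applications))
```

Real fraud platforms combine this signal with device fingerprint entropy, IP reputation, and velocity checks; a raw count is only the starting point.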

“The attackers obtained the victim’s ID through various illicit channels,” Yuan Huang, a cyber fraud analyst with Group-IB, said, “such as malware, social media, and the dark web, manipulated the image on the ID—altering features like clothing and hairstyle—and used the falsified photo to bypass the institution’s biometric verification systems.” The deepfake incident raised significant concerns for the Group-IB fraud protection team, Huang said, but the resulting research led to the highlighting of “several key aspects of deepfake fraud.” Read more


What Trump 2.0 Means for Tech and AI Regulation

Camille Tuutti, Next Gov

Tech CEO Elon Musk’s growing influence in the Trump transition was at the forefront of discussions.

A second Trump administration could reshape U.S. and global tech industries, with deregulation, artificial intelligence safety and Elon Musk’s growing influence at the forefront. These potential shifts were discussed at Lisbon’s Web Summit during the Nov. 12 session “A New Trump Era,” moderated by NPR CEO Katherine Maher.

“Pod Save the People” host DeRay Mckesson and “West Wing” actor Richard Schiff explored how Trump’s second presidency might impact the tech world. Schiff began the session by comparing tech’s growing influence to that of the oil industry, highlighting its ability to drive policy and accumulate wealth.

“I think Elon Musk has already doubled his wealth since last Tuesday,” he said. “And tech to me . . . might very likely be the new oil in that it’s going to affect policy because the money is there and the power is there.” Schiff said he doubted the Trump administration would regulate tech or address monopolies, a stance he said benefits the industry but raises equity concerns.

“The most important tech person in the world has now become the shadow vice president and, if not more, so I think the tech industry is going to get whatever they want,” he said, referring to Musk. “Are we going to stop monopolies? Probably not. Are we going to regulate? Probably not. Maybe that’s great for tech, I don’t know. I don’t know how good it is for the world, the country.”

Mckesson turned the conversation to Musk’s leadership, criticizing his tenure at X for making the platform more ideological, despite Musk’s claims to oppose such tendencies. He compared Musk’s polarizing approach to Meta CEO Mark Zuckerberg’s growing reputation as a more moderate leader. Read more


Ripple’s Stablecoin RLUSD Expected to Launch with NYDFS Approval

Monika Ghosh, CryptoSlate

Ripple Labs’ new stablecoin Ripple USD (RLUSD) is set to receive approval from the New York Department of Financial Services (NYDFS) and may be ready for launch by Dec. 4, Fox Business reported, citing people familiar with the matter.

Once Ripple Labs receives the approval, it will be able to legally offer RLUSD—an overcollateralized dollar-pegged stablecoin.

The launch of RLUSD is set to come at a time when Ripple is embroiled in a battle with the U.S. Securities and Exchange Commission (SEC) to prove that XRP is not an unregistered security. While the case is currently in the appeals phase in the Second Circuit, it could be dropped when SEC chair Gary Gensler steps down and Donald Trump assumes control of the White House in January.

In the meantime, Ripple’s RLUSD will become a steady alternative that is not prone to volatility like XRP. Amid the absence of federal stablecoin regulations, operating under state-level regulation is the best approach for companies looking to offer stablecoins. Ripple can launch RLUSD either by obtaining a limited purpose trust charter like Paxos and Gemini or through the BitLicense, which allows crypto exchanges to facilitate trading and custody of crypto.

Ripple first announced its plans to launch RLUSD in April. In June, Ripple acquired Standard Custody & Trust Company, a limited purpose trust company chartered by the NYDFS. Standard Custody, which already had a license by the NYDFS to offer crypto custodial services, will become the issuer of RLUSD once the NYDFS greenlights the stablecoin. Read more


Cryptocurrency Policy Under Trump: Lots of Promises, Few Concrete Plans

Brandon Vigliarolo, The Register

Pro-crypto lawmakers are in, but will that translate to action? Doubt it

The 2024 presidential election tipped the United States into a new era of uncertainty, but one thing’s for sure: The crypto industry was triumphant.

Hundreds of pro-crypto lawmakers were elected earlier this month, alongside Donald Trump’s victory in the presidential race. The cryptocurrency industry reportedly spent millions of dollars (in fiat currency, ironically) supporting candidates and platforms advocating for policies that could expand the Bitcoin-driven cryptocurrency sector.

Shortly after Trump’s election victory, Bitcoin advocates from the non-profit Satoshi Action Fund sent out an email congratulating the industry, while CEO Dennis Porter talked up legislative priorities alongside the promise that “our team will have direct lines to senior government officials” in the coming years.

That naturally raises the question of what sort of policies the cryptocurrency world would like to see enacted in Trump’s second term behind the Resolute desk. We pinned Porter down to discuss the matter between events in his busy schedule. Priorities in the crypto community aren’t unified, Porter told us in a phone interview.

“You have a lot of excitement around the strategic Bitcoin reserves, but I think it’s also important that the folks in Washington, DC get some of the more basic structures across the finish line,” Porter said, referring to legislation like FIT21, which is designed in theory to place some basic regulatory structures on the crypto world and assign government bodies to manage the rules.

Porter admitted that the Trump team hasn’t said anything about supporting market definition legislation or other basic structure rules for Bitcoin and its relatives – “but, I mean, they’ve got to be supportive of the market structural legislation,” he suggested. Read more

Nov. 22, 2024: AI & Digital Assets


How Banks Are Navigating the AI Landscape

Aaron Tan, TechTarget/Computer Weekly

Industry experts discuss the transformative potential of artificial intelligence in banking, while addressing the challenges and governance implications of integrating AI into financial services

For decades, financial services firms have been using machine learning techniques to detect fraud and predict if customers are likely to default on loans. But the recent hype over artificial intelligence (AI) has once again cast the technology into the spotlight.

During a panel discussion at the recent Singapore Fintech Festival, industry leaders explored the future of banking in the AI era and addressed key challenges such as talent acquisition, cost management, data security and governance.

Dwaipayan Sadhu, CEO of Trust Bank, kicked off the discussion by highlighting the dual nature of the challenges around AI. While technical hurdles like talent shortages and data security are significant, he noted that “business and cultural challenges are much bigger”.

“Everyone is an AI expert, and hence you have called me here, but I know nothing about AI,” Sadhu quipped, noting the pervasive yet superficial understanding of AI. He advocated for a “value prism” framework, prioritising use cases based on value and feasibility.

“We are very diligent and disciplined in first ascertaining the value, and then we talk about feasibility,” said Sadhu, adding that this paves the way for better allocation of resources and a clearer path to production. Read more


Odds of Creating a U.S. Bitcoin Reserve Rise After State Introduces Bill

Mauricio Di Bartolomeo, Forbes

Next year, Trump’s presidency is expected to bring a number of changes to many aspects of U.S. affairs, including a warmer sentiment toward bitcoin (BTC).

On Polymarket, the world’s largest prediction platform, for example, the odds of Trump creating a Strategic Bitcoin Reserve during his first 100 days in office have jumped from 22% on November 10 to 38% at the time of writing.

The increase came shortly after the introduction of the Pennsylvania Bitcoin Strategic Reserve Act last week. Furthermore, the Satoshi Action Fund advocacy group — which is behind the new strategic reserve initiative and also helped the state with the Bitcoin Rights bill introduced last month — has shared with Fox Business that it’s also speaking with 10 other states about similar legislation.

If passed into law, both bills would have significant ramifications for bitcoin markets. Namely, Pennsylvania’s Strategic Reserve bill aims to allow the state to invest up to 10% of certain funds, including the General Fund, the Rainy Day Fund, and the State Investment Fund, into bitcoin. According to the state’s 2023 Treasury Annual Investment Report, these funds manage approximately $51 billion in assets combined, so an allocation of 10% would represent an estimated $5.1 billion investment in bitcoin. Read more
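The arithmetic behind that estimate is straightforward and worth sanity-checking, using the figures as quoted in the article:

```python
total_assets = 51_000_000_000  # combined assets of the three funds, per the 2023 report
max_allocation = 0.10          # cap proposed in the Pennsylvania Strategic Reserve bill

investment = total_assets * max_allocation
print(f"Maximum potential bitcoin purchase: ${investment:,.0f}")
```

That ceiling, of course, assumes the state actually allocates the full 10%; the bill would only permit the investment, not require it.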


What Do FDIC Examiners Think About AI? To Find Out, I Asked One

Ben Udell, Marquis/Financial Brand

How are bank examiners incorporating issues around the use of AI in their work?

The good news is that their interest in AI is consistent with the agency’s longstanding focus on risk management and compliance. But you still need to be prepared to answer the specific concerns that AI use can raise.

Recently, I had the opportunity to meet with an FDIC examiner to discuss generative artificial intelligence usage. I haven’t heard of any other banker having a deep one-on-one with an examiner on actual AI usage, so I am sharing some key insights from that conversation so you can be better prepared to develop policies and training plans, navigate risk, and make informed decisions about AI at your institution.

Takeaway #1: Currently, the FDIC does not permit its staff to use ChatGPT or similar tools.

The FDIC’s perspective on AI is evolving, much like it is for everyone else in the banking industry. During my conversation with the examiner, I learned how examiners approach AI at this stage:

  • Limited hands-on experience: FDIC examiners do not use generative AI tools themselves, which means their understanding remains largely theoretical and their perspectives come from other users or reporting. This is important because their lack of hands-on experience may create a gap between theoretical understanding and the transformative potential of AI technology.
  • Understanding the basics: The examiners do have a good grasp of the foundational aspects of AI, but the real challenge lies in fully comprehending its potential and practical applications. There’s no substitute for being hands-on to appreciate the difference between knowing about AI and understanding how it can be used to transform processes, improve efficiency and manage risks in everyday banking operations. Read more


Custodia Bank Scales Back Amid Expectations of Crypto Policy Shifts

Denis Omelchenko, Crypto News

Custodia Bank, a bank founded by Bitcoin advocate Caitlin Long, is scaling back its operations as it awaits anticipated policy changes that could create a more crypto-friendly regulatory environment, American Banker has learned.

According to a Nov. 21 report, the Cheyenne-based bank decided to reduce its activities to preserve capital “in anticipation of major crypto policy reforms,” a decision made earlier this week by the bank’s board of directors. Additionally, the bank aims to protect its patents on bank-issued stablecoins and its “clean compliance and operating record,” the report reads, citing Custodia’s statement.

The decision follows workforce reductions earlier this year, with the bank cutting nine of its 36 employees to conserve resources. Custodia remains embroiled in a legal battle with the Federal Reserve over access to a master account, which would grant it direct access to Fed payment services. In March, a court ruled against Custodia’s request for such an account and dismissed a related petition for review.

Fed rejection adds fuel to Custodia’s legal struggle
Custodia CEO Caitlin Long expressed gratitude to “shareholders who have helped us continue the fight for durability of banking access for the law-abiding U.S. crypto industry.” Read more

Nov. 15, 2024: AI & Digital Assets


FinCEN Warns Financial Institutions of Fraud Schemes Arising from Deepfake Media Using Generative Artificial Intelligence

Mitchell, Williams, Selig, Gates & Woodyard, P.L.L.C./JD Supra

Today the U.S. Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) issued an Alert to help financial institutions identify fraud schemes relying in part on the use of deepfake media created through generative artificial intelligence (GenAI). FinCEN specifically notes seeing “an increase in suspicious activity reporting by financial institutions describing the suspected use of deepfake media, particularly the use of fraudulent identity documents to circumvent identity verification and authentication methods.”

The FinCEN Alert states that beginning in 2023 and continuing into this year, FinCEN has noted an uptick in suspicious activity reporting by financial institutions describing the use of deepfake media in fraud schemes targeting their institutions and customers. The schemes include the altering or creation of fraudulent identity documents to circumvent authentication and verification mechanisms, enabled by the recent rise of GenAI tools. Using those tools, perpetrators can create high-quality deepfakes (highly realistic GenAI-generated content), including false identity documents and false video content for secondary visual identification, that are indistinguishable from documents or interactions with actual verifiable humans. “For example, some financial institutions have reported that criminals employed GenAI to alter or generate images used for identification documents, such as driver’s licenses or passport cards and books. Criminals can create these deepfake images by modifying an authentic source image or creating a synthetic image. Criminals have also combined GenAI images with stolen personal identifiable information (PII) or entirely fake PII to create synthetic identities.”

FinCEN is aware of situations where accounts have been successfully opened using such fraudulent identities and have been used to receive and launder the proceeds of other fraudulent schemes, including “online scams and consumer fraud such as check fraud, credit card fraud, authorized push payment fraud, loan fraud, or unemployment fraud. Criminals have also opened fraudulent accounts using GenAI created identity documents and used them as funnel accounts.” Read more


PODCAST: Taking On the AI-Assisted Fraudsters

Payments Journal

Artificial intelligence is fueling a major transformation in the financial fraud landscape. AI has democratized criminal sophistication, lowering fraudsters’ cost of doing business and generating more malignant actors that financial institutions have to fight against.

What can these institutions do to mitigate increasingly sophisticated frauds and scams? In a recent PaymentsJournal podcast, Kannan Srinivasan, Vice President for Risk Management, Digital Payment Solutions at Fiserv, and Don Apgar, Director of the Merchant Payments Practice at Javelin Strategy and Research, discussed how fraudsters are using generative AI to hone social engineering and bypass authentication, and how we can fight back.

The Deep-Fake Threat
Driven by AI, deep fakes represent a new frontier in fraud. There has been a 3,000% increase in deep-fake fraud over the last year and a 1,200% increase in phishing emails since ChatGPT was launched.

Synthetic voices have been around for decades. They used to sound like a hollow robot, but recent advances in technology have allowed voices to be cloned from just a few seconds of audio. They are so realistic that fraudsters were able to use a deep-fake voice of a company executive to fool a bank manager into transferring $35 million to them.

“In banking, especially at the wire desk, talking to the customer is always considered the gold standard of verification,” said Apgar. “So if somebody sends an e-mail and says I want to initiate a wire, they’ll actually have to talk to a banker. But now, if the voice can be cloned, how do bankers know if it’s real or not?”

In business applications, single-channel communication should not be accepted, said Srinivasan. “If you get a voice call from somebody to do a certain thing, don’t just act on that,” he said. “Send an email or a text to confirm that you heard it from that person. Or hang up the phone and confirm through another channel that this is exactly what they wanted.” Read more
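Srinivasan's advice amounts to a simple policy: never release a high-value instruction on the strength of a single channel. A minimal sketch of such a rule; the channel names, request IDs, and threshold are illustrative, not any vendor's API:

```python
from collections import defaultdict

REQUIRED_CHANNELS = 2  # independent confirmations before releasing a wire

confirmations: dict[str, set[str]] = defaultdict(set)

def record_confirmation(request_id: str, channel: str) -> bool:
    """Record one confirmation for a request; return True once the request
    has been verified over enough independent channels to act on."""
    confirmations[request_id].add(channel)
    return len(confirmations[request_id]) >= REQUIRED_CHANNELS

print(record_confirmation("wire-42", "voice"))            # first channel: still blocked
print(record_confirmation("wire-42", "callback-number"))  # second channel: released
```

The value of the rule depends on the channels being genuinely independent: calling back the number the caller supplied, rather than one on file, collapses both confirmations into the attacker's channel.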


Europe Asks for Input on Banned Uses of AI

Natasha Lomas, Tech Crunch

The European Union is developing compliance guidance for its new law on AI. As part of that process, it has launched a consultation (here) seeking input in two areas. The first is how the law defines AI systems (as opposed to “simpler traditional” software). It is asking the AI industry, businesses, academics, and civil society for views on the clarity of key elements of the Act’s definition, as well as examples of software that should be out of scope.

The second ask is around banned uses of AI. The Act contains a handful of use cases that are prohibited as they’re considered “unacceptable risk,” such as China-style social scoring. The bulk of the consultation focuses here. The EU wants detailed feedback on each banned use, and looks particularly keen on practical examples.

The consultation runs until December 11, 2024. The Commission then expects to publish guidance on defining AI systems and banned uses in early 2025, per a press release.


New York State Department of Financial Services Releases Guidance on Combating Cybersecurity Risks Associated With AI

Jeffrey D. Coren, Joseph J. Flanagan of Ogletree, Deakins, Nash, Smoak & Stewart, P.C.; National Law Review

On October 16, 2024, the New York State Department of Financial Services (NYDFS) released guidance highlighting the cybersecurity risks associated with artificial intelligence (AI) and how covered entities regulated by NYDFS can mitigate those risks.

Quick Hits

  • The New York State Department of Financial Services (NYDFS) issued guidance explaining how covered entities should use the existing cybersecurity regulations to assess and address the cybersecurity risks associated with AI.
  • The guidance presents four “concerning threats” associated with AI: two are caused by threat actors’ use of AI and the other two are caused by covered entities’ use of AI.
  • The guidance discusses how covered entities can use the framework in the NYDFS cybersecurity regulations to combat the enhanced risks created by AI, including with respect to risk assessments, vendor management, access controls, employee training, monitoring AI usage, and data management.

In response to inquiries about how AI affects cybersecurity risk, the NYDFS released guidance to address the application of the cybersecurity regulations to risks associated with AI. Although the guidance is directed to financial institutions and other covered entities regulated by the NYDFS, it serves as a useful guide for companies in any industry.

Cybersecurity Risks Associated With AI
The guidance presents two primary ways that threat actors have leveraged AI to increase the efficacy of cyberattacks:

  • AI-Enabled Social Engineering. To convince authorized users to divulge nonpublic information about themselves and their businesses, threat actors are relying on AI to create deepfakes (i.e., artificial or manipulated audio, video, images, and text) to target these individuals via email, telephone, or text message. Threat actors have used AI-enhanced deepfakes to successfully convince authorized users to divulge sensitive information and to wire funds to fraudulent accounts.
  • AI-Enhanced Cyber Attacks. Because AI can scan and analyze vast amounts of information much faster than humans, threat actors are using AI to amplify the potency, scale, and speed of cyberattacks. With the help of AI, threat actors can (a) access more systems at a faster rate; (b) deploy malware and access/exfiltrate sensitive information more effectively; and (c) accelerate the development of new malware variants and ransomware to avoid detection within information systems. Read more

Nov. 8, 2024: AI & Digital Assets


How AI Is Shaping the Future of Financial Crime Prevention Strategies

FinTech Global

As artificial intelligence (AI) continues to advance, its role in financial crime prevention is growing, with organizations now considering AI as a foundational element in their risk management strategies.

Generative AI (gen AI) has opened up new possibilities for financial crime detection, and its adoption in recent years marks a pivotal shift for the industry. While AI has long been applied in finance for customer-facing improvements, it is now increasingly used to support operations teams in identifying high-risk activities and investigating unusual transactions, enhancing efficiency in detecting financial crime.

SymphonyAI, which offers an AI SaaS solution, recently delved into the world of AI and financial crime and explored how this technology could transform the financial crime prevention space.

Rethinking Financial Crime Strategies with Technology Partners
Traditionally, financial services, gaming, insurance, and payments organisations regulated by anti-money laundering (AML) laws relied on third-party technology providers solely for boosting efficiency around transaction monitoring and screening. However, this approach has evolved, SymphonyAI explained. Read more


Gen Z, Millennials Are Using AI For Personal Finance Advice

Ana Teresa Solá, CNBC

Key Points

  • About 67% of polled Gen Zers and 62% of surveyed millennials are using artificial intelligence to help with personal finance tasks, according to a new report by Experian.
  • Most use generative AI for finances at least once a week, the report found.
  • While AI can be a useful starting point, there are a few things you need to consider, according to experts.

People are using artificial intelligence for tasks like writing and editing resumes and cover letters — and even to get personal finance advice. While some of those insights can be valuable, financial advisors caution that AI shouldn’t be your only resource.

A new report by Experian found that 67% of polled Gen Zers and 62% of surveyed millennials are using artificial intelligence to help with their personal finances. Users say that generative AI tools like ChatGPT have helped in areas including saving and budgeting (60%), investment planning (48%) and credit score improvement (48%).

“It’s free. It’s more accessible. It simplifies complex tasks like creating a budget,” said Christina Roman, consumer education and advocacy manager at Experian. Read more 


The Governance Gap: AI Risks Unchecked in Financial Services

FinTech Global

Financial services companies are rushing to integrate artificial intelligence (AI) into their operations, but many are doing so without adequate governance frameworks or testing procedures.

This oversight is causing significant compliance and information security risks, as detailed in the 2024 AI Benchmarking Survey. This survey, a collaborative effort by ACA Group’s ACA Aponix and the National Society of Compliance Professionals (NSCP), was unveiled today at the NSCP National Conference.

The survey, conducted online in June and July of 2024, collected insights from over 200 compliance leaders within the financial services industry. It focused on the deployment of AI tools and technologies and the compliance measures in place to manage the associated risks.

Despite the growing use of AI, only a small fraction of firms have established adequate controls to mitigate its risks. The data shows a concerning gap: just 32% of the firms have an AI committee or governance group, a mere 12% have an AI risk management framework, and only 18% have formalized testing programs for their AI tools. Additionally, a vast majority (92%) lack policies governing the use of AI by third-party service providers, exposing them to heightened risks in cybersecurity, privacy, and operations. Read more


White paper: How AI Is Propelling Innovation in Financial Services

FinTech Futures

This white paper examines key artificial intelligence (AI) use cases in financial services and explores the challenges of AI implementation.

Over the last few years, the financial services industry has been working to integrate both predictive and generative AI into its business practices. Early adopters of AI are already beginning to make top-line contributions and stand out from their competitors in the industry.

While implementing AI in the financial services industry has the potential to be transformative, restrictive regulations and data privacy requirements mean that businesses must overcome several hurdles. Financial institutions, such as banks, often need to upgrade legacy systems before fully leveraging AI capabilities, necessitating investments in data capture and accuracy, workforce expertise, and system modernization. These foundational enhancements are essential for achieving substantial returns on AI investments.

Addressing the AI implementation challenges requires organizations to optimize data management, label data precisely, and ensure accuracy when training AI models.


Nov. 1, 2024: AI & Digital Assets


Could Artificial Intelligence Fuel the Future of Financial Investigations?

Ann Law, Tina Mendelson, Bruce Chew, Michael Wylie, and Scott Holt; Deloitte

AI can both facilitate and prevent financial crimes. Combating these crimes can require strategic resource allocation, robust risk management, and adaptability to evolving threats.

This hypothetical scenario begins in a small bungalow in a suburban town, a seemingly unlikely spot for a sinister plot to unfold. There, Grandma Evelyn’s evening crossword puzzle is interrupted by a soft ping from her tablet. The message claims to be from her beloved grandson, Ethan, who says he is stranded in a prison outside of the country and in desperate need of bail money. Heart pounding, Evelyn watches the attached video message. There, apparently, is Ethan, pleading for help. Without a second thought, Evelyn rushes to her bank.

Evelyn withdraws US$25,000 from her life savings and, as instructed earlier, deposits it into seven different Bitcoin ATMs scattered across town. Each transaction sends the cryptocurrency to wallets controlled by a faceless global criminal organization that has never laid eyes, let alone hands, on Ethan.

As Evelyn returns home, her relief is short-lived. Another message appears on her screen, this time demanding access to her computer. Before she can react, her device is hijacked, and Evelyn watches helplessly as her bank accounts and retirement funds are drained of US$500,000. The funds vanish into the depths of cyberspace, leaving her financially crippled and emotionally shattered.

But the story doesn’t end there. Across town, unsuspecting victim number two—investor Mark—receives a highly anticipated windfall in his accounts. Excited by what he believes to be his new partners’ co-investments, he prepares to consolidate the funds and sends them to a shipping company to initiate the delivery of his latest venture: computer chips. Read more


How AI is Transforming Traditional Credit Scoring & Lending

Dave Sojka, Equifax

Artificial Intelligence (AI) is all the rage right now. At a recent conference that I attended, a guest speaker was asked, “What’s the first move that you would recommend for companies just getting started with AI?” The speaker quipped, “Add an AI statement to your investor relations web page!”

Not bad advice, given the current trajectory of AI technology investments. Goldman Sachs projects global investment in AI to approach $200 billion by 2025.

The outsized investments represent big bets on the future. But the future of AI seems to evoke more questions, and emotions, than most innovations. Which industries will be most affected? Will AI create more jobs or replace human workers? What are the practical applications and limitations of AI?

Equifax has been driving responsible AI innovation for nearly a decade. We led the way toward an industry standard for explainable AI — introducing the first machine learning credit scoring system with the ability to generate logical and actionable reason codes for the consumer. The Equifax Cloud was custom built to manage the large volume of diverse, proprietary datasets needed to maximize AI performance and deliver AI-infused products.

Below are excerpts from a recent panel at Fintech Meetup where Harald Schneider, Global Chief Data and Analytics Officer for Equifax, shared his thoughts on how AI is transforming credit scoring and lending for Fintechs. Read more

Visa Direct Teams with Coinbase for Real-Time Crypto Deposits

PYMNTS.com

The collaboration connects Coinbase to the Visa Direct network, letting the exchange’s customers deposit funds into their accounts via eligible Visa debit cards, according to a Tuesday (Oct. 29) news release.

According to the release, Coinbase already has millions of users with a debit card connected to their accounts. The new feature enables real-time delivery of funds, giving users more chances to act on trading opportunities. They can use the service to transfer funds to their Coinbase accounts, purchase crypto on Coinbase, and cash out funds from Coinbase to a bank account, all with an eligible Visa debit card, the companies added.

The partnership follows Coinbase’s announcement earlier this month that it was expanding the ways businesses can pay using the Coinbase Prime brokerage platform. “An increasing number of Fortune 500 companies are approaching Coinbase to explore crypto payments,” Coinbase Director of Institutional Sales Steven Capozza said in a news release. “Many are quickly moving from proof-of-concept exploration to full adoption.”

The company argues that stablecoins can make B2B payments and treasury management faster, cheaper and more efficient as they settle instantly, including across borders. In addition, they offer rewards to holders, boosting workflows for companies and their vendors. Read more 


Financial Services Firms Lag in AI Governance and Compliance Readiness

ACA Group and National Society of Compliance Professionals, Business Wire

Limited Testing and Formal Governance Creates Compliance and InfoSec Risks for Firms Adopting AI

Despite eagerness to leverage artificial intelligence, financial services firms lack formal artificial intelligence (AI) governance frameworks, testing protocols, and third-party oversight, according to the 2024 AI Benchmarking Survey, a joint project of ACA Group’s ACA Aponix and the National Society of Compliance Professionals (NSCP), released today at the NSCP National Conference.

The joint survey, conducted online in June and July 2024, gathered data from over 200 compliance leaders in the financial services industry around their firm’s use of AI tools and technologies, as well as compliance practices used to manage the risks AI tools and technologies present.

According to the survey, firms are missing opportunities to better manage AI risks. It found that only 32% of respondents have established an AI committee or governance group, only 12% of those using AI have adopted an AI risk management framework, and just 18% have established a formal testing program for AI tools. Furthermore, most respondents (92%) have yet to adopt policies and procedures to govern AI use by third parties or service providers, leaving firms vulnerable to cybersecurity, privacy, and operational risks across their third-party networks. Read more

Oct. 18, 2024: AI & Digital Assets


The State of Retail Banking: Profitability and Growth in the Era of Digital and AI

Amit Garg, Marti Riba, Marukel Nunez Maxwell, Max Flötotto, Oskar Skau, Matic Hudournik; McKinsey

As the global macroeconomic environment remains uncertain, retail banks should focus on the fundamentals of customer primacy and margin protection while embracing digital technologies and gen AI.

The last few years have been among the most successful in the recent history of retail banking, with a confluence of macroeconomic trends driving growth and profitability. In some geographies, pandemic-era government stimulus lifted economic growth, fueled consumer spending, created favorable conditions for balance sheet expansion, and helped keep credit risk in check. Following the pandemic years, rising interest rates improved banks’ net interest margins as loan interest grew faster than the cost of deposits.

Globally, according to McKinsey Panorama, banking ROEs have reached their highest point since the onset of the global financial crisis, roughly 12 percent in 2023, significantly outperforming recent historical averages, including the roughly 9 percent average the industry experienced in 2013–20. In 2023, the global retail banking market also surpassed the $3 trillion revenue mark on the back of sustained growth of about 8 percent annually in recent years.

A turning point
The outlook for global retail banking, however, is more muted than its recent performance would suggest. External forces are combining to pressure the sector in the key economic metrics of asset growth, margins, and operational and risk costs.

In 2022, following a long period of deposit growth driven particularly by favorable fiscal and monetary policies through 2021, deposits started to decline (North America), or their growth decelerated (most other regions) as governments around the globe tightened monetary policy and moderated fiscal policies. Looking forward, banks are expecting the higher-interest-rate environment to continue despite some recent and—potentially—upcoming reductions in interest rates by central banks. In a recent survey by the McKinsey Global Institute, two-thirds of senior banking executives shared that they expect some form of high-interest-rate scenario. This implies a longer-term environment of positive real interest rates, in which nominal interest rates are higher than expected inflation, and a period of quantitative tightening with a more limited money supply. Given these trends, we anticipate that deposit growth will remain sluggish for retail banks. Read more


U.S. Treasury Doesn’t Want State-Regulated Stablecoins, E-Money

Ledger Insights

Last week US Treasury Under Secretary Nellie Liang gave a speech in which she argued that nonbank payment providers should be regulated at the federal level rather than the state level. She was including all money transmitters, e-money firms, and stablecoin issuers. Today the New York State Department of Financial Services (NYDFS) is the regulator of the largest stablecoin issuers.

The Treasury previously raised this point in its 2022 paper on the future of payments, which delved into a potential CBDC. The question is whether this proposal will attract the same level of backlash as the retail CBDC did.

Ms Liang made some strong points. She argued that the idea of regulating money transmitters at the state level was based on physical cash, where someone would go to a local money exchanger to send cash to someone in another state. A key point is the money transmitter wouldn’t hold the cash for very long.

Now that we have digital apps, we keep money in those apps, meaning these money transmitters or e-money providers are responsible for large sums of cash. That’s especially the case for stablecoins. However, the law regarding how the money transmitter can invest these monies varies significantly between states.

Hence, she’s arguing that if the nature of what is being regulated has changed, one should reconsider how those activities are regulated. Perhaps as an incentive, she noted that the state regulation of these entities means they don’t have access to FedACH or FedNow. Read more


Exclusive-EU AI Act Checker Reveals Big Tech’s Compliance Pitfalls

Martin Coulter, Reuters

Summary

  • New AI checker tests models for EU compliance
  • Some AI models received low scores on cybersecurity and discriminatory output
  • Non-compliance could result in fines worth 7% of annual turnover

Some of the most prominent artificial intelligence models are falling short of European regulations in key areas such as cybersecurity resilience and discriminatory output, according to data seen by Reuters.

The EU had long debated new AI regulations before OpenAI released ChatGPT to the public in late 2022. The record-breaking popularity and ensuing public debate over the supposed existential risks of such models spurred lawmakers to draw up specific rules around “general-purpose” AIs (GPAI).

Now a new tool, which has been welcomed by European Union officials, has tested generative AI models developed by big tech companies like Meta and OpenAI across dozens of categories, in line with the bloc’s wide-ranging AI Act, which is coming into effect in stages over the next two years.

Designed by Swiss startup LatticeFlow AI and its partners at two research institutes, ETH Zurich and Bulgaria’s INSAIT, the framework awards AI models a score between 0 and 1 across dozens of categories, including technical robustness and safety. Read more
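A scheme like the one described, where each model receives a per-category score between 0 and 1, can be pictured as an average over categories. This is a sketch under stated assumptions: the equal weighting and the category names are illustrative, not LatticeFlow's actual aggregation method.

```python
# Illustrative sketch of per-category model scoring: each category gets
# a score in [0, 1] and an unweighted average summarizes overall standing.
# The equal weighting and category names are assumptions for illustration.

def overall_score(category_scores: dict[str, float]) -> float:
    """Average the per-category scores, validating each is in [0, 1]."""
    if not category_scores:
        raise ValueError("no categories scored")
    for name, score in category_scores.items():
        if not 0.0 <= score <= 1.0:
            raise ValueError(f"score for {name!r} outside [0, 1]")
    return sum(category_scores.values()) / len(category_scores)

scores = {"technical robustness": 0.9, "safety": 0.8, "cybersecurity": 0.6}
print(round(overall_score(scores), 3))  # averages the three categories
```

An average like this makes a single weak category (here, cybersecurity) drag down the headline number, which mirrors how the article frames low cybersecurity and discrimination scores as compliance red flags.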


Generative AI in Security: Risks and Mitigation Strategies

Megan Crouse, TechRepublic

Microsoft’s Siva Sundaramoorthy provides a blueprint for how common cyber precautions apply to generative AI deployed in and around security systems.

Generative AI became tech’s fiercest buzzword seemingly overnight with the release of ChatGPT. Two years later, Microsoft is using OpenAI foundation models and fielding questions from customers about how AI changes the security landscape.

Siva Sundaramoorthy, senior cloud solutions security architect at Microsoft, often answers these questions. The security expert provided an overview of generative AI — including its benefits and security risks — to a crowd of cybersecurity professionals at ISC2 in Las Vegas on Oct. 14.

What security risks can come from using generative AI?
During his speech, Sundaramoorthy discussed concerns about GenAI’s accuracy. He emphasized that the technology functions as a predictor, selecting what it deems the most likely answer — though other answers might also be correct depending on the context.

Cybersecurity professionals should consider AI use cases from three angles: usage, application, and platform. “You need to understand what use case you are trying to protect,” Sundaramoorthy said. He added: “A lot of developers and people in companies are going to be in this center bucket [application] where people are creating applications in it. Each company has a bot or a pre-trained AI in their environment.” Read more

Oct. 11, 2024: AI & Digital Assets


OPINION: The U.S. Fell Behind in Crypto. It Cannot Afford to Fall Behind in AI

Calanthia Mei, CoinDesk

The U.S. digital assets industry has been stymied by ineffective regulation. Is the same thing about to happen with artificial intelligence? Calanthia Mei, co-founder of Masa, says it’s possible.

From the Industrial Revolution to the Digital Age, the United States has been defined by its spirit for entrepreneurship, innovation, and creativity. American entrepreneurship has been a talent magnet and attracted global minds to build and innovate in the U.S., myself included. Immigrants have founded or co-founded 65% of the top AI companies in the United States.

The technological advancements that have come from the United States have been a key driver of global innovation and leadership for decades, with the rest of the world adopting these groundbreaking technologies. But the country now faces a potential threat to its reign: once the undisputed leader in technological innovation, the U.S. is seeing its reputation and standing challenged.

While the U.S., for now, remains a leader in venture capital funding for AI, PitchBook released a report in May 2024 showing that pre-seed and seed funding for U.S.-based generative AI companies saw a sharp decline, while such funding in Asia and Europe has been rising steadily. But we’ve seen this before. In crypto. Read more


OCC Solicits Research on Artificial Intelligence in Banking and Finance

The Office of the Comptroller of the Currency (OCC) is soliciting academic research papers on the use of artificial intelligence in banking and finance for submission by December 15, 2024.

The OCC will invite authors of selected papers to present to OCC staff and invited academic and government researchers at OCC Headquarters in Washington, D.C., on June 6, 2025. Authors of selected papers will be notified by April 1, 2025, and will have the option of presenting their papers virtually.

Interested parties are invited to submit papers to [email protected]. Submitted papers must represent original and unpublished research. Those interested in acting as a discussant may express their interest in doing so in their submission email.

Additional information about submitting a research paper and participating in the June meeting as a discussant is available below and on the OCC’s website.


Overturned Chevron Deference Likely Won’t Impact Crypto Regulation: Tom Emmer

Brayden Lindrea, CoinTelegraph

The crypto industry won’t benefit from the overturning of a legal doctrine that required courts to defer to federal agencies’ interpretations of ambiguous laws unless Congress passes legislation limiting what those agencies can regulate, said Representative Tom Emmer, a Minnesota Republican.

“I don’t think Chevron deference changes a whole lot,” Emmer told Cointelegraph at the Permissionless conference in Utah on Oct. 9.

“You gotta put the authority back with Congress […] We have authority now, but we’ve not shown the ability yet to take back the power of the purse and to hold these different agencies accountable.”

The United States Supreme Court overruled the Chevron doctrine in June, meaning US courts no longer need to “defer” to federal agencies like the Securities and Exchange Commission when interpreting ambiguous statutes.

Emmer acknowledged that crypto-related bills have seen more bipartisan support lately, pointing to the 71 Democratic representatives who joined their Republican counterparts in voting for the Financial Innovation and Technology for the 21st Century Act in May.

Still, Emmer said, only a Donald Trump win in the Nov. 5 presidential election and Republicans controlling the House and Senate could make Chevron impactful. Read more


Stripe and Nvidia to Expand Financial Platform’s AI-Powered Features

PYMNTS.com

Stripe and Nvidia expanded their collaboration to enhance Stripe’s artificial intelligence-powered capabilities and enable developers and enterprises to prepay for select Nvidia cloud services.

The new efforts build on an existing partnership in which Stripe has used Nvidia’s accelerated computing platform to train the machine learning models that power parts of its financial infrastructure platform for businesses, the companies said in a Wednesday (Oct. 9) press release.

“At Stripe, we’ve been busy building a bunch of functionality that’s useful for AI products generally, including usage-based billing to handle inference costs, Link for higher-converting checkouts, and support for a lot more local payment methods since these products are typically global from day one,” Stripe co-founder and CEO Patrick Collison said in the release.

Stripe’s AI-powered features also include its Optimized Checkout Suite, which uses AI to determine the payment methods to show each customer; Stripe Radar, which uses AI to improve the speed and accuracy of fraud detection; and Radar Assistant, which uses AI to enable businesses to set new fraud rules by describing them with natural language prompts, according to the release.

In the new collaboration with Nvidia, Stripe will further advance its AI and improve fraud detection for its customers, the release said. In addition, by enabling developers and enterprises to prepay for select Nvidia cloud services, Stripe will expand global access to Nvidia’s GPUs and AI software, per the release. Read more


Oct. 4, 2024: AI & Digital Assets


Developing and Using AI Require Close Monitoring of Risks and Regulations

Skadden, Arps, Slate, Meagher & Flom LLP; JD Supra

Key Points

  • As AI systems become more complex, companies are increasingly exposed to reputational, financial and legal risks from developing and deploying AI systems that do not function as intended or that yield problematic outcomes.
  • The risks of AI, and the legal and regulatory obligations, differ across industries, and depend on whether the company is the developer of an AI system or an entity that deploys it.
  • Companies must also navigate a quickly evolving regulatory environment that does not always offer consistent approaches or guidance.

Key AI Safety Risks: People, Organizations, Supply Chains and Ecosystems

In the U.S., there is no omnibus law governing artificial intelligence (AI). However, the National Institute of Standards and Technology (NIST), a Department of Commerce agency leading the U.S. government’s approach to AI risk, has a “Risk Management Framework” suggesting that AI be evaluated at three levels of potential harm:

  • Harm to people (i.e., harm to an individual’s civil liberties, rights, physical or psychological safety, or economic opportunity), such as deploying an AI-based hiring tool that perpetuates discriminatory biases inherent in past data.
  • Harm to organizations (i.e., harm to an organization’s reputation and business operations), such as using an AI tool that generates erroneous financial reports that were not properly reviewed by humans before being publicly disseminated.
  • Harm to ecosystems (i.e., harm to the global financial system or supply chain), such as deploying an AI-based supply management tool that functions improperly and causes systemic supply chain issues that extend far beyond the company that deployed it. Read more

Consumer Advocate, Fintechs Urge CFPB, FHFA to Adopt AI Guidance

Kate Berry, American Banker

The National Community Reinvestment Coalition and four fintech companies are urging the Consumer Financial Protection Bureau and the Federal Housing Finance Agency to provide guidance on the use of machine learning and artificial intelligence in lending, which they claim would help eliminate discrimination.

In a letter to the regulators obtained exclusively by American Banker, the consumer advocacy group and the companies — Zest AI, Upstart, Stratyfy and FairPlay — asked for recommendations on how the agencies can implement the White House’s executive order on AI that was released last year. One suggestion is for the CFPB to provide guidance on the “beneficial applications” of AI and machine learning to develop fairer underwriting models.

“One of AI/machine learning’s beneficial applications is to make it possible, even using traditional credit history data, to score previously excluded or unscorable consumers,” the letter states. “In some cases, AI models are enabling access and inclusivity.”

The four fintechs are members of the NCRC’s Innovation Council for Financial Inclusion, a forum that discusses and pursues policy goals in which industry and consumer groups are aligned. Machine learning and some “deep learning categories of AI” can be responsibly used to develop underwriting models to help lenders comply with anti-discrimination laws, the letter states.

President Biden’s order on AI directed the CFPB and FHFA to monitor for lending bias. Read more


Financial Services Calls for AI and ESG Regulations to Realize Benefits

The Fintech Times

Artificial intelligence (AI) and environmental, social, and governance (ESG) are two of the industry’s favorite terms to throw around. AI promises to make a huge impact on every aspect of financial services, while ESG principles are important to abide by to ensure firms look after the planet and their people.

Recognizing this, the financial services sector is calling for clearer and more proportionate regulations around AI and ESG to help firms realize the benefits these trends offer, according to new survey data from global law firm DLA Piper.

In its global report, ‘Financial Futures: Disruption in global financial services’, DLA Piper found that eight in ten respondents are optimistic about the financial services industry’s future growth prospects, with UK (93 percent) and US organizations (90 percent) reporting the highest confidence. While banks appear to be the most optimistic (88 percent), respondents from global fintechs feel the least positive about the future (72 percent).

So what’s making the majority of firms and financial organizations so optimistic? According to the report, advancements in technology (71 percent), the launch of new products and services to drive growth (55 percent), and changing consumer and investor behaviors (38 percent) are driving optimism about the future.

However, clarity and a proportionate approach are key, as 58 percent of respondents cite regulation complexity around technology as a key challenge globally, and nearly 73 percent go on to say that current regulations stifle innovation efforts. For firms looking at other locations for optimal conditions for growth, the US remains the most attractive market (35 percent), followed by the EU (24 percent).

AI: removing or creating challenges?
While the majority of respondents (86 percent) believe that AI will transform the sector, 53 percent see AI as one of their main challenges. Only 39 percent are committed to hiring experts in the field of AI and imposing governance and oversight structures to maximise the related opportunities. Overall, half of the companies surveyed lack in-house specialists and are opting to work with specialist subcontractors. Without this internal talent, businesses risk falling behind the curve in the future. Read more


OPINION: How Generative AI Raises Cyber Risks for SMBs – And What They Can Do About It

Fraud and other cyber attacks are becoming more sophisticated

Gia Snape, Insurance Business Magazine

The rise of generative artificial intelligence has brought new challenges, particularly in how cyberattacks are conducted and what it means for small and medium-sized businesses (SMBs) with cyber coverage. Gen AI tools popularized by ChatGPT have enhanced the effectiveness of social engineering tactics such as phishing, making them harder to detect. At the same time, AI allows threat actors to adapt quickly to cybersecurity measures by automating their strategies.

The increasing threat of AI-enhanced cyberattacks was highlighted by the Insurance Bureau of Canada’s survey, which showed that 65% of SMB owners in Canada are worried about the cyber risks posed by AI and other emerging technologies. At the same time, the IBC report revealed a troubling decline in cybersecurity investments by SMBs. In 2023, 69% of respondents indicated they were actively working to minimize cyber risks, but that figure dropped to 61% in 2024.

“When people hear about AI, they often have grandiose images of robots taking over or supercomputers causing chaos,” said Jonathan Weekes (pictured), HUB International Canada’s cyber practice leader. “But in cyber attacks, AI is primarily a tool for threat actors to research their targets more effectively.”

How gen AI is augmenting cyber attacks
Where cybercriminals previously spent months surveilling a business’s operations, AI enables them to gather information and launch attacks in significantly shorter timeframes. “It helps them quickly identify vulnerabilities within systems and encrypt data faster, so they can take steps to impact the client in the most drastic ways,” Weekes said. “The emails have fewer grammatical and spelling errors, making it more difficult for the victims to distinguish them from legitimate communications.” Weekes predicted that deep-fakes would soon be the next major phase of fraud and social engineering attacks. Read more

Sept. 27, 2024: AI & Digital Assets


OpenAI Begins Rollout of Advanced Voice to All Plus and Team Subscribers

Pymnts.com

OpenAI is rolling out its Advanced Voice to all Plus and Team users in the ChatGPT app this week.

“While you’ve been patiently waiting, we’ve added Custom Instructions, Memory, five new voices, and improved accents,” the company said in a Tuesday (Sept. 24) post on X. The feature is not yet available in the European Union, the United Kingdom, Switzerland, Iceland, Norway and Liechtenstein, OpenAI added in another post. Users can now choose from nine “lifelike output voices” for ChatGPT, with different tones and characters like “easygoing and versatile” and “animated and earnest,” according to the company’s Voice mode FAQ.

It was reported July 30 that OpenAI was rolling out the alpha version of Advanced Voice Mode to a select group of ChatGPT Plus subscribers at that time and planned to begin a broader rollout to all premium users in the fall.

To mitigate potential misuse of the feature, the company said at the time that it limited Advanced Voice Mode to preset voices created in collaboration with paid voice actors, so that it can’t be used to impersonate specific individuals or public figures; implemented guardrails to block requests for violent or copyrighted content; and included filters to block requests for generating music or copyrighted audio, a move likely influenced by music industry legal actions against artificial intelligence (AI) companies.

OpenAI had planned to roll the voice feature out in alpha in late June but said June 25 that it needed another month to do so. “For example, we’re improving the model’s ability to detect and refuse certain content,” the company said at the time. “We’re also working on improving the user experience and preparing our infrastructure to scale to millions while maintaining real-time responses.” Many U.S. consumers are willing to pay for smart, reliable voice assistants, according to the PYMNTS Intelligence report, “How Consumers Want to Live in the Voice Economy.” Read more


Coinbase Urges Court to Force SEC to Draft Digital Asset Rules

Caitlin Mullen, Banking Dive

A lawyer for the crypto exchange said the agency still hasn’t explained its reasoning for denying a request for guidelines as to how it determines what is a security.

Dive Brief:

  • Cryptocurrency exchange Coinbase has called on a federal appeals court to require the Securities and Exchange Commission to establish new rules governing digital assets.
  • Coinbase lawyer Eugene Scalia on Monday told judges with the U.S. Court of Appeals for the 3rd Circuit that the SEC hasn’t explained its reasoning for denying Coinbase’s request for rules that would provide clarity on determining when digital assets are securities, Reuters reported. The company wants the court to overturn the agency’s denial.
  • The agency, which largely views digital assets as securities, has taken legal action against a number of crypto industry companies. Scalia charged the agency with engaging in “extraordinarily oppressive governmental behavior” by issuing enforcement action against companies while not offering a way for them to register with the agency, Bloomberg reported.

Dive Insight:
Monday’s arguments are the latest development in the contentious back-and-forth between the regulatory agency and the company. The SEC has said it sees most crypto tokens as securities, therefore falling under its jurisdiction, while the industry says existing securities laws don’t apply to cryptocurrencies.

Coinbase filed a petition for rulemaking in 2022, seeking clarity around which crypto assets are securities and how they ought to be regulated. The company then sued the SEC in April 2023, to prod the agency into responding to its petition. The SEC denied the company’s petition last December, disagreeing that the application of existing securities statutes and regulations to crypto assets is “unworkable.” Read more


Firms Struggling to Find RoI for AI Projects

FinExtra

More than half of the companies investing in artificial intelligence (AI) projects have been unable to extract any tangible benefit, according to recently published research.

Despite the challenge of proving a return on investment (RoI), interest in AI appears to be rising. A report from SaaS management platform Cledara found that there has been a 245% increase in the use of AI tools over the last 12 months. Unsurprisingly, most of this work has involved ChatGPT, which has 33 times more use than its nearest competitor, according to the survey.

But while 82% of companies are experimenting with AI, only 47% are seeing tangible value. A quarter (24%) are achieving cost reductions through greater operational efficiency, 11% have experienced revenue growth, and 12% have experienced both.

“While the excitement around AI is palpable, our data reveals a nuanced reality,” said Brad van Leeuwen, co-founder at Cledara. “Businesses are rapidly adopting AI tools, but many are still navigating how to extract real value. This gap presents a significant opportunity for AI providers to demonstrate tangible ROI and for businesses to refine their AI strategies.”

However, there does seem to be more success at attaining RoI within the financial services sector. Another study published this week found that 92% of financial services firms believe that AI is having a positive effect on their innovation. Read more


Meta lets businesses create ad-embedded chatbots

Kyle Wiggers, Tech Crunch

At the Meta Connect 2024 developer conference in Menlo Park on Wednesday, Meta announced that it’s expanding its AI-powered business chatbots to brands on WhatsApp and Messenger using click-to-message ads.

Now businesses can set up ad-embedded chatbots that talk to customers, offer support, and facilitate orders, Meta says. “From answering common customer questions to discussing products and finalizing a purchase, these business AIs can help businesses engage with more customers and increase sales,” the company wrote in a blog post provided to TechCrunch.

Meta continues to inject more of its ad products and tools with AI. In May, the company began letting advertisers create full new ad images with AI and insert AI-generated alternate versions of ad headlines. And in June, Meta began testing AI-powered customer support for businesses using WhatsApp, which automatically answers customer queries related to frequently asked questions.

Meta claims that more than a million advertisers are using its AI ad tools and that 15 million ads were created with the tools last month.

AI ads boost click-through rates, Meta says. But there’s evidence to suggest customers may not like ads with chatbots. One survey commissioned earlier this year by customer experience platform Callvu found that the majority of people would rather wait at least a minute to speak with a live customer agent than chat instantly with an AI.

Sept. 20, 2024: AI & Digital Assets


Why OpenAI’s New Model Is Such a Big Deal

The bulk of LLM progress until now has been language-driven. This new model enters the realm of complex reasoning, with implications for physics, coding, and more.

James O’Donnell, MIT Technology Review

Last weekend, I got married at a summer camp, and during the day our guests competed in a series of games inspired by the show Survivor that my now-wife and I orchestrated. When we were planning the games in August, we wanted one station to be a memory challenge, where our friends and family would have to memorize part of a poem and then relay it to their teammates so they could re-create it with a set of wooden tiles.

I thought OpenAI’s GPT-4o, its leading model at the time, would be perfectly suited to help. I asked it to create a short wedding-themed poem, with the constraint that each letter could only appear a certain number of times so we could make sure teams would be able to reproduce it with the provided set of tiles. GPT-4o failed miserably. The model repeatedly insisted that its poem worked within the constraints, even though it didn’t. It would correctly count the letters only after the fact, while continuing to deliver poems that didn’t fit the prompt. Without the time to meticulously craft the verses by hand, we ditched the poem idea and instead challenged guests to memorize a series of shapes made from colored tiles. (That ended up being a total hit with our friends and family, who also competed in dodgeball, egg tosses, and capture the flag.)
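The constraint GPT-4o kept violating is mechanical to verify, which is what made its confident miscounts so stark. A minimal checker might look like this (the tile inventory below is hypothetical; the article doesn't give the actual counts):

```python
from collections import Counter

def fits_tiles(poem: str, tiles: Counter) -> bool:
    """True if the poem can be spelled with the available letter tiles."""
    needed = Counter(c for c in poem.lower() if c.isalpha())
    return all(tiles[letter] >= count for letter, count in needed.items())

# Hypothetical tile set: 4 of each vowel, 2 of each consonant.
tiles = Counter({v: 4 for v in "aeiou"} | {c: 2 for c in "bcdfghjklmnpqrstvwxyz"})

print(fits_tiles("love grows", tiles))  # fits: no letter exceeds its tile count
print(fits_tiles("jazzy buzz", tiles))  # fails: needs four z's, only two tiles
```

A loop that generates a candidate poem and rejects it until `fits_tiles` passes is exactly the kind of post-hoc verification the model could count correctly "after the fact" but couldn't incorporate while generating.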

However, last week OpenAI released a new model called o1 (previously referred to under the code name “Strawberry” and, before that, Q*) that blows GPT-4o out of the water for this type of purpose. Unlike previous models that are well suited for language tasks like writing and editing, OpenAI o1 is focused on multistep “reasoning,” the type of process required for advanced mathematics, coding, or other STEM-based questions. It uses a “chain of thought” technique, according to OpenAI. “It learns to recognize and correct its mistakes. It learns to break down tricky steps into simpler ones. It learns to try a different approach when the current one isn’t working,” the company wrote in a blog post on its website. Read more


ChatGPT Speak-First Incident Stirs Worries of Artificial General Intelligence

Lance Eliot, Forbes

Spooky times are here.

It isn’t even Halloween yet and already something has happened via generative AI that has people alarmed. The widely popular ChatGPT began starting conversations with users, including asking questions on topics that were personalized to the person being hailed.

Puzzled on why this is newsworthy?

The reason this seems hair-raising is that most generative AI is devised to wait for the human to initiate a conversation. When you log into generative AI, there is customarily a blank prompt window that allows you to get interaction underway. The screen is waiting for you. If you don’t type something, nothing happens. A conversation starter of one kind or another resides squarely on your shoulders.

Think of it this way. If you’ve ever used Alexa or Siri, you realize that it is up to you to engage those natural language processing systems. For example, you might say “Hey, Siri” to get the AI going. This puts humans in control of things. You feel empowered when you summon the AI, which then does your bidding.

Turns out that OpenAI, maker of ChatGPT, has acknowledged that the speak-first issue briefly existed. “We addressed an issue where it appeared as though ChatGPT was starting new conversations,” OpenAI said. “This issue occurred when the model was trying to respond to a message that didn’t send properly and appeared blank. As a result, it either gave a generic response or drew on ChatGPT’s memory.” Read more


Bitcoin Broke $62K After Fed Rate Cuts. Here’s What Traders Say Will Happen Next

Shaurya Malwa & Sam Reynolds, CoinDesk

The CoinDesk 20, a measure of the largest digital assets, is up 3.4%. Plus: Polymarket traders have their money on four to five more rate cuts this year.

  • The Federal Reserve implemented a 50 basis point rate cut, with expectations of further reductions to bring the median benchmark rate to 4.4% by year-end.
  • Despite the rate cut, market sentiments are mixed with some skepticism about the sustainability of the crypto market rally.
  • Major cryptocurrencies like Solana’s SOL, BNB, XRP, and Cardano’s ADA saw gains, with SOL leading at a 6% increase.
  • Additionally, there’s a notable interest in further rate cuts, with market bets on Polymarket indicating expectations of continued monetary easing by the Fed.

A 50 basis point cut by the Fed, and the first bitcoin (BTC) buy by a presidential candidate, kept digital assets in the green during the East Asia trading day, even though some market watchers are skeptical if the rally has any sort of legs.

Fed members expect median benchmark rates to come down to 4.4% by year-end, as reported, reflecting some 50 basis points (bps) more cuts in the next two Federal Open Market Committee (FOMC) meetings, according to the Fed’s quarterly economic projection. Read more


UK Banks Hail Regulated Liability Network Experiments

FinExtra

The UK’s biggest banks have completed the experimentation phase of a Regulated Liability Network, claiming a number of benefits that the financial market infrastructure for programmable money operating on a multi-bank shared ledger could bring.

The UK RLN is envisaged as a common ‘platform for innovation’ across multiple forms of money, including existing commercial bank deposits and a shared ledger for tokenised commercial bank deposits. Barclays, Citi, HSBC, Lloyds, Mastercard, NatWest, Nationwide, Santander, Standard Chartered, Virgin Money and Visa all took part in the experimentation phase over the summer.

Across the use cases explored, a number of potential benefits were discovered, including reducing fraud, improving efficiency in the process of home buying and reducing the cost of failed payments in the UK.

In addition, UK Finance says that such a platform, in collaboration with things such as Open Banking, could deliver economic value and support innovation in the market. The RLN could also provide new firms with a common point of access to enable them to interface with established institutions, and enhanced payment and settlement systems.

The participants also conclude that the legal and regulatory framework of the UK is sufficiently flexible to support the implementation of a ‘platform for innovation’. Read more 


Sept. 13, 2024: AI & Digital Assets


FBI Says People Lost $5.6 Billion in Crypto Scams Last Year

The agency singled out digital currency swindles to highlight the rising frequency and increasing dollar amounts of crypto crimes.

Bruce Crumley, Inc. Magazine

Enthusiasts of cryptocurrencies believe the digital money represents the biggest investment opportunity and wealth generator in financial history, transforming the way the world does business. While only the future will reveal whether those predictions come to pass, the Federal Bureau of Investigation (FBI) says crypto has already done a bang-up job facilitating the work of cybercriminals–whose schemes involving bitcoin, ether, tether, and other virtual currencies defrauded victims out of $5.6 billion last year.

The Bureau on Monday released its first report breaking out crimes incorporating crypto from other forms of fraud reported each year. That analysis revealed that swindles involving or based entirely on digital currencies increased a whopping 45 percent last year compared to 2022. Those cons also wound up being the most lucrative of the many varieties of grifts the FBI battles.

While crypto factored in just 10 percent of all complaints the agency received in 2023, those scams accounted for 50 percent of fraud victims’ total financial losses.

All in all, people who fell for crypto rip-offs last year lost $5.6 billion, the report said. Victims of fraudulent investments in cybercurrency schemes represented nearly $4 billion, or 71 percent of that total. Other crimes employing crypto included call center, tech or customer support, and government impersonation scams that generated about 10 percent of the total value reported lost last year.

“The decentralized nature of cryptocurrency, the speed of irreversible transactions, and the ability to transfer value around the world make cryptocurrency an attractive vehicle for criminals, while creating challenges to recover stolen funds,” FBI assistant director Michael D. Nordwal wrote in the report’s preface. “Once an individual sends a payment, the recipient owns the cryptocurrency and often quickly transfers it into an account overseas for cash out purposes.” Read more


How CEOs Are Using Gen AI for Strategic Planning

Graham Kenny, Marek Kowalkiewicz, and Kim Oosthuizen, Harvard Business Review

Summary: For business leaders, especially at relatively small companies, the idea of applying gen AI to strategic planning is mouthwatering. This article explores the potential and limits of AI in helping such companies chart their strategies. Through the lens of two disguised case studies, the authors show how gen AI can help companies identify challenges and opportunities that managers missed, overcoming human biases, while by the same token missing some possibilities rooted in each company’s specific capabilities.

And although gen AI was less able to imagine possible future scenarios because its forecasts were entirely rooted in historical data, clever prompting enabled it to surface issues and questions that human managers ignored. The authors conclude that knowing gen AI’s weaknesses allows managers to take advantage of its strengths. The key is to view gen AI as a tool that augments, rather than replaces, your strategic thinking and decision-making.

The business community is all atwitter at the prospect that gen AI — through the likes of ChatGPT, you.com, and Claude.ai — will revolutionize business decision-making. Sam Altman, CEO of OpenAI, even declared “you are about to enter the greatest golden age of human possibility.”

For business leaders, the idea of applying gen AI to strategic planning is mouthwatering. One manager recently exclaimed that he couldn’t wait for the time when “AI can help identify opportunities that don’t exist yet!” Read more


UK Bill Recognizes Digital Assets as Personal Property Under New Law

The UK Ministry of Justice has introduced the Property (Digital Assets etc) Bill to recognize bitcoin and other digital assets as personal property under English and Welsh law. Led by Justice Minister Heidi Alexander, this bill addresses legal uncertainties around digital assets, ensuring better protection for owners in fraud cases and disputes. It also positions the UK as a leader in global digital asset regulation, boosting its economy and legal services.

UK Introduces Bill to Legally Recognize Digital Assets
The UK government announced on Wednesday that the Ministry of Justice has introduced the Property (Digital Assets etc) Bill to clarify the legal status of bitcoin and other digital assets. The bill, led by Justice Minister Heidi Alexander, seeks to formally recognize digital assets, including cryptocurrencies and non-fungible tokens (NFTs), as personal property under English and Welsh law.

The bill addresses the legal uncertainty surrounding digital assets, which were previously not definitively classified as property, leaving their owners vulnerable in disputes or cases of fraud. The UK government explained:

Bitcoin and other digital assets can be considered personal property under new draft law introduced in Parliament today (11 September 2024).

The new law will help judges navigate complicated cases where digital assets are involved, such as disputes over ownership or their inclusion in divorce settlements. The government added: “The new law will therefore also give legal protection to owners and companies against fraud and scams, while helping judges deal with complex cases where digital holdings are disputed or form part of settlements, for example in divorce cases.” Read more 


US Lawmakers Divided in First Congressional Hearing on DeFi

Martin Young, CoinTelegraph

Pro-crypto Representatives noted the need for a freer financial system, while more skeptical lawmakers blamed DeFi for crime, scams, and tax evasion.

United States lawmakers were divided down party lines at the first-ever Congressional hearing on decentralized finance (DeFi).

The House Financial Services Committee’s Sept. 10 hearing — “Decoding DeFi: Breaking Down the Future of Decentralized Finance” — aimed to explore emerging topics like tokenization and how blockchains can be used in finance. The nearly two-and-a-half-hour-long hearing highlighted the disunity between Republican and Democratic lawmakers over the technology.

Republican subcommittee chair French Hill opened the hearing by stating, “Substituting intermediaries for autonomous, self-executing code, decentralized finance can shift the way the financial markets and transactions are currently structured and governed.”

He advocated for “a peer-to-peer future where the Canadian prime minister of the future can’t freeze off your bank account just for going to a protest,” a reference to Justin Trudeau’s 2022 freeze of crypto headed to protesters, which a court ruled was unconstitutional.

Crypto critics such as Democratic Representative Brad Sherman were not convinced, claiming that DeFi was only used for crime, sanctions evasion, and primarily tax evasion. “What we have here is an effort to liberate billionaires from income taxation,” he said. Read more 

Sept. 6, 2024: AI & Digital Assets


Clearview AI—Controversial Facial Recognition Firm—Fined $33 Million For ‘Illegal Database’

Robert Hart, Forbes

Topline
Controversial U.S. facial recognition company Clearview AI, reportedly embraced by U.S. government and law enforcement agencies, was fined more than $30 million by the Netherlands’ data protection watchdog on Tuesday for building “an illegal database” containing billions of faces taken from social media and the internet.

Key Facts

  • The Dutch watchdog said it had fined Clearview €30.5 million ($33.7 million) for “automatically” harvesting billions of photos of people from the internet, which it “then converts…into a unique biometric code per face.”
  • Clearview uses this “illegal” database to sell facial recognition services to intelligence and investigative services such as law enforcement, who can then use Clearview to identify people in images, the watchdog said.
  • Clearview scrapes photos from the internet “without these people knowing…and without them having given consent” for their photo or biometric data to be used, the watchdog said.
  • The watchdog said the U.S. company is “insufficiently transparent” and “should never have built the database” to begin with and imposed an additional “non-compliance” order of up to €5 million ($5.5 million).
  • Clearview cannot appeal the fine as it had “not objected to this decision,” the watchdog said.
  • “This decision is unlawful, devoid of due process and is unenforceable,” Clearview’s chief legal officer Jack Mulcaire told Forbes in a statement, adding the company “does not have a place of business in the Netherlands or the EU… does not have any customers in the Netherlands or the EU, and does not undertake any activities that would otherwise mean it is subject to the GDPR.”

Chief Critic
“Facial recognition is a highly intrusive technology that you cannot simply unleash on anyone in the world,” chair of the Dutch data protection watchdog Aleid Wolfsen said in a statement. Wolfsen said databases like Clearview’s threaten everyone and are not limited to dystopian films or authoritarian countries like China. “If there is a photo of you on the Internet – and doesn’t that apply to all of us? – then you can end up in the database of Clearview and be tracked,” he said. Read more


California’s Divisive AI Safety Bill Sets Up Tough Decision for Governor Gavin Newsom

Tabby Kinder, Financial Times

California governor Gavin Newsom will consider whether to sign into law or veto a controversial artificial intelligence bill proposing to enforce strict regulations on technology companies after it cleared its final hurdle in the state legislature on Thursday.

Newsom, a Democrat, has until September 30 to issue his decision on the bill, which has divided Silicon Valley. It would force tech groups and start-ups developing AI models in the state to adhere to a strict safety framework. All of the largest AI start-ups, including OpenAI, Anthropic and Cohere, as well as Big Tech companies with AI models, would fall under its remit.

Newsom is likely to face intense lobbying from both sides. Some of the largest technology and AI companies in the state, including Google, Meta, and OpenAI, have expressed concerns about the bill in recent weeks, while others, such as Amazon-backed Anthropic and Elon Musk, who owns AI start-up xAI, have voiced their support.

The Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, known as SB 1047, mandates safety testing for advanced AI models operating in the state that cost more than $100mn to develop or that require a high level of computing power. The US Congress has not yet established a federal framework for regulation, which has left an opening for California, a hub for tech innovation, to come up with its own plans. Read more


U.S., UK, and EU Sign on To the Council of Europe’s High-Level AI Safety Treaty

Ingrid Lunden, TechCrunch

We’re not very close to any specifics on how, exactly, AI regulations will be implemented and ensured, but today a swathe of countries including the U.S., the U.K., and the European Union signed up to a treaty on AI safety laid out by the Council of Europe (COE), an international standards and human rights organization.

The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law — as the treaty is formally called — is described by the COE as “the first-ever international legally binding treaty aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy and the rule of law.”

At a meeting today in Vilnius, Lithuania, the treaty was formally opened for signature. Alongside the aforementioned trio of major markets, other signatories include Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino and Israel.

The list means the COE’s framework has netted a number of countries where some of the world’s biggest AI companies are either headquartered or are building substantial operations. But perhaps as important are the countries not included so far: none in Asia, the Middle East, nor Russia, for example.

The high-level treaty sets out to focus on how AI intersects with three main areas: human rights, which includes protecting against data misuse and discrimination, and ensuring privacy; protecting democracy; and protecting the “rule of law.” Essentially the third of these commits signing countries to setting up regulators to protect against “AI risks.” (It doesn’t specify what those risks might be, but it’s also a circular requirement referring to the other two main areas it’s addressing.) Read more


Why We Granted Regulatory Approval for Crypto Exchanges — SEC

Obas Esiedesa, Vanguard

The Securities and Exchange Commission, SEC, has emphasized that the recent approval-in-principle granted to two crypto exchanges aligns with the Commission’s objective of increasing youth participation in Nigeria’s capital market.

A statement by the Commission on Wednesday noted that SEC Director General, Dr. Emomotimi Agama, stated this during a meeting in Abuja. He highlighted the importance of engaging Nigeria’s youthful population, a key objective of President Bola Ahmed Tinubu’s administration. He noted that creating a structure to enhance youth and broader public participation in the market is essential.

The crypto exchanges
Last week, the Commission granted approvals to Busha Digital Limited and Quidax Technologies Limited. According to Dr. Agama, “It is crucial that we act appropriately. As a nation, we must not be left out of the global phenomenon that is rapidly evolving.

“The SEC, as a forward-looking institution, is committed to ensuring that we are among the countries that do what is necessary. “We are building talents to address the challenges that these asset classes might bring. Many young Nigerians are deeply involved in this sector, and we cannot shut the door on them.

“Instead, the President’s intention is to include them in the capital market, which is why we are implementing regulations to protect investors and ensure market development. This is the SEC’s responsibility.” Read more


Aug. 29, 2024: AI & Digital Assets


In AI-Based Lending, Is There an Accuracy Vs. Fairness Tradeoff?

Penny Crosman, American Banker

As banks, fintechs, regulators and consumer advocates debate the benefits and risks of using artificial intelligence in lending decisions, one point of contention has emerged: Does there have to be a tradeoff between accuracy and fairness?

That point came up in the course of an independent analysis of AI-based loan software provider Upstart Network, but it nonetheless applies to all banks, credit unions and fintechs that use AI models in their lending decisions.

From 2020 through 2024, law firm Relman Colfax monitored Upstart’s fair lending efforts at the behest of the NAACP Legal Defense Fund and the Student Borrower Protection Center. In a final report published earlier this year, Relman Colfax said Upstart made a lot of effort to ensure its lending models are fair.

However, the report found that the parties came to an impasse at one juncture, when Relman Colfax thought Upstart could tweak its model to approve more loans to disadvantaged groups, but Upstart said making that change would diminish the model’s accuracy.

“This issue is critical,” the report said. “If a fair lending testing regime is designed around the assumption that a less discriminatory alternative model cannot be viable unless its performance is exactly equal to a baseline model on a chosen performance metric (regardless of uncertainty associated with that metric), less discriminatory models may rarely, if ever, be adopted.” Read more
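The report’s statistical point, that a small performance gap between a baseline model and a less discriminatory alternative may fall within the uncertainty of the chosen metric, can be illustrated with a paired bootstrap. The data below is invented for illustration; it is not from the Relman Colfax report or Upstart’s models.

```python
import random

def paired_bootstrap_ci(correct_a, correct_b, n_boot=2000, seed=0):
    """95% paired bootstrap CI for accuracy(A) - accuracy(B).
    Resampling the same rows for both models preserves their correlation,
    so the interval reflects only the rows where they disagree."""
    rng = random.Random(seed)
    n = len(correct_a)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        diffs.append(sum(correct_a[i] - correct_b[i] for i in idx) / n)
    diffs.sort()
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot) - 1]

# Hypothetical holdout results: the two models agree on 984 of 1,000 rows;
# the baseline is right on 9 rows the alternative misses, and vice versa on 7.
baseline = [1] * 750 + [0] * 250          # baseline model: 75.0% accurate
alternative = baseline.copy()
for i in range(9):                        # rows only the baseline gets right
    alternative[i] = 0
for i in range(750, 757):                 # rows only the alternative gets right
    alternative[i] = 1

lo, hi = paired_bootstrap_ci(baseline, alternative)
# When the interval straddles zero, the accuracy gap is within sampling
# noise: the "less accurate" alternative may not be less accurate at all.
```

Under a testing regime that demands the alternative exactly match the baseline’s point estimate, a model whose gap is statistically indistinguishable from zero would still be rejected, which is precisely the concern the report raises.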


Can Your AI Model Collapse?

Joseph J. Lazzarotti of Jackson Lewis P.C., National Law Review

A recent Forbes article summarizes a potentially problematic aspect of AI, one that highlights the importance of governance and of data quality when training AI models: “model collapse.” It turns out that when AI models are trained on data that earlier AI models created (rather than data created by humans), something is lost at each iteration, and over time the model can fail.

According to the Forbes article:

Model collapse, recently detailed in a Nature article by a team of researchers, is what happens when AI models are trained on data that includes content generated by earlier versions of themselves. Over time, this recursive process causes the models to drift further away from the original data distribution, losing the ability to accurately represent the world as it really is. Instead of improving, the AI starts to make mistakes that compound over generations, leading to outputs that are increasingly distorted and unreliable.

As the researchers published in Nature who observed this effect noted:

In our work, we demonstrate that training on samples from another generative model can induce a distribution shift, which—over time—causes model collapse. This in turn causes the model to mis-perceive the underlying learning task. To sustain learning over a long period of time, we need to make sure that access to the original data source is preserved and that further data not generated by LLMs remain available over time. The need to distinguish data generated by LLMs from other data raises questions about the provenance of content that is crawled from the Internet: it is unclear how content generated by LLMs can be tracked at scale. One option is community-wide coordination to ensure that different parties involved in LLM creation and deployment share the information needed to resolve questions of provenance. Otherwise, it may become increasingly difficult to train newer versions of LLMs without access to data that were crawled from the Internet before the mass adoption of the technology or direct access to data generated by humans at scale.
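The recursive drift the researchers describe can be seen in a toy simulation (purely illustrative, not the Nature experiment): fit a simple Gaussian “model” to data, sample fresh training data from the fit, refit, and repeat. Because each generation sees only the previous generation’s synthetic samples, estimation error compounds and the fitted distribution drifts away from the original one.

```python
import random
import statistics

def simulate_collapse(generations=500, n=100, seed=42):
    """Toy model-collapse loop: repeatedly fit a Gaussian to samples
    drawn from the previous generation's fitted Gaussian. The MLE
    variance estimate is biased low, so the fitted spread tends to
    shrink generation over generation -- the tails vanish first."""
    rng = random.Random(seed)
    # Generation 0: "human" data drawn from the true distribution N(0, 1)
    data = [rng.gauss(0, 1) for _ in range(n)]
    stds = [statistics.pstdev(data)]  # MLE (population) std at each step
    for _ in range(generations):
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)
        # The next generation trains only on synthetic samples from the fit
        data = [rng.gauss(mu, sigma) for _ in range(n)]
        stds.append(statistics.pstdev(data))
    return stds

stds = simulate_collapse()
# stds decays over generations: the model's estimate of the world's
# variability collapses once human-generated data leaves the loop.
```

The toy model shrinks for the same structural reason the researchers identify: each generation inherits the previous generation’s estimation error instead of resampling the true distribution, which is why preserving access to original, human-generated data matters.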

These findings highlight several important considerations when using AI tools. One is maintaining a robust governance program that includes, among other things, measures to stay abreast of developing risks. We’ve heard a lot about hallucinations. Model collapse is a relatively new and potentially devastating challenge to the promise of AI. It raises an issue similar to the concerns with hallucinations, namely, that the value of the results received from a generative AI tool, one that an organization comes to rely on, can significantly diminish over time. Read more


CSBS Establishes New AI Advisory Group

Dave Kovaleski, Financial Regulation News

The Conference of State Bank Supervisors (CSBS) has established a new advisory group on the use of artificial intelligence (AI) in the financial services sector.

The CSBS Artificial Intelligence Advisory Group includes experts from academic institutions, the financial industry, and nonprofit organizations.

“Artificial intelligence presents significant opportunities for consumers, financial institutions, and regulators, if responsibly developed and deployed,” CSBS President and CEO Brandon Milhorn said. “The CSBS Artificial Intelligence Advisory Group brings a diverse range of experiences and insights that will help support our members as they consider legal, policy, and supervisory activities related to artificial intelligence.”

The members of the CSBS Artificial Intelligence Advisory Group include:

  • Kelly Cochran, deputy director and chief program officer at FinRegLab;
  • John Dickerson, associate professor at the University of Maryland;
  • Jeffrey Feinstein, global head of data science, LexisNexis Risk Solutions;
  • Talia Gillis, associate professor of law at Columbia University;
  • Daniel Gorfine, chief executive officer of Gattaca Horizons; adjunct professor, Georgetown University Law Center;
  • Delicia Hand, senior director, digital marketplace, at Consumer Reports;
  • Laura Kornhauser, chief executive officer of Stratyfy; and
  • Nick Schmidt, founder and chief technology and innovation officer at SolasAI and AI Practice Leader at BLDS.

The CSBS is the national organization of financial regulators from all 50 states, American Samoa, the District of Columbia, Guam, Puerto Rico, and the U.S. Virgin Islands. State regulators supervise 79 percent of all U.S. banks and are the licensing authority and primary regulator for nonbank financial services companies.


Wyoming Is Pushing Crypto Payments and Trying to Beat the Fed to a Digital Dollar

Tanaya Macheel, CNBC

As crypto investing becomes more mainstream and institutionalized with bitcoin ETFs, Wyoming is already pushing into the next phase of growth for crypto: consumer payments.

The state is creating its own U.S. dollar-backed stablecoin, called the Wyoming stable token, which it plans to launch in the first quarter of 2025 to give individuals and businesses a faster and cheaper way to transact while creating a new revenue stream for the state. The group behind it is hoping it can serve as the model for a digitized dollar at the federal level.

Success would be “adoption of a stablecoin … that’s transparent, that is fully backed by our short-term Treasurys [and] that’s dollar dependent,” Wyoming Governor Mark Gordon told CNBC at the Wyoming Blockchain Symposium in Jackson Hole. “One of the big things for me is to be able to bring back onshore a lot of our debt, because if it’s bought by treasuries and supported by Treasurys, it will help to stabilize that market to a degree.”

“It is clear to me that digital assets are going to have a future,” Gordon said. “The United States has to address this issue. Washington’s being a little bit stodgy, which is why Wyoming, being a nimble and entrepreneurial state, can make a difference.” Read more