Anthropic’s “The Briefing: Financial Services” Event Was Different in the Best Way
On May 5, Dario Amodei and Jamie Dimon shared a stage in lower Manhattan to talk about AI in financial services. At first glance it looked like a standard vendor-and-customer keynote panel. Except the vendor in question had spent the prior two weeks as the subject of banking regulator briefings in three countries over a model it concluded was too dangerous to release broadly. And the customer in question is the CEO of the world’s largest bank, a man who has spent the last 18 months publicly warning that AI may move faster than society can absorb.
Analysts are trained to treat events like this as product news. In this case, we shouldn’t. What happened Tuesday isn’t a typical product release. It became clear that Anthropic had stopped looking like an enterprise software vendor and started looking like a piece of public utility infrastructure that comes with a warning label. The relevant question for CIOs and CISOs isn’t which features shipped. It’s how to evaluate a vendor whose CEO writes essays about AI ending the labor market while selling that same technology into know your customer (KYC), credit memos, and anti-money laundering (AML) investigations.
There are three key messages here, and each one is something the technology industry hasn’t really heard before.
1. The vendor publishes its own risk research but adjusts the message depending on the audience.
Two weeks before the briefing, Anthropic disclosed that its Mythos model could autonomously find zero-day vulnerabilities across every major OS and web browser. It held the model back from broad release and committed $100 million in usage credits to defenders through Project Glasswing. The Bank of England, the UK Financial Conduct Authority (FCA) and National Cyber Security Centre (NCSC), the US Treasury, the Federal Reserve, and the Bank of Canada all convened emergency briefings on what to do about it.
On stage Tuesday, though, the message was more tempered. Andrew Ross Sorkin opened with “Is the freakout over AI-enabled cyberattacks warranted?” and got a long pause and a laugh. Amodei explained that other US labs are one to three months behind Mythos, Chinese models six to 12 months behind, and tens of thousands of identified vulnerabilities sit unpatched. He positioned it as a “moment of danger” that, handled correctly, will lead to “a better world on the other side.” Dimon called it a “transitory period.”
Read those two messages together. The company can publish work that triggers regulatory emergency sessions in three countries and then, in front of paying customers, pitch the same situation as a manageable patching exercise. Both messages are technically defensible. They are also calibrated to the audience in the room. That’s not a knock on Anthropic; it’s a signal that risk disclosure from this vendor will arrive in more than one register, and customers should listen to both.
2. The customer is publicly worried, but not at the vendor’s keynote.
Dimon’s broader view is on the record. At the 2025 World Economic Forum (WEF) Annual Meeting in Davos, Dimon said that AI’s effect on the labor market “may go too fast for society,” potentially producing civil unrest. At the JPMorgan Chase Investor Day in February, he talked about a hypothetical 2 million displaced truck drivers stocking shelves for $25,000. In his testimony in front of Congress in March, he suggested that the AI transition “may be quicker” than the internet was and said “I don’t know” if society can absorb it.
On Anthropic’s stage Tuesday, the tone was much more tempered. Technology has made our lives better. JPMorgan is doubling down on redeployment. The one concession to displacement risk came when Sorkin asked what should be done for workers AI displaces. Dimon pointed to trade adjustment assistance, the federal retraining program created with the North American Free Trade Agreement (NAFTA) in the early 1990s, as a template. Then in the same breath he conceded it didn’t actually work because the benefits were too hard to access. The historical analogy he reached for to defend the redeployment story is the one that proves redeployment programs are hard to design well. The civil unrest concerns he’d raised before were not mentioned.
Again, both messages are plausible. But the customer-level takeaway is that the largest bank in the world is publicly preparing for labor market disruption while paying its AI vendor to accelerate it. CIOs reporting up to the boardroom should expect both registers to show up in their own conversations and should not let the keynote messaging set the strategy.
3. Regulators are watching the vendor, not the deployments.
Banking supervision is built around watching banks. The novel shift in 2026 is supervisors watching the AI model vendor directly. The Treasury’s late April convening of systemically important US bank CEOs was about Mythos, not about any individual bank’s deployment. The Treasury Secretary publicly urged bank executives to approach Anthropic’s recent releases with caution. The Fed, Bank of Canada, Bank of England, FCA, and NCSC followed with their own guidance.
Dimon backed Amodei on stage in pushing back against a rumored Food and Drug Administration (FDA) style executive order for AI model approval. Both want predictability and an automotive-industry approach where innovation is permitted but safety standards are mandatory. That’s a reasonable position. It doesn’t change the role of the supervisory regulators. CIOs evaluating Claude for regulated workflows now have to weigh regulatory risk that includes the vendor’s own capability release decisions, not just the bank’s deployment governance, and that’s new.
The numbers underneath
A few things came out of the briefing that should be considered separately from the policy framing.
Amodei disclosed that Anthropic’s revenue growth has run at 80x against an internal projection of 10x. “The cone is even wider than I thought.” That’s the answer to the question we’ve been asking about who pays for the massive AI capital expenditure. Dimon answered the other half when he publicly justified the trillion-dollar buildout with “the technology is so powerful, it’s worth the trillion-dollar investment.” When the CEO of the world’s largest bank endorses the capex from your vendor’s stage, you’re going to see that line in earnings call decks for the next year.
Anthropic’s chief economist Peter McCrory presented numbers from the Anthropic Economic Index. AI is now in use for at least a quarter of tasks across roughly half of all US jobs, up from a third of jobs a year ago. He projected AI could add 1.8 percentage points per year to US labor productivity over the next decade, doubling recent run rates and resembling the late 1990s IT boom. He described AI as “an innovation in the method of innovation,” producing stronger compounding than prior general-purpose technologies.
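McCrory’s 1.8-percentage-point projection compounds. As a quick back-of-the-envelope check, here is what that annual increment implies for the cumulative productivity level over the decade (the growth rate is from the presentation; the arithmetic is ours):

```python
# Cumulative effect of an extra 1.8 percentage points of annual labor
# productivity growth, compounded over ten years (McCrory's projection).
extra_growth = 0.018
years = 10
cumulative = (1 + extra_growth) ** years - 1
print(f"Cumulative productivity lift after {years} years: {cumulative:.1%}")
# Roughly a 19.5% higher productivity level than the no-AI baseline.
```

That compounding, not the annual number, is what makes the projection comparable to the late-1990s IT boom.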

McCrory noted another implication of this acceleration. Workers report scope expansion, with the largest gains at the lower end of the income distribution. Concern about displacement is highest among the workers with the largest productivity gains, and managers are less worried than the people using the tools. That has change management implications: organizations still need a path for equipping young workers with the tacit expertise and skills to grow into leadership positions.
The Amodei pivot worth mentioning
For most of the last year, Amodei was Silicon Valley’s most prominent voice on AI-driven white collar job loss. On Tuesday he referenced the Jevons paradox, citing University of Chicago’s Alex Imas and Apollo’s Torsten Slok. “If you automate 90% of the job, then everyone does the 10% of the job. And the 10% kind of expands to be 100% of what people do and kind of increases their productivity tenfold.”
Either he has genuinely updated his stance based on new data, including the McCrory results his own team just published, or the political environment, including the Pentagon “supply chain risk” lawsuit Anthropic is currently fighting over its refusal to drop guardrails for military use, has made the employment bloodbath narrative inconvenient. Probably some of both.
But Jevons operates at the aggregate level, not the individual one. The pie getting bigger doesn’t redistribute the slices automatically. A first-year associate whose document review work is gone is not consoled by aggregate growth in legal services. That gap is where redeployment and retraining initiatives must exist, and neither principal on stage offered a credible answer for how to close it. Dimon pointing to NAFTA trade adjustment assistance as a model and then immediately conceding it failed is the entire problem in one statement.
Oh, and there were product announcements
Right, the actual product news. Anthropic shipped ten prebuilt agents for the highest-volume finance workflows: pitchbook builder, model builder, market researcher, valuation reviewer, GL reconciler, month-end closer, statement auditor, KYC screener, credit memos, and insurance claims. Each runs on Claude Opus 4.7 and ships as a plugin in Claude Cowork or Claude Code, or as a cookbook for Claude Managed Agents.
The Microsoft 365 integration became generally available for Excel, PowerPoint, and Word, with Outlook in beta. Claude now carries context across all four. Moody’s came in as a native Model Context Protocol (MCP) app, joining a new connector roster that includes Verisk, Third Bridge, Fiscal AI, Dun & Bradstreet, Experian, GLG, Guidepoint, and IBISWorld. Fidelity Information Services (FIS) announced its Financial Crimes AI Agent built on Claude is live at BMO and Amalgamated Bank, with broader rollout coming in the second half of 2026.
A CIO panel followed the Dimon-Amodei session, with Peter Zaffino (AIG CEO), Marco Argenti (Goldman CIO), and Lori Beer (JPMorgan CIO). Most of that discussion was about deployment scale in huge financial services organizations.
For non-financial services CIOs, the takeaway from the product side isn’t the agent list; it’s the broader agentic AI platform evolution. Claude Managed Agents plus prebuilt vertical templates plus a curated connector ecosystem is the deployment model Anthropic will repeat for whichever vertical comes next. The actual finance agents only matter if you’re in finance. The agentic platform matters to everyone.
Our Take
The theme of the event was less the product announcements than the implications of exponentially advancing AI models and agentic capabilities. You don’t typically see vendor events that discuss the disruptive nature of their own technology across organizations, the public sector, financial services, employment, and society as a whole. That’s what made it worth paying attention to. Anthropic breaks the mold on what a technology vendor looks like, and it’s changing the market. Executives and leaders across every industry, public and private, will need to adapt.
• AI vendor evaluation changes. Standard scorecards (capability, roadmap, partner ecosystem, pricing) don’t capture systemic dependency on a frontier AI lab whose own policies can pause a release it deems a potential societal threat. Add concentration risk, capability release governance, and the vendor’s public policy positions as evaluation dimensions.
• Build the consumption pricing budget model now. Per-seat budgeting doesn’t capture agentic usage costs. Build a TCO model that prices variable seat-plus-usage spend against displaced FTE cost and quantifiable productivity improvements over a two-year horizon, and decide which workflows justify outcome-based contracts and investment.
• Establish your adaptive AI governance program. Generative AI guardrails were written for content; agentic governance has to cover regulated decisions executed end-to-end autonomously, and it has to anticipate the agentic capabilities coming next. Define accountability, human-review thresholds, and audit-trail requirements before scaling, not after.
• Threat-model with Mythos in mind. The same vendor’s offensive capability research is running on a parallel timeline to its enterprise rollout. Threat-model the next year on the assumption that vulnerabilities surfaced via frontier models will reach adversaries before they reach defenders, whether or not you deploy Mythos yourself.
• Include workforce retraining and redeployment in the AI strategy. Retraining and redeployment should be on the AI deployment critical path, not just the HR roadmap. Decide explicitly whether your institution is using agentic AI to do the same work with fewer people or more work with the same people, and budget accordingly.
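The consumption-pricing recommendation above can be made concrete with a toy model. This sketch compares two-year seat-plus-usage spend against avoided hiring cost and measurable productivity value; every figure is a hypothetical placeholder, not vendor pricing:

```python
# Toy two-year TCO comparison for agentic AI: per-seat plus usage spend
# versus displaced FTE cost and productivity gains. All figures below
# are hypothetical placeholders, not actual vendor pricing.

def two_year_agentic_tco(seats, seat_price_month, usage_spend_month,
                         displaced_fte, fte_fully_loaded_cost,
                         productivity_value_year):
    """Return (total_cost, total_benefit, net) over a 24-month horizon."""
    months = 24
    cost = seats * seat_price_month * months + usage_spend_month * months
    benefit = (displaced_fte * fte_fully_loaded_cost * 2
               + productivity_value_year * 2)
    return cost, benefit, benefit - cost

# Example: 500 seats at $60/month, $40k/month variable usage, 3 avoided
# hires at $150k fully loaded, $500k/year in measurable productivity value.
cost, benefit, net = two_year_agentic_tco(500, 60, 40_000, 3, 150_000, 500_000)
print(f"cost=${cost:,}  benefit=${benefit:,}  net=${net:,}")
# cost=$1,680,000  benefit=$1,900,000  net=$220,000
```

The point of building even a crude model like this is that usage spend (here more than half the cost) is invisible to per-seat budgeting, and the workflows where `net` goes negative are the candidates for outcome-based contracts.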
Bottom line
Anthropic has stopped acting like a vendor and started acting like a piece of public utility infrastructure that comes with its own warning label, from a vendor that knows when not to read the label out loud. CIOs and CISOs need to evaluate it that way.
Want to Know More?
Establish Your Adaptive AI Governance Program: From Principles to Practice
Claude Mythos Preview and Project Glasswing: What IT and Security Leaders Need to Know Now