Trista Kelley is a freelance journalist. She was formerly editor-in-chief of DL News, deputy editor of Financial News and a Bloomberg reporter.
Omar Sayed is a former event-driven portfolio manager at Millennium, where an army of analysts help generate ideas, vet trades and do industry research. At his new fund, he’s using AI to do most of this work.
Sayed estimates that by using large language models — including Claude from Anthropic and Google’s Gemini, combined with a retrieval-augmented generation (RAG) system — he can replicate roughly 75 per cent of a traditional analyst’s job at his new start-up, Porchester Capital.
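For readers unfamiliar with the jargon, a RAG system retrieves relevant documents (filings, transcripts, notes) and stuffs them into the model's prompt as context, so the LLM answers from source material rather than from memory. A toy sketch of the idea, assuming the simplest possible keyword-overlap retriever (the names and data below are illustrative, not Porchester's actual stack):

```python
# Minimal sketch of the retrieval step in a RAG pipeline (illustrative only;
# the article does not describe Porchester Capital's actual implementation).
# Documents are ranked by keyword overlap with the query, and the top hits
# are stitched into a prompt for an LLM such as Claude or Gemini.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents sharing the most words with the query."""
    query_words = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )[:top_k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved context so the model answers from the documents."""
    context = "\n---\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical snippets standing in for a document store of filings.
filings = [
    "Q2 revenue rose 12 per cent on strong subscription growth",
    "The board approved a leveraged buyout at a 30 per cent premium",
    "Management guided full-year margins lower on input costs",
]
print(build_prompt("what drove revenue growth", filings))
```

Production systems swap the keyword overlap for vector embeddings and a proper database, but the shape — retrieve, then generate — is the same.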
In fact, with discounted cash flow (DCF) models, leveraged buyout (LBO) scenarios, CRM integration, document intake and idea vetting all handled by bots, the hedge fund manager estimates he can be four times as productive as he was in his pre-AI life.
This is possibly a sign of things to come. Last week, Microsoft published a report that tried to estimate the occupational impact of AI by looking at individual activities and how easy it is for bots to do them. It’s unnerving reading for a lot of people in finance and tech (and, yes, journalists).
From the 40-plus page report, with Alphaville’s emphasis below:
. . . We find the major occupation categories with the highest AI applicability scores are Sales; Computer and Mathematical; Office and Administrative Support; Community and Social Service; Arts, Design, Entertainment, Sports, and Media; Business and Financial Operations; and Educational Instruction and Library.
The Microsoft methodology, based on live user data from over 200,000 Bing Copilot interactions, is a bit complicated, so you can read it for yourself here. But put simply, the research suggests that the tasks linked to the job categories in the top half of the image below are easiest to outsource to AI. Those in the bottom half are trickiest to delegate to bots.
Microsoft is essentially saying that jobs that require a lot of human interaction, emotional intelligence, and judgment are among the least automatable. In contrast, quite a lot of high-skill, high-prestige finance jobs now suddenly look vulnerable.
The corporate rush to cash in on financial AI tools is another worrying sign for staff with left-brained, analytical skills.
S&P Global bought research platform Visible Alpha last year for $500mn. Bloomberg’s MODL function now includes more bells and whistles to stave off rivals and justify its $26k annual licence. Last month Anthropic announced a deal to embed historical S&P data directly into Claude. Goldman Sachs even says AI agents may serve on investment committees, scrutinising decisions.

As a result, investment firms may not need a full analyst team (or as many Bloomberg terminals) if their LLMs can pull and process the same data more cheaply.
There are some areas with deep moats. Shareholder activism, for example, still needs a human touch. Claude probably can’t yet handle the emotional turbulence of a proxy fight. Client-facing bankers and senior strategy people are probably also safer.
But even quants — who have dominated growing swaths of finance over the past two decades — are probably not as immune as once thought. As Microsoft’s research indicates, there are lots of quite mathy and code-driven jobs that can now probably be done by bots.
Indeed, this is what Man Group is now trying to do with an internal AI system the quant hedge fund giant has unimaginatively dubbed “AlphaGPT”, according to Bloomberg. Alphaville’s emphasis below:
While humans still vet the outputs — and errors like hallucination remain a big issue — the firm says the goal is to automate more of the research pipeline and accelerate the discovery of smart, rules-based trades. Several dozen signals generated by the AI system have passed Man Group’s investment committee and are slated to be deployed in live trading, according to Ziang Fang, a senior portfolio manager.
“One of the challenges we’re facing as quant researchers these days is an information avalanche,” said Fang, referring to an abundance of data sets and academic papers. “The idea is can we build some kind of agentic workflow to really leverage these things and automate more complex tasks in quant research that were previously impossible to do.”
This also has implications for who gets hired. The interview process used to involve asking candidates to kick the tyres of a trade idea using financial analysis. Now, a hungry trader with a strong network but little modelling experience may suddenly be valuable, according to Sayed.
Rather than hiring spreadsheet gurus for his new fund, he is therefore testing for people who are good at sussing out other humans, like those in investor relations or management teams.

Allocators, constantly spooked about team risk, might like this. After all, an AI model won’t leave in a huff if it doesn’t get a bonus. And the analyst only sees the outputs, so if one does leave, the models don’t. As Sayed puts it: “Millennium can’t poach the AI.”