
The next wave of financial intelligence, led by FactSet, aims to meet professionals where they are, interpreting intent, understanding natural language and surfacing insights proactively.
As a long-time financial data and analytics provider, FactSet has spent years laying the groundwork for this transition. Its open, flexible platform consolidates data from hundreds of disparate sources, powering faster, deeper and more forward-looking decisions across the financial spectrum.
“Our evolution goes beyond productivity. It’s about reclaiming cognitive space, freeing professionals to focus on decision making and revenue generation, not navigating complex tools,” says Kate Stepp, CTO.
Having integrated machine learning and AI since 2007 and large language models (LLMs) as early as 2018, FactSet has reshaped how financial professionals work, enabling them to focus on deal-making and revenue generation rather than navigating software. Clients can query high-quality structured and unstructured data, automate time-consuming pitchbook creation, produce portfolio commentary and capture key points, themes and sentiment from earnings transcripts, all in one flow. The dynamic nature of AI and generative AI unlocks material efficiencies at scale, reducing work that once took hours to minutes.
The paradigm of humans adapting to computers’ language has now flipped to computers understanding human language and intent.
Q: How have LLMs changed how users interact with data across FactSet’s platform and what productivity gains are you seeing as a result?
LLMs have changed how our users engage with data. Our digital platform holds vast amounts of structured and unstructured data, but navigating that depth used to require specialized training: you had to know where to click, what to search and how to work with each tool.
Now, users simply ask direct questions instead of clicking through menus. With natural language inputs and GenAI capabilities, they express intent, whether to ‘find the earnings highlights,’ ‘summarize this portfolio’s performance,’ or ‘create custom charts and slides,’ and the system interprets and delivers, often pulling data together in seconds and responding in a dynamic way that adapts to the user.
FactSet Mercury is a great example. It offers a conversational interface that supports advanced research across our platform, delivering answers with source links and context grounded in trusted FactSet data. It also connects users to the next best action based on their query, mapping a user's question to its underlying intent and aiming to automate more of the workflow.
These aren’t just convenience upgrades. Portfolio Commentary cuts the time spent on attribution summaries by nearly 90 percent. Pitch Creator compresses work hours into minutes, giving junior bankers space to focus on high-impact strategic initiatives rather than formatting slides.
The conversational experience we built for users is a material jump forward, and agentic AI offers the next leap. These tools initiate, plan and follow through: agents can chain tasks, interact across systems, reason and iterate to complete multi-step workflows. It’s a shift from asking for data to having an autonomous agent that helps drive the entire process forward.
We long ago laid the groundwork for agentic workflows with a strong API-first architecture that allows agents and AI models to execute tasks. As new protocols for LLMs and agents have emerged, such as the Model Context Protocol (MCP), we've continued that open and flexible leadership with our own MCP servers, which extend our APIs and let our data be brought together in new ways through new models, empowering the most complex client use cases.
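To make the pattern concrete, here is a minimal sketch of exposing a data capability as an MCP tool, using the open-source MCP Python SDK's FastMCP quickstart interface. The tool name, parameters and hard-coded payload are hypothetical illustrations, not FactSet's actual MCP servers or APIs.

```python
# Minimal sketch: wrapping an existing data API as an MCP tool so that
# LLM-based agents can discover and call it over the Model Context Protocol.
# The tool name and payload below are hypothetical illustrations.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("market-data")  # server name shown to connecting clients

@mcp.tool()
def get_earnings_highlights(ticker: str, fiscal_year: int) -> dict:
    """Return headline earnings figures for a company and fiscal year."""
    # A real server would call an internal, authenticated data API here;
    # a hard-coded response keeps the sketch self-contained.
    return {
        "ticker": ticker,
        "fiscal_year": fiscal_year,
        "revenue_usd_mm": 1234.5,
        "eps_diluted": 2.87,
        "source": "internal-earnings-feed",
    }

if __name__ == "__main__":
    # Serve over stdio so an MCP-speaking agent host can invoke the tool.
    mcp.run()
```

The same governed API surface can then serve both traditional applications and model-driven agents.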
One of the important pieces in how we prepared our firm for agents, and this applies to other firms, too, is how we’ve taken services, capabilities and data and thoughtfully encapsulated them so that applications and large language models alike can access them, whether through APIs, MCP or other means.
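A rough sketch of that encapsulation idea, using only the Python standard library: one well-described function backs both a direct application call and a generic tool descriptor that an LLM tool-calling loop could advertise. The function name, parameters and payload are hypothetical.

```python
# Sketch: encapsulate one capability so applications and LLMs can share it.
# The function is ordinary Python an application can import and call;
# describe_tool() emits a JSON-schema-style descriptor that a tool-calling
# LLM loop (or an MCP server) could advertise. All names are hypothetical.
import inspect
import json

def summarize_portfolio(portfolio_id: str, period: str = "QTD") -> dict:
    """Summarize performance and top contributors for a portfolio."""
    # Placeholder payload; a real implementation would query internal services.
    return {"portfolio_id": portfolio_id, "period": period, "return_pct": 3.4}

def describe_tool(fn) -> dict:
    """Build a generic tool descriptor from the function's signature."""
    params = {
        name: {"type": "string"}  # simplified: treat every parameter as a string
        for name in inspect.signature(fn).parameters
    }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": params},
    }

# Application path: call the capability directly.
print(summarize_portfolio("GROWTH-01"))
# LLM path: hand the descriptor to whatever tool-calling interface is in use.
print(json.dumps(describe_tool(summarize_portfolio), indent=2))
```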
Looking into the near future of agents, we need to understand the triggers in workflows that make automation efficient. Once we know why someone thinks to ask a question, what causes them to begin research, or which trends in the data humans might not even notice, the focus becomes how to kick off the agent processes that will elevate the workflows of the future.
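As a hedged illustration of such a trigger, the sketch below watches a simple data trend and hands off to an agent process when it crosses a threshold. The metric, threshold and agent hand-off are illustrative assumptions, not a FactSet workflow.

```python
# Sketch: a workflow trigger that kicks off an agent process when a data
# trend crosses a threshold a human might not be watching. The metric,
# threshold and hand-off function are illustrative assumptions.
from collections import deque
from statistics import mean

WINDOW = 5          # trailing observations to average
THRESHOLD = 0.10    # a 10% deviation from the trailing mean triggers research

def start_research_agent(ticker: str, reason: str) -> None:
    """Stand-in for handing a task to an agent runtime or task queue."""
    print(f"[agent] researching {ticker}: {reason}")

def watch(ticker: str, prices) -> None:
    history = deque(maxlen=WINDOW)
    for price in prices:
        if len(history) == WINDOW:
            baseline = mean(history)
            move = (price - baseline) / baseline
            if abs(move) > THRESHOLD:
                start_research_agent(
                    ticker, f"price moved {move:+.1%} vs {WINDOW}-period mean"
                )
        history.append(price)

watch("XYZ", [100, 101, 99, 100, 102, 115, 116])
```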
Q: How does FactSet approach transparency, governance and trust when deploying AI across its platform?
Regarding AI, transparency is foundational, especially around privacy, security and utility. Our clients expect to know what models are being used so they can run them through their governance processes. They want clarity on where the data comes from, assurance that outputs can be audited and confidence that every response can be traced back to a verified source.
That’s why our AI systems are built with traceability in mind. Techniques like retrieval-augmented generation (RAG) link each response directly to its source, so users aren’t left guessing where the information came from. This builds trust and keeps users in the loop.
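A minimal sketch of that traceability pattern, assuming toy keyword-overlap retrieval and placeholder documents: each retrieved passage keeps its source identifier, so the generated answer can cite exactly where its claims came from.

```python
# Sketch of retrieval-augmented generation with source attribution: every
# retrieved passage carries its source id, so the answer can be traced back
# to verified documents. Retrieval is toy keyword overlap; the documents and
# the generate() stub are illustrative placeholders, not a production system.
DOCUMENTS = [
    {"id": "earnings-call-q2", "text": "Q2 revenue grew 12% year over year."},
    {"id": "10-q-filing",      "text": "Operating margin expanded to 31%."},
    {"id": "press-release",    "text": "The board approved a new buyback."},
]

def retrieve(query: str, k: int = 2):
    """Rank documents by naive keyword overlap, keeping their source ids."""
    terms = set(query.lower().split())
    scored = sorted(
        DOCUMENTS,
        key=lambda d: len(terms & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(query: str, passages) -> str:
    """Stand-in for an LLM call: answer grounded in, and citing, the passages."""
    cited = "; ".join(f"{p['text']} [{p['id']}]" for p in passages)
    return f"Q: {query}\nA (grounded): {cited}"

print(generate("How did revenue grow?", retrieve("How did revenue grow?")))
```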
Privacy is paramount in financial services, where companies must ensure their generative AI tools comply with applicable data protection laws, regulations and industry standards. FactSet has its own secure and private instances of LLMs and does not share client, confidential, or proprietary data with public generative AI models.
Regarding the utility of AI, as we continue to deliver the highest-quality data, efficiency gains and automated workflows, we also suggest firms consider that not every task needs an “AI” label. Sometimes, the best outcome comes not from a generative model but from a streamlined interface or a well-designed query engine. In that sense, the value of AI is in how well it supports the user’s goals.
Q: How can financial firms prepare now for the autonomous capabilities of agentic AI?
Preparing for agentic AI requires foresight across three critical areas.
First, start using AI tools now. The best preparation is hands-on experience. Mastering today’s GenAI tools helps users build critical skills, like prompting effectively, evaluating AI output and integrating results into daily workflows. Each interaction teaches the user and trains the system, reinforcing business-specific patterns that will shape more intelligent agents in the future.
Second, get your data house in order. Data is the most significant dependency for agentic AI, so data should be treated like critical infrastructure. Agents rely on vast volumes of structured, well-connected, accessible data to take meaningful action. Most enterprise data environments weren’t built for LLMs, which means gaps in metadata and inconsistent formatting can derail performance. Enhancing your metadata with semantic-rich keywords, summaries and geolocations helps AI systems accurately interpret and map queries to the right fields, improving precision and output quality.
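As a rough sketch of that enrichment step (field names, keywords and the scoring rule are hypothetical), semantically enriched metadata makes it easier to map a natural-language query to the right field:

```python
# Sketch: semantically enriched field metadata helps map a natural-language
# query to the right data field. Field names, keywords and the scoring rule
# are hypothetical; real systems would use embeddings and richer catalogs.
import re

FIELD_CATALOG = {
    "px_close": {
        "summary": "Official end-of-day closing price per share.",
        "keywords": {"close", "closing", "price", "eod", "last"},
    },
    "rev_ttm": {
        "summary": "Trailing twelve-month total revenue.",
        "keywords": {"revenue", "sales", "ttm", "top", "line"},
    },
}

def map_query_to_field(query: str) -> str:
    """Pick the field whose enriched keywords best overlap the query terms."""
    terms = set(re.findall(r"[a-z]+", query.lower()))
    best_field, _ = max(
        FIELD_CATALOG.items(),
        key=lambda item: len(terms & item[1]["keywords"]),
    )
    return best_field

print(map_query_to_field("What was trailing twelve-month revenue?"))  # rev_ttm
```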
Third, work with partners that value openness and interoperability. Agentic AI depends on connected systems. That’s why firms should seek partners who offer flexible, API-first platforms designed to integrate, not isolate, within existing workflows. At FactSet, our early investment in open architecture has allowed us to embed capabilities more deeply into client environments and build forward-compatible foundations for AI agents to operate across systems.
The ultimate goal is not just to adopt new tools; it’s to rethink how work gets done. Agentic AI enables financial firms to modernize their operating models and focus human effort where it matters most—on high-value thinking, innovation and long-term strategy.
Firms don’t have to navigate this shift alone. Built on a foundation of trusted data and deep industry knowledge, FactSet guides clients through this transition, helping them identify the proper use cases, design scalable implementations and unlock lasting value. With pioneering GenAI tools already in action across buy- and sell-side workflows, we’re enabling professionals to search smarter, automate faster and focus where they create the most impact.