With the acceleration of technology in all markets, firms are trying to create solutions to process data — from issuer offering documents and continuing disclosures to trades, pricing and evaluations — and turn it into digestible, usable forms.
For an illiquid, over-the-counter market with more than 50,000 issuers and a million securities outstanding, challenges stand in the way of combing through the massive and disparate data that is out there, but advances are coming at a faster clip, market participants said.
The process of turning the data into a usable form takes time and can be compounded by the difficulty of extracting what data there is and the format used by issuers in documents, like official statements, disclosures and other financials.
“Some of it is not timely, it’s not structured. PDFs vary in type and format. That’s a huge challenge because it’s not a standardized methodology,” said Matthew Gernstenfeld, co-founder and CEO of Munichain.
Parsing through the available data requires internal resources, which can come at a cost.
“If I was to access or purchase data, I still need to have sufficient technical expertise internally to build pipelines to bring the datasets in,” said Abhishek Lodha, director of strategy and innovation at AG Analytics.
Larger institutions can afford to access the data primarily due to their size and their available resources.
These institutions also have an IT function that works for multiple asset classes, including munis.
“So if I’m at [a bigger firm] it’s a very different story, but if I’m a medium-sized or smaller [separately managed account], then it becomes harder for me,” Lodha said.
This, he noted, is where platforms come in, including AG Analytics’ credit analytics platform.
These platforms allow firms to access the data and build their analytics and custom workflows on top of that in a scalable way, he said.
“It essentially eases that pain of being able to access that data and focus on inferences,” he said.
But even when firms have access to the data, it can be hard to extract it.
Firms like MuniPro, for example, use AI to improve extraction of the right "information out of the [official statements], out of the financials, out of all the disclosures that the issuer community meticulously puts together," said William Kim, founder and CEO of the firm, at the California Public Finance conference.
That has improved MuniPro’s detection rates to the point where it “can put together these verified debt profiles by credit for issuers, we can go tighter than that on a by-purpose or by-project view,” he said.
“We see it as an enormous enabler in terms of extracting the data that’s already out there, and digitizing it in a way that has meaningful use for our clients,” Kim said.
Others, like Adage, use machine learning and natural language processing to “convert complex deal documents into structured data models from official statements,” said Dan Silva, CEO of Adage, at the same panel.
The firm extracts "intricately detailed information around ratings, covenants, deal details that are pretty useful to know for keeping track of what's going on with your particular debt portfolio, your clients' portfolios," he said.
Already, market participants are contemplating the development of standards mandated by the Financial Data Transparency Act, which was passed with the intent of creating uniform standards for how municipal issuers submit machine-readable information to the Municipal Securities Rulemaking Board's EMMA system.
The Municipal Securities Rulemaking Board, which aggregates and provides much of the issuer-specific data in the market, has been focusing on technologies like machine learning to establish whether multiple sources are referring to the same piece of reference data, said Brian Anthony, chief product officer of the MSRB.
Once the regulatory board has determined “we are talking about the same thing,” then it applies different “extract,” “transform” and “load” algorithms to determine which field from varying data sources is more important, Anthony said.
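The "extract, transform and load" step Anthony describes can be illustrated with a small sketch: once records from different sources have been matched to the same security, each field is filled from the highest-priority source that supplies it. The source names, priority order and fields below are invented for illustration, not the MSRB's actual algorithms.

```python
# Hypothetical field-precedence merge: when matched sources disagree on a
# field, the value from the highest-priority source wins. Source names and
# priorities are illustrative assumptions, not real vendor feeds.
SOURCE_PRIORITY = ["issuer_filing", "vendor_a", "vendor_b"]  # lower index wins

def merge_reference_data(records):
    """Merge per-source records (dicts) for one security into a single record."""
    ranked = sorted(records, key=lambda r: SOURCE_PRIORITY.index(r["source"]))
    merged = {}
    for record in ranked:
        for field, value in record.items():
            if field == "source" or value is None:
                continue
            # setdefault keeps the first (highest-priority) value seen
            merged.setdefault(field, value)
    return merged

records = [
    {"source": "vendor_b", "coupon": 5.0, "maturity": "2034-08-01"},
    {"source": "issuer_filing", "coupon": 5.0, "maturity": None,
     "dated_date": "2024-08-15"},
    {"source": "vendor_a", "coupon": 4.875, "maturity": "2034-08-01"},
]
print(merge_reference_data(records))
```

The design choice here — a fixed precedence list with per-field fallback — is only one way to resolve conflicts; a production pipeline might instead weight sources per field or flag disagreements for human review.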
The MSRB’s focus is finding the consistencies across multiple sources and validating them for market participants via its EMMA website.
Getting data from multiple sources comes with a host of issues.
For one, integrating data from many sources is complicated, said Gil Shulman, founder and CEO of ficc.ai.
Ficc.ai, which builds AI foundation models to address inefficiencies and opacity in the muni market by providing real-time, accurate pricing for munis, has technology that is serverless and completely "cloud-native," he said.
“There are dozens of microservices that in real-time, fetch the data from all of these data sources, process them, clean them and move them into a point-in-time data set,” he said.
Some data comes in Extensible Markup Language, or XML, format. Those files are loaded from the data provider in real time, and then ficc.ai extracts them.
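The XML extraction step Shulman describes can be sketched in a few lines. The tag names, attributes and CUSIPs below are invented, since the actual vendor schemas are not public; this is only a minimal illustration of parsing such a feed, assuming Python's standard-library XML parser.

```python
# Minimal sketch of extracting structured rows from an XML pricing feed.
# The schema and identifiers are hypothetical.
import xml.etree.ElementTree as ET

FEED = """<trades>
  <trade cusip="13063A5G5" price="101.25" yield="3.42"/>
  <trade cusip="64966QJX8" price="99.80" yield="4.01"/>
</trades>"""

def extract_trades(xml_text):
    """Parse one XML payload into a list of plain dicts ready for loading."""
    root = ET.fromstring(xml_text)
    return [
        {
            "cusip": t.get("cusip"),
            "price": float(t.get("price")),
            "yield": float(t.get("yield")),
        }
        for t in root.findall("trade")
    ]

for row in extract_trades(FEED):
    print(row)
```

In a real pipeline, a microservice like this would run per data source, with validation and timestamping before the rows land in the point-in-time dataset.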
All of this is why some argue the human element is still necessary.
AG Analytics relies upon both technology and employees to comb through the data, according to Lodha.
“Technology cannot completely do all these things given the nuances, and neither can human resources alone,” he said.
Money is also a factor: it can be costly and time-consuming for humans to do what technology can do in a fraction of the time, but building a refined process on a firm's own infrastructure to collect this data can also consume a tremendous amount of resources, Lodha said.
“So it’s important to make sure you choose the right combination of human resources and technologies to make the process efficient across the board,” he said.
This is especially true due to the massive amount of data out there.
“There’s more data in our marketplace than ever before, so to process the data without technology, you need more people,” said Gregg Bienstock, senior vice president and group head of municipal markets at SOLVE, at The Bond Buyer conference in California.
In an environment where headcount is shrinking or stagnant at some firms, there continues to be more data and more demand from clients, even as firms have the same or fewer resources, he said.
“We take all that data, we create information for you so that your human resources … can get the most out of them for the most productive items,” Bienstock said.
“Technology is taking, in essence, what is arguably a menial task, and automating it so that the humans, all of us, can do the more important jobs,” he said.
If you can "teach a machine to do all the things that a trader can do, but also process thousands and thousands of different factors and come up with a number, that's a good thing," said Stephen Winterstein, managing partner at SP Winterstein & Associates.
Then that trader, who’s “really good at their job,” can look at what that machine produces and almost act in the capacity of quality assurance and correct where things are wrong, he said.
“No matter where we go with this, whether it’s with actual trading or if it’s looking at official statements or disclosures and understanding whether we’re getting it right or not getting it right, that, for an analyst [or trader] can be very useful in that process,” Winterstein said. “It doesn’t mean we’re going to eliminate anybody. It means that we’re going to use their skills where it can really be applied and make a difference.”
Market participants are “recognizing that if their firms are not taking a closer look at this, then they are not going to be able to operate to the same degree as someone else,” thereby giving the “inherent advantage” to their competitors, Gernstenfeld said.
"Firms who are open-minded and understand the importance of how rapidly things are changing in the market, and even organizations and issuers, will be far better off in the long run," he said.