AI Finds a Role as ‘Collaborative Intelligence’ Models Come to Life

On October 3rd, FIX Trading Community, along with RBC Capital Markets, hosted the New York Regional Meeting. The evening event provided delegates with a regional update, followed by a panel debate, and closed with networking drinks. The panel topic was ‘The Growing Impact of Artificial Intelligence (AI) in Financial Services’.

“None of us here believe we are anywhere close to autonomous, artificial intelligence decision-making. Not in 12 or 18 months. It’s going to be a while.” Thus went one panelist’s argument from a packed panel session at FIX Trading Community’s regional meeting in New York. It was also the consensus sentiment. Even with a searing-hot spotlight on AI today – its ascension in the general consciousness, its ever more sophisticated techniques and possibilities – financial services companies are still reluctant to untether machines completely, not when millions – or even billions – lie in the balance.

Still, throughout the session, hosted by RBC, speakers also made a subtle distinction. AI may still be carefully harnessed, or even shelved altogether, for a variety of technical, talent or legal reasons. But efforts combining it with selective human intervention are firmly on the rise, they said, and much of the event therefore focused on how different types of participants – investment banks, asset managers, exchange operators and fintechs – can calibrate, distribute and deploy AI to support this “collaborative intelligence”.

Gains: Better Compute and Modeling, Still Waiting on Data

To start, the panel generally agreed that the further away from trade execution the function, the more likely AI might be given some leash. But many global firms are trying to push the envelope, having now worked on AI for half a decade or more and with scores of data scientists and engineers dedicated to AI research. One speaker argued that rapid gains in graphics processing unit (GPU) power were the inflection point; another explained it as an evolution in techniques. Either way, the scope – and questions – have widened.

“If you look back 30 years ago, [AI] was defined by a set of rules, like playing a grandmaster at chess, but it was not good at stochastic problems with lots of variables and probabilistic decisions,” he said. “Now it can play things like Go or poker, beating humans at games we previously thought it couldn’t, and bots are much more accurate in their human interaction, too. AI applications in financial markets are evolving because of the techniques allowing those new breakthroughs. Previously, a machine-learning model would train on a set of data, but something new would be introduced and prove it brittle, rendering a stupid decision. That is the benefit of the transfer learning we have now: if you let it learn on four different markets and combine and transfer that to another new area, it will create robust, sustainable models.”
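
To make the transfer-learning point concrete, here is a minimal sketch of the pattern the speaker describes: pretrain a model on data pooled from several markets, then fine-tune it on a new one. The synthetic data, feature count, and choice of scikit-learn’s SGDRegressor are illustrative assumptions, not the panelists’ actual setup.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
N_FEATURES = 8
shared_w = rng.normal(size=N_FEATURES)  # structure shared across markets

def make_market_data(n=500, shift=0.0):
    """Synthetic (features, next-period return) pairs for one market."""
    X = rng.normal(size=(n, N_FEATURES))
    y = X @ shared_w + shift + rng.normal(scale=0.1, size=n)
    return X, y

# Pretrain incrementally on four source "markets".
model = SGDRegressor()
for shift in (0.0, 0.1, -0.1, 0.05):
    X, y = make_market_data(shift=shift)
    model.partial_fit(X, y)

# Fine-tune on a small sample from a new, fifth market: the pretrained
# weights serve as the transferred starting point rather than a cold start.
X_new, y_new = make_market_data(n=50, shift=0.2)
model.partial_fit(X_new, y_new)
print("R^2 on the new market after fine-tuning:", model.score(X_new, y_new))
```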

Today, obstacles more likely arise from the limits of the data available to incorporate and weigh into AI, rather than its native capacity to learn. “For investment research, we took a very conscientious approach and we found that AI, combined with alternative data, could be very helpful and potentially predictive, and yet it proved incredibly messy, and we sought to know why,” one investment manager panelist said. “As an active investment-strategy shop, we look at a lot of data points and that is what machine learning can help with. But in testing over a three-month window we realized that we don’t have great ways to manage our own data yet; as a result, the output would drift over time. We have to figure that out first, before letting models rule the world.”
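
The drift the panelist describes can be surfaced with a simple monitoring check that compares the distribution of live inputs against the training set. A minimal sketch using a per-feature two-sample Kolmogorov–Smirnov test; the feature count, significance threshold, and synthetic data are assumptions for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(train, live, alpha=0.01):
    """Indices of features whose live distribution differs from training."""
    flagged = []
    for j in range(train.shape[1]):
        _, p_value = ks_2samp(train[:, j], live[:, j])
        if p_value < alpha:  # significant distribution shift
            flagged.append(j)
    return flagged

rng = np.random.default_rng(1)
train = rng.normal(size=(1000, 5))
live = rng.normal(size=(1000, 5))
live[:, 2] += 0.5  # simulate one feature drifting in production
print("drifting features:", drifted_features(train, live))
```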

Another speaker likewise suggested that collaborative intelligence in finance is much like the development of self-driving vehicles, with differences only in the nature of the risk. “AI will make recommendations in autonomous cars, but for now a human is still at the wheel. For us it’s the same for trading and market surveillance,” he told the audience. “The benefit is in taking an output, incorporating the human learning, and making the model smarter. [Tesla CEO] Elon Musk has suggested testing these cars is as much about collecting telemetry data as anything, to learn what the human did when the car was wrong. We’ll reach a point in time when AI is way better than a human. But for most applications, the human still needs to be there.”

Challenges: Building Out, Explainability, Talent

When building out a collaborative intelligence program backed by AI, the panel next highlighted a trio of secular challenges, apart from how much or how little autonomy firms are willing to give.

First, an investment banking representative offered that, within five years, machine learning will become “a mainstay for algorithmic trading, even if not in its total application.” The problem, he said, lies in designing algo componentry and evaluation that is suitable to the objective, and proving it out. “Reinforcement learning is the clear favorite where trading and market complexity are concerned, but studying it for years as we have, we’ve hit those kinds of roadblocks: how do you apply AI on one component where it could work, and where it might not? For example, in a parent-child trade execution structure? Or, if you have an algo with five components in sequence, and only one uses machine learning, can you separate out the value for clients?”
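
To illustrate the componentry question, here is a minimal sketch of a parent-child structure in which one component (the child-order sizer) is swappable, so an ML-driven version could be tested against a rule-based baseline and its value separated out. All names and the ten-minute horizon are hypothetical, not an actual production design.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ChildOrder:
    qty: int
    minute: int

def rule_based_sizer(remaining: int, minute: int) -> int:
    """Baseline: slice the remainder evenly over a 10-minute horizon."""
    return max(1, remaining // (10 - minute)) if minute < 10 else remaining

def schedule(parent_qty: int, sizer: Callable[[int, int], int]) -> List[ChildOrder]:
    """Split a parent order into child orders using the supplied sizer."""
    children, remaining = [], parent_qty
    for minute in range(10):
        if remaining <= 0:
            break
        qty = min(remaining, sizer(remaining, minute))
        children.append(ChildOrder(qty=qty, minute=minute))
        remaining -= qty
    return children

# Swapping an ML model in here (same signature as rule_based_sizer) isolates
# that single component, so performance differences can be attributed to it.
print(schedule(1000, rule_based_sizer))
```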

Of course, that analysis is not merely an issue of performance measurement, but also an increasingly unavoidable concern for regulators, too. “One of the biggest challenges we face lies in explainability,” he continued. “Sitting as we do against trading desks, everyone has an idea what an algo should do. Today, you know how it is engineered and you can figure it out pretty fast. Now we’re applying machine learning, and we have to be able to explain outcomes. Success and failure rest on that principle: can you explain what it did, when it did it? When an AI runs wild, and you have no answer, that’s the end of the game.”
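
One common first step toward the explainability the speaker demands is a model-agnostic importance check: measure how much performance degrades when each input is shuffled. A minimal sketch with scikit-learn’s permutation importance; the model and made-up features stand in for a real trading algo, which the source does not describe.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
# Hypothetical inputs: e.g. spread, depth, volatility, momentum.
X = rng.normal(size=(800, 4))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.1, size=800)

model = GradientBoostingRegressor().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank features by how much shuffling them hurts the model: a first-pass
# answer to "which inputs actually drove the outcome?"
for j in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {j}: importance {result.importances_mean[j]:.3f}")
```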

Besides an added operational wrinkle and compliance cost, explainability also plays a huge part in a third challenge: talent acquisition and competing against unregulated tech sectors. In fact, perhaps the most surprising part of the session lay in the panelists’ candor on the issue. Said one: “Things we tried over time just didn’t quite stick: we saw ‘learn to code’ classes that would start with 35 individuals and three weeks later be down to one.” “We initially failed because our approach was essentially ‘AI scientist, plus data,’ without any context for our business or markets or how clients think,” added another.

Ultimately, the panel broadly agreed that success requires a thoughtful “dynamic approach” that first cross-pollinates tech and trading-floor personnel, then identifies the skills and the crucial places, e.g. API development or tablet dashboards, where engineers can introduce AI effectively.

Trends: AI Governance, Creative Applications on the Rise

For all the progress and potential pitfalls awaiting collaborative intelligence with AI, the session’s final minutes took questions about future priorities, including larger aspirations, concerns about bias, and the shape of the creative applications most likely to thrive.

Bias, like explainability, has come to the fore in recent AI research; a recent study from the University of Washington was cited as evidence of its importance going forward. As one speaker said, “bias in data, actions and decisions for AI is a fine line. It’s pretty tough to dig into past behavior, localize it, and identify the parameters that lead to biased decisions.” Still, the panel agreed that a broader, institutional response is the way forward. “Safeguards around who is getting a loan or not should go beyond just sticking information into a model,” another noted. “It’s the reason for the hot trend in AI governance: AI committees and councils establishing guidelines for what goes in there, and articulating why those decisions are made.”
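
A minimal sketch of the kind of safeguard that quote points at: checking a loan model’s approval rates across groups before decisions go out, the sort of test an AI governance committee might mandate. The synthetic data and the 0.8 “four-fifths rule” threshold are illustrative assumptions, not anything the panel specified.

```python
import numpy as np

def approval_rate(decisions: np.ndarray, group_mask: np.ndarray) -> float:
    """Fraction of applicants in a group that the model approved."""
    return decisions[group_mask].mean()

rng = np.random.default_rng(3)
group = rng.integers(0, 2, size=1000)                 # protected attribute (0/1)
decisions = rng.random(1000) < (0.5 + 0.1 * group)    # simulated model approvals

rate_a = approval_rate(decisions, group == 0)
rate_b = approval_rate(decisions, group == 1)
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)     # demographic-parity ratio
print(f"approval rates: {rate_a:.2f} vs {rate_b:.2f}, parity ratio {ratio:.2f}")
if ratio < 0.8:
    print("flag for AI governance review: possible disparate impact")
```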

Indeed, that boardroom attention will be required. While earlier discussion had outlined why AI adoption is a challenge, one firm’s “heuristic” was telling. “We see it as: something a human can do in a second should be considered for AI if done at scale, millions of times,” said its panelist. “Other use cases are where things are very easy to spot, like patterns, but extraordinarily hard to describe. AI is very adept at that, in areas like optical character recognition. You could write the number ‘2’ in 15 different ways, but describing those differences to a static program is much more difficult than with machine learning.”
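
The “easy to spot, hard to describe” point is exactly what learned classifiers handle well. A minimal sketch on scikit-learn’s bundled handwritten-digits dataset: no hand-written rules about strokes or loops, just a model fit to labeled pixels.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# The model learns the many ways a "2" (or any digit) can be written
# directly from examples, instead of from enumerated rules.
clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```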

A final comment pointed out how some startups, particularly those in fraud deterrence and regulatory oversight, are already showing how to put that pattern recognition to fresh use. “You see companies like BioCatch, using AI to profile personal habits on banking websites to ask, ‘are you who you say you are?’ It’s not about the exact data you’re inputting, so much as your behavior moving around the application. Another, Waymark Tech, analyzes and predicts whether a certain type of firm, like a small hedge fund, will be subject to future regulatory requirements, or whether those requirements conflict across regulations. The problem is absorbing all these new vendors, bringing new applications in.” As research into AI continues apace, that is a good problem to have. As one speaker concluded, “If we’re not going to force machine learning on people just yet, not entirely, the idea is this: help make better decisions, rather than replace them.”