Asset managers are using artificial intelligence and machine learning to augment existing processes and activities in the hope of cutting costs, being more efficient and freeing up resources. It might not be long, however, before regulators try to smother the whole process in bureaucracy.
AI, which has been around since the 1950s, has already revolutionised the asset management sector in many ways. According to the CFA Institute, the Virginia-based global investment-management body that awards the Chartered Financial Analyst qualification, it has made portfolio management, trading and risk management more efficient and more compliant with rules and regulations. It helps asset managers to construct portfolios by producing more accurate risk-and-return forecasts than ever before. Trading algorithms use AI to devise novel trading signals and to execute trades at lower transaction costs. AI also improves risk modelling and forecasting by generating insights from new sources of data. Robo-advisors, too, owe a large part of their success to AI. The technology can, however, create new risks and problems to do with model opacity and reliance on data integrity.
Machine learning is one approach to AI. In its more advanced forms, a machine-learning programme learns to do a job by finding patterns in large data sets and drawing inferences from them, rather than following a rules-based approach and obeying pre-existing commands. An AI system of this sort evolves as it records inputs and results, successful or unsuccessful, collecting and using huge quantities of data to work out rules for itself. In most cases in asset management, however, AI/ML does not make autonomous decisions. Instead, it guzzles up vast quantities of data, spots patterns and relationships that no team of people could ever spot, and presents the user with options. Even today, most AI is 'narrow' or weak, and its main use in asset management is to help people make more informed decisions.
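The distinction between a programme that obeys a pre-existing command and one that derives its own rule from data can be sketched in a few lines. This is a toy illustration only - the data, thresholds and function names are invented, not drawn from any real system:

```python
# Rules-based: a fixed command written in advance by a human.
def rules_based_flag(daily_return: float) -> bool:
    return abs(daily_return) > 0.05  # hard-coded 5% threshold, never changes

# "Learned": derive the threshold from historical data instead.
def learn_threshold(history: list[float], k: float = 3.0) -> float:
    mean = sum(history) / len(history)
    var = sum((r - mean) ** 2 for r in history) / len(history)
    return k * var ** 0.5  # flag moves larger than k standard deviations

# Invented history of small daily returns; in practice this would be
# a large data set, and the "rule" would shift as new data arrived.
history = [0.001, -0.002, 0.003, 0.0, -0.001, 0.002, -0.003, 0.001]
threshold = learn_threshold(history)

def learned_flag(daily_return: float) -> bool:
    return abs(daily_return) > threshold
```

The hard-coded rule stays the same forever; the learned one tightens or loosens as the data it has seen changes, which is the sense in which such a system "makes up rules by itself."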
Asset management firms and private banks that trade on the buy side on exchanges and other venues must now disclose the details of their trades to their regulators in the European Union and anywhere else where the writ of MiFID II (the EU's second Markets in Financial Instruments Directive, which makes the buy side directly liable for the quality of its fill prices) runs. In doing so they use algo wheels: pieces of software that aggregate performance data to select the strategies and brokers through which firms route orders, and which then generate reports that set out the reasons for each routing decision. These routers increasingly use AI and machine learning. Take-up outside Europe is brisk as well; tier 2 and tier 3 US broker-dealers that already white-label other brokers' algorithms find them especially helpful. They also help high-net-worth (HNW) clients manage their order flows more efficiently.
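Stripped of the machine learning, the core logic of an algo wheel - aggregate historical fill data, pick a route, and record a justification - can be sketched roughly as follows. The broker names, scores and record fields are all hypothetical:

```python
# Hypothetical per-broker fill-quality history (slippage versus arrival
# price, in basis points; more negative = better fills). Invented numbers.
fills = {
    "broker_a": [-1.2, -0.8, -1.5, -0.9],
    "broker_b": [-0.4, -0.6, -0.3, -0.7],
    "broker_c": [-1.1, -1.3, -0.2, -1.0],
}

def average(xs: list[float]) -> float:
    return sum(xs) / len(xs)

def route_order(order_id: str) -> dict:
    """Pick the best-scoring broker and emit a record documenting why."""
    scores = {broker: average(history) for broker, history in fills.items()}
    best = min(scores, key=scores.get)  # most negative slippage wins here
    return {
        "order": order_id,
        "routed_to": best,
        "avg_slippage_bps": {b: round(s, 2) for b, s in scores.items()},
        "reason": f"{best} has the best average fill quality on this wheel",
    }
```

A real wheel would weigh many more factors (venue, order size, market conditions) and, increasingly, use machine learning to score them; the point of the sketch is the audit trail - every routing decision comes with the data that justified it.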
Coming down the 'pike?
What, then, do regulators in advanced countries have in store for AI and machine learning? If various pronouncements from the International Organisation of Securities Commissions (IOSCO) are anything to go by, regulators are likely to start requiring firms to name and appoint specific senior managers to oversee the development, testing, deployment, monitoring and control of AI and machine learning - and, judging by the preferences of the UK's Financial Conduct Authority whenever 'governance' rears its head, to obtain prior regulatory permission for those appointments. The powers that be will probably oblige each firm to draw up - and keep updating - an internal governance regime with clear lines of accountability.
Regulators are also likely to require financial institutions to test and monitor the algorithms to validate the results of AI and machine-learning techniques in a never-ending loop. The more fastidious of them are likely to insist on firms testing those algorithms offline before they 'go live' and deploy them in earnest.
Then there is always training. The powers that be are likely to require firms to have adequate skills, expertise and experience to develop, test, deploy, monitor and oversee the controls over their AI and machine learning.
Firms are also likely to be compelled to understand their reliance on (and manage their relationships with) third-party providers, the bane of all regulators in the modern age. The authorities are likely to ask firms to monitor those providers' performance and oversee them, with strict service-level agreements in place.
MiFID II insists on firms sending off disclosures about trades to their HNW and other customers and it is hardly a stretch of the imagination to expect regulators to compel them to send customers reports about their use of AI and machine learning. This already happens to a great extent with the aforementioned algo wheels in the US, although broker-sponsored algo wheels do not usually offer customers much information about their "execution logic," for fear of reverse engineering.
Lastly, a slew of new rules and regulatory "expectations" is likely to arise to offset the risk of biases that lead to discrimination or bad advice. Biases might exist both in the information that the algorithms collect and in the manner in which they process it. Again, training is likely to come to the fore as regulators begin to fine asset managers for not ensuring that their IT staff know enough about biases in the data.
Making hay while the sun shines
It is to be hoped that the new, bureaucratic rules come into play long after AI has generated plenty of extra revenue to pay for them. This is, as in so many other areas of compliance, likely to be especially true at the largest firms. These are the most notable 'early adopters,' the firms that can use AI very profitably for the longest periods before the advent of the new rules and the consequent diminution of savings and profits. They are also the ones that can attract the right skill-sets most easily, offering the most prestigious career progression to people with the right knowledge of technology.