
AI: Closing the human knowledge gap

Interview with Martin Moeller

Martin Moeller is Director of Artificial Intelligence & Generative AI for Financial Services across EMEA at Microsoft. In the following interview, he explains why the deployment of AI should be favoured, not feared, by employees and highlights its value in delivering an enhanced client experience. He also discusses the need for responsible and ethical approaches to developing this transformative technology.

Marketing & Communications

What is augmented intelligence and how does it differ from “traditional” artificial intelligence?
Humans are not perfect. We have gaps in our knowledge, limited time, limited resources, and biases that we are, or are not, aware of. The concept behind augmented intelligence is to shape machine intellect in a way that allows us to compensate for those gaps in human knowledge and complement our existing skills. A good way to illustrate how this works is to consider our senses – such as sight, touch or hearing – that help us to navigate the world around us. With augmented intelligence, we can amplify a human’s senses or abilities a thousandfold; it is like having a thousand pairs of eyes to read through vast company reports or to take a deep-dive into social media sentiment about a specific firm. As such, the concept of augmented intelligence is not different to traditional artificial intelligence or “AI” but it goes one step further by focusing on the philosophy of how to design, implement and utilise those AI systems.

How can AI be designed to complement and enhance human skills rather than replace them – particularly in the workplace?
When most people ask that question, what they really want to know is: “Will AI take my job?” I therefore want to state unequivocally that the answer to that question is “no” for multiple reasons. Firstly, every job consists of a number of individual tasks. To date, AI – including generative AI – can help with and, in some cases, even complete entire tasks – but never the job as a whole. And let’s not forget that many tasks tend to be repetitive and mundane. Secondly and more generally, it is not AI that could replace you in the workplace but rather someone who deploys AI effectively to help complete their tasks – especially if you are unwilling to embrace the technology yourself. The same applies to companies: those firms that make better use of AI to offer the same products or services as you will be more competitive.

When it comes to AI complementing human skills, there are plenty of examples of how this works. AI is already used extensively to combat financial crime, enabling vast quantities of information to be processed at high speed, often in milliseconds. Transaction monitoring is one such area; humans alone would be overwhelmed by the scale of the task but AI can help compliance teams to scan huge amounts of data in real time, freeing up human analysts to focus on investigating complex cases.

Market research is another good example: analysts who want to find out about a specific company need to navigate huge volumes of news, social media posts and other data sources to gain key insights into the business and its management team. AI can help them to sift through this material and swiftly identify new and relevant insights, allowing them to produce meaningful analysis and gain an edge for their house view.
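To make that division of labour concrete, here is a minimal sketch of automated transaction screening, assuming a simple mix of statistical and rule-based scoring: the machine scores every transaction and only flagged cases reach a human analyst. All names, thresholds and jurisdiction codes are hypothetical and purely illustrative, not any bank's or Microsoft's actual system.

```python
# Hypothetical sketch: automated screening escalates only suspicious cases to humans.
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class Transaction:
    account: str
    amount: float
    country: str

HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder jurisdiction codes

def risk_score(tx: Transaction, history: list[float]) -> float:
    """Combine a simple statistical outlier check with a rule-based flag."""
    score = 0.0
    if len(history) >= 5:
        mu, sigma = mean(history), pstdev(history) or 1.0
        z = abs(tx.amount - mu) / sigma
        score += min(z / 4, 1.0)   # unusual size vs. the account's own history
    if tx.country in HIGH_RISK_COUNTRIES:
        score += 0.5               # rule-based jurisdiction flag
    return score

def screen(transactions, history_by_account, threshold=0.8):
    """Yield only the cases a human compliance analyst should investigate."""
    for tx in transactions:
        if risk_score(tx, history_by_account.get(tx.account, [])) >= threshold:
            yield tx

if __name__ == "__main__":
    history = {"ACC-1": [120.0, 90.0, 110.0, 95.0, 130.0]}
    txs = [Transaction("ACC-1", 105.0, "CH"), Transaction("ACC-1", 25_000.0, "XX")]
    for flagged in screen(txs, history):
        print("Escalate for review:", flagged)
```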

How effectively can humans and AI collaborate in an industry such as private banking?
In private banking, the use of generative AI or “GenAI” is mainly focused on the role of relationship managers. It is widely recognised that relationship managers have to spend too much time – often 60–70% of their day – on administrative matters and mundane tasks, reducing the time they can devote to their clients. I know of one large US bank that has implemented a wealth management GenAI assistant that provides every advisor with real-time market insights; it is rather like having their very own Chief Investment Officer, Chief Economist and Global Equities Specialist on call around the clock. Similarly, here in Switzerland, there are a few examples of GenAI assistants that have already been rolled out successfully to assist client advisors. This development has generally been met with a positive response.

What are the key success factors in this context?
To gain acceptance for GenAI assistants among the workforce, I would say that the first key success factor is the “tone from the top” – in other words, senior leadership needs to support GenAI as a transformational technology and enable its implementation. Communicating the right use case is also essential: many companies fall into the trap of expecting too much or wanting to create something that is too perfect – resulting in endless iterations, so that they never actually manage to move beyond proofs of concept. It is better to define one repetitive area or task, solve it in an intuitive way and then deliver iterative improvements.

Are there any other essential ingredients for success?
Yes, to name just a couple: good adoption and change management are vital. For instance, GenAI needs to be embedded where people spend their working lives to make it as intuitive as possible to use. You also need to design the AI responsibly and embed trust-building elements into it – such as ensuring that the GenAI always indicates the sources for its output, so that users can check and challenge it. Only if users trust GenAI will they adopt it. I also believe that skilling and training are crucial for effective change management. The most important skill is probably critical analytical thinking: determining which tasks to use – or not use – GenAI for, and how to challenge its output.
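The source-attribution idea can be illustrated with a minimal sketch, assuming a toy retrieval step that stands in for a real search index and language model: every answer carries the snippets it was grounded in, so users can check and challenge it. The data, function names and citation format are hypothetical.

```python
# Hypothetical sketch: answers always list the sources they were grounded in.
from dataclasses import dataclass

@dataclass
class Snippet:
    text: str
    source: str  # e.g. a document title, URL or report ID

def retrieve(question: str, corpus: list[Snippet], top_k: int = 2) -> list[Snippet]:
    """Toy keyword retrieval standing in for a proper search index."""
    words = set(question.lower().split())
    ranked = sorted(corpus, key=lambda s: -len(words & set(s.text.lower().split())))
    return ranked[:top_k]

def answer_with_sources(question: str, corpus: list[Snippet]) -> str:
    """Compose a grounded answer and append numbered source references."""
    evidence = retrieve(question, corpus)
    body = " ".join(f"{s.text} [{i + 1}]" for i, s in enumerate(evidence))
    refs = "\n".join(f"[{i + 1}] {s.source}" for i, s in enumerate(evidence))
    return f"{body}\n\nSources:\n{refs}"

corpus = [
    Snippet("The house view expects two rate cuts this year.", "CIO Monthly, May"),
    Snippet("Emerging-market equities remain overweight.", "Asset Allocation Note 12"),
]
print(answer_with_sources("What does the house view say about rate cuts?", corpus))
```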

In private banking, the human factor is essential to foster trust and build long-term relationships, which are key to achieving sustained success in this industry. Do you believe augmented intelligence can nevertheless add value in this sector?
We humans all have limits. Advisors can’t always know everything about their clients, the market or the bank’s offerings at any given moment. Augmenting an advisor with GenAI means that a digital assistant can immediately provide a summary of the information needed if, for example, a client calls about changes in the economic outlook for a country and wants to know how it might impact their portfolio. The human advisor can then access that data in an instant, combine it with the bank’s house view and possibly even correlate it with the current portfolio of the client in question, taking into account their risk profile. With the benefits of augmented intelligence, the advisor can obtain the insights they need in near-real time and thus focus on providing trusted advice to their client.
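As a rough illustration of that correlation step, the following sketch computes a client's exposure to a country whose outlook has changed and turns it into a short briefing for the advisor. The data structures and field names are simplified assumptions, not a description of any real advisory system.

```python
# Hypothetical sketch: summarise a client's exposure to a country whose outlook changed.
from dataclasses import dataclass

@dataclass
class Position:
    instrument: str
    country: str
    value: float

def country_exposure(portfolio: list[Position], country: str) -> tuple[float, float]:
    """Return the absolute and relative exposure of the portfolio to one country."""
    total = sum(p.value for p in portfolio)
    hit = sum(p.value for p in portfolio if p.country == country)
    return hit, (hit / total if total else 0.0)

def briefing(client: str, risk_profile: str, portfolio: list[Position],
             country: str, outlook_change: str) -> str:
    """Produce a one-line advisor briefing combining the outlook change and exposure."""
    absolute, share = country_exposure(portfolio, country)
    return (f"{client} ({risk_profile} profile): outlook for {country} changed "
            f"({outlook_change}). Exposure: {absolute:,.0f} ({share:.0%} of portfolio).")

portfolio = [Position("Gov bond", "DE", 200_000), Position("Equity fund", "JP", 50_000)]
print(briefing("Client A", "balanced", portfolio, "JP", "growth forecast lowered"))
```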

Which advances in AI technology do you expect to see in the next decade, and which new opportunities or challenges might they bring?
No one can say what the next decade will bring. As a result of advances in computing and its democratisation, the growing volume and availability of data, and sophisticated academic research, we have now reached a unique point in the field of AI that has given rise to generative AI. The pace of change is truly astounding, which means that anything beyond the next three years or so is really a matter of guesswork.

That said, there are two key trends that are already visible today and are set to continue in the coming years. Both are of particular importance to our industry. The first is increasing verticalisation. In the AI as well as the GenAI space, a growing number of specialised firms are creating solutions for industry-specific pain points. Some of those firms are start-ups – but at the other end of the scale, the London Stock Exchange, for example, is creating a number of GenAI solutions for investment advisors as well as investment bankers. These verticalised (Gen)AI offerings will drastically increase adoption, reduce costs and speed up time-to-value.

The second is the advent of (Gen)AI agents. In essence, this is about multiple (Gen)AIs with specific skills that operate autonomously and use tools to solve complex problems. These agents can collaborate like a project team, resulting in previously unimagined abilities to complement humans in mastering complex tasks.
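A minimal sketch of this agent pattern, assuming nothing beyond plain functions: several specialised "agents" each handle one narrow step, and a simple coordinator passes intermediate results between them. Real agent frameworks add planning, memory, tool use and model calls; this only illustrates the division of labour and is not a description of any particular product.

```python
# Hypothetical sketch: specialised agents collaborate like a project team.
from typing import Callable

def research_agent(task: str) -> str:
    # In a real system, tool use would happen here (search index, market data API, ...)
    return f"key facts gathered for: {task}"

def analysis_agent(facts: str) -> str:
    return f"analysis based on ({facts})"

def writer_agent(analysis: str) -> str:
    return f"client-ready summary of ({analysis})"

def run_pipeline(task: str, agents: list[Callable[[str], str]]) -> str:
    """Coordinator: each agent consumes the previous agent's output."""
    result = task
    for agent in agents:
        result = agent(result)
    return result

print(run_pipeline("impact of rate decision on EU banks",
                   [research_agent, analysis_agent, writer_agent]))
```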

As AI and augmented intelligence technologies continue to evolve, which key policies or frameworks are needed to ensure they are developed and deployed responsibly?
Responsible and ethical AI is an absolute must. It has many dimensions, from safety, privacy and security to fairness, inclusivity and transparency. Every company needs to think through these aspects and anchor them in its DNA. At Microsoft, we have been at the forefront of these efforts, given that we have been working in AI for many decades. Reflecting our passion for this subject, we not only make our own Responsible AI Framework available for other companies to use for free but we are also working hard to convert as much of what we have learned as possible into tools that organisations can use to pragmatically execute on their responsible AI goals.

Of course, it is important to always be aware of specific new risks that generative AI may bring with it. All forms of AI have always entailed data risks, model risks and use case risks that need to be carefully managed. The distinctive nature of foundation models – which can be used for a wide range of use cases – has now brought a number of new and unique risks. We need to know and understand them in order to mitigate them. On the positive side, however: while the underlying foundation models are evolving at a fast pace, we are also seeing rapid developments in the tools that can be used to effectively mitigate the risks associated with those models, such as so-called hallucinations – when foundation models present false or misleading information as fact – or jailbreaking, when users attempt to bypass the ethical safeguards that have been put in place.

What role should governments play in regulating the development and deployment of AI technologies?
At Microsoft, we have long been calling for clear and effective AI regulation. I believe that this technology will be one of the most transformative forces of recent times and we need to have safeguards in place for the future. Although we have committed to a large number of self-regulations, such as our AI Access Principles or rules aimed at fighting the deceptive use of AI in elections, we believe that governments need to establish a robust common governance framework for AI. Going forward, it will be key to ensure strong safety frameworks, including effective safety brakes – especially for AI systems that control critical infrastructure. It will also be vital to develop a broad legal and regulatory framework based on the technology architecture for AI, to promote transparency and to ensure academic and non-profit access to AI. Finally, we should pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that go hand in hand with the advent of new technologies.

About the author

Martin Moeller is Director of Artificial Intelligence & Generative AI for Financial Services across EMEA at Microsoft. In this role, he leads Microsoft’s partnerships with banks and insurers to help them transform their companies and products through applied AI and to embed artificial intelligence into their organisations in a responsible and scalable manner. He spent his earlier career in wealth management and retail banking, where he worked in a variety of roles in the areas of business leadership, global strategy and digital transformation.

Martin Moeller holds a Bachelor of Arts in Economic and Political Science from the University of Manchester and a Master of Science in International Relations from the London School of Economics and Political Science (LSE) in the UK.


