Sibos: GenAI will be more impactful than ecommerce and the internet, says panel


“There have been many fundamental shifts in the IT space, such as the internet and the ecommerce revolution. This is the biggest one,” warned Dorian Selz, CEO and co-founder of Squirro, at Sibos today. “Leadership in companies needs to be on top of this revolution.”

With the introduction, development, and ubiquity of generative AI (GenAI) models such as ChatGPT, commentators are increasingly asking what new applications the technology may have. For financial services, GenAI has the potential to boost customer experience, revolutionise software development, streamline processes and increase productivity. However, precisely measuring the returns and addressing privacy concerns are proving challenging.

On the third day of Sibos, the industry session ‘Beyond the hype: The realities of Generative AI deployment’ explored this new landscape and underlined the business benefits that may be gleaned from GenAI today. The session’s panel comprised Namrata Jolly, managing director, head of financial services, Microsoft; Rachel Levi, global head of innovation engineering, Swift; Kai Yang, deputy general manager, AI engineering department, China Construction Bank; and Selz.

Moderator Cat Haines, APAC transformation programme, EY, kicked off proceedings by running an audience poll, which asked: “How successful has GenAI adoption been in your organisation?” Response options fell on a scale – ranging from “We have yet to deploy GenAI”, all the way to “We have successfully adopted GenAI at scale across the business.” Results revealed that a 56% majority of respondents are currently running small GenAI experiments. If this modest sample is at all reflective of financial services at large, it paints a picture of an industry that sees potential, but seeks to progress in a measured way.

Deployment areas

The first area of discussion looked at where AI can be inserted into financial firms’ operations. Yang said the CCB has been focusing on deploying large language models (LLMs) within “customer scenarios, risk management, and operational functions.” He noted the bank is already seeing improved efficiency of staff and streamlined servicing of customers.

Jolly picked up on the employee arena and the potential for productivity gains. GenAI can help with “knowledge”, she argued, so that staff can do a “smarter, better job.” This could involve use in contact centres, which is “especially useful for banks.” Jolly cited the example of call transcripts, which must otherwise be manually generated following customer calls. With the use of GenAI, manual transcriptions “are done away with,” she said.

Chatbots underpinned by AI may also be used internally to help staff explain specific products to customers, or externally, for use by customers themselves. “AI can take customer questions in natural language format, pass on information, and give a fast, simple response,” she claimed.

Haines noted that most firms are today focused on AI’s backend use cases, since they are hesitant to make the leap of placing it directly in front of customers. Crucially, if used by staff to produce answers for customers, she argued, “staff must have a space for feedback, to tell leaders how accurate or useful the AI answers were.” From there, alterations can be made.

Jolly added that many organisations are today using Copilot – a chatbot developed by Microsoft, based on the GPT-4 series of LLMs – while at the same time working on developing their own versions for launch in the mid-term.

“There is a broad ecosystem of capabilities, solutions, and application areas,” returned Haines. “Understanding how to knit all that together is key for at-scale deployment.”

Deployment hurdles

Selz highlighted that the challenge of deployment over the past few years has been threefold. First, it requires firms to “re-think” their organisation; second, they must evolve their “management programme”; and third, there are the practical issues of implementation.

“It’s clear from the poll that many firms are running small experiments,” he underlined. “But between that and full deployment are the challenges of data integration. The amount of data being handled increases rapidly. Operating GenAI at scale is harder than you think, especially with access controls. Junior business analysts cannot see what CEOs see. What is being deployed must represent the company, so guardrails are needed.” He stressed that all of this must be “sorted early” if success is to be guaranteed.

Yang described three pillars to the AI “data challenge.” These are data volume, data quality and data infrastructure. “All data must be clean,” he argued. There needs to be “cross-pollination between systems.”

While Levi agreed on the importance of data management and safety, she stressed that not all firms are beginning from the ground floor: “We make it sound like there’s a lot for organisations to do, but we don’t need to start on these things from scratch. Many have processes for data integrity that are in place already.” Haines told the audience that “it’s about understanding where you are and how you can build on your situation.”

Upskilling staff

The implementation of GenAI is not all technical. One of the most important tasks is to ensure staff are kept abreast of the technology and the vision behind it.

Jolly pointed to the need to “change management processes…bring the entire organisation up to speed,” and train employees. Senior management must “walk the walk, from the top down,” she argued. “It will benefit the organisation and employees and make it future-proof.”

Microsoft, for its part, acclimatises staff and customers to AI by being transparent about its own “journey” with the technology, be it within the context of “treasury, HR, or business functions,” said Jolly.

Haines added: “We see GenAI as this big scary thing, but governance, data management, upskilling – this is all in our toolbox already. Yes, it’s moving fast, and that’s what adds concern.” She highlighted the importance of encouraging staff to use AI constructively and implementing policies that both reward them and manage the risks.

Levi claimed that the process of adopting GenAI is not dissimilar to the process of onboarding other technologies. “We can re-use certain processes,” she said. “We have handled big technological shifts over the years. But the big question is how do we measure ROI, efficiency gains and productivity? We see bias. Employees are skeptical of how this impacts their job. If you measure ROI too early, you can’t see that return. Are we capturing all benefits? We need a more rounded view.”

Indeed, GenAI deployment is also about standardisation, knowledge, and mitigating risk – not just driving profit.

Selz said: “There have been many fundamental shifts in the IT space, such as the internet and the ecommerce revolution. This is the biggest one. The technology has the power to change how we think about organisations. It’s not just about a machine chatting to a human. It’s also about machines chatting to machines. This means leaders need to rethink how they position the company, how they run the business and how they deploy company resources.”

Selz used the case study of a firm in Europe which saw a “45% increase in its match rate” by deploying GenAI. This is a “fundamental change in business,” he claimed. “Leadership in companies needs to be on top of this revolution.”

Ensuring AI accuracy

Jolly argued that AI accuracy starts with “responsible AI.” In other words, “models being created must be transparent and explainable.” Risks of bias or manipulation must be erased. “We must operate within these principles. Addressing these risks while maximising user benefits is key.”

Microsoft’s formal approach here is encapsulated by the acronym, IMMO, which stands for “Identify, Measure, Mitigate and Operate.” The Measure phase is “used by folks internally to steer development toward accuracy and safety. This requires systematic and iterative testing to produce clear metrics,” Jolly said.
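The panel did not spell out what those metrics look like in practice. As a minimal sketch of the kind of iterative evaluation loop the Measure phase implies, something like the following could be run after every model or prompt change; the test cases, scoring rule and generate_answer stand-in are purely illustrative assumptions, not Microsoft tooling.

```python
# Illustrative only: a generic evaluation loop of the kind a "Measure" phase implies.
# The test cases, scoring rule, and generate_answer function are hypothetical stand-ins.
from statistics import mean

test_cases = [
    {"question": "What is the card's FX fee?", "expected": "1.5%"},
    {"question": "Can statements be exported as CSV?", "expected": "yes"},
]

def generate_answer(question: str) -> str:
    """Stand-in for the GenAI system under test."""
    return "1.5%" if "FX" in question else "yes"

def run_eval(cases) -> dict:
    # Score each case on whether the expected answer appears in the model output.
    hits = [c["expected"].lower() in generate_answer(c["question"]).lower() for c in cases]
    return {"accuracy": mean(hits), "n": len(cases)}

if __name__ == "__main__":
    # Re-run after each model or prompt change so accuracy becomes a tracked metric over iterations.
    print(run_eval(test_cases))
```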

Yang added: “LLMs that avoid bias are key. First, clean the data in the training process. It must be diverse. Second, go for internal scenarios, then broaden to customers. Last is monitoring all results and updating the models.”

Privacy concerns

A second poll from Haines asked the audience what the major hurdles have been in deploying GenAI in their organisations. Data privacy and security received the clear majority of votes.

Selz responded by explaining how firms should control the reach of AI in their systems. Firms have “various internal enterprise systems,” he said. “They have thought out procedures and controls in place, but LLMs can run horizontally through these systems. We manage that security challenge with Retrieval Augmented Generation (RAG) architectures, whereby you don’t expose everything to an LLM, only those parts that generate an answer. Then, install a new security layer. Who can see what in the architecture? There should also be a layer of synthetic data, or at least a way to cleanse data so it’s no longer personally identifiable. Finally, install guardrails…to ensure data is handled properly.” Selz acknowledged this is “all pretty complex and it takes a moment to get this done in large organisations.”
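Selz did not walk through an implementation, but a minimal sketch of access-control-aware RAG along the lines he describes might look like the following; the document store, role model, redaction step and call_llm stub are hypothetical stand-ins, not Squirro’s actual architecture.

```python
# Hypothetical sketch of access-control-aware RAG; all names below are illustrative stand-ins.
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    allowed_roles: set  # roles entitled to see this document

@dataclass
class User:
    name: str
    role: str  # e.g. "analyst" or "ceo"

def retrieve(query: str, store: list, user: User, top_k: int = 3) -> list:
    """Return only documents the user is entitled to see (naive keyword scoring)."""
    visible = [d for d in store if user.role in d.allowed_roles]
    scored = sorted(visible, key=lambda d: -sum(w in d.text.lower() for w in query.lower().split()))
    return scored[:top_k]

def redact(text: str) -> str:
    """Placeholder for PII cleansing or synthetic-data substitution before prompting."""
    return text  # a real system would strip or mask personal identifiers here

def call_llm(prompt: str) -> str:
    """Stub for whichever LLM API the firm uses; it only ever sees filtered, redacted context."""
    return "(model response)"

def answer(query: str, store: list, user: User) -> str:
    context = "\n".join(redact(d.text) for d in retrieve(query, store, user))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)
```

The point of the sketch is the ordering: entitlement filtering and redaction happen before anything reaches the model, so the LLM never has a horizontal view across systems.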

Levi underlined that Swift’s key focus has thus far been governance, and how it should “responsibly use AI as a company.” This has meant constantly managing and identifying risks and establishing an “AI governance framework,” via a dedicated council. She confirmed that Swift has mechanisms to ensure transparency, from decisioning to ideation and proofs of concept. First, Swift tries “1, 2, 3 use cases internally,” then it tests “in controlled environments” to understand how the AI is running.

“There are ways to guarantee the privacy of GenAI,” Levi said, “but they come with other areas to be balanced. It’s about deploying from an infrastructure perspective.”

Top tips

Haines closed the discussion with a quick-fire round: “Do you have any tips for organisations deploying AI?” she asked the panel.

Yang focused on accuracy of models; Jolly underlined the importance of company culture and clean data; Levi urged for an appreciation of the sheer impact that AI will have on everyone’s personal and professional lives; while Selz told the audience to be responsible, but added, “Then, push the limits.”


Wed, 23 Oct 2024 16:15:00 GMT