| | JULY 202419CIOReviewwhich is still far from being optimized - will definitely not avoid that same fate." The revolution will only become real when there is a broad consensus on its most valuable use cases, probably thanks to the convergence of several forms of AI.The emergence of new risks Meanwhile, along with its benefits, the use of generative AI is also bringing new risks in terms of cybersecurity that need to be managed. "From a cybersecurity point of view, a highly sensitive and still little-known risk is that of `malicious prompt injection'. The attack could originate, for example, from an instruction hidden in a text that is automatically retrieved from the internet and analyzed by an autonomous agent using an LLM and could unwittingly lead to a data leak", says Matthieu Keip. Far from being perfect, AI can also lack ethics and transparency. A frequent issue raised is the "black box" effect, i.e. the lack of understanding about how an AI model made a decision or provided a result, due to the opaque and complex nature of the process. What's more, studies have shown that the technology is not free of bias (political, social, etc.) . An additional risk is the leakage of data. When a generative AI service is supplied by a third-party to a company for its internal use, the supplier can store all the prompts created by its client. For an asset management company like Amundi, protecting client data is essential for the relationship of trust. All these risks must therefore be analyzed and addressed to provide the necessary protection. Lastly, AI also has limitations in terms of ESG and CSR, particularly where the environment is concerned. Although the technology is not yet used on an industrial scale, it consumes a lot of energy. To run the servers of its partner OpenAI, Microsoft is estimated to have increased its consumption of water for data center cooling by 34% worldwide between 2021 and 2022. Such high consumption levels are mainly the result of AI research. As the technology is also very semiconductor-intensive, companies using AI that are concerned about their CSR profile will need to take all these aspects into account. Regulation: a losing battle?In the meantime, a battle is under way among regulators in the US, Europe and China about how best to control artificial intelligence. The move to regulate began in the US, the world's preeminent power and home to most of the leading AI companies. So far strongly inspired by the American model, European institutions reached a principle agreement on December 8, 2023, to outline the framework of a future European Regulation. For its part, China has the most stringent rules on the subject to date. For Matthieu Keip, effective regulation could prove to be a real challenge. "Although it has ever more applications in the civilian world, AI, like many technological revolutions such as radio communications and the internet - is also being driven by the military industry. Yet the latter is rarely subjected to any regulation." What's more, while private companies can be regulated, it's far more difficult to do the same with open-source applications. And given the rapid progress of open-source models, regulation is bound to be a step behind. ALTO, between technical components and business usesAware of the various risks and the ongoing regulatory battle, Amundi has been using artificial intelligence for more than six years deploying generative AI solutions on its ALTO technology platform, using the ALTO Studio product in particular. Available through Amundi Technology, the platform's services are used at various levels of the organization, from active and passive management to market monitoring, providing help to all aspects of the business - sales, support and onboarding teams, RFPs, marketing, etc. All the teams handling text-based information can use a private ChatGPT, thanks to Amundi's partnership with Microsoft, or an Open Source offering.Generative AI can also work on multiple internal Amundi sources, including documentation. In addition, it can deploy so-called `autonomous agents', AI-powered programs tasked with meeting objectives and carrying out tasks on their own. "Our compliance rule checker - the most complex autonomous agent we have developed to date can review all the marketing documents of an investment fund to see if they are consistent with each other and with the fund's legal regulations, and that they comply with current legislation", says Matthieu Keip. "Generative AI won't provide us with better forecasts and it won't replace in-house jobs," he adds. "But it will allow our staff to spend more time on higher value-added tasks. We see it as a great tool for simplifying the user interface. However, we're in no doubt that creating new knowledge and know-how will remain, by its very nature, a human activity. Natural language generation can work with a company's internal knowledge and represents a major technological leap forward." For Amundi's head of innovation, the difference between AI and generative AI "is the same as that between a parrot and a poet - the parrot simply tries to repeat words that it has heard before, while the poet can summarize and get to the very essence of a body of knowledge, an era, and concepts for an informed reader." Generative AI is undeniably a craze at the moment and with no real perspective on its technological implications for the future, it's tempting to believe it can "do everything". However, the hype will not last
<
Page 9 |
Page 11 >