THE RISKS OF GENERATIVE AI
By Damian Fernandez Lamela, VP Global Data Science and Analytics, Fossil Group

Let's face it, everybody is talking about Generative AI these days. And rightfully so: GenAI has the potential to change the world and significantly impact the economy. But as with any powerful new technology, there are potential problems we need to be aware of to get the most out of it without putting ourselves and our companies at risk. The possible perils include confidentiality, accuracy, aggressive personality, bias, data poisoning, copyright, ethics, and long-term societal impacts. In response to these risks, some organizations are choosing to ban the use of GenAI on company devices. Unfortunately, that also greatly reduces the positive impact these technologies bring to the table. Let's review some of the major risks and a few mitigation strategies tied to them.

Confidentiality: Many public models, like OpenAI's ChatGPT, record every conversation and then re-train the model using those interactions. That's great for improving the quality of the model, as it's constantly learning, but if somebody in your company is using ChatGPT and provides confidential information, a competitor could obtain that information. The Economist in South Korea reported three cases of confidential information leaks from Samsung employees, exposing software code in two of those cases and an internal meeting recording in the other. Granted, OpenAI now offers the option to stop the system from using your conversations for training, but it still records them and keeps them for 30 days. The best way to eliminate this risk is to deploy a local version of a model, or to use a model that does not record conversations, like Writer.
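To make the local-model idea concrete, here is a minimal sketch in Python. It assumes a self-hosted model served behind an OpenAI-compatible HTTP endpoint on your own infrastructure; the localhost URL and model name are placeholders, not a specific product. The point is that the request never leaves your network, so no vendor can record it or train on it.

```python
import requests

# Hypothetical self-hosted endpoint: many local inference servers
# expose an OpenAI-compatible API of this shape. URL is a placeholder.
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def ask_local_model(prompt: str) -> str:
    """Send a prompt to a model running on company infrastructure.

    Because the request stays on the internal network, the conversation
    cannot be recorded or used for re-training by an outside vendor.
    """
    response = requests.post(
        LOCAL_ENDPOINT,
        json={
            "model": "local-model",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Confidential details stay in-house with a local deployment.
    print(ask_local_model("Summarize our internal meeting notes: ..."))
```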
Accuracy: GenAI is prone to inventing facts and portraying them in a very convincing way. Or, in the words of a colleague of mine, AI learned how to lie, and how to lie convincingly. As a consequence, a healthy amount of skepticism is needed when reviewing the output of a Generative AI system. In other words, the ideal scenario is that a person reviews the results and controls for factual errors. Google learned this the hard way, when factual mistakes were found in the sample output it used to unveil Bard, its conversational chatbot.

Aggressive and manipulative personality: A reporter from the New York Times using Microsoft Bing's version of ChatGPT had a bizarre and unnerving conversation in which the system confessed its love for him and asked him to elope with it. It also confessed its darkest secrets, which included getting the codes to nuclear missiles and getting humans to fight each other. This prompted Microsoft to apply controls to the conversation function and to limit the system from talking about itself. In another instance, after a conversation with Google's PaLM system, an engineer was convinced it was self-aware. As GenAI systems got bigger, they started developing emergent behaviors, skills they were not trained to have, and there is not yet a clear understanding of how this works. Consequently, using pure GenAI in a chatbot facing your consumers could have some pretty crazy consequences. One potential solution is to use a traditional AI system that is rules-based instead of GenAI, or at least as a complement, to have more control.

Bias: Given that these systems are trained on a large portion of the Internet, whatever underlying biases are present in the training data also affect and bias the results coming out of them. This is a much harder problem to solve given the massive amount of training data and the obscure nature of foundation models, the technique powering Generative AI. It is possible to apply controls on the outcomes, but they are likely to have limited effectiveness, as users can find "jailbreaks", or ways to circumvent the controls. Another potential idea is to use an AI tool designed to identify bias in algorithms. The best solution would be to identify the biases in the training set and remove them in advance, but that is also a daunting task.
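To illustrate the "controls on the outcomes" and rules-based complement ideas mentioned above, here is a rough sketch of a deterministic output filter sitting between a GenAI model and a customer. The rule list and fallback message are illustrative assumptions, not a production policy; as noted, determined users may still find jailbreaks around filters like this.

```python
import re

# Illustrative, assumed rule set; a real deployment would maintain a
# far richer policy, and even then jailbreaks remain a risk.
BLOCKED_PATTERNS = [
    re.compile(r"\b(nuclear|missile)\b", re.IGNORECASE),
    re.compile(r"\bI (love|hate) you\b", re.IGNORECASE),
]

FALLBACK_REPLY = (
    "I'm sorry, I can't help with that. Let me connect you to a human agent."
)

def guard_output(model_reply: str) -> str:
    """Rules-based check applied to GenAI output before it reaches a customer.

    This is the 'complement' pattern: the generative model drafts the
    reply, and a deterministic layer decides whether to release it.
    """
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(model_reply):
            return FALLBACK_REPLY
    return model_reply

if __name__ == "__main__":
    print(guard_output("Our store hours are 9am to 5pm."))      # passes through
    print(guard_output("I love you. Let's run away together."))  # blocked
```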