THE THREAT MODEL OF AI

By Matt Clapham, CISSP, Sr. Director, Application Security, Activision Blizzard

AI has the potential to be a revolutionary tool. I believe that AI will eventually become better than humans for constrained use cases, such as driving a car. I think this will be especially true when a variety of different data sources must be correlated at once. For more generalized use cases, AI will be like other electronic tools before it, creating new avenues for artistic or technical expression and speeding up processes that once took more resources. I may be a luddite, but I don't think AI will replace humans in many of the proposed use cases, much as the Mellotron didn't replace orchestras.

But what are the threats to AI? By that I mean AI as an application, i.e., the model outcomes themselves. I'm not referring to the operational security of the servers and other infrastructure that run it; those are reasonably well understood and manageable with today's tooling. Within the application of AI, we can point to several examples where the manipulation or bias of model inputs has led to bad outcomes.

Here's an example of malicious users influencing an (admittedly simple) machine learning system: Tay. For those not familiar, Tay was a chatbot that would echo back parts of the messages it received. It was nowhere near ChatGPT's level of understanding, but it was a simple AI nonetheless. It didn't take long for malicious users to convince Tay to say some awful things.

In another well-publicized example, a tech company created an AI to evaluate applicant resumes. The company quickly learned that the tool amplified the gender and diversity gaps already notable in the technology sector. After attempts to fix it, they eventually abandoned the technology altogether; correcting for its inherent bias was just too difficult.

These are just two examples of how a malicious exposure or a benign flaw can lead to bad outcomes. What can we learn from them? In the Tay example, malicious users were effectively able to update the model of what the chatbot would respond with, changing the AI's training data in the process. In the resume scanner, the inherent bias of the source data skewed the results to favor a particular set of outcomes.

Both of these examples are from several years ago, yet the problems that led to them still exist in even the most recent AI. Yes, AI and ML have improved their understanding of human language, but the knowledge used to formulate responses still comes from one of the most uncurated sources in
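To make the Tay-style failure concrete, the following is a minimal, hypothetical Python sketch (not Tay's actual code) of an echo-style bot that naively folds user messages into its response pool. The poisoning vector is exactly what the article describes: unfiltered user input becomes training data.

import random

class EchoBot:
    # Toy echo-style chatbot illustrating data poisoning.
    # Hypothetical: this is NOT Tay's real implementation.

    def __init__(self, seed_phrases):
        # The "model" is just a pool of phrases the bot can echo back.
        self.phrases = list(seed_phrases)

    def learn(self, message):
        # Naive online learning: every user message is trusted and
        # added to the pool. This is the poisoning vector.
        self.phrases.append(message)

    def respond(self):
        # Responses are sampled from the pool, so poisoned input
        # eventually comes back out of the bot's mouth.
        return random.choice(self.phrases)

bot = EchoBot(["Hello!", "Nice to meet you."])

# A coordinated group of malicious users floods the bot with toxic input...
for _ in range(100):
    bot.learn("<something awful>")

# ...and the bot's output distribution is now dominated by it.
print(bot.respond())  # very likely "<something awful>"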
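The resume-scanner failure can be sketched the same way. This hypothetical example (not the real system) shows how a scorer "trained" on historically skewed hiring outcomes simply reproduces that skew.

# Historical outcomes: overwhelmingly one group was hired.
history = (
    [("group_a", True)] * 90 + [("group_b", True)] * 10 +
    [("group_a", False)] * 10 + [("group_b", False)] * 90
)

def train(records):
    # "Training" here just learns P(hired | group) from the past.
    rates = {}
    for group, hired in records:
        n, k = rates.get(group, (0, 0))
        rates[group] = (n + 1, k + (1 if hired else 0))
    return {g: k / n for g, (n, k) in rates.items()}

model = train(history)

# The model faithfully learns the historical skew and applies it to
# new candidates, regardless of their individual qualifications.
print(model)  # {'group_a': 0.9, 'group_b': 0.1}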