CIOReview | NOVEMBER 2023

Information security teams have not yet fully exploited the machine learning use cases that have existed for years, and now those same teams anticipate being asked to enable the business for AI initiatives while simultaneously protecting the organization by placing guardrails around the use cases the business proposes. This feels very much like security awareness 2.0, placing a high burden on information technology and information security to maintain visibility into user activity involving AI.

· Does an AI strategy violate data classification protocols? Many companies may have to rethink their data classification protocols, or at the very least consider this crucial area when validating AI use cases.

· Can or should information security steer AI platform use toward tools that provide stronger content visibility and control? For example, should a Microsoft shop require the use of Copilot for all AI use cases?

· What obligations rest with information security beyond guardrail development, visibility gap assessments, and data leakage controls?

The increased use of AI will be an ongoing responsibility for CISOs. How can AI technology be used effectively and responsibly? How can the risks be mitigated internally and externally? The answers to those questions, just like the technology they concern, will continue to evolve. Machine learning, a "subset" of AI with far narrower scope, has been in use by many organizations for a long time, and it is probably the area where organizations can claim the most proven results.
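The data classification question above can be made concrete with a toy guardrail check: before any content reaches an external AI tool, compare its classification label against an approved ceiling. This is a minimal sketch under assumed conventions; the labels, the ceiling, and the function name are hypothetical illustrations, not any specific product's API.

```python
# Hypothetical sketch of a data-classification guardrail for AI use cases.
# Labels and the AI_TOOL_CEILING threshold are illustrative assumptions.

from enum import IntEnum

class Classification(IntEnum):
    """Ordered sensitivity labels: higher value means more sensitive."""
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3

# Highest classification the organization permits an external AI tool to receive.
AI_TOOL_CEILING = Classification.INTERNAL

def may_send_to_ai(label: Classification) -> bool:
    """Allow content to reach an AI tool only if its label
    does not exceed the approved ceiling."""
    return label <= AI_TOOL_CEILING

print(may_send_to_ai(Classification.PUBLIC))        # True
print(may_send_to_ai(Classification.CONFIDENTIAL))  # False
```

In practice such a check would sit inside a data loss prevention or proxy layer rather than application code, but even this simple ordering makes the point: validating an AI use case starts with knowing what classification of data it will touch.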