CIOReview | JUNE 2023

AI has been quietly "generating" art, music, speeches and images for some time. What just happened is that it got a whole lot better: code generation, or even suggesting the next few words in a sentence you are writing in an email or document (just like what happened as I was writing this sentence). A co-pilot mode helps us be more productive, as opposed to replacing us (thanks again, AI, for completing that last sentence).

But we do need to keep our wits about us. It is hard to imagine using any tool that "hallucinates" for serious purposes. Imagine using a ruler, a pen, or a calculator that hallucinates. Imagine using a hallucinating tool for air traffic control. Nonetheless, you can use any tool by acknowledging its limitations, being clear about just how far you can rely on it, and knowing where we humans still need to be in the loop. By "being in the loop", I mean we are still using the tools, rather than blindly accepting an output.

So where does all this leave us? AI capability continues to accelerate, and we are now being forced to seriously consider the implications of what AI can be made to do. Moral and ethical challenges have been debated for some time with regard to the use of AI, but now we also need to consider issues such as whether AI can "own" an invention or a patent, whether the style of a human creation should be protected, and even whether the use of AI should be prevented in certain domains.

We also need to think about how we can appropriately use tools that are inherently unreliable and unexplainable, but which are still powerful. A hallucinating AI can still find a lot of useful connections, but we might use a different AI tool to check on the first one, essentially providing independent assurance. We also need to deal with the rising tide of "noise", or deliberate misinformation. Over the years, people have generated a lot of useful material for the internet, as well as a lot of cat videos. AI could outpace us by orders of magnitude.
Already we have trouble determining the validity of information out there. If AI starts generating hallucinated or deliberately false information at a million times the rate of humans, finding the "signal" in the "noise" may be something that only AI can do.

In the last few paragraphs, I have been accepting many of the sentence completion suggestions from my AI-powered document editor. So, after using AI to check spelling and grammar, I leave you with this final thought: could AI have written this opinion piece? It most definitely could have. Would it have been as good? I will leave you to decide for yourself.

Ian Oppermann