AI Is No Longer Explainable: Except in the most trivial cases, the depth and complexity of neural networks (the number of layers and weights), coupled with incomprehensibly large training datasets, mean we have little chance of describing how an output was derived, even if it were possible to unpick every layer and the impact of each training element. Any explanation would be largely meaningless.
For any decision that matters, there must always be an empowered, capable, responsible human in the loop ultimately making that decision. That “human-in-the-loop” cannot just be a rubber-stamp extension of the AI-driven process.
Any Regulation Must Not Refer to the Technology: There have been numerous calls to ban, pause, or regulate the use of AI. Publicly accessible LLMs hit the scene in November 2022, emerging into our lives with a bang and with the accelerator planted to the floor. Every day seems to announce new frontiers in AI capability. Buckle up when quantum supercharges AI.
The orders-of-magnitude difference between the pace at which technology moves and the pace at which regulation adapts means that the closer regulation gets to the technology, the sooner it is out of date. Regulation must stay principles-based and outcomes-focussed. It must concentrate on preventing harms, requiring appropriate human judgement (even if AI-assisted), and providing for contestability and remediation.
Blanket Bans Will Not Work: Various departments of education around the world (including in Australia) have announced comprehensive bans on student use of generative AI. The intention of these bans is to prevent students from unfairly using AI to generate responses to assignments or exams and then claiming the work as their own.
"Regulation must provide the oversight to allow us to stay vigilant to any negative consequences from AI use individually, for our society, and for the environment"
Such bans are extremely unlikely to be effective, simply because those who are not banned gain a potential advantage (real or perceived) through access to powerful tools or networks. The popularity of AI platforms also means that workarounds are likely to be actively explored, including use of the platforms in environments outside the restrictions. The bans arguably address symptoms rather than root causes. In the case of education, rethinking how learning is assessed is core to the challenge of appropriate use of generative AI.
We Need To Think Long Term: AI is a technology that has been with us for a long time. It has suddenly been renewed, and we are looking at it with little understanding of the long-term consequences. By analogy, electricity was the wonder of the 19th century. From an initial scientific curiosity, it has become embedded everywhere and has profoundly changed the world.
AI is likely to have as profound an impact as electricity. As AI becomes embedded in devices, tools and systems, it becomes invisible to us. Our expectation of these devices, tools and systems is that they are smarter: better aligned to the tasks at hand, better able to interpret what we mean rather than what we ask for, and improving over time. We do not expect to be manipulated or harmed by the tools we use.
Regulation must provide the oversight that allows us to stay vigilant to any negative consequences of AI use for us individually, for our society, and for the environment.
There is no chance we can put the AI genie back in the bottle.
Our focus must be on ensuring a safe and level playing field for users of AI as it continues to amplify, accelerate and adapt. That focus also has to stand the test of time.