Experts Call For Limits On AI Research
Without limitations, the technology may pose a “profound risk” to society…
An article review.
Citing “profound risks to society and humanity,” a group of researchers, CEOs, and technology leaders recently signed an open letter calling for a moratorium on advanced AI research. In the letter, the experts, including Apple co-founder Steve Wozniak, state that there has been a race to create and deploy new technologies without considering the risks involved.
During the proposed pause in AI development, independent outside experts could develop safety protocols and establish some form of oversight for those deploying the technology. One computer scientist has warned that, if not properly restrained, the technology could result in the end of humanity.
While not everyone who signed the open letter believes the technology poses such dire risks, they agree that its potential impact on society is significant enough that those working on it need to stop and consider the long-term consequences of their work. At the moment such claims may seem exaggerated, with current models such as GPT-4 being more of a novelty than a threat to humanity, but the rapid acceleration in the capabilities of these once-simple chatbots suggests the experts’ warnings may be worth taking seriously.
Whether or not there is a pause in development, it seems clear that the AI genie is out of the metaphorical bottle, and concerns about the ethical use of this technology cannot be ignored for much longer. This open letter may be the first major call for caution and reflection on AI, but it certainly will not be the last.
Original article by James Felton, writing for iflscience.com.