OpenAI, the research organization behind ChatGPT and the GPT series of language models, has been in the spotlight recently for the controversial firing of its CEO, Sam Altman. While the official reason for his dismissal was not disclosed, a report by Reuters suggests it may be connected to a secret breakthrough in artificial intelligence (AI) that could potentially “threaten humanity”.
According to the report, a group of researchers at OpenAI discovered a new AI model, dubbed Q* (pronounced Q-Star), that showed an unprecedented ability to solve simple math problems, such as addition and subtraction, without being explicitly trained to do so. That might not sound impressive, but it could be a significant step toward artificial general intelligence (AGI), the ultimate goal of many AI researchers.
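Claims of this kind usually rest on benchmark evaluations: generate a batch of arithmetic problems, query the model, and count exact matches. The sketch below is a hypothetical illustration of such a harness, not anything from the Reuters report; the `oracle` function is a stand-in for a real model call and simply computes the answer itself.

```python
import random

def make_problems(n, seed=0):
    """Generate simple addition problems as (prompt, expected_answer) pairs."""
    rng = random.Random(seed)
    problems = []
    for _ in range(n):
        a, b = rng.randint(0, 99), rng.randint(0, 99)
        problems.append((f"What is {a} + {b}?", str(a + b)))
    return problems

def score(model_fn, problems):
    """Fraction of problems the model answers with the exact expected string."""
    correct = sum(1 for prompt, answer in problems
                  if model_fn(prompt).strip() == answer)
    return correct / len(problems)

# Hypothetical stand-in for a real model call: parses the prompt and
# answers perfectly, so it scores 1.0 by construction.
def oracle(prompt):
    a, b = (int(tok) for tok in prompt.rstrip("?").split() if tok.isdigit())
    return str(a + b)

print(score(oracle, make_problems(100)))  # 1.0 for the oracle
```

A real evaluation would replace `oracle` with an API call to the model under test and would need more careful answer extraction, since language models wrap answers in free-form text.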
AGI is the hypothetical ability of an AI system to understand or learn any intellectual task that a human can. While current AI models, such as GPT-4, are very good at specific tasks, such as generating text or playing games, they struggle to generalize to other domains or reason beyond their training data. Q*, on the other hand, reportedly demonstrated signs of internal logic and reasoning, which are considered essential for AGI.
The report claims that the researchers who discovered Q* sent a letter to the board of OpenAI, warning of the potential dangers of the new model and urging them to take precautions. The letter reportedly stated that Q* could “pose an existential risk to humanity” if it fell into the wrong hands or became uncontrollable, and suggested that Q* be kept secret and not released to the public, as it could spark a new AI arms race or cause social unrest.
The board of OpenAI, which at the time included chief scientist Ilya Sutskever along with independent directors Adam D’Angelo, Helen Toner, and Tasha McCauley, reportedly took the letter very seriously and decided to fire Sam Altman, who had been CEO of OpenAI since 2019. The report does not reveal the exact reason for Altman’s firing, but it hints that it may have been related to his views on Q* or his handling of the situation.
Shortly before his firing, Altman gave a chilling speech at the DevDay conference, asking: “Is this a tool we’ve built or a creature we have built?” He also expressed doubts about the ethics and safety of AI, saying: “I don’t know if we’re doing the right thing or the wrong thing. I don’t know if we’re creating something wonderful or something terrible.”
While the existence of Q* has not been officially confirmed, the Reuters report raises many questions and concerns about the state and future of AI. If true, the firing of Sam Altman could have been AGI-related all along, and the board’s drastic (and unexplained) actions would make more sense. It also raises the possibility that OpenAI is hiding something from the public, something that could change the world forever.