OpenAI researchers are concerned about the threat Q* (Q-Star) poses to humanity

According to Reuters, before Sam Altman's firing, several OpenAI researchers wrote a letter to the board warning that new discoveries in AI could threaten humanity.

Researchers see mathematics as a frontier in the development of generative AI, which currently writes and translates by statistically predicting the next word, which is why answers to the same question can vary widely.
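As a rough illustration of that last point, the toy Python sketch below samples a "next token" from a weighted distribution. The prompt and the token probabilities are invented for the example and have nothing to do with OpenAI's actual models; the point is only that sampling the same prompt repeatedly can yield different continuations.

```python
# Toy sketch (not OpenAI's system): generative models choose each next token by
# sampling from a predicted probability distribution, so repeated runs of the
# same prompt can produce different continuations.
import random

# Hypothetical next-token probabilities after the prompt "The capital of France is"
next_token_probs = {
    " Paris": 0.90,
    " a": 0.05,
    " located": 0.03,
    " the": 0.02,
}

def sample_next_token(probs):
    """Draw one token at random, weighted by its predicted probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# The same prompt can yield different next tokens across runs.
for _ in range(5):
    print(repr(sample_next_token(next_token_probs)))
```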

The ability to solve mathematical problems would imply that an AI has powerful reasoning skills resembling human intelligence, because in mathematics, unlike writing, composition, and translation, there is only one correct answer.

Currently, if you ask ChatGPT to solve a math problem, it uses its text-prediction approach: drawing on the enormous body of text it was trained on, it generates the answer a human would most plausibly write.

This means it may or may not get the answer right, and either way it is not actually doing mathematics.
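To make that distinction concrete, here is a toy Python sketch contrasting the two behaviors. The "training counts" are invented for the example and do not reflect any real model's internals; they simply show that a text predictor returns whatever continuation it has seen most often, while a calculator computes the single correct result.

```python
# Toy contrast (illustrative only): a text predictor picks the continuation it
# has seen most often after a prompt, whereas a calculator actually computes.
from collections import Counter

# Hypothetical counts of continuations observed after the string "7 x 8 ="
seen_continuations = Counter({"56": 40, "54": 7, "58": 3})

def predict_like_a_language_model(counts):
    # Return whichever answer string appeared most often in the (made-up) data.
    return counts.most_common(1)[0][0]

def compute_like_a_calculator(a, b):
    # Apply the rule of arithmetic; there is exactly one correct answer.
    return a * b

print(predict_like_a_language_model(seen_continuations))  # "56" -- right only because the data favors it
print(compute_like_a_calculator(7, 8))                    # 56 -- right by construction
```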

OpenAI appears to have made a breakthrough in this area: its Q* (Q-Star) model has reportedly been able to solve mathematical problems in a way earlier models could not.

Some at OpenAI believe Q* could be a major breakthrough in the startup's pursuit of artificial general intelligence (AGI), which the company defines as autonomous systems that surpass humans in most economically valuable work.

The new model can solve certain mathematical problems, although so far only at roughly the level of a grade-school student. Passing these tests has nonetheless made researchers optimistic about Q*'s future success.

AI researchers believe this capability could be applied to novel scientific research, since it would mean an AI that can genuinely reason, learn, and understand rather than merely predict text.

The letter and the Q* algorithm were reportedly key developments in the run-up to the board's ouster of Altman, who allegedly had not informed the board of the breakthrough himself.

The letter was one item on a longer list of grievances that led to Altman's dismissal, among them concerns about commercializing advances before understanding their consequences, the sources said.

In an internal message to employees, OpenAI acknowledged the project known as Q* as well as a letter sent to the board before Sam Altman was fired.

An OpenAI spokesperson said the message, sent by Mira Murati, alerted employees to certain media reports but did not comment on their accuracy.

In their letter to the board, the researchers flagged the AI's capabilities and potential danger, though the sources did not specify the exact safety concerns noted in the letter.

Computer scientists have long debated the dangers posed by highly intelligent machines, for instance whether such machines might decide that destroying humanity is in their interest.

The researchers also highlighted the work of an "AI scientist" team, formed by merging the earlier Code Gen and Math Gen groups, which is exploring how existing AI models can be optimized to improve their reasoning and, eventually, to carry out scientific work.


