OpenAI made a 'powerful' artificial intelligence discovery that 'could threaten humanity'

Following warnings of dangerous advances toward superintelligent machines, the company fired its CEO, Sam Altman.

Before Sam Altman, CEO of OpenAI, the company behind ChatGPT, was fired, several company researchers sent a letter to the board.

The letter warned of a powerful artificial intelligence discovery that, according to the researchers, could threaten humanity, two sources close to the matter told Reuters.

The letter and the artificial intelligence algorithm, neither of which had been reported until now, were key developments before the board fired Altman.

Before his triumphant return Tuesday evening, more than 700 employees had threatened to resign in solidarity and join him at Microsoft.

The sources cited the letter as one factor in a long list of board complaints that led to the executive’s dismissal.

According to one of the sources, veteran executive Mira Murati mentioned the project, called Q* (pronounced Q-Star), to employees Wednesday and said a letter was sent to the board before the events of the recent weekend.

After the news was published, an OpenAI spokeswoman said Murati briefed employees on what the media would report, but did not comment on the accuracy of the information.

The creator of ChatGPT made progress on Q*, which some internal experts believe could be a breakthrough in the search for superintelligence, also known as artificial general intelligence (AGI), one of the sources told Reuters.

OpenAI defines AGI as AI systems that are smarter than humans.

The new model, given vast computing resources, was able to solve certain mathematical problems, the person said, speaking on condition of anonymity because they were not authorized to speak on behalf of the company.

Although it only solves mathematical problems at a primary school level, success on these tests made researchers very optimistic about the future of Q*, according to the source.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

Superintelligence

Researchers consider mathematics a frontier to cross in the development of artificial intelligence.

Currently, generative AI is effective at writing and translating language by statistically predicting the next word, and answers to the same question can vary widely.

Achieving the ability to perform mathematical calculations, where there is only one correct answer, implies that AI would have a greater capacity for reasoning, resembling human intelligence.

This could apply, for example, to new scientific research, AI researchers say.

Unlike a calculator, which can solve only a limited set of operations, an AGI could generalize, learn, and understand.

In their letter to the board of directors, OpenAI researchers highlighted the AI's prowess and its potential risks, the sources said, without specifying the exact safety concerns flagged in the letter.

Computer scientists have long debated the danger posed by superintelligent machines, for example, whether they might decide that the destruction of humanity is in their interest.

Against this backdrop, Altman led the effort that made ChatGPT one of the fastest-growing software applications in history and attracted from Microsoft the investment and computing resources needed to move closer to superintelligence, or AGI.

In addition to announcing a number of new tools in a demo this month, Altman suggested last week, at a meeting of world leaders in San Francisco, that he believed AGI was within reach.

"Four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I've gotten to be in the room when we push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.

One day later, OpenAI's board fired Sam Altman.

Source: Latercera
