In the post, OpenAI, the creator of ChatGPT, highlights the need for scientific and technological breakthroughs to govern AI systems that could outperform human intelligence.

Paytm CEO Vijay Shekhar Sharma has expressed concern that the development of extremely advanced AI systems could disempower, or even lead to the extinction of, humanity. He voiced his concerns in a tweet citing a recent blog post by OpenAI.

In his tweet, Sharma highlighted some concerning passages from the OpenAI post, remarking that he is “genuinely concerned with the power that some people and select countries have already accumulated.”

Sharma cited another section of the blog post that stated, “In less than 7 years, we have a system that may lead to the disempowerment of humanity => even human extinction.”

What Does OpenAI Have to Say to Us?

In the blog post “Introducing Superalignment,” OpenAI highlights the need for scientific and technological breakthroughs to govern AI systems that could outperform human intelligence. To tackle this problem, OpenAI has dedicated significant computing resources and formed a team co-led by Ilya Sutskever and Jan Leike.

While the development of superintelligence may still seem far off, OpenAI believes it could arrive within the next decade. According to the post, managing the risks associated with superintelligence requires both new governance institutions and a solution to the problem of aligning AI systems with human intent.

Current AI alignment techniques rely on human supervision, such as reinforcement learning from human feedback (RLHF). These strategies, however, may not scale to superintelligent AI systems that exceed human abilities. According to OpenAI, new scientific and technical breakthroughs are required to overcome this challenge.
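The core of RLHF is a reward model trained on pairwise human preferences. The toy sketch below illustrates that idea only; the data, the linear scorer, and the training loop are all invented for illustration and have nothing to do with OpenAI's actual implementation:

```python
import math
import random

# Hypothetical toy setup: each candidate response is a small feature
# vector, and a simulated "human" prefers whichever response scores
# higher under a hidden preference direction (all made up).
random.seed(0)
TRUE_W = [2.0, -1.0, 0.5]  # hidden "human preference" direction

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def make_pair():
    a = [random.uniform(-1, 1) for _ in range(3)]
    b = [random.uniform(-1, 1) for _ in range(3)]
    # The labeler prefers the response with the higher true score.
    return (a, b) if score(TRUE_W, a) > score(TRUE_W, b) else (b, a)

pairs = [make_pair() for _ in range(500)]  # (chosen, rejected)

# Reward model: a linear scorer trained with the Bradley-Terry
# pairwise loss, -log sigmoid(score(chosen) - score(rejected)).
w = [0.0, 0.0, 0.0]
lr = 0.1
for _ in range(50):
    for chosen, rejected in pairs:
        margin = score(w, chosen) - score(w, rejected)
        p = 1.0 / (1.0 + math.exp(-margin))  # P(chosen is preferred)
        # Gradient of -log(p) w.r.t. w is -(1 - p) * (chosen - rejected),
        # so we step w toward the chosen response's features.
        for i in range(3):
            w[i] += lr * (1.0 - p) * (chosen[i] - rejected[i])

# The learned reward model should now rank fresh pairs the same way
# the simulated labeler would, most of the time.
held_out = [make_pair() for _ in range(200)]
correct = sum(score(w, c) > score(w, r) for c, r in held_out)
print(f"held-out pairwise accuracy: {correct / 200:.2f}")
```

In full-scale RLHF the scorer is a large neural network and the learned reward then drives a reinforcement-learning stage; this miniature version keeps only the preference-learning step that human supervision makes possible.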

The OpenAI strategy entails developing an automated alignment researcher with nearly human-level intelligence. They intend to scale their efforts and align superintelligence by utilizing substantial computational resources. Developing scalable training methods, verifying models, and stress-testing the alignment pipeline are all part of this approach.

OpenAI Is Forming a New Team

OpenAI recognizes that its research priorities will evolve over time and plans to share more of its roadmap in the future. The company is assembling a team of top machine learning researchers and engineers to tackle the problem of superintelligence alignment.

OpenAI's goal is to present evidence and arguments that persuade the machine learning and safety communities that superintelligence alignment has been solved.

According to OpenAI, its work on superintelligence alignment complements its ongoing efforts to improve the safety of existing AI models and to address other AI risks.