OpenAI's Superalignment team, amidst the turbulence of Sam Altman's controversial exit and return to the company, remained focused on their crucial task: developing methods to control AI systems that surpass human intelligence.

This dedication was evident during a recent conference in New Orleans, where team members Collin Burns, Pavel Izmailov, and Leopold Aschenbrenner presented their latest research on ensuring AI systems' alignment with intended behaviors, TechCrunch reports.

The Superalignment team, formed in July and led by Ilya Sutskever, co-founder of the Microsoft Corp (MSFT)-backed OpenAI, aims to develop ways to steer and govern superintelligent AI systems.

This initiative gained additional significance considering Sutskever's involvement in Altman's departure and his ongoing leadership role following Altman's return.

The team's efforts carry particular weight given Sutskever's stated belief that advanced AI could pose an existential threat.

Currently, the team is crafting governance frameworks for robust future AI systems, a task complicated by the nebulous concept of "superintelligence."

Their strategy involves using a weaker, simpler AI model to supervise a more sophisticated one, steering the stronger system toward desired objectives and safety protocols. This approach could also help mitigate AI "hallucinations," instances in which a model generates false or misleading information.
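For readers curious what such weak-to-strong supervision might look like in practice, the toy sketch below uses off-the-shelf scikit-learn classifiers as stand-ins for the smaller and larger models; the model choices and synthetic dataset are illustrative assumptions, not OpenAI's actual setup.

```python
# Minimal sketch of the weak-to-strong supervision idea described above.
# Assumption: scikit-learn classifiers stand in for the "weak" and "strong"
# models, and a synthetic dataset stands in for a real task.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary task: only a small slice has ground-truth labels.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_labeled, X_unlabeled, y_labeled, y_unlabeled = train_test_split(
    X, y, train_size=0.1, random_state=0
)

# 1. Train the weak "supervisor" on the small labeled set.
weak_model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)

# 2. The weak model produces (imperfect) labels for the remaining data.
weak_labels = weak_model.predict(X_unlabeled)

# 3. Train the stronger model only on those weak labels.
strong_model = GradientBoostingClassifier(random_state=0).fit(X_unlabeled, weak_labels)

# 4. Check whether the strong model generalizes beyond its weak supervisor,
#    measured against ground truth it never saw during training.
print("weak supervisor accuracy:", weak_model.score(X_unlabeled, y_unlabeled))
print("strong student accuracy: ", strong_model.score(X_unlabeled, y_unlabeled))
```

The interesting question in this setup, and the one the Superalignment researchers study, is whether the stronger model can outperform the weaker supervisor that trained it.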

OpenAI announced a $10 million grant program to further this research, drawing support from notable figures such as Eric Schmidt, former CEO of Alphabet Inc's (GOOG) (GOOGL) Google. Despite Schmidt's vested commercial interests in AI, his involvement underscores the growing concern and interest surrounding the management of superintelligent AI.

The Superalignment team's work and the broader research community's contributions, funded by this grant, will be publicly shared. This commitment reflects OpenAI's dedication to the safe and beneficial development of advanced AI technologies for the betterment of humanity.

Price Action: Microsoft shares were trading higher by 1.22% at $370.45 at last check Friday.