On Friday, Elon Musk announced that his artificial intelligence firm, xAI, will release its first AI model on Saturday.
“Tomorrow, xAI will release its first AI to a select group,” Musk wrote on X, formerly known as Twitter, adding that “in some important respects, it is the best that currently exists.”
Musk’s AI venture, launched in July, aims to “understand the true nature of the universe,” putting it in competition with OpenAI, Google, and Anthropic, the makers of ChatGPT, Bard, and Claude, respectively. Musk reportedly bought thousands of Nvidia GPUs in the spring to build a large language model along the lines of OpenAI’s ChatGPT or Google’s Bard.
Elon Musk’s New AI Company xAI Assembles Team of Top Researchers
According to LinkedIn profiles accessed by CNBC, the team behind xAI includes ex-employees of DeepMind, OpenAI, Google Research, Microsoft Research, Twitter, and Tesla who have worked on projects such as DeepMind’s AlphaCode and OpenAI’s GPT-3.5 and GPT-4 models.
In an interview with Fox News filmed in April, Musk revealed his ambitions for a new artificial intelligence tool codenamed “TruthGPT,” expressing concern that established AI firms are prioritizing “politically correct” systems.
To take artificial intelligence to the “next level,” xAI co-founder Greg Yang says the company will study the “mathematics of deep learning” and “develop the ‘theory of everything’ for large neural networks.”
Social media posts made in August by several xAI employees suggested the company was actively recruiting new team members. That month, xAI co-founder Toby Pohlen shared a blue and white logo on the site with the caption, “Getting everything ready for the first alpha testers.”
Filings show that Musk incorporated xAI in the state of Nevada in March of this year. Although Musk has folded Twitter into “X Corp.” in some SEC filings, xAI states on its website that it is a separate company from X Corp. and says it will “work closely with X (Twitter), Tesla, and other companies to make progress towards our mission.”
xAI Advisor Dan Hendrycks: AI Safety is a Global Priority
Dan Hendrycks, executive director of the San Francisco nonprofit Center for AI Safety, advises the startup. In May, tech leaders signed a statement published by the center declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Many in the tech world, particularly academics and ethicists, responded negatively to the letter, arguing that its focus on the future dangers of AI diverts attention from the present harms that algorithms pose to marginalized groups.
A spokeswoman for xAI was not immediately available for comment.