Elon Musk is buying thousands of GPUs to run his own artificial intelligence model on Twitter

According to Business Insider, despite publicly calling for a pause on large-scale artificial intelligence experiments, Elon Musk has launched a major AI project inside Twitter. The social network has reportedly purchased around 10,000 GPUs and hired AI experts from Alphabet subsidiary DeepMind to build its own large language model (LLM).

Although details are not yet known, sources familiar with the situation say the project is still in its early stages; even so, the purchase of such a significant amount of additional computing power signals how serious Musk's intentions are. The exact purpose of Twitter's generative AI is unknown, but potential applications include improving search functionality and generating targeted advertising content.

It's worth noting that Twitter's current financial problems, which Musk has described as a "volatile financial situation," haven't stopped the company from spending tens of millions of dollars on GPUs. The hardware is expected to be deployed in one of Twitter's two remaining data centers. Notably, at the end of December, Musk shut down Twitter's main data center in Sacramento, which reduced the social network's computing capacity.

In addition to purchasing GPUs for the AI project, Twitter is actively hiring engineers for it. Earlier this year, the company brought on Igor Babuschkin and Manuel Kroiss, engineers from DeepMind, in what appears to be a move aimed at competing with OpenAI's ChatGPT. OpenAI uses NVIDIA A100 GPUs to train ChatGPT, and there is speculation that Twitter may use NVIDIA's newer Hopper H100 or similar hardware for its AI project.