According to reports, Elon Musk's AI project within Twitter has acquired around 10,000 GPUs and has recruited top AI talent from DeepMind to work on a large language model (LLM). The move is notable given that Musk has publicly called for a pause on training large AI models, yet he is clearly committed to this project. While the exact purpose of the generative AI effort is unclear, speculation is that it could be used to improve search functionality or to generate targeted advertising content.
Twitter has reportedly spent tens of millions of dollars on the GPUs despite its ongoing financial problems, which Musk has described as an 'unstable financial situation.' The GPUs are expected to be deployed in one of Twitter's two remaining data centers, with Atlanta the most likely location. Notably, Musk closed Twitter's primary data center in Sacramento in late December, which reduced the company's compute capacity.
Twitter has also been recruiting additional engineers, including top talent from AI research lab DeepMind, in a bid to compete with OpenAI's ChatGPT. Twitter is expected to use Nvidia's Hopper H100 or similar hardware for its AI project. However, because the company has yet to settle on an exact use case, it is hard to estimate how many Hopper GPUs it may require.
It is worth noting that when large companies purchase hardware, they receive special rates due to their bulk orders. However, when purchased separately from retailers like CDW, Nvidia's H100 boards can cost north of $10,000 per unit. This gives an idea of the amount Twitter may have spent on hardware for its AI initiative.
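To put those figures in perspective, here is a rough back-of-envelope estimate. The unit count (~10,000 GPUs) and retail price (~$10,000 per H100 board) come from the article; the bulk-discount rate is a purely hypothetical assumption, since the actual rate large buyers receive is not public.

```python
# Back-of-envelope estimate of Twitter's possible GPU spend.
# Figures from the article: ~10,000 GPUs, ~$10,000 retail per H100 board.
# The 50% bulk discount below is a hypothetical placeholder, not a known rate.
units = 10_000
retail_price_usd = 10_000        # approximate retail price per H100 board
bulk_discount = 0.5              # assumed volume discount (hypothetical)

retail_total = units * retail_price_usd
bulk_total = retail_total * (1 - bulk_discount)

print(f"At retail prices:    ${retail_total:,}")
print(f"With assumed bulk discount: ${bulk_total:,.0f}")
```

At retail pricing the order would run to roughly $100 million; even a steep volume discount would leave the bill in the tens of millions, consistent with the reported spend.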
Overall, Musk's involvement in Twitter's AI project and the company's investment in GPUs and AI talent show a growing commitment to AI and a push toward developing cutting-edge technologies that could transform the way we live and work.