According to three people with knowledge of the situation, Nvidia and Malaysian power-to-property giant YTL are in early negotiations about a data centre agreement.

According to one of the people, the potential partnership would centre on collaborating on cloud infrastructure and be anchored at YTL’s data centre complex in the southern Malaysian state of Johor, just across the border from Singapore.

According to a second individual with knowledge of the situation, the partnership would target companies in Southeast Asia, offering them cloud-based access to Nvidia’s AI hardware.

The value of the deal was not immediately apparent.

The sources declined to be identified because the talks are confidential.

Separately, at an AMD investor event on Wednesday, Meta, OpenAI, and Microsoft announced that they would be using AMD’s newest AI chip, the Instinct MI300X. It is the clearest indication yet that tech firms are looking for less costly alternatives to Nvidia’s graphics processing units (GPUs), which have been essential for building and deploying artificial intelligence applications such as OpenAI’s ChatGPT.

If AMD’s newest high-end AI accelerator proves good enough for the tech companies and cloud service providers building and serving AI models when it starts shipping early next year, it could lower the cost of developing AI models and put competitive pressure on Nvidia’s rapidly growing AI chip sales.

According to AMD, the MI300X is built on a new architecture, the kind of generational redesign that often brings significant performance gains. Its most notable feature is 192GB of HBM3, a cutting-edge, high-performance memory type that can hold larger AI models and transfers data more quickly.
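
As a rough illustration of why that memory ceiling matters, a back-of-the-envelope sketch (assuming 16-bit weights and ignoring activations, KV caches, and optimiser state, so the figures are illustrative rather than benchmarks) shows that the weights of a 70-billion-parameter model alone occupy roughly 140 GB, which fits on a single 192 GB accelerator:

```python
# Back-of-the-envelope sketch: memory needed just to hold model weights,
# assuming 2 bytes per parameter (FP16/BF16). Activations, KV caches and
# optimiser state are ignored; numbers are illustrative, not benchmarks.

BYTES_PER_PARAM_FP16 = 2

def weight_memory_gb(num_params: float) -> float:
    """Approximate weight memory in gigabytes for a dense model."""
    return num_params * BYTES_PER_PARAM_FP16 / 1e9

for params in (7e9, 70e9, 175e9):
    gb = weight_memory_gb(params)
    fits = gb <= 192  # 192 GB HBM3, the MI300X-class capacity cited above
    print(f"{params/1e9:>5.0f}B params -> ~{gb:>6.0f} GB of weights, "
          f"fits in 192 GB: {fits}")
```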

To compete with Nvidia’s industry-standard CUDA software, AMD told investors and partners on Wednesday that it has updated its ROCm software suite, addressing a significant weakness that has been a major reason AI developers have preferred Nvidia.
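
For context on why the software gap matters: frameworks such as PyTorch expose the same torch.cuda device API on their ROCm builds as on CUDA builds, so code written with Nvidia GPUs in mind can, in principle, run on AMD hardware unchanged. The sketch below is a generic, hedged illustration of that device-agnostic style, not AMD’s or Nvidia’s recommended setup, and whether a given model runs well on ROCm still depends on the kernels it needs.

```python
# Minimal sketch of device-agnostic PyTorch code. On a ROCm build of PyTorch,
# torch.cuda.is_available() reports True for supported AMD GPUs, so the same
# script can target either vendor's hardware (or fall back to CPU).
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(1024, 1024).to(device)   # toy stand-in for a real model
x = torch.randn(8, 1024, device=device)

with torch.no_grad():
    y = model(x)

print(f"Ran forward pass on {device}, output shape {tuple(y.shape)}")
```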

Earlier, Yotta Data Services had announced that it would work with Nvidia to provide GPU computing infrastructure and platforms for its Shakti-Cloud offering.

According to Nvidia, the partnership would accelerate the development of AI solutions in India by making AI capabilities accessible to a wide range of organisations, companies, startups, and AI researchers across the country.

Yotta’s customers will be able to train large language models and run other AI workloads with this solution, meeting the growing demands of Indian, wider Asian, and global markets.