AI-Ready Infrastructure

Data centres, which house a complex matrix of servers in steel racks, provide the massive computing power needed to run AI applications. Apart from servers, they require optical fibres (OF), power supply cables, cooling fans, CPUs, GPUs and routers.

These computers generate heat and require cooling through layers of water pipes running across the whole data centre, along with air conditioners.

Traditional data centres are designed with all of this hardware, but AI applications demand many times more computing power. AI-ready data centres therefore require advanced computing capacity and power management. Existing hardware can be used for AI, but it is far slower; AI runs on huge clusters of power-dense GPUs.
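The reason GPU clusters dominate AI work is that the core operation, matrix multiplication, is embarrassingly parallel: every output element can be computed independently. A minimal Python sketch (using a thread pool merely to stand in for the thousands of cores on a GPU) illustrates the pattern:

```python
# Sketch: matrix multiply, the core AI workload. Each output row is
# independent of the others, which is exactly why massively parallel
# hardware such as GPUs accelerates it so well.
from concurrent.futures import ThreadPoolExecutor

def matmul_serial(A, B):
    """Naive triple-loop matrix multiply, one element at a time."""
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

def matmul_parallel(A, B, workers=4):
    """Same computation, but rows are dispatched to parallel workers."""
    m = len(B[0])
    def row(i):
        return [sum(A[i][p] * B[p][j] for p in range(len(B)))
                for j in range(m)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(row, range(len(A))))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert matmul_serial(A, B) == matmul_parallel(A, B)  # [[19, 22], [43, 50]]
```

On real accelerators the same independence is exploited across thousands of hardware cores rather than a handful of Python threads, which is where the orders-of-magnitude speedup over CPU-only servers comes from.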

AMD has introduced a data centre accelerated processing unit (APU) for high-performance computing (HPC). To handle LLMs, AMD introduced the MI250X accelerator, which is used in supercomputers. AWS also focuses on AI hardware: it has developed AWS Trainium, a high-performance ML chip.

LLM training has increased storage requirements at data centres. There is also a shift from the traditional CPU-centric architecture to GPU- or TPU-centric set-ups capable of parallel processing.
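A back-of-the-envelope calculation shows why LLM training inflates storage needs. The model size, byte counts and checkpoint counts below are illustrative assumptions, not figures from any specific deployment:

```python
# Rough storage arithmetic for LLM training checkpoints.
# All figures here are illustrative assumptions.
def checkpoint_gb(params_billion, bytes_per_param=2):
    """Raw size of one set of model weights in GB (fp16 = 2 bytes/param)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# A hypothetical 70B-parameter model in fp16, weights alone:
print(checkpoint_gb(70))       # 140.0 GB per checkpoint
# Retaining 50 periodic checkpoints across a training run:
print(checkpoint_gb(70) * 50)  # 7000.0 GB, i.e. ~7 TB of weights alone
```

Optimizer state (e.g. Adam moments) and training data multiply this further, which is why storage capacity has become a first-order design constraint alongside compute.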

Data centres now need more servers, or servers of higher density. Workloads of up to 40 kW per rack are required to cater to AI.
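The 40 kW figure follows directly from the power draw of dense GPU servers. A minimal sketch, where the server counts and wattages are illustrative assumptions:

```python
# Rack power density arithmetic behind the ~40 kW per-rack figure.
# Server counts and TDPs are illustrative assumptions.
def rack_kw(servers_per_rack, gpus_per_server, gpu_watts, overhead_watts):
    """Total rack draw in kW: GPUs plus per-server CPU/fan/NIC overhead."""
    per_server = gpus_per_server * gpu_watts + overhead_watts
    return servers_per_rack * per_server / 1000

# Four 8-GPU servers, ~700 W per GPU, ~2 kW of other load per server:
print(rack_kw(4, 8, 700, 2000))  # 30.4 kW; denser configs exceed 40 kW
```

By comparison, a traditional rack of CPU-only servers typically draws well under 10 kW, which is why AI racks force upgrades to power distribution and cooling.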

Hyperscale data centres are built to handle complex AI workloads. Liquid cooling systems are extensively used in them to reduce cabling and operational expenses.
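Cooling efficiency is commonly summarised by Power Usage Effectiveness (PUE), the ratio of total facility power to IT equipment power. The wattages below are illustrative assumptions showing why more efficient cooling cuts operating expenses:

```python
# Power Usage Effectiveness (PUE), the standard metric for comparing
# data centre cooling efficiency. Wattages are illustrative assumptions.
def pue(it_load_kw, cooling_kw, other_overhead_kw):
    """PUE = total facility power / IT equipment power (1.0 is ideal)."""
    return (it_load_kw + cooling_kw + other_overhead_kw) / it_load_kw

air_cooled = pue(1000, 500, 100)     # 1.6
liquid_cooled = pue(1000, 150, 100)  # 1.25
assert liquid_cooled < air_cooled    # less power spent on cooling
```

Every point of PUE improvement is electricity not spent on overhead, which at hyperscale translates into large operational savings.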

AI is evolving by leaps and bounds. It requires hardware and physical infrastructure, and at the same time it requires software management. Microsoft, for example, offers a hyperscaler service, Microsoft Cloud. AI solutions are being adopted by ever more functional areas of business, and India is matching this trend.
