Blackwell Chips

The chip maker Nvidia is now the third most valuable company in the US, behind only Microsoft and Apple. The Santa Clara-based firm has earned the title of the world's most valuable chip maker, eclipsing celebrated competitors such as Intel and AMD.

Amid the AI boom and the rise of edge computing, firms are moving from exploration to deployment. AI computing fundamentally requires high-performance graphics processing units (GPUs). Traditionally, computers relied on central processing units (CPUs), a market dominated by Intel and AMD. GPUs are relatively recent additions to the computer hardware market; they were initially sold as add-in cards plugged into a personal computer to supplement an Intel or AMD CPU.

Graphics chips can handle the surge in computing power demanded by high-end graphics in gaming and animation applications, a load that standard processors cannot manage. AI applications too demand high computing power, and in their backend hardware these apps are becoming GPU-heavy.
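A minimal sketch (not Nvidia's software) of why graphics and AI workloads favour GPUs: both are dominated by the same operation applied to millions of data elements at once, such as the matrix multiplications at the heart of neural networks. Libraries like NumPy express this data-parallel style; the layer sizes below are illustrative, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
activations = rng.standard_normal((512, 1024))  # one batch of inputs
weights = rng.standard_normal((1024, 1024))     # one neural-network layer

# One matrix multiply here is roughly 512 * 1024 * 1024 independent
# multiply-adds. A CPU works through these largely sequentially; a GPU's
# thousands of cores compute them in parallel, which is the computing
# "surge" such workloads create.
outputs = activations @ weights
print(outputs.shape)
```

The same shape of computation, scaled up enormously, is what both a game frame and a ChatGPT response reduce to, which is why both run on the same class of chip.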

In the most advanced systems for training generative AI models, at least half a dozen GPUs are deployed for every CPU. The equation has changed completely from the days when a GPU was merely an add-on to a CPU, and GPUs are likely to maintain this lead in the near future.

Nvidia first popularised the term GPU in 1999 with a chip called the GeForce 256, which was coveted for graphics. These chips were more expensive than most CPUs on a per-unit basis, resulting in better margins. TSMC, the Taiwan-based foundry specialist, is the key player in the back-end semiconductor business, while Intel, AMD, Samsung and Qualcomm are the front-end players.

Nvidia's most popular AI chip is the H100, launched in 2023, which has 80 billion transistors. The company has now introduced a new chip, the B200 Blackwell, with 208 billion transistors. It can perform some computational tasks 30 times faster than the current blockbuster H100. With its greater computational power and optimised power consumption, the new chip will strengthen Nvidia's dominance in this niche space. It is twice as powerful at training AI models and five times more capable at inferencing (the work models such as Gemini or ChatGPT do while tackling queries and generating responses).

Training the GPT model that powered ChatGPT, with its 1.8 trillion parameters, took 8,000 Hopper GPUs consuming 15 MW of electricity. Just 2,000 Blackwells can do the same job while consuming only 4 MW of power.
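The arithmetic behind those figures can be sketched as follows. The GPU counts and megawatt numbers are the article's; the assumption (labelled in the code) that both clusters run for a comparable training time, so power ratios approximate energy ratios, is mine.

```python
# Efficiency comparison implied by the article's figures.
# Assumption: both clusters run for a similar wall-clock training time,
# so the ratio of power draw approximates the ratio of energy consumed.

hopper_gpus, hopper_mw = 8000, 15.0        # Hopper cluster (article figures)
blackwell_gpus, blackwell_mw = 2000, 4.0   # Blackwell cluster (article figures)

gpu_reduction = hopper_gpus / blackwell_gpus    # 4x fewer GPUs
power_reduction = hopper_mw / blackwell_mw      # 3.75x less power

# Per-GPU draw in watts (1 MW = 1,000,000 W)
hopper_w_per_gpu = hopper_mw * 1e6 / hopper_gpus           # 1875 W per GPU
blackwell_w_per_gpu = blackwell_mw * 1e6 / blackwell_gpus  # 2000 W per GPU

print(gpu_reduction, power_reduction, hopper_w_per_gpu, blackwell_w_per_gpu)
```

Notably, each Blackwell draws slightly more power than each Hopper; the overall saving comes from needing a quarter as many chips for the same job.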

Major buyers such as Google, Amazon, Microsoft and OpenAI are expected to use the new chip in their cloud-computing services and in their AI products.

Nvidia is ahead in the AI race because of its hardware as well as its proprietary software (the CUDA platform), which makes it easy to leverage its GPU hardware for AI apps. Nvidia has also developed the systems that back its processors and software to run all of this. It is thus a full-stack solution company.
