Nvidia, the American chipmaker, claims that its technology is a generation ahead of the rest of the industry. There is speculation that Google is inching towards a bigger place in the AI space with its TPUs, or tensor processing units.
Central to the development of AI are the semiconductors, or chips, that enable machines to process huge amounts of data. Nvidia occupies the leadership position on this frontier: its chips run most major AI models, and its influence extends to nearly every place where computing is done.
Of late, it has been reported that Meta, the parent company of Facebook and WhatsApp, may strike a deal with Google to use its TPUs in Meta's data centres; traditionally, Meta has used Nvidia chips. Nvidia secured a $5 trillion valuation in late October 2025, the first company to do so. Alphabet, the parent company of Google, crossed the $4 trillion mark in November 2025. These two developments highlight the rivalry between Nvidia and Google; indeed, Nvidia's stock has slid recently.
In the early stages of LLM training, Nvidia's graphics chips played a vital role in number crunching. This led to a surge in demand for Nvidia's GPUs, such as the Hopper and the more recent Blackwell chips, which are more flexible and more powerful than Google's TPUs.
TPUs belong to an altogether different chip category: application-specific integrated circuits, or ASICs. They are designed to run AI-specific compute tasks and are more specialized than CPUs and GPUs. It is too early to compare TPUs and GPUs in terms of cost and performance, but having more suppliers of accelerated compute is always a welcome proposition. Still, Nvidia commands a margin of around 70 per cent.
TSMC, the Taiwan-based chip maker, is cautious about expanding supply too aggressively. The AI bubble may burst, and if that happens there will be no orders and lots of idle capacity. New entrants such as Google will have to weigh this factor.
Google has been developing TPUs for the last decade and has been selling them through its cloud business for the last five years.
Still, Nvidia retains an edge by providing software that completes the whole ecosystem alongside its chip hardware. This software platform is called CUDA. It exposes an API, or application programming interface, a set of defined instructions through which applications talk to the hardware, and it facilitates writing parallel programmes that run on GPUs. GPUs are thus deployed in supercomputing sites around the world. In mobile computing, Nvidia's Tegra processors are used; these also power vehicle navigation and entertainment systems.
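To give a flavour of what programming with CUDA looks like, here is a minimal sketch of a vector-addition kernel. It is a generic illustration of the parallel model CUDA exposes (each GPU thread handles one array element), not code tied to any specific Nvidia product:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Kernel: each GPU thread computes one element of the result.
// This per-element parallelism is what CUDA makes easy to express.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;              // one million elements
    const size_t bytes = n * sizeof(float);

    // Unified memory is visible to both the CPU and the GPU.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);

    for (int i = 0; i < n; ++i) {
        a[i] = 1.0f;
        b[i] = 2.0f;
    }

    // Launch enough blocks of 256 threads to cover all n elements.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(a, b, c, n);
    cudaDeviceSynchronize();            // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);

    cudaFree(a);
    cudaFree(b);
    cudaFree(c);
    return 0;
}
```

The same loop on a CPU would run the million additions one after another; on the GPU, thousands of threads execute the kernel body concurrently, which is why GPUs came to dominate the number crunching behind AI training.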
TSMC of Taiwan is the back-end player in semiconductors, fabricating chips; Nvidia, Intel, AMD, Samsung, and Qualcomm are the front-end players.
In computers, the most important component is the CPU, where Intel and AMD are the market leaders. GPUs are a newer addition to computer hardware. Initially, they were sold as cards that could be plugged into a PC's motherboard to add computing power to an AMD or Intel CPU.
Nvidia chips powered the compute surge needed for high-end graphics in gaming and animation apps. AI applications later adopted GPUs, relying on their tremendous computing power. Computers are thus getting GPU-heavy in their backend hardware.
Advanced systems used for training generative AI tools now deploy half a dozen GPUs for every CPU. GPUs are no longer just add-ons to CPUs.
Google has to break into this market with its specialized chip. There are manufacturing constraints, and there is the question of competing with the ecosystem Nvidia has created.