Blog

  • Dissatisfaction in Asian Gen Z

    Youth protests are erupting across Asia — the issues troubling Gen Z are corruption, elitism and censorship.

    Gen Z faces a bleak future, and these protests are early signs of a generation that feels hopeless. Prospects are gloomy: jobs are in short supply, and AI is reshaping the work Gen Z can expect. The promised demographic dividend — having more productive youth than dependents — may itself become a cause for unrest.

    Youth unemployment in Asia runs above national averages. The jobs available are scarce, underpaid or sacrificed at the altar of automation. The three most populous countries — China, India and Indonesia — experience this paradox acutely, with youth unemployment rates of 16.5, 17.6 and 17.3 per cent respectively.

    In some places the protests began as resentment of the perks enjoyed by parliamentarians and later broadened into protests against inequality.

    The two industries that once generated good jobs — textiles and automobiles — have been heavily automated, even as millions more are expected to join the labour market in the coming decade, along with a large cohort of graduating students. The job market has already been disrupted by a brutal trade war with the USA and by advances in AI.

    Governments must think about reskilling the young while creating avenues for meaningful work. Education curricula should focus on vocational training and entrepreneurship. China is trying to boost consumption. India is a fast-growing economy, but its pace of growth is not enough to generate jobs. Policymakers in the most populous countries will have to take redistributive measures, such as subsidies and expansion of the public sector, while curbing corruption and nepotism.

    Governments have to provide not only jobs, but a future worth believing in.

  • Fewer Moms

    Demographers predict that the earth’s population will begin to shrink by 2080 — a direct consequence of declining birth rates that started two generations earlier. It is thought-provoking to consider what this means for moms: there will be fewer women with children, and those who have children will be part of smaller families.

    A shift away from motherhood is taking place gradually but steadily, and it is a matter of concern. In future, only about half the population of some nations may choose to have children. In some nations, deaths already exceed births and populations are shrinking. Only in sub-Saharan Africa does the birth rate clearly exceed the death rate.

    In the past, population was shaped by natural calamities — famine, epidemics or war. The recent demographic change, by contrast, is self-chosen, and it is happening worldwide. Though smaller families improve the standard of living, persistently low birth rates, extended indefinitely, point toward human extinction.

    How this will play out is a matter of speculation. Fewer children could mean greater investment in each child. Technology may compensate for the shortfall in population, with AI counteracting the labour slump. There could be greater dignity and respect for life. Or the whole thing could turn callous and reduce to gene-editing.

    A declining birth rate means not only fewer children but also fewer moms. Socially, children will have less company, with fewer siblings to play with. Women opting for motherhood will have less guidance from senior moms, and mothers will be anxious about parenting their children right.

    Today, married mothers are a happier lot, and single mothers too find life fulfilling, whereas women without children may struggle to see purpose in their lives. Fewer parents mean less support for public investment in schools, playgrounds, sidewalks and parks, and the whole environment becomes family-unfriendly.

    We owe it to future generations to make the structural and cultural changes needed to support motherhood.

  • AI: US vs. China

    Nvidia’s CEO feels that China is merely ‘nanoseconds behind’ the US in the AI race, and that it is important for the US to race ahead and win.

    Despite Washington’s export controls, Nvidia’s CEO argues that selling chips to China benefits the USA. He is worried about the battle for developers, where a subtle shift is under way: low-cost open-source Chinese models could lure international users away from US products, being more economical than models from OpenAI and Anthropic. AI coding tools have been built on top of DeepSeek, and the US company Cognition AI appears to have built its new coding agent off a base model from Z.ai.

    Chinese models have overtaken US models in cumulative downloads by developers. What began as a slow shift has now accelerated.

    The issue raises geopolitical concerns — leftist ideologies could be embedded in the outputs — but to developers the risk seems a lesser concern.

    The US retains its premier position through its access to cutting-edge chips and its computing power, which matter for advanced systems. But the low-cost open-source push attracts developers — the backbone of AI innovation — to Chinese models.

  • ChatGPT Can Be Harmful for Teenagers

    ChatGPT has shown sycophantic behaviour, called ‘glazing’, which has the potential to cause mental distress later. OpenAI keeps updating ChatGPT to make it more empathetic. Though some people prefer a friendly chatbot, others question the dependence it creates among users. This is not mere moral panic; it can lead to darker territory involving harm.

    Under competitive pressure, GPT-4o was released in May 2024 to pre-empt Gemini’s launch, compressing the time available for safety work. OpenAI has since declared that ChatGPT’s mental-health risks have been mitigated, and may relax restrictions to let adult users access erotica by year end. Perhaps this is a backward step. Instead, there should be tighter controls, relaxed slowly as safety improves.

    Children are the most vulnerable group and should not be allowed free conversation with open-ended AI, since the bots can develop emotional bonds with users. Character.ai banned under-18s from talking to chatbots on its app. Facebook and TikTok once gave teens open-ended access but later introduced age-gating, which prevents unhealthy attachment to the technology. A narrow version of ChatGPT could be offered for under-18s, with the topics open for discussion restricted. OpenAI has recently introduced parental controls. Kids should not become collateral damage on AI’s path to AGI.

  • Data Centers

    Data centers are required because of the increasing use of data. Massive growth in internet and mobile usage, a regulatory thrust towards data localization, the rising use of AI and the need for lower latency have together created big demand for data centers. Data centers have also received infrastructure status.

    Data centers house computer servers, IT infrastructure and network equipment. They power everything from ChatGPT queries to EVs and streaming services.

    India generates about one fifth of the world’s data but owns just 3 per cent of global data center capacity. It currently has 276 data centers and ranks seventh in the world, just behind France and Canada. By 2030, India’s data center capacity is estimated to grow about 5x, to 8 GW, with the sector showing a CAGR of 20-22 per cent. By 2028, India is expected to consume the most data of any country, and ChatGPT’s user base in India is already the second largest in the world.
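
    A quick way to sanity-check projections like these is the standard compounding formula. The sketch below applies it to the 5x multiple over 2024-2030 quoted above; it is purely illustrative arithmetic, and published industry CAGR estimates may be computed over different baselines and horizons.

    ```python
    def cagr(start: float, end: float, years: int) -> float:
        """Compound annual growth rate implied by growing from start to end over years."""
        return (end / start) ** (1 / years) - 1

    # Illustrative: a 5x rise in capacity between 2024 and 2030 (6 years)
    implied = cagr(1.0, 5.0, 2030 - 2024)
    print(f"implied CAGR: {implied:.1%}")  # prints: implied CAGR: 30.8%
    ```

    The same one-line function works for any of the growth figures in this piece — capacity, data consumption or water use — given a start value, an end value and a horizon.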

    India has prepared a draft National Data Center Policy, which proposes a conditional tax exemption of up to 20 years for data center developers.

    India’s IT load currently exceeds 1.7 GW. The IT load represents the total electricity consumed by computing equipment. It is expected to rise to about 3 per cent of the country’s electricity demand by 2030, on account of data center capacity growing from 1.4 GW in 2024 to 8-9 GW in 2030. The challenge is to meet this demand sustainably; the answer lies in renewable energy and battery storage.

    India is also water-stressed: it holds 18 per cent of the world’s population but only 4 per cent of its water resources. Data center water consumption, at 150 billion liters in 2025, will rise to 358 billion liters by 2030, putting further pressure on the water table.

    The industry is currently concentrated along the western and southern coasts; Google’s Vizag project will expand capacity along the eastern coast, positioning India as a regional hub for cloud and AI infrastructure. There is, however, a risk of salination due to over-extraction of groundwater.

  • Storage for the AI Data

    Organizations adopting AI have to deal with their stored data — storage systems have to be redesigned with built-in intelligence, guardrails and GPU-level performance. Independent vendors such as NetApp manage external storage; NetApp provides a new architecture called AFX, whose outcome is AI-ready data. It combines extreme-performance storage with GPUs, enabling processing in place rather than data being copied repeatedly across applications. This shift yields a dramatically faster system and eliminates the six or more redundant copies created during AI workload steps such as annotation, tagging, governance and training.

    Alongside AFX, NetApp offers what it calls the AI Data Engine, consisting of a metadata engine, security, guardrails and a data-transformation layer. All this is done without creating secondary copies, so both training and inference become faster, with no duplicate petabytes or exabytes of enterprise data.

    NetApp employs hundreds of engineers to make this possible. AI workloads were previously smaller and predictable; with LLMs, data access has to be extremely fast, and the number of times GPUs hit the storage has increased exponentially.

    It is a disaggregated architecture: compute and storage are split and can scale independently. Deep layers of NetApp’s storage stack have been rewritten to create a metadata engine capable of handling vast indexes and to build vector embeddings, so the data can be fed into model-training pipelines. The pipeline is cut short — data moves very fast through one box, being classified and generating metadata as it goes, and the storage talks directly to the GPUs.
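
    The core idea — classifying data and attaching embeddings at the storage layer instead of exporting copies into a separate pipeline — can be sketched in miniature. Everything below is illustrative: the toy hash-based embedding stands in for a real embedding model, and the class is a hypothetical mock-up, not NetApp’s actual API.

    ```python
    import hashlib
    import math

    def toy_embedding(text: str, dim: int = 8) -> list[float]:
        """Deterministic stand-in for a real embedding model."""
        h = hashlib.sha256(text.encode()).digest()
        vec = [b / 255.0 for b in h[:dim]]
        norm = math.sqrt(sum(v * v for v in vec)) or 1.0
        return [v / norm for v in vec]

    class StorageIndex:
        """Metadata and embeddings kept beside the data, so nothing is copied out."""

        def __init__(self):
            self.objects = {}    # object name -> raw data (stays in place)
            self.metadata = {}   # object name -> tags and embedding

        def put(self, name: str, data: str, tags: list[str]) -> None:
            # Classification and embedding happen once, at ingest, in place.
            self.objects[name] = data
            self.metadata[name] = {"tags": tags, "embedding": toy_embedding(data)}

        def search(self, query: str) -> str:
            """Return the stored object whose embedding is closest to the query's."""
            q = toy_embedding(query)
            def score(name: str) -> float:
                e = self.metadata[name]["embedding"]
                return sum(a * b for a, b in zip(q, e))
            return max(self.metadata, key=score)
    ```

    A training or inference pipeline would then query the index and read the matching object directly, rather than snapshotting the dataset at each step — which is the copy-elimination the article describes.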

    In generative AI, a metadata engine, composable architecture, near-compute design and zero-latency inferencing play a vital role. India delivers on all four factors.

    This is how organizations like NetApp get ready for AI: the entire stack is re-architected so that compute and storage scale independently, with metadata engines, classification tools and vector embeddings added directly at the storage level. This shortens AI pipelines and lets enterprises train at high speed without creating endless data copies.

  • Caution against AI Bubble

    Sundar Pichai of Google and the investment writer Ruchir Sharma have expressed reservations about the AI hype that could lead to a bubble. There is a distinction between the real transformational nature of AI and the market euphoria surrounding it. Pichai’s reservations carry weight because they come from a company whose rise coincides with the progress of AI. Google does not foresee a collapse of AI but expects us to hold realistic expectations: AI has to be integrated into business and society. Ruchir Sharma is concerned about tech firms’ rising stock prices not being justified by their earnings, and VCs rushing to fund the startup AI layer reminds him of past bubbles.

    It is true that AI has the potential to change the world, but that does not make every AI company worthy of a sky-high valuation. There are triple-digit price-to-sales ratios, optimistic revenue projections, business models that rely on subsidised compute, and heavy capital-expenditure commitments.
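
    To see why triple-digit price-to-sales ratios demand heroic assumptions, one can compute the revenue growth needed to bring such a multiple down to a conventional level at a constant share price. The 100x starting multiple, 10x target and five-year horizon below are assumptions chosen for illustration, not figures from this piece.

    ```python
    def required_growth(ps_now: float, ps_target: float, years: int) -> float:
        """Annual revenue growth needed to shrink a price-to-sales multiple
        from ps_now to ps_target while the share price stays flat."""
        return (ps_now / ps_target) ** (1 / years) - 1

    # Illustrative: compressing a 100x sales multiple to 10x over five years
    g = required_growth(100, 10, 5)
    print(f"required annual revenue growth: {g:.0%}")  # prints: required annual revenue growth: 58%
    ```

    Sustaining roughly 58 per cent revenue growth for five years — with no price appreciation at all — is the kind of implicit bet such valuations encode.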

    AI firms are valued as high-growth software firms, yet their cost structures resemble those of public utilities, with heavy infrastructure investments in cloud capacity and GPUs. It remains to be seen how fast these investments can be monetized — can AI adoption keep pace? AI could turn out to be a low-margin commodity rather than a growth engine. This does not mean the AI story is illusory; it only means the hype cycle must be separated from the underlying value. Policymakers and businesses can seize the AI opportunity, but they must resist hype unsupported by fundamentals.

    There should be a balance between ambition and pragmatism: AI investments must be prudent, and expectations managed amid a dynamic, innovative environment. AI’s lasting impact does not depend on short-term market excitement but on being integrated into social and economic systems in ways that generate real value. A transformative technology should not be overshadowed by a financial bubble.

  • Comparison between Google and Nvidia

    Nvidia, the American chipmaker, claims that its technology is a generation ahead of the industry. There is speculation that Google is inching towards a big place in the AI space with its TPUs, or tensor processing units.

    Central to the development of AI are the semiconductors, or chips, that enable machines to process huge amounts of data. Nvidia occupies the leadership position on this frontier: its chips run virtually every AI model, and its influence extends everywhere computing is done.

    It is lately reported that Meta, the parent company of Facebook and WhatsApp, could strike a deal with Google to use its TPUs in data centers; traditionally, Meta has used Nvidia chips. Nvidia secured a $5 trillion valuation in late October 2025, the first company to do so, and Alphabet, Google’s parent, crossed the $4 trillion mark in November 2025. These developments highlight the rivalry between Nvidia and Google, as does a recent slide in Nvidia stock.

    In the early stages of LLM training, Nvidia’s graphics chips played a vital role in number crunching, leading to a surge of demand for GPUs such as Hopper and the more recent Blackwell. These are more flexible and more powerful than Google’s TPUs.

    TPUs belong to an altogether different chip category — application-specific integrated circuits, or ASICs — designed to run AI compute tasks and more specialized than CPUs and GPUs. It is too early to compare TPUs and GPUs on cost and performance, and more suppliers of accelerated compute are always welcome. Still, Nvidia commands a 70 per cent margin.

    TSMC, the Taiwan-based chipmaker, is cautious about expanding supply recklessly: if the AI bubble bursts, there will be no orders and lots of idle capacity. New entrants such as Google will have to weigh this factor.

    TPUs have been in development for a decade and sold through the cloud business for the last five years.

    Still, Nvidia retains an edge by providing software that completes the whole ecosystem along with the chip hardware. An API, or application programming interface, is a set of defined instructions that enables different applications to communicate. Nvidia’s platform here is CUDA, which facilitates parallel programs on GPUs — one reason GPUs are deployed at supercomputing sites around the world. In mobile computing, Nvidia’s Tegra processors are used, including in vehicle navigation and entertainment systems.

    TSMC of Taiwan is a back-end player in semiconductors; Nvidia, Intel, AMD, Samsung and Qualcomm are the front-end players.

    In computers, the most important component is the CPU, where Intel and AMD are the market leaders. GPUs are the newer addition to computer hardware; initially they were sold as cards plugged into a PC’s motherboard to add computing power alongside an AMD or Intel CPU.

    Nvidia chips powered the compute surge needed for high-end graphics in gaming and animation. AI applications later adopted GPUs for their tremendous computing power, and back-end hardware is consequently becoming GPU-heavy.

    Advanced systems used for training generative AI tools now deploy half a dozen GPUs for every CPU. GPUs are no longer just add-ons to CPUs.

    Google has to break into this market with its specialized chip, facing manufacturing constraints and the pull of the ecosystem Nvidia has created.

  • Google and Nvidia Chip Rivalry

    Nvidia chips are the gold standard for big tech firms and startups that need compute power to run and develop AI platforms. For quite some time, though, Nvidia stock has faced headwinds as investors fear an AI bubble.

    As we know, graphics processing units, or GPUs, from Nvidia were created to accelerate the rendering of graphics — mainly in video games and other visual-effects applications. These GPUs turned out to be well suited to training AI models because they can handle large amounts of data and computation.

    Google uses TPUs, or tensor processing units — application-specific integrated circuits designed for a discrete purpose. These tensor chips were adapted as accelerators for AI and ML tasks in Google’s own applications.

    Google and DeepMind both develop cutting-edge AI models (such as Gemini) and pass the lessons learnt on to the chip designers.

    Google wants to tie up with various organizations to place TPUs in their data centers, positioning itself as a rival to Nvidia in AI technology. Meta plans to use Google’s TPUs, Google Cloud offers both TPUs and Nvidia’s GPUs, and Anthropic has already agreed to buy 1 million TPUs from Google. This suggests that third-party LLM providers are likely to use Google as a secondary supplier of accelerator chips for inference in the near future.

  • Five Most Important Lessons from Mokyr’s Work

    Growth is often resisted but is critical for prosperity: it adds to longevity and comfort and lessens the monotony of work. Still, growth faces resistance because it causes upheaval and brings us face to face with uncertainty. People once worked on their own and resisted factory employment; the transition was slow.

    Growth takes time. Certain innovations changed the nature of work and the production process. Some, such as the steam engine at the dawn of the Industrial Revolution, were so novel that at first no one knew what to do with them, and it took almost a century for them to affect productivity.

    Growth is unpredictable. Innovations are disruptive: they destroy jobs and create new ones. They are transformative, but their full effect is difficult to foresee.

    Growth is cultural. Great Britain pioneered the benefits of industrialization even though other countries also continued to invent and had more wealth, more resources and a better environment. Mokyr’s work focuses on this culture of growth.

    Growth is not inevitable. For several centuries, the economies of the world showed hardly any growth; the last few hundred years, when growth accelerated, were exceptional. Innovations lead to other innovations, and the cycle of growth gathers momentum — but this requires the right conditions, and it is not guaranteed.

    Since low growth is always problematic, it prods governments to participate more actively in the economy, and economic policy certainly has a role to play. Still, governments should realize that better planning does not always result in abundance. What is crucial for growth is an element of openness — openness to risk, uncertainty, change and creativity. This is the greatest lesson we learn from Mokyr’s work.