New AI Paradigm

Yann LeCun, Meta's (Facebook's) Chief AI Scientist, predicts that a new paradigm of AI architectures is likely to emerge in the next three to five years, with capabilities far beyond those of present systems.

LeCun also predicts that the coming years could see the emergence of robotics combined with AI, unlocking a new class of applications.

LeCun spoke at Davos in January 2025 in a session called Debating Technology. He feels that present generative AI and LLMs, though useful, fall short of expectations. The new architecture will overcome the limitations of present systems, which lack an understanding of the physical world, persistent memory, reasoning and complex planning.

Another AI revolution is in the offing. It may acquire a new name, as the present label of generative AI will not be appropriate for it. At the heart of this technology are world models, which will let machines understand the dynamics of the real world. The new models would possess persistent memory, common sense, intuition and reasoning abilities. Current systems have not gone beyond pattern recognition; though LLMs process natural language, they are not good at thinking. If the research succeeds, AI will have a different paradigm.

In robotics, the focus will be on general-purpose, adaptive and versatile robots with intelligence approaching human intelligence. The coming decade could be the decade of robotics.

Nvidia’s Robot Training Tech and Gaming Chips

In January 2025, Nvidia unveiled new products, including technology to better train robots and self-driving cars, along with enhanced gaming chips and its first desktop computer.

Its Cosmos foundation models generate photorealistic video that can be used to train robots and self-driving cars, which is more economical than gathering conventional data.

These models create synthetic data for training. They help robots and cars understand the physical world, much as LLMs have helped chatbots generate responses in natural language.

Today, data is generated by putting cars on the road to gather video, and robots are taught repetitive tasks by human demonstration. Cosmos instead takes a text description and generates a video that obeys the laws of physics.

Cosmos is available under an open license, similar to Meta's Llama 3 language models.

Nvidia also unveiled gaming chips, the RTX 50 series, that use its Blackwell AI technology. They render images more realistically, including more accurate human faces.

Toyota will use Nvidia's Orin chips and automotive operating system to provide advanced driver assistance in several models.

Molecules Trapped for Quantum Operations

So far, molecules have not been used in quantum computing, despite their potential to make the technology faster. Until now, smaller particles have been used, because the rich internal structures of molecules are complicated, delicate and unpredictable.

The ice was broken by Harvard scientists who succeeded in trapping molecules to perform quantum operations. Qubits are the units of quantum information, and in this experiment ultra-cold polar molecules served as the qubits. Researchers had been working towards this for the last 20 years.

Quantum computing exploits the findings of quantum mechanics for computation, making it exponentially faster than classical computing for certain problems. Such a system uses reliably trapped tiny particles to serve as qubits and constitute the logic gates. The Harvard research uses molecules to form an iSWAP gate, a key quantum circuit element that creates entanglement, and it is entanglement that makes quantum computing so powerful.

The molecules used were NaCs (sodium-cesium). They were trapped using optical tweezers (lasers) in an ultra-cold environment. The dipole-dipole interactions between the molecules were used to perform a quantum operation, with the rotation of the molecules carefully controlled. That entangled the two molecules, creating a two-qubit Bell state with 94 per cent accuracy.

As we know, information processing is done in logic gates in both quantum and traditional computers. Classical gates manipulate binary bits (0s and 1s). Quantum gates operate on qubits, which achieve superposition by existing in multiple states simultaneously. In other words, quantum computers can do things that are impossible for traditional computers: they create entangled states and perform operations on many computational states at once.

Quantum gates are also reversible; they can manipulate qubits with precision while preserving their quantum nature. In this experiment, the researchers used an iSWAP gate, which swaps the states of two qubits and applies a phase shift, an essential step in generating entanglement, where the states of two qubits become correlated irrespective of the distance separating them.
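
For illustration, the iSWAP gate can be written as a matrix acting on the two-qubit basis states |00>, |01>, |10>, |11>. In a standard textbook sketch (not necessarily the exact pulse sequence of the Harvard experiment), letting the exchange interaction run for half the full swap time implements the square root of iSWAP, which turns the simple product state |01> into an entangled Bell-type state:

    \mathrm{iSWAP} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & i & 0 \\ 0 & i & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},
    \qquad
    \sqrt{\mathrm{iSWAP}}\;|01\rangle = \tfrac{1}{\sqrt{2}}\left(|01\rangle + i\,|10\rangle\right).

The resulting state cannot be written as a product of single-molecule states, which is precisely what it means for the two molecules to be entangled.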

This trapped-molecule technology is an important milestone and the last building block of a molecular quantum computer. Scientists can now exploit nuclear spins and nuclear magnetic resonance for quantum computing. Until now, molecules were considered too unstable for quantum operations, since their motion can interfere with coherence; by trapping them successfully, the researchers have overcome this hurdle.

AI and Fashion

Even fashion weeks take place in digital space these days. The catwalks are displayed across digital monitors, and the clothes showcased may not physically exist: they are worn by avatar models. Many emerging designers participate, each creating virtual designs with AI software such as Midjourney. It is an immersive and innovative fashion experience.

AI is redefining creative processes and customer interaction. Generative AI has the potential to affect the entire fashion ecosystem by helping create better designs, reducing marketing costs, personalizing communications and accelerating processes.

AI models are being trained on several years of a designer's archives. A trained model understands the DNA of an artist and can be used to create new collections.

AI is revolutionizing the approach to creativity in fashion. It makes artists smarter, acting as a highly intelligent assistant: the artist focuses on the art, while AI handles data analysis, trend prediction and other time-consuming tasks.

AI can help the artist visualize how fabrics will drape and move, saving time and materials before a prototype is created.

AI is a great help in understanding and forecasting fashion trends, allowing the artist to exploit new market opportunities.

Of course, craftsmanship remains central to the fashion industry and cannot be replaced. The feel of a hand-embroidered stitch is unique, and the feel of the fabric is vital. Technology is a great help, but there will always be a place for the human spirit and creativity that breathe life into fabrics.

AI at best is a muse, not a threat. It can handle routine work; it is not meant to overshadow the artisan's work but to complement it.

AI is a great facilitator in marketing. It can create catalogue shoots and e-commerce visuals for upcoming collections. It is also a cost-saving tool through virtual samples and collections, improves supply-chain efficiency, and widens audience reach by identifying target audiences.

AI will democratize fashion design by making it accessible to emerging talents and consumers alike.

Crime Literature

Despite selling well, crime literature has been looked down upon and considered an ugly aspect of the literary world. This disdain is uncalled for: though some people may dislike it, a vast majority read crime fiction and find it interesting. Looking forward, there is great scope for crime fiction across all mediums, and the genre will grow in future.

Most crime stories and novels are written as fiction, though some writers tell the stories of real crime. It is not necessary for a writer to go into graphic detail about the crime or the act of violence; crime can be used as a peripheral device, and stories can begin after the crime has happened. Readers are not interested in the gory details of the crime; they are interested in the mystery that follows. Crime books and stories are sometimes called rahasyakatha in the vernacular, the story narrating how the crime was solved.

Surendra Mohan Pathak is a veteran Hindi crime fiction writer. At the second edition of the Crime Literature Festival held at Dehradun, he was given the Lifetime Achievement Award for his contribution to the genre, with over 300 novels to his name. Another prolific crime writer, in Marathi, was the late Baburao Arnalkar. The characters these writers create to crack crimes become celebrated names.

Algo Trading

SEBI proposes significant changes to its framework governing algo trading. Algo trading (algorithmic trading) refers to any trading activity that automates trades and does not require manual intervention to place orders or monitor prices.

Algo trading is carried out in two ways. In the straightforward method, algorithms provided by the broker are used. The second route is through an API (application programming interface), a connection between electronic systems that works like a data pipe carrying the algorithm's instructions. Because APIs enable the transmission of information, a third party can create code that executes on the broker's platform. In the context of algo trading, third parties provide their algo on, say, platform X, which is connected to the broker's platform through an API; orders placed by the client on platform X then get passed on to the broker. A broker can identify that an order is coming through an API, but it cannot verify whether it is an algo order.

In 2021, SEBI proposed treating all API orders as algo orders. That proposal has been scrapped. SEBI has now suggested that an orders-per-second (OPS) threshold be specified for API orders: all API orders above the threshold would be treated as algo orders.
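
A minimal Python sketch of how such a threshold could be applied in practice is given below; the threshold value of 10 and the order bookkeeping are illustrative assumptions, not SEBI's actual specification.

    from collections import deque

    # Illustrative orders-per-second (OPS) threshold; SEBI's final number may differ.
    OPS_THRESHOLD = 10

    class ApiOrderClassifier:
        """Tags an API order as 'algo' when the client's order rate exceeds the OPS threshold."""

        def __init__(self, threshold=OPS_THRESHOLD):
            self.threshold = threshold
            self.recent = {}  # client_id -> deque of order timestamps (in seconds)

        def classify(self, client_id, timestamp):
            window = self.recent.setdefault(client_id, deque())
            window.append(timestamp)
            # Keep only the orders placed within the last one second.
            while window and timestamp - window[0] > 1.0:
                window.popleft()
            return "algo" if len(window) > self.threshold else "non-algo"

    # Example: a burst of rapid-fire API orders from one client gets tagged as algo.
    classifier = ApiOrderClassifier()
    tags = [classifier.classify("client-42", t / 20.0) for t in range(30)]
    print(tags[-1])  # 'algo', since more than 10 orders arrived within the last second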

SEBI has also suggested that algo providers, the third parties supplying algos through APIs, be brought within the regulatory ambit. They would act as agents of stockbrokers, register with the stock exchange and get their algos approved by the exchange. They would thus be accountable to customers for grievances, and clients would have access to the redressal mechanism deployed by SEBI.

The algos are classified into two categories: white box algos and black box algos. White box algos (execution algos) execute orders based on fully transparent algorithms; the logic, decision-making processes and rules are accessible, understandable and replicable. Black box algos are those whose logic is not known to the user and is not replicable. To provide black box algos, a provider would be required to register as a research analyst and maintain a research report for each algo. If the algo logic changes, the algo would have to be registered afresh, together with a new report.

Stock exchanges will have to define the roles and responsibilities of brokers and empanelled vendors. A turnaround time must be specified for registration of algos. There would be post-trade monitoring by exchanges and a standard operating procedure (SOP) for algo testing. A kill switch could shut down malfunctioning algos. Broker supervision is necessary to assess their ability to distinguish between algo and non-algo orders, and there would be risk management for API orders. SEBI has also granted approval for a Past Risk and Return Verification Agency (PaRRVA).

In future, algos may be designed by AI, and the resulting risks must be anticipated.

It is natural that regulation of algos or AI-based algos will create some bureaucracy. Still, the regulator must have a grip on something that can have systemic impact on the markets.

Breaking Nvidia Monopoly

AI has become a competitive market, and there is a race among chip makers (AMD, Intel and Nvidia) for a share of it.

In AI computing, Nvidia has a commanding lead, and its GPUs dominate AI training. However, when AI systems are deployed to corporations and individuals to make predictions and decisions, AI inferencing is needed. Businesses could see tangible returns on AI investments at the inferencing stage, and AMD and Intel are positioning themselves to capitalise on this opportunity.

No doubt, Nvidia's GPUs are the gold standard for AI training, but inferencing is expected to become a larger market than training over time. AMD and Intel are positioning their CPUs and GPUs to capitalise on this transition as power-efficient and cost-efficient alternatives for enterprises, which could force Nvidia to lower its prices.

Currently, Nvidia draws most of its revenue from data centers, which is why AMD and Intel eye the inferencing market; this could alter the competitive landscape. AI training is concentrated in data centers, while inferencing is expected to take place closer to users on edge devices such as smartphones, autonomous vehicles and IoT systems.

Nvidia too is not sitting idle. It is expanding its portfolio to include CPUs and optimized GPUs for inferencing.

No AGI Yet

Whenever we speak about AI, we think about AGI and the singularity. Sam Altman, OpenAI's CEO, predicted that AGI could arrive as soon as 2025, and Elon Musk predicted its arrival in 2026. This is just hype, not reality. In 2025, there is not going to be AGI; we will still be dealing with large language models (LLMs).

Let us define AGI and the singularity. Artificial General Intelligence (AGI) is a type of advanced AI that can think, learn and solve problems across a wide variety of tasks, just like human beings. The singularity is the idea of AI surpassing human intelligence: such a system would improve itself continuously and have a substantial impact on society.

Next Word Prediction is Not Intelligence

ChatGPT is a generative AI model that converses with us just like a human being and can generate new content. All this is amazing, but it remains confined to recognizing patterns and predicting the next word or token by considering probabilities, a capability achieved by training on a vast corpus of data. The model is not self-aware.
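
A toy Python sketch of what "predicting the next word by considering probabilities" means is given below; the tiny hand-written corpus stands in for the vast data a real model is trained on.

    import random
    from collections import Counter, defaultdict

    # A toy training corpus; a real LLM is trained on vastly more text.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count which word follows which (a bigram model, the simplest next-word predictor).
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Sample the next word in proportion to how often it followed `word` in training."""
        counts = following[word]
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    print(predict_next("the"))  # e.g. 'cat': pure pattern statistics, no understanding

The point of the sketch is that the model only reproduces statistical regularities in its training data; nothing in it is aware of what the words mean.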

o1 Model: Is It the First Step towards AGI?

It is not the first step towards AGI. o1 was released in 2024. It does not answer a question directly; instead, it makes a plan to answer it the best way, scrutinizes its response, improves upon it and keeps making it better. The output is chained. In 2025, we will witness many such chains, but not AGI. It is a fixed framework that is neither dynamic nor scalable, and it consumes more time. o1 is impressive and takes us closer to AGI, but it is not AGI.
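
The plan-scrutinize-improve loop described above can be sketched roughly as follows; generate_answer and critique are hypothetical placeholders for model calls, since OpenAI has not published o1's internals.

    def chained_response(question, generate_answer, critique, max_rounds=3):
        """A hedged sketch of a chained, self-refining answer loop.

        generate_answer(question, plan, feedback) and critique(question, answer) are
        placeholders for calls to a language model; o1's real mechanism is not public.
        """
        plan = f"Outline the steps needed to answer: {question}"
        answer, feedback = None, None
        for _ in range(max_rounds):        # a fixed framework: a set number of passes
            answer = generate_answer(question, plan, feedback)
            feedback = critique(question, answer)
            if feedback is None:           # the critic is satisfied, so stop refining
                break
        return answer

    # Toy usage with dummy stand-ins for the model calls.
    print(chained_response(
        "What is 2 + 2?",
        generate_answer=lambda q, plan, fb: "4",
        critique=lambda q, a: None,        # always satisfied in this toy example
    ))

The fixed number of rounds mirrors the criticism in the text: the chain improves answers but stays within a preset, non-dynamic framework.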

Barriers to AGI

Human thinking is rapid and instinctive, whereas AI works on patterns and can miss the bus. AI struggles with context and misses important details. At present, each AI output is conditioned on the previous ones (these are autoregressive models), so mistakes can magnify.
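
A rough back-of-the-envelope illustration of why mistakes magnify, under the simplifying assumption of an independent error probability e at every generated token:

    P(\text{whole answer correct}) = (1 - e)^{n}, \qquad \text{e.g. } (1 - 0.01)^{500} \approx 0.0066.

So even a 1 per cent per-token error rate leaves well under a 1 per cent chance that a 500-token answer is error-free end to end; real models are not this simple, but this compounding is the concern being described.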

In 2025, there will be more o1-style models built around chains of thought. They will excel at specific tasks, boost productivity and surpass humans in some areas. It is an exciting development, but it does not constitute AGI.

Sam Altman's Claim on AGI

It seems more like marketing. It arrests attention and could be considered over-the-top. However, it keeps the focus on the company's biggest breakthrough.

Split-Electrons

As we know, electrons are sub-atomic particles and are considered indivisible, fundamental particles. Recent research reveals there could be split-electrons, a feature of quantum mechanics, which mimic the behavior of half an electron. It is an important milestone for quantum computing (Physical Review Letters; Andrew Mitchell, School of Physics, University College Dublin, and Sudeshna Sen, IIT (ISM) Dhanbad).

Electronics has already been miniaturized: circuit components are nanometers across, and at this scale the rules are governed by quantum mechanics. An electric current flowing through a wire is made up of lots of electrons; if the wire is made smaller and smaller, the electrons pass through one by one. We can now make transistors that work with just a single electron.

If a nanoelectronic circuit is designed to give electrons a choice of two pathways, quantum interference arises, similar to that observed in the double-slit experiment.

Double-slit Experiment

The double-slit experiment demonstrates the wave-like properties of quantum particles (such as the electron). It helped drive the development of quantum mechanics in the 1920s.

Individual electrons are fired at a screen through two tiny apertures, and where they land is recorded on a photographic plate on the other side. Because an electron can pass through either slit, the two possibilities interfere; in fact, a single electron can interfere with itself, behaving like a wave that passes through both slits at the same time.

The result is an interference pattern of alternating high- and low-intensity stripes on the back screen.

The probability of finding an electron in certain places can be zero due to destructive interference (two waves colliding, with peaks and troughs cancelling each other).
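
In standard quantum-mechanical notation, the pattern arises because the amplitudes for the two paths are added before the probability is computed:

    P(x) = \left|\psi_1(x) + \psi_2(x)\right|^2 = |\psi_1(x)|^2 + |\psi_2(x)|^2 + 2\,\mathrm{Re}\!\left[\psi_1^*(x)\,\psi_2(x)\right],

where \psi_1 and \psi_2 are the amplitudes for passing through slit 1 and slit 2. The last (interference) term can be negative, and where it exactly cancels the first two terms the probability of detecting the electron drops to zero, producing the dark stripes.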

Majorana Fermions

A nanoelectronic circuit is similar: electrons going down different paths in the circuit can destructively interfere and block the current from flowing, a phenomenon observed in quantum devices.

The new finding is that when electrons are forced close enough together, they strongly repel each other and the quantum interference changes. Collectively, they behave as if an electron has been split in two.

The result is a Majorana fermion, a particle first theorized by the physicist Ettore Majorana in 1937. It has not yet been isolated experimentally.
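
The "half an electron" picture has a precise counterpart in standard quantum theory: any fermionic (electron) mode operator c can be rewritten as a combination of two Hermitian Majorana operators, each of which is, loosely, half of the original fermion:

    \gamma_1 = c + c^{\dagger}, \qquad \gamma_2 = i\,(c^{\dagger} - c), \qquad c = \tfrac{1}{2}\left(\gamma_1 + i\,\gamma_2\right), \qquad \gamma_k^{\dagger} = \gamma_k, \quad \gamma_k^{2} = 1.

In the new work this splitting shows up in the collective behavior of strongly interacting electrons in a circuit, rather than as a free particle.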

This finding is useful for building new quantum technologies. If a Majorana fermion can be created in an electronic device, it can be manipulated within that device.

Research continues, as Majorana fermions are key ingredients of proposed topological quantum computers.

Quantum Era Has Begun

Nvidia's CEO has stated that quantum computing is still 15 to 30 years away. The remark raised misgivings about the readiness of the technology and led to a decline in quantum computing stocks. It applies to fully scalable, general-purpose quantum systems, but it ignores the fact that quantum computing already delivers results today.

Quantum computing is no longer the preserve of physicists and futurists. It has already started helping industries solve problems that classical systems cannot tackle, including predictive analytics and decision-making. The quantum era is unfolding right now.

Quantum computing scores over classical computing by processing data differently: it takes advantage of qubits and superposition to explore an entire spectrum of possibilities at once. Entanglement interconnects qubits, so that changes in one are correlated with the others, no matter how far apart they are. Complex problems, including optimization challenges, are tackled more effectively by quantum systems.
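
The "entire spectrum of possibilities" can be written down concretely: a single qubit is a superposition of 0 and 1, a register of n qubits is a superposition over all 2^n bit strings at once, and an entangled Bell pair has perfectly correlated outcomes however far apart the qubits are:

    |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \quad |\alpha|^2 + |\beta|^2 = 1; \qquad |\Psi\rangle = \sum_{x=0}^{2^{n}-1} c_x\,|x\rangle; \qquad |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right).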

Quantum plus AI is a real game changer. Quantum excels at optimization, while AI is good at pattern recognition and predictive modelling; together they unlock even more potential. AI, however, suffers from computational bottlenecks.

Though quantum computing offers unparalleled speed and efficiency, it is only as good as the data it processes. Quantum applications require clean, structured and actionable data, which means careful data preparation. If businesses invest in better data pipelines, quantum's transformative potential could be fully realized.

Quantum computing at present requires demanding hardware conditions, such as cryogenic cooling near absolute zero. In addition, the systems are not general-purpose; they are suited to optimization and simulation. However, the technology is evolving at a rapid pace, and in the meantime hybrid quantum-classical systems could bridge the gap.

We are not 30 years away from quantum; it is already solving problems that classical systems cannot. The quantum era has begun.