Blog

  • AI and Fashion

    Even fashion weeks now take place in the digital space. Catwalks are embedded in digital monitors, and the clothes showcased may not physically exist: they are displayed on avatar models. Many emerging designers participate, each creating virtual designs with AI software such as Midjourney. It makes for an immersive, innovative fashion experience.

    AI is redefining creative processes and customer interaction. Generative AI has the potential to affect the entire fashion ecosystem by helping create better designs, reducing marketing costs, personalizing communications and accelerating processes.

    AI models are being trained on years of archival work. A trained model understands the DNA of an artist and can be used to create new collections.

    AI is revolutionizing the approach to creativity in fashion. It makes artists smarter, acting as a highly intelligent assistant: the artist focuses on the art, while AI handles data analysis, trend prediction and other time-consuming tasks.

    AI can help the artist visualize how fabrics will drape and move. Time and materials are saved, and a prototype is created.

    AI is a great help in understanding and forecasting fashion trends, allowing the artist to exploit new market opportunities.

    Of course, craftsmanship remains central in the fashion industry and cannot be replaced. The feel of a hand-embroidered stitch is unique, and the feel of the fabric too is vital. Technology is a great help, but there will always be a place for the human spirit and creativity that breathe life into fabrics.

    AI at its best is a muse, not a threat. It can handle routine work. It is not meant to overshadow the artisan's work; rather, it complements it.

    AI is a great facilitator in marketing. It can create catalogue shoots and e-commerce visuals for upcoming collections. It is also a cost-saving tool (virtual samples and collections), improves supply-chain efficiency and broadens audience reach by identifying target audiences.

    AI will democratize fashion design by making it accessible to emerging talents and consumers alike.

  • Crime Literature

    Despite selling well, crime literature has long been looked down upon, considered an ugly aspect of the literary world. This disdain is uncalled for. Though some may dislike it, a vast majority of people read crime fiction and find it interesting. Looking forward, there is great scope for crime fiction across all mediums, and the genre will grow.

    Crime stories and novels are written as fiction, though some writers draw on real crimes. It is not necessary for a writer to go into graphic detail about the crime or the act of violence. Crime can be used as a peripheral tool, and stories can start after the crime has happened. Readers are not interested in the gory details of the crime; they are interested in the mystery that follows. Crime books and stories are sometimes called rahasyakatha in the vernacular: the story narrates how the crime was solved.

    Surendra Mohan Pathak is a veteran crime fiction writer in Hindi. At the second edition of the Crime Literature Festival held at Dehradun, he was given the Lifetime Achievement Award for his contribution to the genre, with over 300 novels to his name. Another prolific crime writer, in Marathi, was the late Baburao Arnalkar. The characters these writers create to solve crimes become celebrated names.

  • Algo Trading

    SEBI proposes significant changes to its framework governing algo trading. Algo (algorithmic) trading refers to any trading activity that automates trades and does not require manual intervention to place orders or monitor prices.

    Algo trading is carried out in two ways. In the straightforward method, the algorithms provided by the broker are used. The second route is that of the API (application programming interface), a connection between electronic systems: like a data pipe, it carries the algorithm's instructions and enables the transmission of information. Consequently, a third party can create code that executes on the broker's platform. In the context of algo trading, third parties provide their algo on, say, platform X, which is connected to the broker's platform through an API. Thus, orders placed by the client on platform X get passed on to the broker. A broker can identify that an order is coming through an API, but it cannot verify that it is an algo order.

    In 2021, SEBI proposed to treat all API orders as algo orders. That proposal has been scrapped. SEBI now suggests that an orders-per-second (OPS) threshold be specified for API orders; all API orders above that threshold would be treated as algo orders.
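    The OPS test can be sketched in a few lines. A minimal illustration, assuming a hypothetical threshold and a simplified order log (SEBI has not fixed the actual number or data format here):

```python
from collections import defaultdict

# Hypothetical threshold: the actual OPS number is left open by the regulator.
OPS_THRESHOLD = 10

def flag_algo_orders(orders):
    """Flag clients whose API order rate breaches the OPS threshold.

    `orders` is a list of (client_id, timestamp_in_seconds) tuples.
    Orders are bucketed per client per second; any client exceeding the
    threshold within some second would have those orders treated as algo orders.
    """
    counts = defaultdict(int)
    for client_id, ts in orders:
        counts[(client_id, int(ts))] += 1
    return {client for (client, _second), n in counts.items() if n > OPS_THRESHOLD}
```

A client firing 12 orders inside one second would be flagged; a client spreading three orders over three seconds would not.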

    SEBI has also suggested that algo providers be brought within the regulatory ambit. These would be agents of stockbrokers who register with the stock exchange and get their algos approved by the exchange. Thus, they will be accountable to customers for grievances, and clients will have access to the redressal mechanism deployed by SEBI.

    Algos are classified into two categories: white box algos and black box algos. White box algos (execution algos) execute orders based on fully transparent algorithms; the logic, decision-making processes and rules are accessible, understandable and replicable. Black box algos are those whose logic is not known to the user and is not replicable. To provide black box algos, a provider would be required to register as a research analyst and maintain a research report for each algo. If the algo logic changes, the algo would have to be registered afresh, together with a new report.

    Stock exchanges will have to define the roles and responsibilities of brokers and empanelled vendors. A turnaround time must be specified for registration of algos. There would be post-trade monitoring by exchanges and an SOP for algo testing. A kill switch can shut down malfunctioning algos. Broker supervision is necessary to assess their ability to distinguish between algo and non-algo orders, and there would be risk management for API orders. SEBI has also granted approval for a 'Past Risk and Return Verification Agency' (PaRRVA).

    In future, algos may be designed by AI. The risks arising from this must be envisaged.

    It is natural that regulation of algos or AI-based algos will create some bureaucracy. Still, the regulator must have a grip on something that can have systemic impact on the markets.

  • Breaking Nvidia Monopoly

    AI has become a competitive market, and there is a race among chip makers AMD, Intel and Nvidia for a piece of it.

    In AI computing, Nvidia has a commanding lead: its GPUs dominate AI training. However, when an AI system is deployed to enterprises and individuals to make predictions and decisions, AI inferencing is needed. Businesses could see tangible returns on AI investments at the inferencing stage, and AMD and Intel are positioning themselves to capitalise on this opportunity.

    No doubt, Nvidia's GPUs are the gold standard for AI training. But AI inferencing is expected to become a larger market than training over time, and AMD and Intel are positioning their CPUs and GPUs to capitalise on the transition. These could prove to be power- and cost-efficient alternatives for enterprises, forcing Nvidia to lower its prices.

    Currently, Nvidia draws its major revenue from data centers, which is why AMD and Intel eye the inferencing market; this could alter the competitive landscape. AI training is concentrated in data centers, whereas inferencing is expected to take place closer to users on edge devices: smartphones, autonomous vehicles and IoT systems.

    Nvidia too is not sitting idle. It is expanding its portfolio to include CPUs and optimized GPUs for inferencing.

  • No AGI Yet

    Whenever we speak about AI, we think about AGI and singularity. Sam Altman, OpenAI CEO, predicted that AGI could arrive as soon as 2025, and Elon Musk predicted its arrival by 2026. This is hype, not reality. In 2025 there is not going to be AGI; we will still be dealing with large language models (LLMs).

    Let us define AGI and singularity. Artificial General Intelligence (AGI) is advanced AI that can think, learn and solve problems across a wide variety of tasks, just like a human being. Singularity is the idea of AI surpassing human intelligence: such a system would improve continuously and have a substantial impact on society.

    Next Word Prediction is Not Intelligence

    ChatGPT is a generative AI model that converses with us just like a human being and can generate new content. All this is amazing, but it remains confined to recognizing patterns and predicting the next word or token by considering probabilities, an ability achieved by training on a vast corpus of data. The model is not self-aware.
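    The mechanism can be shown at toy scale. This is a sketch, not the actual GPT architecture: a bigram model that counts word pairs and picks the most probable continuation, which is pattern matching on probabilities, not understanding:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which in a toy corpus."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Pick the most probable next word given the previous one."""
    followers = model.get(word)
    if not followers:
        return None
    total = sum(followers.values())
    # rank continuations by probability, as an LLM ranks tokens (at vastly larger scale)
    ranked = {w: n / total for w, n in followers.items()}
    return max(ranked, key=ranked.get)

model = train_bigrams("the cat sat on the mat the cat ran on the road")
```

Given "the", the model predicts "cat" simply because that pair occurred most often, with no grasp of what a cat is.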

    o1 Model: Is It the First Step to AGI?

    It is not. o1 was released in 2024. It does not answer a question directly. Instead, it makes a plan to answer it the best way, then scrutinizes its response, improves upon it and continues to make it better. It is chained output. In 2025 we will witness many such chains, but not AGI. It is a fixed framework, neither dynamic nor scalable, and it consumes more time. o1 is amazing and takes us closer to achieving AGI.

    Barriers to AGI

    Human thinking is rapid and instinctive, whereas AI works on patterns and can miss the point. It struggles with context and misses important details. At present, each AI output is conditioned on the previous ones (these are autoregressive models), so mistakes can magnify.

    In 2025, there will be more o1-style models integrating chains of thought. They will excel at specific tasks, boost productivity and surpass humans in some areas. It is an exciting development, but these will not constitute AGI.

    Sam’s Claim on AGI

    It seems more like marketing: it arrests attention and could be considered over-the-top. However, it focuses attention on the company's biggest breakthrough.

  • Split-Electrons

    As we know, electrons are sub-atomic particles, considered indivisible and fundamental. Recent research reveals there could be split-electrons, a quantum-mechanical effect in which electrons mimic the behavior of half an electron. It is an important milestone for quantum computing (Physical Review Letters; Andrew Mitchell and Sudeshna Sen, from the School of Physics in Dublin and IIT Dhanbad respectively).

    Electronics is already miniaturized: circuit components are nanometers across, and the rules there are governed by quantum mechanics. An electric current flowing through a wire is made up of many electrons. If the wire is made smaller and smaller, the electrons pass through one by one. We can make transistors that work with just a single electron.

    If a nano-electric circuit is designed to give electrons the choice of two pathways, there is quantum interference, similar to that observed in the double-slit experiment.

    Double-slit Experiment

    The double-slit experiment demonstrates the wave like properties of quantum particles (such as an electron). This led to the development of quantum mechanics in the 1920s.

    Individual electrons are fired at a screen through two tiny apertures, and where they hit is recorded on a photographic plate on the other side. Electrons can pass through either slit, so they interfere with each other; in fact, a single electron can interfere with itself, like a wave passing through both slits at the same time.

    The result is an interference pattern of alternating high- and low-intensity stripes on the backscreen.

    The probability of finding an electron in certain places can be zero due to destructive interference (two colliding waves, peaks and troughs cancelling each other).
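    The stripe pattern follows a standard textbook formula. A sketch under the small-angle approximation: the intensity at position x on the backscreen goes as the squared cosine of the phase difference between the two paths:

```python
import math

def intensity(x, wavelength, slit_separation, screen_distance):
    """Two-slit interference intensity (small-angle approximation).

    Path difference ~ d*x/L, so intensity is proportional to
    cos^2(pi * d * x / (lambda * L)). Bright stripes appear where the
    waves add; the intensity drops to zero where peaks meet troughs.
    """
    phase = math.pi * slit_separation * x / (wavelength * screen_distance)
    return math.cos(phase) ** 2
```

At x = 0 (the central maximum) the intensity is 1; at x = lambda*L/(2d) the two waves cancel exactly, the "zero probability" spots mentioned above.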

    Majorana Fermions

    A nano-electric circuit is similar: electrons going down different paths in the circuit can destructively interfere and block the current from flowing. It is a phenomenon observed in quantum devices.

    The new finding is that when electrons are forced close enough together, they strongly repel each other and the quantum interference changes. Collectively, they behave as if an electron has been split in two.

    The result is a Majorana fermion, a particle first theorized by the physicist Ettore Majorana in 1937. It has not yet been isolated experimentally.

    This finding is useful for building new quantum technologies. If a Majorana fermion can be created in an electronic device, it can be manipulated.

    Research continues, as Majorana fermions are a key ingredient for proposed topological quantum computers.

  • Quantum Era Has Begun

    Nvidia's CEO stated that quantum computing is still 15-30 years away. The remark raised misgivings about the readiness of the technology, resulting in a price decline in quantum computing stocks. It applies to fully scalable, general-purpose quantum systems, but ignores the fact that quantum computing already delivers results today.

    Quantum is no longer the preserve of physicists and futurists. It has already started helping industries solve problems that classical systems cannot tackle, assisting in predictive analytics and decision-making. The quantum era is unfolding right now.

    Quantum computing scores over classical computing by processing data differently: it takes advantage of qubits and superposition, using an entire spectrum of possibilities. Entanglement interconnects qubits, so changes in one affect others, no matter how far apart they are. Quantum systems tackle complex problems more effectively and solve the challenges of optimization.
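    Superposition and entanglement can be demonstrated with a four-number state vector. A minimal pure-Python sketch (simulated on a classical machine, no quantum hardware involved) preparing a Bell pair with a Hadamard and a CNOT gate:

```python
import math

# Two-qubit state vector over the basis |00>, |01>, |10>, |11>
state = [1.0, 0.0, 0.0, 0.0]  # start in |00>

def apply_hadamard_q0(s):
    """Hadamard on qubit 0: puts it into an equal superposition of 0 and 1."""
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[2]), h * (s[1] + s[3]),
            h * (s[0] - s[2]), h * (s[1] - s[3])]

def apply_cnot(s):
    """CNOT with qubit 0 as control: flips qubit 1 iff qubit 0 is 1."""
    return [s[0], s[1], s[3], s[2]]

bell = apply_cnot(apply_hadamard_q0(state))
# bell is (|00> + |11>)/sqrt(2): measuring one qubit fixes the other,
# the correlation the text describes as entanglement.
```

Only the |00> and |11> amplitudes are non-zero, so the two qubits can never be found in different states, however far apart they are.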

    Quantum plus AI is a real game changer. Quantum excels at optimization whereas AI unlocks the potential even more. AI is good at pattern recognition and predictive modelling. However, AI suffers from computational bottlenecks.

    Though quantum computing offers unparalleled speed and efficiency, it is only as good as the data it processes. Quantum applications require clean, structured and actionable data. Quantum requires data preparation. If businesses invest in better data pipelines, quantum’s transformative potential could be fully realized.

    Quantum at present requires certain hardware conditions, such as cryogenic cooling near absolute zero. In addition, the systems are not general-purpose; they are suitable for optimization and simulations. However, the technology is evolving at a rapid pace, and in the meantime hybrid quantum-classical models could bridge the gap.

    We are not 30 years away from quantum. It is already solving problems which classical systems cannot. The quantum era has begun.

  • New York Times Game Stumps AI Models

    OpenAI claims there are glimmers of AGI in its latest reasoning models. Still, the models currently on the market, such as OpenAI's o1, the model from Anthropic (a company backed by Google and Amazon) and Microsoft's model, could not solve the New York Times Connections puzzle, which countless people solve every day.

    Connections is a word game that is deceptively simple. You are given 16 terms and have to figure out what the terms have in common within groups of four. The commonality could be as simple as 'titles of books' or words that start with 'fire'. In practice, it is a challenging puzzle.
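    The structure of a valid answer is easy to state in code. A sketch with a made-up board (these words and categories are invented for illustration, not actual NYT puzzle content):

```python
def check_grouping(groups):
    """Validate a Connections-style answer: exactly four groups of four,
    covering 16 distinct terms."""
    terms = [t for g in groups for t in g]
    return (len(groups) == 4
            and all(len(g) == 4 for g in groups)
            and len(set(terms)) == 16)

# A made-up board; the first category is "words that can follow 'fire'"
guess = [
    ["fly", "works", "place", "wood"],      # fire___
    ["oak", "elm", "pine", "birch"],        # trees
    ["red", "blue", "green", "amber"],      # colours
    ["alpha", "beta", "gamma", "delta"],    # Greek letters
]
```

Validating the shape of an answer is trivial; spotting the hidden commonalities is the part the LLMs stumbled on.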

    All the models failed to solve the puzzle despite the hype created around them.

    o1 at least got some of the groupings right, but its other groupings were bizarre.

    It was clear that LLMs work well when regurgitating already well-documented information but struggle when facing novel queries.

    OpenAI claims that it has come close to AGI or achieved the start of it. If so, perhaps the company is keeping it under wraps, because what is on show is no manifestation of AGI at all.

  • Levels of AI Agents

    We are entering the second quarter of the century and are by now aware of the potential of AI. Attention has since shifted from AI to AI agents. These agents will follow a pattern of evolution, passing through different levels.

    The very beginning of AI agents is the Reactive Agent. These do not rely on memory or learn from their past; they follow predefined rules to respond to inputs. A basic chatbot is a Reactive Agent.

    Later come Task-Specialized Agents, which excel at specific tasks and outperform humans in them. A recommendation engine on an e-commerce site is an example; domain experts train these systems.

    Next come Context-Aware Agents. They analyze complex scenarios, historical data, real-time streams and unstructured information, adapt to all of these and then respond. The neural networks pioneered by Geoffrey Hinton and Yann LeCun are examples.

    Beyond this are Socially Savvy Agents, lying at the intersection of AI and emotional intelligence; they can handle customer service. Self-Reflecting Agents try to improve themselves; aware of the philosophical discussions about consciousness, they refine the algos governing them. AGI-Powered Agents are integrators and coordinators that can handle data from multiple spheres. Last, we can envisage Superintelligent Agents, which will surpass human intelligence.
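    The first rung of this ladder is simple enough to write down. A toy Reactive Agent, with rules and replies invented for illustration:

```python
# A Reactive Agent in miniature: no memory, no learning, just
# predefined rules mapping inputs to responses (the basic chatbot above).
RULES = {
    "hello": "Hi! How can I help you?",
    "hours": "We are open 9am-6pm, Monday to Saturday.",
    "bye": "Goodbye!",
}

def reactive_agent(message):
    """Respond from fixed rules; identical input always gets identical output."""
    for keyword, reply in RULES.items():
        if keyword in message.lower():
            return reply
    return "Sorry, I don't understand."
```

Every higher level adds what this one lacks: memory, specialization, context, social awareness, self-improvement.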

  • OpenAI’s March to Superintelligence

    On a personal blog, Sam Altman writes that OpenAI knows how to build AGI and is now focusing its attention on superintelligence.

    He says the products they currently market are highly satisfactory, but futuristic superintelligent tools would accelerate scientific discovery well beyond what humans are capable of on their own, leading to greater abundance and prosperity.

    Previously, Altman speculated that superintelligence could be 'a few thousand days' away. He also emphasized that its arrival would be 'more intense than people think.'

    AGI, or artificial general intelligence, is a hazy term, but OpenAI has formulated its own definition: highly autonomous systems that outperform humans at most economically valuable work. Microsoft, which backs OpenAI, has its own definition: AGI systems generate at least $100 billion in profits. When OpenAI achieves this target, Microsoft will lose access to its technology, in accordance with the agreement between the two companies.

    Altman has not specified which definition he has in mind. However, the former definition is the likeliest.

    AI agents may soon join the workforce by working autonomously, and may materially change the output of the companies.

    They believe in progressively putting great tools in the hands of people.

    At present, the technology has limitations: models hallucinate and make mistakes.

    Altman is confident that these limitations can be overcome. They have also learnt that the timelines could shift.

    In the next few years, they expect to deploy far more effective systems. It is humbling to be able to play a role in this work.

    As OpenAI shifts its focus to superintelligence, it is hoped that the company will allocate sufficient resources to ensure the safety of superintelligent systems. By its own admission (blog, July 2023), it does not yet have a solution for steering or controlling a potentially superintelligent AI and preventing it from going rogue.