Blog

  • New Automobile Industry

    The entire automobile industry will undergo a metamorphosis on account of gigacasting (also called megacasting). Tesla started the trend, using large-scale die-casting to combine dozens of chassis pieces into single sections for its Model Y in 2020, with Giga Press equipment from an Italian supplier. The process curtails the number of welds and bolts and reduces weight. Megacasting machines are massive, applying 9,000 tons of force to molten aluminium alloy in a casting mould. The cast sections are large, weighing around 200 kg apiece, and the whole assembly process has to be rethought and reconfigured, which requires upfront investment. While Tesla pioneered gigacasting with the Model Y, other makers are catching on, recognizing its potential to revolutionize car manufacturing.

    The change in production process changes the economics of the automobile industry. The share of cars built with traditionally stamped and joined body parts could fall by 20 per cent by 2030.

    The current car-making process is modular, which allows easy repairs. In a collision, the brunt is borne by the front and rear bumpers sticking out from the chassis. In low-speed accidents, there is little structural damage; the rest of the car is largely unaffected. Only a few parts need replacement, which can be done in a few hours.

    In a gigacast car, by contrast, repairs are costly and complicated, because large sections of the car are affected.

    Modern cars are computers on wheels, with varying levels of autonomy. Sensors and software are deployed throughout, including rear cameras. In an accident, a car's electronics are affected; they are built into panels, doors, bumpers, fenders and the trunk. After an accident, the car is examined to check which sensors and controllers are damaged. The damaged components are replaced and recalibrated, an expensive and time-consuming process.

    Even gigacasting does not make a car foolproof. Cracks have appeared in the aluminium castings of some models (images of such cracks can be found online).

    Repairing large sections is costly and complex, which concerns industry groups and insurers. Gigacast cars may attract higher insurance premiums.

    A new industry could emerge: refurbishing cars and reselling them. Because whole sections are replaced (rather than repaired), the refurbished car is effectively brand new. Automotive recycling could become a big industry.

  • Superhuman AI: Some Observations

    There were rumours that OpenAI was close to developing superhuman AI when Sam Altman was unceremoniously dismissed from the organization. OpenAI has built a Superalignment team to control AI that surpasses human beings. Its members include Collin Burns, Pavel Izmailov and Leopold Aschenbrenner. They see to it that AI systems behave as intended. The team was formed in July 2023 to steer, regulate and govern superintelligent AI systems.

    These days we align models that are dumber than we are. The challenge is to find ways to align models that are smarter than we are.

    We know Sutskever played an active role in Altman’s ouster. After Altman’s return, Sutskever has been in a state of limbo, though he still heads the Superalignment team.

    To the AI community, superalignment is a sensitive subject. To some, it is a red herring; to others, a premature subfield.

    Surprisingly, Altman compares OpenAI to the Manhattan Project; both are treated as projects that require protection against catastrophic risks. Many scientists are skeptical that AI will acquire world-ending capabilities anytime soon, or ever.

    They argue attention should instead be focused on AI bias and toxicity. Sutskever, however, believes that AI, whether from OpenAI or elsewhere, could threaten humanity. OpenAI has reserved 20 per cent of its compute for the Superalignment team’s research.

    The team is currently developing the framework for AI’s governance and control.

    It is a moot point how to define superintelligence, and how to decide whether a particular AI system has reached that level. The present approach is to use a less sophisticated model, such as GPT-2, to guide more sophisticated models in the desired direction.

    The research will also focus on curbing a model’s egregious behaviour. Humans supervising a superhuman model is analogous to a weak model supervising a strong one; but can a school student direct a college student? Still, the weak-strong approach may lead to some breakthroughs, for instance on hallucinations.
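    The weak-to-strong idea can be illustrated with a toy experiment. The sketch below (plain NumPy, with entirely made-up numbers, not OpenAI's actual setup) trains a "strong" student, here a logistic-regression model, purely on labels from a "weak" supervisor that is right only 80 per cent of the time. The student can end up more accurate than its teacher, because the teacher's errors are unsystematic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: the true label is the sign of a linear function of the inputs.
X = rng.normal(size=(2000, 2))
y_true = (X[:, 0] + X[:, 1] > 0).astype(float)

# "Weak supervisor": right only 80% of the time (flips 20% of labels at random).
flip = rng.random(2000) < 0.2
y_weak = np.where(flip, 1 - y_true, y_true)

# "Strong student": logistic regression trained only on the weak labels.
w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))        # predicted probabilities
    w -= 0.1 * X.T @ (p - y_weak) / len(X)  # gradient step on weak labels

pred = ((X @ w) > 0).astype(float)
weak_acc = (y_weak == y_true).mean()
strong_acc = (pred == y_true).mean()
print(f"weak supervisor accuracy: {weak_acc:.2f}")
print(f"strong student accuracy:  {strong_acc:.2f}")
```

Because the weak labels are noisy but unbiased, the student recovers a boundary close to the true one and beats its supervisor, a miniature analogue of the school student outperforming the instructions it was given.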

    Internally, a model may recognize whether what it says is fact or fiction. However, models are rewarded with thumbs up or thumbs down, and are sometimes rewarded even for false statements. Research should enable us to summon a model’s internal knowledge and let it discriminate, using that knowledge, between fact and fiction. This would reduce hallucinations.

    As AI reshapes our culture and society, it is necessary to align it with human values. The most important thing is the readiness to share such research publicly.

  • Regulation of AI

    The European Union has passed the Artificial Intelligence Act for AI oversight and regulation. India, too, is holding a Global Partnership on AI (GPAI) Summit to reach a global consensus on AI regulation.

    AI has affected the manufacturing sector. It facilitates drug discovery and materials-science research. It transforms healthcare and diagnostics, autonomous transport, and smart, efficient power grids. It assists financial systems and telecom networks, and enhances the provision of a host of public and private services.

    AI has its demerits. It can facilitate criminal activity. It consolidates power in authoritarian regimes through face recognition, surveillance and discriminatory systems. At present, human beings are in charge of ‘pulling the trigger’ of dangerous military weapons; this power could be transferred to AI. A self-aware AI might possess traits such as inquisitiveness and an instinct for self-preservation. These issues must be tackled holistically. As AI spreads across economies, there should be consensus on regulation. The ideal oversight exercises control and mitigates the possibility of harm without crippling research or the rollout of useful AI.

    The European regulation attempts a technology-neutral, uniform definition of AI applicable to all future systems. AI systems are classified by risk: the higher the risk, the greater the oversight and the more obligations imposed on providers and users.

    Under the AI Act, limited-risk systems must comply with transparency requirements. Users should be made aware that they are interacting with AI. To illustrate, systems that generate images should warn against deepfakes and image manipulation, and disclose that the content is AI-generated. The Act also curbs the generation of illegal content and requires a public summary of copyrighted data used in training.

    High-risk AI systems affect safety and fundamental rights. There are two categories: AI in products such as toys, aviation, cars, medical devices and lifts; and AI used in specific areas such as biometric identification, critical infrastructure, education and vocational training, and AI-managed access to essential private and public services. The latter must be registered in an EU database. Both categories of high-risk AI systems must be assessed before roll-out and reviewed throughout their life cycles.
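    The Act's tiered logic can be caricatured in a few lines of code. This is a hypothetical illustration, not the Act's legal test: the tier descriptions and use-case strings below are my own simplification of the classification described above.

```python
from enum import Enum

class RiskTier(Enum):
    # Hypothetical tiers mirroring the AI Act's risk-based classification.
    UNACCEPTABLE = "prohibited outright"
    HIGH = "assessed before roll-out, reviewed through life cycle"
    LIMITED = "transparency obligations (disclose AI interaction)"

def obligations(use_case: str) -> RiskTier:
    """Illustrative lookup: map a use case to its (simplified) risk tier."""
    prohibited = {"behavioural manipulation", "social scoring"}
    high_risk = {"biometric identification", "critical infrastructure",
                 "education", "medical devices"}
    if use_case in prohibited:
        return RiskTier.UNACCEPTABLE
    if use_case in high_risk:
        return RiskTier.HIGH
    return RiskTier.LIMITED  # default: transparency duties only

print(obligations("medical devices").name)
print(obligations("social scoring").name)
```

The point of the sketch is the ordering: the higher the tier, the heavier the obligations attached to it.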

    Under the AI Act, some systems pose unacceptable risks; one example is behavioural manipulation of people or vulnerable groups. Remote biometric identification systems are another: these may be used only with court approval, to identify and apprehend criminals after a serious crime has been committed.

    This basic framework can be tweaked to suit each country’s needs, but it is a reasonable framework for global regulation. It excludes military research and development.

    The GPAI summit in India could adopt some version of this Act.

  • Walled Gardens

    A San Francisco jury decided that Google was running an illegal monopoly, enabling it to extract huge fees from app developers. It marked a victory for Epic Games. Though a huge relief, Google’s and Apple’s walled gardens still stand.

    Epic was ousted from the Google Play Store and Apple’s App Store for attempting to bypass their payment systems (avoiding the 30 per cent commission cut from transactions that go through them). Epic lost its case against Apple but won against Google. The win is heralded as a turning point in the mobile app economy.

    The walled gardens of iOS and Android are built on strong foundations, though there is now a dent in the wall. The fees charged for in-app purchases have long been a bone of contention. At present, almost $200 billion a year is collected by these two companies, which treat it as fair compensation for the security their stores provide. Developers take a different view.

    Google discourages developers from launching mobile distribution channels of their own. It launched Project Hug, luring top game developers with financial incentives and actively steering them away from agitating for better terms. Sweetheart deals are offered, with a smaller commission on in-app transactions. Google also ties up with handset makers such as Samsung to prioritize Google’s store over any other.

    Apple, by contrast, treats every developer equally on its store, and it has no need to pressure handset makers because it manufactures its own handsets. Thus Google’s behaviour looks particularly egregious, whereas, as the judge in the Apple case put it, ‘success is not illegal.’ Apple was merely asked to abandon its ‘anti-steering’ rules.

    Apple’s and Google’s ecosystems are called walled gardens because, like real ones, they are pleasant and well-maintained. Still, more choice is good for consumers.

  • Epic Wins against Google

    Epic Games makes the game Fortnite. As we know, on cell phones Google Play and Apple’s App Store form a duopoly, generating close to $200 billion a year.

    After a three-year legal battle, the jury in San Francisco ruled that Google had turned its Play app store and billing service into an illegal monopoly.

    American video-game maker Epic Games sued Google, the Mountain View-headquartered search-engine company, in 2020.

    The jury observed that the company hurt competition by tying its Google Play Store to its billing services.

    Epic had secretly installed its own payment system to bypass the up-to-30-per-cent revenue share that the two tech giants, Google and Apple, take from in-app purchases and subscriptions on their platforms. If this cut is removed from the ecosystem, consumer prices could improve.
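    The economics of the dispute come down to simple arithmetic. Assuming a hypothetical $9.99 in-app purchase at the full 30 per cent commission (and ignoring payment-processing costs for simplicity), the developer's take looks like this:

```python
price = 9.99
store_cut = 0.30  # the contested up-to-30% revenue share

via_store = price * (1 - store_cut)  # paying through the app store
direct = price                       # bypassing the store, as Epic attempted
print(f"developer keeps via store: ${via_store:.2f}")
print(f"developer keeps direct:    ${direct:.2f}")
```

Roughly three dollars of every ten flow to the platform, which is the margin Epic went to court over.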

    There is a fortune at stake for both Apple and Google. The Digital Markets Act in the European Union will bring about changes.

    Both companies are making adjustments. Apple allows so-called reader apps (say, software for cloud storage, watching video or reading books) to link to outside websites to let users pay, which bypasses Apple’s revenue cut. Both Apple and Google have also changed their policies on commissions for subscription apps.

    App stores are a closed ecosystem, and Epic’s win against Google has the potential to bring major changes in the USA, its home country, taking internet software back to a more open environment.

  • Creativity, Data and Technology

    Creativity was once considered the sine qua non of advertising. The advent of digital advertising changed that perspective: the idea now is to keep the message rolling across media so as to have as wide a reach as possible. This puts creativity in the backseat; acceptable creativity is now average creativity. However, one must try for an effective amalgam of creativity, data and technology. These three together produce impactful, real campaigns. The work created must solve real problems and stir a whole mass of people. That unlocks the true power of creativity.

  • Gemini — A Giant Stride or a Leap of Faith

    On December 6, 2023, Google launched its most powerful AI model, Gemini, calling it a huge leap forward in AI models.

    Gemini has three versions: Ultra, Pro and Nano. Nano, as the name indicates, is the lightest version; it runs natively and offline on Android devices. Gemini Pro is the mid-range version that is expected to power Google’s AI services and will be the backbone of Bard, Google’s chatbot. Gemini Ultra is the most advanced LLM, designed for data centers and enterprise applications. Though all these models currently support English, they will soon support other languages too.

    Gemini will be integrated with Google’s search engine and Chrome browser, and other Google products.

    Gemini scored 90 per cent on the Massive Multitask Language Understanding (MMLU) test, surpassing human experts, who score 89.8 per cent, while GPT-4 scored 86.4 per cent.

    However, the testing methods differ: Google used chain-of-thought prompting with 32 samples, whereas GPT-4’s score was obtained with 5-shot prompting.

    Bard will benefit a great deal from Gemini Pro, gaining advanced reasoning capabilities. Bard Advanced, a second version of Bard, will follow in 2024 and will have access to Gemini Ultra.

    While doing all this, Google will have to navigate changing tech regulations and AI ethics, and tackle problems such as LLM hallucinations.

  • Generative AI: What Next?

    It was reported that when Sam Altman was fired as CEO, OpenAI was on the brink of a breakthrough: a new algorithm, Q*, which can solve high-school-level math problems with near-perfect accuracy, whereas GPT-4 manages about 70 per cent. Q*’s perfect scores suggest logical reasoning, a departure from merely identifying and replicating patterns learnt during training.

    If true, we are one step closer to what is being described as AGI, artificial general intelligence: absorption, deciphering and replication of patterns learnt in the training phase, plus reasoning ability. This power could improve in subsequent iterations, and AGI could then be equated with high intelligence.

    AI, as we know it today, is narrow: its algorithms are designed to perform a narrow range of tasks, though LLMs are more versatile. Generative AI is good at writing and language translation. It works by statistically predicting the next likely word, logging the contextual associations of words with each other. Even while solving math or writing code, it works through statistical association. To solve novel math problems, models must have greater reasoning capabilities.
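    Next-word prediction by statistical association can be shown at miniature scale with a bigram model. Real LLMs use neural networks over vast corpora, but the principle of choosing the most likely next word is the same; the tiny corpus below is invented purely for illustration.

```python
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

# Count, for every word, which word follows it and how often.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on": both times "sat" appears, "on" follows
```

The model has no notion of cats or mats; it only tallies co-occurrence, which is exactly the limitation the paragraph above points at when it asks for greater reasoning capability.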

    Real AGI will perform a wide range of tasks and tackle problems far better than humans can. By definition, AGI will perform new tasks without instructions. Such a model could be self-aware or conscious, and might possess traits such as curiosity, self-will or a desire for self-preservation, all traits we associate with living beings.

    Could such a model be ethical or altruistic? Such concepts vary across cultures. In any case, AI that is not aligned with goals good for humans could be dangerous.

    A day before he was fired, Altman spoke of pushing ‘the veil of ignorance back and the frontier of discovery forward’. Was he hinting at Q*? Many such rumours float around.

  • Vinod Khosla on AI

    We know investor and venture capitalist Vinod Khosla (68). He has invested in AI startups including OpenAI, putting $50 million into it in 2019, and has poured funds into other startups such as Replit as well. He believes that thanks to AI we will have access to free professional services (medical, legal and so on) and human-like robots.

    He believes that in the next 10 years, the world will have free doctors and free tutors for everybody, and free lawyers for access to the legal system.

    In the next 25 years, say by 2048, we will have a whole workforce of bipedal robots that stand upright the way we do. They will form a large industry, just like the automobile industry of today.

    LLMs will show capabilities not seen so far; we have not yet reached the limits of AI capability. Fears about AI turning sentient or robots becoming conscious are, he says, nonsensical. Think positively about how AI will benefit humanity: why focus on a dystopian angle with a one per cent probability of something untoward happening?

    The path to better lives for the seven billion plus people on Earth runs through AI.

  • Perplexity AI

    Several former Google AI researchers, among them Andy Konwinski, Aravind Srinivas, Denis Yarats and Johnny Ho, have founded the startup Perplexity AI, maker of the chatbot Perplexity Copilot. It has the potential to be a market leader in web search, combining a web index with a chatbot interface, and could threaten the incumbent leader, Google. Perplexity has also released its LLMs, pplx-7b-online and pplx-70b-online, where the digits indicate their parameter sizes; they are fine-tuned and augmented versions of open-source models from Mistral and Meta. Parameters refer to the number of connections between a model’s artificial neurons and indicate how ‘intelligent’ and powerful a model is: the more parameters, the more knowledgeable the model.
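    What "parameters" counts can be made concrete with a toy fully connected network. The layer sizes below are arbitrary and bear no relation to the pplx models' actual architectures; the point is only that parameters are the learned connection weights (plus biases) between neurons.

```python
def dense_params(n_in, n_out):
    # Each output neuron has one weight per input, plus one bias term.
    return n_in * n_out + n_out

# Hypothetical 3-layer toy network, nothing like a real 7B-parameter LLM.
layers = [(512, 2048), (2048, 2048), (2048, 512)]
total = sum(dense_params(i, o) for i, o in layers)
print(f"{total:,} parameters")
```

Scaling these layer widths (and stacking many more layers) is how models reach the 7-billion and 70-billion counts in the pplx model names.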

    They offer a Perplexity API so that others can use it to build their own apps.

    They also aim to provide helpful, factual and up-to-date information.