Blog

  • Deloitte's Chatbot

    Deloitte has rolled out a chatbot for its 75,000 employees across Europe and the Middle East so that they can prepare PowerPoint presentations, write emails and write code. This is expected to boost their productivity.

    The tool, called PairD, was first launched in the UK in October 2023 as a precursor to the wider roll-out, an indication that the full version was still a work in progress. Employees were cautioned that the tool may go wrong and that they should exercise due diligence.

    Other competitors have tied up with existing AI firms for their chatbots, whereas Deloitte developed PairD internally through its AI Institute.

    It shows how professional service providers are automating their tasks. PwC is using chatbots in its legal and tax divisions. Allen & Overy, a law firm, uses one for drafting agreements, subject to amendment and acceptance by its lawyers.

    Deloitte employees will take a training module before using the AI tool.

  • AI’s Effect on Jobs

    Mustafa Suleyman, who co-founded DeepMind with Demis Hassabis and Shane Legg in 2010, is an AI heavyweight. DeepMind was acquired by Google in 2014. Suleyman continued with the new parent for five more years, but left Google to form a new company, Inflection AI, which offers personalized AI assistants.

    Though optimistic that AI's efficiency gains will lead to increased productivity, Suleyman is also apprehensive about its potential to replace jobs in the workplace. It is estimated that almost 47 per cent of jobs could be at risk of automation by the mid-2030s. According to a McKinsey study, there could be 12 million job switches by 2030 as AI takes over existing roles.

    Suleyman calls AI a truly transformative technology and not just a hyped concept. However, he warns that it can displace workers if it is not regulated.

  • Corrective Retrieval Augmented Generation (CRAG)

    LLMs sometimes produce inaccuracies and hallucinations in what they generate. We have previously discussed Retrieval Augmented Generation (RAG), where relevant external knowledge is supplied to the LLM during the generation process.

    Though this addresses the problem to a large extent, RAG's success depends on the accuracy and relevance of the retrieved documents. If the retrieval step fails, the inaccuracies resurface in the generated output.

    Researchers have devised a pathbreaking Corrective Retrieval Augmented Generation (CRAG) process. A lightweight retrieval evaluator is introduced to assess the quality of the retrieved documents and classify them as correct, ambiguous or incorrect. Correct documents are subjected to knowledge refinement; ambiguous documents to both knowledge refinement and knowledge searching; incorrect documents are discarded and replaced through knowledge searching. The corrected knowledge is then used for generation. This dynamic approach to document retrieval applies a decompose-then-recompose algorithm when the retrieved documents are sub-optimal, so that only the most relevant knowledge strips reach the generator. The generative process thus yields the most relevant and accurate information.

    To do this, CRAG searches the vast resources of the web to augment its knowledge, going beyond a static corpus of documents.
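
    A minimal sketch of this control flow in Python might look like the following. The retriever, evaluator, refiner, web_search and generator callables are hypothetical placeholders used only to illustrate the decision logic, not the actual implementation from the CRAG paper.

    ```python
    def corrective_rag(query, retriever, evaluator, refiner, web_search, generator):
        """Illustrative CRAG loop: retrieve, evaluate, correct, then generate."""
        docs = retriever(query)             # initial retrieval from the static corpus
        verdict = evaluator(query, docs)    # 'correct', 'ambiguous' or 'incorrect'

        if verdict == "correct":
            # Keep the documents but strip irrelevant passages
            # (decompose-then-recompose knowledge refinement).
            knowledge = refiner(query, docs)
        elif verdict == "incorrect":
            # Discard the retrieval and fall back to a web search.
            knowledge = web_search(query)
        else:  # 'ambiguous'
            # Hedge by combining refined documents with web results.
            knowledge = refiner(query, docs) + web_search(query)

        # Generation grounded in the corrected knowledge.
        return generator(query, knowledge)
    ```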

    This is a significant leap forward for LLMs. It sets a new standard for integrating external knowledge into the generative process, producing fluent text with factual integrity.

  • HERE Technologies

    Google Maps is popular for navigation. However, organizations such as HERE offer maps with detailed mapping attributes, for example 'bridge attributes'. These tell truck drivers the location of bridges, their height, their load-bearing capacity, alternative routes if a bridge is too low, and so on. Google Maps is good, but the kind of detailed mapping HERE provides makes it suitable for autonomous driving.

    HERE is also developing 3D models of 100 cities and working on indoor mapping. Mapping the height of buildings and power lines will facilitate drone deliveries. Indoor maps of buildings will make it easier to locate your parked car, or to find a spa or a restaurant in a big hotel. There are no flying cars yet, but HERE is already preparing routes for them.

    HERE, a location-tech platform, is backed by Audi, BMW, Mercedes-Benz, Intel, NTT, Robert Bosch, Continental and Pioneer.

    In Germany, BMW's Personal Pilot Level 3 automated driving function will be available in March 2024, with the HERE HD Live Map playing a central role. In 2021, the same maps assisted Mercedes-Benz's Level 3 autonomous driving. Uber chose HERE maps to aid its geolocation functionality and improve its pick-up and drop-off locations.

    HERE has 6,500 employees on its payroll, of which 3,700, more than half, are located in India. Data processing, crunching and management are done in India.

    Though backed by nine corporations, HERE has full operational autonomy. Over time, it aims to become an independent company, with the three German carmakers as its biggest customers rather than its owners. Though Audi, BMW and Mercedes-Benz are owners today, they do not get any preference over other customers. Mike Nefkens is the CEO of HERE Technologies.

  • Driverless Cars

    The idea of a driverless car first emerged in the early 20th century, as soon as inventors realized that radio signals could be used to control vehicles remotely. A radio engineer, Francis Houdina, guided a car by 'phantom control' down New York's Fifth Avenue. However, the car was driverless in name only, since the remote operator was never far behind.

    Some inventors envisaged a highway system with electromagnetic tracks that would guide the movement of automobiles, in addition to radio-signal control. One such visionary was Norman Bel Geddes, who worked with General Motors.

    Despite all this, driverless cars remained elusive, and not for want of effort. By the 1960s, the zeal for automated highways had waned, but a new dream of a computer-chip-controlled car emerged.

    In the 1970s, there was investment in developing a prototype self-driving car, with some success. Inventors imagined a car that was a computer on wheels, and yet little was achieved to turn the concept into reality.

    Recent efforts make use of the Internet of Things (IoT) to realize this dream.

    Levels of automation have been introduced in vehicles, where some functions are automated and others merely driver-assisted. This half-baked approach can lull a driver into complacency.

    Elon Musk predicted a driverless car in 2023, but federal regulators asked Tesla to recall vehicles whose Autopilot system was found defective. Musk will be busy fixing that problem this year.

  • Autonomous Cars

    Autonomous cars have been put into various categories or levels. The Society of Automotive Engineers (SAE) has defined 'Levels of Driving Automation' that carmakers follow. These are:

    Level 0: The driver is fully responsible for driving the car.

    Level 1: The driver has the freedom to take their feet off the pedals in some instances.

    Level 2: There is automatic acceleration and braking to support the driver. Tesla's Autopilot is a Level 2 system.

    Level 3: The driver can disengage from driving, but only at times. The Honda Legend offers Level 3 automation, and Mercedes-Benz has launched Drive Pilot, a Level 3 system.

    Level 4: The car drives on its own all the time, but a driver is still present.

    Level 5: The driver is eliminated altogether. There is no driver's seat.

    Sensors fitted all over the body of the car determine its position and the road conditions using data from 3D maps and the global navigation satellite system. The electronic control unit (ECU) controls acceleration, braking and steering to assist the driver.

    Honda Sensing provides automation between Levels 1 and 2 in the Honda City and Elevate. Others with similar technology include the Tata Safari, Mahindra XUV700 and so on.

  • Advertising Video on Demand (AVOD)

    OTT platforms are witnessing the emergence of a new model: offering free content and earning revenue through Advertising Video on Demand (AVOD). This model is especially successful with live cricket content. Adoption of AVOD benefits everyone. Consumers get access to streaming content for free, platforms get a large viewership, and marketers get an opportunity to promote their products.

    The target group for both the ads and the content is Gen Z and millennials. YouTube and AVOD platforms attract the largest audiences and ad spend in the digital advertising market.

    Most OTT players (except Netflix and Amazon Prime) operate a hybrid model of advertising plus subscription. Broadcasters own some streaming platforms; Sony, for example, owns Sony Liv. These OTT platforms get TV catch-up content at no cost, but charge a fee for original content such as web series, movies and sports.

    As it is tough to monetize through subscriptions alone, there is dependence on advertising revenue.

  • AI Push by India

    India announced its AI Mission in December 2023. The Cabinet may soon approve a fund of more than Rs 10,000 crore. There would also be a Sovereign AI programme.

    There would be capacity building both within the government and through public-private partnership (PPP).

    The aim of the AI Mission is to build AI computing capacity within the country. It will benefit startups and entrepreneurs and promote AI applications.

    The country wants to build a compute capacity of between 10,000 and 30,000 GPUs under the PPP programme, with an additional 1,000-2,000 GPUs under C-DAC.

    The government wants to encourage private computing centres. They can charge a ‘usage’ fee.

    An AI system requires computing capacity (compute), algorithmic innovation and datasets.

    The government is working on building datasets and making them available to Indian startups.

  • Transformer Alternative: State-Space Model (SSM)

    Since 2017, when Vaswani et al. of Google published the paper 'Attention Is All You Need', the transformer architecture has been used in large language models (LLMs).

    Of late, non-attention architectures for language modelling have emerged, e.g. Mamba, which shows promising results in various experiments.

    Mamba belongs to the family of state-space models (SSMs), mathematical models used to describe the evolution of a system over time.

    The key concepts of SSMs are: state variables (x), which represent the internal state of the system; the state equation, which shows how the state variables change over time, in either continuous or discrete time; the output equation, which relates the observed outputs of the system to its internal state; and the matrices A, B, C and D, which are the parameters of the SSM (A represents the system dynamics, B is the input matrix, C the output matrix and D the feed-forward matrix).

    Classical SSMs are designed as linear time-invariant systems, where A, B, C and D are constant and the system's behaviour is linear.

    SSMs are formulated both in continuous time (using differential equations) and discrete time (using difference equations).
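
    As an illustration, the standard linear state-space equations can be written as follows. This is the generic textbook form, not the specific discretization and parameterization that Mamba builds on top of it.

    ```latex
    % Continuous time: differential equations
    \dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t) + D\,u(t)

    % Discrete time: difference equations after discretization with step size \Delta
    x_k = \bar{A}\,x_{k-1} + \bar{B}\,u_k, \qquad y_k = C\,x_k + D\,u_k
    ```

    Here u is the input sequence, x the hidden state, y the output, and \bar{A} and \bar{B} are the discretized counterparts of A and B.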

    Mamba serves as a versatile sequence-model foundation. The Mamba-3B model surpasses similarly sized transformers and competes on par with transformers twice its size.

    SSMs offer a different lens on sequence modelling. Because they focus on an internal state that evolves over time (hidden dynamics), they can capture long-range dependencies and context effectively.
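
    To make the idea of an evolving hidden state concrete, here is a minimal NumPy sketch of a discrete linear SSM scanning over an input sequence. The matrices are arbitrary placeholders, and this is the plain linear time-invariant recurrence rather than Mamba's selective, input-dependent variant.

    ```python
    import numpy as np

    def ssm_scan(A, B, C, D, inputs):
        """Run a discrete linear SSM over a 1-D input sequence.

        State equation:  x_k = A x_{k-1} + B u_k
        Output equation: y_k = C x_k     + D u_k
        """
        x = np.zeros(A.shape[0])        # internal state starts at zero
        outputs = []
        for u in inputs:                # one step per token / time step
            x = A @ x + B * u           # state carries all past context forward
            y = C @ x + D * u           # observed output at this step
            outputs.append(y)
        return np.array(outputs)

    # Toy example: 4-dimensional state, scalar input channel.
    rng = np.random.default_rng(0)
    A = 0.9 * np.eye(4)                 # stable, slowly decaying dynamics
    B = rng.normal(size=4)
    C = rng.normal(size=4)
    D = 0.0
    print(ssm_scan(A, B, C, D, inputs=np.sin(np.linspace(0.0, 3.0, 10))))
    ```

    Because the state x has a fixed size, the memory needed per step does not grow with the length of the sequence, which is what allows long contexts to be carried forward cheaply.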

    As we know, attention-based LLMs behave like black boxes. SSMs provide a structured representation of the system, which gives them greater interpretability.

    Mamba, as an SSM, is more computationally efficient than transformers, scaling linearly with sequence length rather than quadratically.

    The most impressive thing about transformers is their expressive power. They excel at capturing intricate relationships and can generate diverse outputs.

    SSMs may require more data for training than transformers, since they have to learn both the state-transition and observation equations.

    SSMs are theoretically appealing, but their implementation and optimization may be more complex than those of transformers.

    Though a promising development, SSMs require further exploration and comparison with transformers. The two architectures may well evolve and co-exist, serving different needs and domains.

  • Young AI Professionals

    If an IT or AI company employs young people under 24, it has a good opportunity to mould them the way it wants. They can be trained in the sunrise areas that are in demand, and they can imbibe the organization's culture (OC) and values. In practice, however, organizations hire experienced candidates in higher age groups, in their thirties, forties or even fifties.

    Even when youngsters are hired, they sit five or six levels below the CEO. Being so far removed from the top management, they do not learn about its thinking or the road map it has in mind for the organization. They are not mentored by seniors, remain fuzzy about organizational policies and the rationale behind them, and may never get to meet anyone from the top or middle management. When this happens, it is necessary to rethink the organization structure.