Blog

  • Generative AI Applications

    Generative AI must go beyond the proof-of-concept (POC) stage. To do so, we have to address several critical factors.

    POCs are small in scale: they test feasibility and assess potential impact.

    Generative AI must integrate with existing IT systems, and POCs must scale seamlessly into long-term, consistent revenue streams.

    Generative AI must handle large volumes of data and perform consistently. This calls for robust infrastructure, including compute resources such as GPUs.

    The models used should not be opaque; their workings should be explainable.

    There are issues of ethics and regulation, and of bias and fairness. Organizations must comply with data protection laws and AI governance standards.

    Generative AI requires talent and the training of that talent, along with collaboration with academia.

    Generative AI applications vary across industries, so solutions must be customized and the specific needs and pain points of each industry understood. Generative AI can immensely help new drug development, personalized medicine, and fraud detection and risk management in finance.

    Generative AI is evolving very fast, so there should be sustained investment in R&D.

    Thus generative AI can travel from POC to a successful revenue stream if organizations adopt a multi-pronged approach.

  • Evaluation of AI

    Is generative AI going to play a vital role in enterprises? Will it live up to the market's expectations?

    AI capital expenditure is huge: in the coming years it is expected to reach $600 billion to $1 trillion. Along with it, IT spending is projected to rise 8 per cent in 2024.

    In the early 2000s, the dotcom boom ended disastrously. These days, businesses focus on market potential and adopt metrics for measuring gains and cost savings. They use more sophisticated risk models and more robust RoI calculations, having learned from their mistakes of the dotcom era.

    Generative AI is promoted as something magical, a general-purpose solution for enhancing many business processes. This cannot be achieved in one stroke. Companies roll out AI tools without explaining how to use them effectively; the necessary training must accompany the tools.

    Generative AI is being used increasingly in certain sectors: in video games for concept art and asset generation, and in trading, hedging, research, apps and databases. It replaces humans in some roles. Customer-service chatbots at different companies can do the work of several hundred human employees, saving finance companies a lot of money.

    It is difficult to measure the RoI of AI. We have to consider abstract values such as efficiency and productivity, and these values have to be quantified.
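
    As a rough illustration, quantifying such abstract gains might look like the sketch below. Every figure is an invented assumption, not data from any study.

    ```python
    # Back-of-the-envelope RoI estimate for an AI tool; every number is an assumption.
    hours_saved_per_employee_per_month = 5       # assumed productivity gain
    employees_using_tool = 200                   # assumed adoption
    loaded_hourly_cost_usd = 40.0                # assumed cost of an employee hour
    monthly_tool_cost_usd = 15_000.0             # assumed licence/infrastructure cost

    monthly_gain = hours_saved_per_employee_per_month * employees_using_tool * loaded_hourly_cost_usd
    roi = (monthly_gain - monthly_tool_cost_usd) / monthly_tool_cost_usd

    print(f"Estimated monthly gain: ${monthly_gain:,.0f}")
    print(f"Estimated monthly RoI: {roi:.0%}")
    ```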

    There are issues of bias, copyright infringement and erosion of human agency.

    All said and done, skepticism about AI is healthy and required. Still, AI is here to stay in customer service, gaming, marketing and other creative industries.

  • AI Technology

    Let us get acquainted with some common terms used in the field of artificial intelligence (AI).

    What is artificial intelligence, or AI? It is a field of computer science that develops computer systems which can think like a human being. AI is considered both a technology and an entity, and its definition is dynamic and keeps changing.

    Google has invested in artificial intelligence and has developed intelligent models such as Gemini. OpenAI has developed a series of GPT models. AI powers chatbots, and ChatGPT is a chatbot developed by OpenAI.

    What is machine learning, or ML? Here machines are trained with data, with the aim of making them predict outcomes from new information. It is a way of learning, and ML is a part of AI.
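
    To make the idea concrete, here is a minimal sketch in Python using scikit-learn, with invented data: the model is trained on known examples and then asked to predict for a new input.

    ```python
    # Minimal machine-learning sketch: train on known data, predict on new data.
    # The numbers are invented purely for illustration.
    from sklearn.linear_model import LogisticRegression

    hours_studied = [[1], [2], [3], [8], [9], [10]]   # training inputs
    passed_exam = [0, 0, 0, 1, 1, 1]                  # training labels (0 = fail, 1 = pass)

    model = LogisticRegression()
    model.fit(hours_studied, passed_exam)     # "training the machine with data"

    print(model.predict([[6]]))               # predicting for new information
    ```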

    Artificial general intelligence (AGI) would make a machine as intelligent as a human being, and at times more intelligent. Companies are interested in achieving this level of intelligence, that is, in building super-intelligent machines.

    Generative AI is capable of generating new text, images, code and more. The models behind text generation are called large language models (LLMs), and they are trained on vast amounts of data.
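
    As an illustration, a small open model can generate text locally with the Hugging Face transformers library; GPT-2 here is only a tiny stand-in for the much larger LLMs discussed above.

    ```python
    # Text generation with a small open LLM (GPT-2) as a stand-in for larger models.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    output = generator("Generative AI can", max_new_tokens=30)
    print(output[0]["generated_text"])
    ```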

    Hallucinations occur when these models provide factually incorrect or flawed answers. At times the answers are incoherent.

    Bias gets introduced into algorithms because the machines are programmed and trained by human beings, so AI tools exhibit biases.

  • EU’s AI Regulation

    AI could benefit humanity, but at the same time it poses certain risks that could harm individuals, societies, ecosystems and the environment. AI can promote biases, affect privacy and spread disinformation. It could widen socio-economic divides, poses environmental risks, and could be misused on the battlefield.

    Lawmakers all over the world are in favour of regulating AI by promoting transparency, explainability, safety, security and accountability.

    The EU's AI Act was published in the Official Journal in July 2024 and came into force in August 2024. It creates a risk-based regulatory regime.

    Some AI systems pose unacceptable risks, for example cognitive behavioral manipulation and social scoring; such systems are prohibited. Some pose high risks that can endanger people or their fundamental rights, for example AI used in creditworthiness, insurance, education and employment-related decision-making; these systems are subject to a host of regulatory requirements. Some pose limited risk, for example chatbots and deepfakes; these must meet certain information and transparency standards. Some pose minimal risk, for example spam filters; these are not regulated at present, although they remain subject to other regulations such as the General Data Protection Regulation (GDPR).
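
    The tiers described above can be summarized in a small, illustrative lookup; this is a sketch of the Act's structure using the examples given here, not legal advice.

    ```python
    # Illustrative mapping of the AI Act's four risk tiers to the examples above.
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited outright"
        HIGH = "subject to extensive regulatory requirements"
        LIMITED = "must meet information and transparency standards"
        MINIMAL = "not regulated by the Act (other laws such as GDPR still apply)"

    EXAMPLES = {
        "social scoring system": RiskTier.UNACCEPTABLE,
        "AI-based creditworthiness assessment": RiskTier.HIGH,
        "customer-facing chatbot": RiskTier.LIMITED,
        "spam filter": RiskTier.MINIMAL,
    }

    for system, tier in EXAMPLES.items():
        print(f"{system}: {tier.name} -> {tier.value}")
    ```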

    General-purpose AI models may have high-impact capabilities and could pose systemic risk. The Act lays down stringent rules for such models.

    Other countries can adapt the EU law to their own national priorities.

  • SearchGPT

    As we know, the internet search business is dominated by Google Search, with a 91.1 per cent share. OpenAI proposes to launch SearchGPT, an AI-powered search engine with real-time access to information from the internet. This puts OpenAI in competition with its financial backer Microsoft, which runs Bing Search, and with Perplexity, which runs a search-focused chatbot and is backed by Nvidia and Amazon founder Jeff Bezos.

    OpenAI has commenced sign-ups for SearchGPT, which is currently in the prototype stage and is being tested with a small group of users and publishers. Its best features will be integrated into ChatGPT in the future.

    The OpenAI and Perplexity search tools reaffirm search as a content-engagement model and will compel Google to get better at its own game. SearchGPT will provide summarized search results with source links in response to user queries. Users can ask follow-up questions and receive contextualized responses.
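
    As a purely hypothetical sketch of that interaction pattern (none of these names correspond to a real SearchGPT API), a session that returns summarized answers with source links and carries context across follow-up questions might be modelled like this:

    ```python
    # Hypothetical model of a search session: summarized answers with source links,
    # and follow-up questions that reuse earlier context. Not a real SearchGPT API.
    from dataclasses import dataclass, field

    @dataclass
    class SearchAnswer:
        summary: str
        sources: list[str]

    @dataclass
    class SearchSession:
        history: list[str] = field(default_factory=list)

        def ask(self, question: str) -> SearchAnswer:
            context_turns = len(self.history)        # earlier questions supply context
            self.history.append(question)
            # A real system would fetch live web results here and summarize them.
            return SearchAnswer(
                summary=f"Summary for '{question}' (using {context_turns} earlier turns)",
                sources=["https://example.com/source"],
            )

    session = SearchSession()
    print(session.ask("Who dominates the search market?"))
    print(session.ask("What share does it hold?"))   # follow-up, answered in context
    ```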

    Publishers will be given access to tools to see how their content appears in search results. The publishing partners of SearchGPT include News Corp and The Atlantic. There will be closer collaboration between publishers and OpenAI, and content-licensing agreements with other publishers such as the Associated Press and Axel Springer.

    Google and Microsoft, too, are integrating AI into their existing search products by providing AI-powered summaries.

  • AI: Environmental Concerns

    There are reports that Google's emissions footprint increased by 13 per cent in 2023 compared with 2022, and that the entire rise can be attributed to electricity consumption in its data centers and supply chains. Google's electricity consumption in 2023 increased by 17 per cent. The trend is likely to continue because of the deployment of AI tools.

    AI could help tackle climate change and could be transformative across various sectors, yet the same AI is responsible for heavy emissions.

    A simple query put to ChatGPT could use 10 to 33 times more energy than a regular Google search. Image-based queries could consume even more.
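
    A back-of-the-envelope calculation shows what that multiplier implies. The per-query figure for a conventional search is an assumption (around 0.3 Wh is an often-quoted estimate), not a measured value.

    ```python
    # Rough arithmetic for the 10-33x energy claim; the baseline figure is assumed.
    google_search_wh = 0.3                       # assumed Wh per conventional search
    low_multiplier, high_multiplier = 10, 33     # range quoted above

    chatgpt_low_wh = google_search_wh * low_multiplier
    chatgpt_high_wh = google_search_wh * high_multiplier
    print(f"Per ChatGPT query: roughly {chatgpt_low_wh:.1f}-{chatgpt_high_wh:.1f} Wh")

    # At a billion queries, the extra energy adds up (figures in MWh).
    queries = 1_000_000_000
    extra_low = (chatgpt_low_wh - google_search_wh) * queries / 1e6
    extra_high = (chatgpt_high_wh - google_search_wh) * queries / 1e6
    print(f"Extra energy for {queries:,} queries: {extra_low:,.0f}-{extra_high:,.0f} MWh")
    ```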

    LLMs sift through more data while processing queries and formulating apt responses. This requires more computation, and more computation generates and releases more heat, which in turn requires air conditioning and other forms of cooling in data centers.

    With the spread of AI, electricity consumption is likely to go up. Data centers at present account for about 1-1.3 per cent of global electricity demand, and this could double by 2026. In Ireland the share has reached 18 per cent, as the country hosts a large number of data centers because of incentives.

    The US has the largest number of data centers, which account for between 1.3 per cent and 4.5 per cent of the country's electricity consumption.

    AI takes a huge environmental toll. Apart from electricity, there is water consumption: the data center in Iowa (US) used to train GPT-4 is reported to have consumed 6 per cent of the district's water supply in July 2022.

    In the coming years, India will see large-scale deployment of AI and data centers, and the environmental effect will be huge. The expansion should be planned to minimize adverse impact, with efficient processes that keep the emissions footprint low.

    There is a positive outlook too: AI could reduce emissions globally. A BCG study puts this reduction at 5-10 per cent by 2030, generating value of $1.3 trillion to $2.6 trillion through additional revenues and cost savings. AI can facilitate the monitoring and prediction of emissions in existing processes and can optimize them by eliminating wastage and inefficiencies.
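
    To put that 5-10 per cent figure in perspective, here is a quick calculation; the global emissions baseline is an assumed round number, not a figure from the BCG study.

    ```python
    # Rough scale of a 5-10% reduction in global emissions; the baseline is assumed.
    global_emissions_gt_co2e = 50.0        # assumed annual global GHG emissions, GtCO2e
    reduction_low, reduction_high = 0.05, 0.10

    print(f"5-10% of {global_emissions_gt_co2e:.0f} GtCO2e is "
          f"{global_emissions_gt_co2e * reduction_low:.1f}-"
          f"{global_emissions_gt_co2e * reduction_high:.1f} GtCO2e per year")
    ```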

  • Brave New World of AI

    Within AI, a subset called generative AI has created waves. Most of us have come across ChatGPT, DALL-E and Midjourney. What precisely is generative AI?

    Generative AI is capable of creating new content, which could take the form of text, images, music or even code.

    AI models developed so far learn from vast amounts of existing data to generate content.

    The most popular generative AI model is GPT, or Generative Pre-trained Transformer, made by OpenAI. It is a pioneering innovation and is versatile: it can write essays, speak and create images. By making it accessible through ChatGPT, OpenAI has democratized AI. It is being upgraded continuously, from GPT-3 to GPT-4 and beyond.

    There are other models too: Gemini from Google, Claude from Anthropic, LLaMA from Meta (Facebook), Stable Diffusion from Stability AI and Copilot from Microsoft. Each of these has its pros and cons, but GPT is still at the top of the ladder.

    GPT in particular, and other generative AI models, have created a technological gold rush. Companies are racing to integrate AI into their systems and operations: there are AI-powered digital assistants, coding tools and even therapists.

    All is not well, though. There are issues of copyright violation, job displacement and deepfakes. The field is evolving by leaps and bounds, and this is just the beginning; future iterations of these models will be even more capable. Competition is cut-throat and the landscape changes constantly. Generative AI is a technological revolution that will transform the world. We have seen just the tip of the iceberg and have yet to see its full potential. Be ready for a wonderful AI ride.

  • AI Law

    India proposes to enact legislation on AI. AI's benefits are seen to outweigh its downsides, and hence the Act will not have any penal provisions.

    The Act may ask social media platforms to add watermarks to identify AI-generated content.

    The government will decide the parameters for LLMs that are India-specific. Of late, Gemini has answered some queries inconsistently.

    There should be control over deepfakes, as they are a new threat to democracy: detection of deepfakes, prevention by removing them or reducing their virality, strengthening of reporting mechanisms, and spreading of awareness about the technology.

    While regulating AI, the government will take care not to hurt innovation; it should not be stifled. Even the DPDP Act (Digital Personal Data Protection Act) balances the interests of innovation against other vital concerns.

    Companies such as Microsoft have created a joint project called the Coalition for Content Provenance and Authenticity (C2PA). It focuses on systems that provide context and history for digital media.

    Meta (Facebook) is building tools that can identify 'invisible markers' in content, relying on standards set by the C2PA. This facilitates the labelling of AI-generated images.
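
    As a simplified illustration of the provenance idea, the sketch below attaches metadata to an image and reads it back; this is not the actual C2PA manifest format, and the tag names and values are hypothetical.

    ```python
    # Simplified provenance sketch: attach and read back metadata on a PNG.
    # This is not the real C2PA manifest format.
    from PIL import Image, PngImagePlugin

    image = Image.new("RGB", (64, 64), color="white")   # stand-in for real content

    metadata = PngImagePlugin.PngInfo()
    metadata.add_text("generator", "example-image-model")   # hypothetical tag values
    metadata.add_text("ai_generated", "true")

    image.save("labelled.png", pnginfo=metadata)

    # A platform could later read the tag and decide how to label the image.
    reloaded = Image.open("labelled.png")
    print(reloaded.info.get("ai_generated"))
    ```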

  • Emotional Element in Digital Ads

    Digital advertising is seen as a performance-marketing medium rather than a brand-building one. This is changing, since quantifying performance is difficult; it is hard to prove that digital advertising actually brings about conversions.

    Digital has now become a mass medium and has to occupy the top of the advertising funnel.

    Digital advertising with a strong emotional element contributes to building long-term equity; it is more likely to build salience and to go viral.

    We should have more made-for-digital ads with an emotional connect. We come across many digital ads that rely on humour; humour may work, but we cannot underestimate emotional stories. Many digital ads just capture the trend of the moment and create content fast, some of it generated by AI. This is not effective.

    Copy created for TV could be repurposed for digital, but it should be subjected to digital best practices. To illustrate, in digital the brand is introduced much earlier, as attention spans are shorter and people go through a lot of content in a small time window. We have to ensure that the brand and its moment in the story appear as soon as possible. In addition, the branding assets should be stronger to facilitate relatability in digital.

  • Bold Advertising

    Indian advertising needs to be a little bolder. It should push the envelope further, break category codes and take risks.

    At present, the bulk of advertising is undifferentiated and does not build brand differentiation. To do so, we will have to reintroduce the element of courage into advertising. Secondly, our humorous ads have become banal because we fear the cultural sensitivities around us; we are scared that our humour may hurt someone, and so we fail to create genuinely humorous ads. If necessary, ads should be pre-tested, but they should be courageous enough to push the boundaries.