Free Llama from Facebook

In July 2023, Facebook released the new version of its open source artificial intelligence model Llama, called Llama 2, as an alternative to the expensive proprietary models of OpenAI and Google. It will be distributed by Microsoft through the Azure cloud service and will run on the Windows operating system.

Previously, the model was available only to select academic institutions for research. It will now be made available through direct download, as well as through Amazon Web Services (AWS), Hugging Face and other service providers.
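
For developers, availability on Hugging Face means the model can be pulled straight into standard tooling. Below is a minimal sketch, assuming the transformers library is installed and access to the gated meta-llama/Llama-2-7b-chat-hf checkpoint has been granted; it is an illustration, not an official quick-start.

```python
# Minimal sketch: loading Llama 2 from Hugging Face (assumes gated access
# to meta-llama/Llama-2-7b-chat-hf has been granted and accelerate is installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain in two sentences why open source AI models matter."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```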

Open source gives other developers the opportunity to innovate.

The new version has been trained on 40% more data than its predecessor and uses more than one million human annotations to fine-tune its output. Every incremental improvement in an open source model eats into the market share of closed source models.

Microsoft wants to give developers the freedom to choose the model of their choice, and to become the go-to cloud platform for AI.

GPT-5

GPT-3.5 was revolutionary and attracted a huge user base within a short time. A few months later, in March 2023, GPT-4 was launched. It would now seem time for a new version of GPT, but OpenAI has instead just introduced Code Interpreter. It is a very useful feature, but it suggests the company is only adding incremental improvements. Or has the company effectively introduced GPT-4.5 with Code Interpreter, without officially declaring so? This is just speculation; nothing is official so far. Some say the next version will be a superintelligent AI. Altman expected the journey to superintelligence to take about four years.

Visualisation with ChatGPT

ChatGPT can be used to visualise data and make diagrams. ChatGPT has introduced Code Interpreter, which has to be enabled before use. Once it is on, you can ask for data visualisations: line graphs, bar diagrams, pie charts, scatter plots, histograms, heat maps and so on. You upload the data in XLS, XLSX or other file formats and then ask Code Interpreter to ‘make the graph’. You can also ask it to find insights in the data and visualise them.
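
Under the hood, Code Interpreter writes and runs Python. Here is a minimal sketch of the kind of code it might produce for an uploaded spreadsheet; the file name and the ‘month’ and ‘sales’ columns are hypothetical:

```python
# Illustrative sketch of the kind of Python Code Interpreter generates.
# The file name and the "month"/"sales" columns are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel("sales.xlsx")            # the uploaded XLSX file
print(df.describe())                        # quick numeric summary ("insights")

df.plot(kind="bar", x="month", y="sales", legend=False)
plt.title("Monthly sales")
plt.ylabel("Sales")
plt.tight_layout()
plt.savefig("monthly_sales.png")            # the chart returned to the user
```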

If you have subscribed to ChatGPT Plus, you can also enable ChatGPT plugins to visualise data. Diagrams can be created using the Mermaid language syntax: ChatGPT generates the Mermaid code, which is then pasted into a diagramming app for visual output.

Translation Models

LLMs, as we know, are good at natural language processing, including translation. Thousands of models are now being open sourced on platforms such as GitHub and Hugging Face; about 5,000 new translation models are reportedly added every week.
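
A minimal sketch of running one such open model through the Hugging Face transformers pipeline; the Helsinki-NLP/opus-mt-en-fr checkpoint is used purely as an example of the thousands available:

```python
# Minimal sketch: translating text with an open model from Hugging Face.
# The checkpoint name is just one example of many similar models.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
result = translator("Large language models are good at translation.")
print(result[0]["translation_text"])
```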

BigTranslate, an LLM developed by a team of Chinese researchers, supports multilingual translation across more than 100 languages and is available on GitHub.

BigTranslate is built upon Facebook's LLaMA (introduced in February 2023). It is designed to handle translation of low-resource languages with high accuracy. It is focused on Chinese and uses a parallel dataset covering 102 languages, with a corpus drawn from various public and proprietary resources.

The model has been tested against Google Translate and ChatGPT. It surpassed ChatGPT on BLEU scores and closely matches Google Translate.
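
BLEU scores measure the n-gram overlap between a system's output and reference translations. A minimal sketch of computing a corpus-level score with the sacrebleu library; the sentences are made up for illustration:

```python
# Minimal sketch: corpus-level BLEU with sacrebleu (illustrative sentences).
import sacrebleu

hypotheses = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]  # one list per set of references

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.1f}")
```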

It can translate Tibetan and Mongolian, which makes it marketable in the Chinese market. Alibaba Group has released PolyLM to compete with it.

Short Format Advertisements (15 Seconds)

Advertisers have to choose whether to establish a brand by telling a drawn-out story or a quick one before people tune out. Today an ad has to make an impact on the viewer’s mind within 15 seconds. There is a growing trend towards shorter ads: 15-seconders are the 30-seconders of today. Attention spans are falling, and 15 seconds is now the ideal duration for most ads.

Such ads are mostly funny and eye-catching. Consumers switch channels when ads come on, and on digital it is even easier to skip an ad. It is therefore better to run non-skippable 15-second ads.

Impact is created by personalisation, relevance and emotional appeal. There has to be a hook, the message must bring out the USP, and viewers can be engaged with a creative twist.

Ultimately, it is a creative ad that engages the audience, in any format.

Autonomous AI Agents

Almost a decade after the appearance of online digital assistants such as Siri and Alexa, a new wave of digital assistants can be seen. They are powered by generative AI technology, and they have greater autonomy.

Silicon Valley wants to leverage these advances in AI, and systems powered by generative AI are attracting billions of dollars of investment.

The new digital assistants are called agents or co-pilots. They promise to perform complex personal and professional tasks when directed by humans, without needing close supervision. They act as personal AI friends.
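
One way to picture the autonomy is as a control loop: the model proposes an action, a tool executes it, and the observation is fed back until the goal is reached. A minimal sketch of such a loop; call_llm and the tools here are hypothetical stand-ins, not any particular product's API:

```python
# Minimal sketch of an autonomous agent loop.
# call_llm and the tools are hypothetical stand-ins, not a real product's API.
def call_llm(history):
    """Hypothetical model call: returns the next step as (tool_name, argument)."""
    raise NotImplementedError

TOOLS = {
    "search": lambda query: f"results for {query!r}",       # stand-in web search
    "write_file": lambda text: f"saved {len(text)} chars",  # stand-in file tool
}

def run_agent(goal, max_steps=5):
    history = [("goal", goal)]
    for _ in range(max_steps):
        tool, argument = call_llm(history)    # the model decides the next step
        if tool == "finish":
            return argument                   # the model reports its final answer
        observation = TOOLS[tool](argument)   # execute the chosen tool
        history.append((tool, observation))   # feed the result back to the model
    return "stopped: step limit reached"
```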

There is a rush to leverage the technology behind foundation models. Individuals, startups and Big Tech are all in the race.

There is the possibility of human biases sneaking in, and the potential for misinformation. Some fear the murderous HAL 9000 from ‘2001: A Space Odyssey’.

Twitter Ads

Twitter has decided to limit the number of posts users can view, which could affect advertisers. Traditionally, Twitter gave all of an advertiser’s followers an equal opportunity to see its tweets. The recent change of limiting views may reduce visibility, and consequently retweets and replies could also drop.

In India, ad spending on social media platforms is $1.28 billion (less than 2% of digital ad spend).

Brands would like stability in a social media platform, and policy changes do affect advertisers. Twitter now restricts reading to 1,000 posts a day for unverified accounts and 10,000 posts a day for verified (Twitter Blue) accounts. This harms advertisers’ bottom line: organic reach is reduced, which means lower brand awareness and engagement.

There could also be greater competition for advertising space, which could raise the cost of advertising. Precise targeting will be needed to avoid wasting impressions on users who are less likely to engage. Brands must have the talent to create content that maximises impact despite the limited views, and campaigns will have to be monitored closely. Engagement rate, click-throughs and conversions will matter much more.

Film Making Challenged by AI

Hollywood is currently on strike. It is the first time in the last 60 years that writers and actors have jointly gone on strike.

Writers are fighting the disruption to residual payments. Actors are agitated over the use of their digital likenesses without their consent.

Both writers and actors feel threatened by AI. AI can generate watchable screenplays, threatening writers and affecting both their livelihood and reputation. Actors are worried about their digital replicas. In the Netflix series Black Mirror, Salma Hayek plays a version of herself who has signed away the rights to her digital likeness, which is then made to act out scenes unwittingly. The results are disturbing.

Can we say that the likeness of a human being does not belong to that human being? Are we reducing human beings to zeros and ones? In future, AI tools could allow anybody to recreate a person’s likeness.

There is a connection between humans and moving images.

AI has the potential to eliminate the whole process of film making. The technology already exists, and is developing at a rapid pace. Background performers or junior artists could be engaged for a single day, their work scanned, and all future shots produced from their likenesses for ever.

Of course, AI-generated content will be tested by the audience, who will vote with their wallets, and their vote will be for great works with human connections.

AI in Advertising

  1. AI is helpful in making hyper-personalised ads using information on user behaviour and purchase history. Such ads lead to higher engagement and conversion rates. Tata Motors used AI to launch personalised ads for the Nexon for different audience segments; the AI algorithms analyse customer data and preferences to design the ads.
  2. AI is helpful in audience segmentation. At present, advertising relies on broad and vague audience groups. AI can apply advanced data analytics and reduce that ambiguity: highly specific audience segments can be carved out based on nuanced parameters such as unique behaviour patterns and interests (see the sketch after this list).
  3. AI, as we have observed, generates personalised ads and helps automate the various stages of an ad campaign. The campaign narrative can be adjusted on real-time data, improving performance. The campaign is tracked in terms of impressions, clicks and conversions, so advertisers can make timely adjustments.
  4. AI is ultimately helpful in budget reallocation, since it assists in tracking ad performance.
  5. AI can be used to generate ad copy from product descriptions using natural language processing, as Myntra does.
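
A minimal sketch of the segmentation idea in point 2, clustering customers on behavioural features with scikit-learn; the features and numbers are made up for illustration:

```python
# Minimal sketch: carving out audience segments with k-means clustering.
# The feature names and values are hypothetical illustration data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row is a customer: [monthly visits, average order value, days since last purchase]
customers = np.array([
    [20, 1500, 3],
    [2,   400, 90],
    [18, 1200, 7],
    [1,   300, 120],
    [25, 2000, 2],
    [3,   500, 60],
])

features = StandardScaler().fit_transform(customers)  # put features on one scale
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(segments)  # e.g. [0 1 0 1 0 1]: frequent high-spenders vs lapsed buyers
```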

Deep Learning Hardware

Google claims that its Tensor Processing Units (TPUs) are 1.7 times faster than the Nvidia A100 chips that power most AI applications, and 1.9 times more energy efficient. Google’s processing of AI is thus greener.

Nvidia’s A100 Tensor Core GPU is based on the Nvidia Ampere GPU architecture. It adds many new features and delivers faster performance for HPC, AI and data analytics workloads.

Google’s TPUs are application-specific integrated circuits (ASICs) designed specifically to accelerate AI. They are liquid cooled and built to slot into server racks. They deliver up to 100 petaflops of compute and power Google products such as Google Search, Google Photos, Google Translate, Google Assistant, Gmail and the Google Cloud AI APIs.

CPUs are central processing units with a general-purpose architecture. GPUs are graphics processing units, built to enhance graphics performance; they offer flexibility and a range of precision options. TPUs are optimised for tensor operations. GPUs have greater memory bandwidth than TPUs but higher power consumption, while TPUs are more energy and performance efficient.
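
The practical difference shows up in code as well. Below is a minimal sketch that times the same matrix multiplication on CPU and GPU with PyTorch; it assumes a CUDA-capable GPU is available, and a TPU comparison would follow the same pattern through a TPU runtime:

```python
# Minimal sketch: timing one matrix multiplication on CPU and on GPU.
# Assumes PyTorch is installed; the GPU branch needs a CUDA-capable card.
import time
import torch

def time_matmul(device, size=4096):
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    if device == "cuda":
        torch.cuda.synchronize()          # make sure the GPU is idle first
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()          # wait for the GPU kernel to finish
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f} s")
```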