Cognitive Bias in AI

Many think AI will create destructive humanoid robots. Actually, this is not the real danger. The real danger is the cognitive biases that can be programmed into AI unknowingly by the scientists who create it.

A cognitive bias is a systematic error in how we think. To illustrate: ask a person in Britain which is more likely to kill them, a terrorist attack or a bath. Cognitive bias may lead them to pick the terrorist attack, but statistics show that more people die by drowning in a bath.

This is caused by a cognitive bias called the availability heuristic. Information about terrorist attacks is flashed across the media and is therefore more available to the mind; drownings in bathtubs do not make headlines. Such cognitive biases could be unknowingly programmed into the AI computer programmes that run robots.
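A minimal sketch of how the availability heuristic distorts perceived risk. All death counts and coverage figures below are invented purely for illustration:

```python
# Toy illustration of the availability heuristic (all numbers invented):
# media coverage inflates how often rare-but-dramatic events are recalled.

# Hypothetical annual death counts (not real statistics)
true_counts = {"bathtub_drowning": 300, "terror_attack": 5}

def perceived_frequencies(counts, coverage):
    """Recall is proportional to count * media coverage, not to count alone."""
    weighted = {k: counts[k] * coverage.get(k, 1) for k in counts}
    total = sum(weighted.values())
    return {k: v / total for k, v in weighted.items()}

# Assume terror attacks get 200x the headline coverage of bathtub drownings.
perceived = perceived_frequencies(true_counts, {"terror_attack": 200})
actual = perceived_frequencies(true_counts, {})  # uniform coverage = reality
```

With these invented numbers, the perceived share of terror deaths is about 77%, while the actual share is under 2% — the same skew the text describes. If such a coverage-weighted dataset were used to train a model, the bias would be baked in.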

Cultured Meat or Meat in the Lab

Meat can be made outside the animal. Such meat could come from vats and plants rather than slaughtered animals. Research is going on into plant-based alternatives to meat and milk, and there are attempts to grow cell-based chicken in vats.

The first-ever prototype of cultured meat, in the form of a hamburger, was very expensive. Researchers have also produced the first cell-based meatball, as well as cell-based shrimp, crab and lobster. There is cell-grown beef steak too. The toughest challenge is to make the alternative taste like the real thing.

Scientists in synthetic biology and tissue culture work in this field. There are experiments to produce sheep and goat meat.

To produce cultured meat, muscle stem cells are isolated from the animal without killing it. They are introduced into a liquid medium and allowed to mature until the cells are dense enough to form a solid mass. They are fed a combination of nutrients, including vitamins, amino acids and fatty acids. Researchers are also working on scaffolds that will hold the cells together. The culture medium currently used is serum drawn from foetal calves or horses; researchers would like to replace this with a plant-based culture medium.

There is a need for cellular agriculture because it requires less water.

It is to be seen how the lab model could be scaled up. It may require large bio-reactors rather than vats and flasks.

Computer Vision (CV)

Face recognition systems have been spreading owing to advances in AI. They rely on machine learning (a subfield of AI) in which computers teach themselves to do tasks that their programmers cannot specify explicitly. The systems are rewarded when they correctly identify a face and penalised when they do not; in this way they can be taught to distinguish images that contain faces from those that do not. Once a system has an idea of what a face consists of, it begins to distinguish one face from another. The specifics vary depending on the algorithm, but they usually involve a mathematical representation of a number of crucial anatomical points, such as the location of the nose relative to other facial features, or the distance between the eyes. In lab tests, such systems can be very accurate.
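A toy sketch of the matching step described above, assuming faces have already been reduced to a few landmark measurements. All measurements, names and the decision threshold are invented; real systems learn high-dimensional embeddings rather than hand-picked distances:

```python
import math

# Hypothetical landmark measurements per photo (e.g. eye distance,
# nose position relative to other features). Numbers are made up.

def embed(landmarks):
    """Toy 'embedding': normalise the measurements by their mean so the
    representation is comparable across photos taken at different scales."""
    scale = sum(landmarks) / len(landmarks)
    return [x / scale for x in landmarks]

def distance(a, b):
    """Euclidean distance between two embeddings: a small distance means
    the algorithm treats them as the same face."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

alice_1 = embed([62.0, 41.0, 33.0])  # person A, photo 1
alice_2 = embed([63.0, 40.5, 33.5])  # person A, photo 2
bob     = embed([55.0, 48.0, 30.0])  # person B

THRESHOLD = 0.1  # assumed decision threshold
same_person = distance(alice_1, alice_2) < THRESHOLD
different_person = distance(alice_1, bob) >= THRESHOLD
```

Two photos of the same face land close together in this representation, while a different face lands farther away; the threshold turns that distance into a yes/no match.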

Computer vision, however, is nothing like human vision, and it has many weaknesses.

CV Dazzle

Make-up can fool face recognisers. Bright colours, high contrast, graded shading and asymmetric stylings confound an algorithm’s assumptions about what a face looks like.

Hyperface aims to hide a real face among dozens of fakes. The idea is to disguise the real thing in a crowd of decoys.

A baseball cap fitted with tiny light-emitting diodes that project infra-red dots onto the wearer’s face could also be used.

FaceNet

It is a face recognition system developed by Google. Researchers found that the right amount of infra-red illumination prevented a computer from recognising that it was looking at a face at all.

More sophisticated attacks are possible by searching for faces that are mathematically similar.

Adversarial Machine Learning

Training one algorithm to fool another is known as adversarial machine learning. Images are created specifically to mislead computer vision. Innocuous-looking abstract patterns, printed on paper and stuck onto the frame of a pair of glasses, convinced a CV system that a male research worker was a female actress.
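A minimal sketch of the idea behind such attacks, using a hypothetical linear "face detector" so the gradient is easy to compute by hand. The fast-gradient-sign method shown is a standard adversarial technique; the weights, input and step size here are invented for illustration:

```python
# Toy linear "face detector": a positive score means "face".
# Weights are invented for illustration.
W = [0.5, -1.2, 0.8, 0.3, -0.7, 1.0]

def score(x):
    return sum(wi * xi for wi, xi in zip(W, x))

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

def fgsm(x, eps=0.5):
    """Fast-gradient-sign-style attack: for a linear model the gradient of
    the score with respect to the input is just W, so stepping each feature
    against sign(W) is the most score-reducing bounded perturbation."""
    return [xi - eps * sign(wi) for xi, wi in zip(x, W)]

x = [1.0] * 6    # classified as a face: score = 0.7 > 0
adv = fgsm(x)    # each feature changed by at most 0.5, yet score flips sign
```

No feature moves by more than 0.5, so the adversarial input looks almost identical to the original, yet the classifier's decision is reversed — the same principle that lets a printed pattern on glasses flip a face-recognition result.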

Thus all these systems have constraints.

Neural Networks and AI

Chinese researchers are developing an autonomous bicycle that navigates with the help of a neuromorphic chip modelled on the human brain. It highlights the effort to achieve new levels of AI with novel kinds of chips. China is investing heavily in the idea of an ‘AI chip’.

Existing robots can learn to open a door or toss a ping-pong ball into a plastic bin, but the training takes hours to days of trial and error. Even then, the skills are viable only in very particular situations. With help from neuromorphic chips and other new processors, machines could learn more complex tasks more efficiently, and be more adaptable in executing them.

AI is being developed through neural networks: complex mathematical systems that can learn tasks by analysing vast amounts of data. A neural network can learn to recognise a cat by digesting thousands of cat photos. Face recognition, used in many smartphones, is built on this technology. It also facilitates the development of autonomous robots and self-driving cars.

A neural network does not learn on the fly. Engineers train it for a particular task before deploying it in the real world, and it learns only after absorbing an enormous number of examples.
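A minimal sketch of this train-then-deploy cycle, using a single artificial neuron trained by gradient descent on an invented toy dataset (real networks have millions of such units, but the pattern is the same: all learning happens before deployment):

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Invented toy dataset: the label is 1 when the two features sum past 1.0.
points = [(random.random(), random.random()) for _ in range(200)]
data = [([px, py], 1 if px + py > 1.0 else 0) for px, py in points]

# Training phase: absorb many labelled examples, adjusting the weights
# a little after each one (stochastic gradient descent on log-loss).
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(300):
    for features, label in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, features)) + b)
        err = p - label  # gradient of the log-loss w.r.t. the pre-activation
        w = [wi - lr * err * xi for wi, xi in zip(w, features)]
        b -= lr * err

# Deployment phase: the weights are now frozen; the model no longer learns.
def predict(features):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, features)) + b) > 0.5
```

After training, `predict` answers instantly but never updates itself — exactly the "does not learn on the fly" behaviour described above.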

Researchers are developing neuromorphic processors, including chips that imitate the network of neurons in the brain. These systems include faux neurons. Instead of being confined to processing 0s and 1s, these neurons operate by trading tiny bursts of electrical signals, firing or spiking when input signals reach critical thresholds — which is what biological neurons do. The approach unifies computer science and neuroscience.

By mimicking the brain, AI systems can learn skills and execute tasks more efficiently.

Faux neurons fire on demand rather than continuously, so neuromorphic chips consume less energy than traditional processors. Because information is processed in short bursts, the approach could lead to systems that learn on the fly from much smaller amounts of data.
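A sketch of the leaky integrate-and-fire model that such spiking neurons are commonly based on. The threshold and leak values are illustrative, not taken from any particular chip:

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: accumulate input current, let some
    of it leak away each step, and fire a spike (then reset) only when the
    membrane potential crosses the threshold — like a biological neuron."""
    potential, spikes = 0.0, []
    for current in inputs:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)    # fire a burst
            potential = 0.0     # reset after spiking
        else:
            spikes.append(0)    # stay silent: no energy spent this step
    return spikes
```

With a steady weak input of 0.3 per step, the neuron stays silent for three steps while charge accumulates, then emits one spike and resets — output only when needed, which is why such hardware can be so power-efficient.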

Walled Gardens

Google, Facebook and Amazon attract a large chunk of ad spend while the rest of the industry shrinks. This is because of a walled-garden approach: a closed ecosystem in which all operations are controlled by the ecosystem operator, e.g. Apple controls the whole iOS ecosystem. In digital marketing, such operators force clients to use their whole marketing stack to run their campaigns.

A data management platform (DMP) targets audiences based on its data. A demand-side platform (DSP) pushes ads onto selected inventory. Dynamic creative optimisation (DCO) handles ad personalisation and ad hosting.

The walled garden directs a user’s navigation within particular areas, allowing access to a selection of material or preventing access to other material. It is thus a closed platform or ecosystem. On an open platform, consumers have unrestricted access to applications and content.

With GDPR and other upcoming privacy laws, the walled gardens are definitely under more scrutiny. Independent players, however, will always have a significant portion of digital ad spend.

Rewarded Videos

Rewarded video ads are popular with both advertisers and publishers, and most popular in the mobile games sector. Rewarded video integrates ads into the app: consumers can choose to watch an ad in exchange for points, lives or virtual goods. Publishers can monetise easily with rewarded video, advertisers can communicate their brand message effectively, and users can continue their game. The format does not lead to false clicks, and rewarded video ads cannot be skipped once the user has opted in. Rewarded videos have become one of the top video formats.

Rewarded videos cost up to 86% less than other video ads. Apart from games, other verticals are also realising the format’s benefits; retail and media brands use it.