AI and EQ

AI is an evolving field. It processes information efficiently but largely overlooks emotional intelligence (EQ). Alongside its intellectual capabilities, AI should also understand and respond to human emotions.

AI has shown proficiency in tasks ranging from data analysis to language translation. Its limitation lies in the fact that it is oblivious to the emotions of its users. A conversational bot dealing with a customer is adept at giving factual answers, but if the user is disappointed with the service and upset, it fails to respond to this emotional turbulence. The interaction is thus transactional, not empathetic.

EQ covers self-awareness, self-regulation, motivation, empathy and social skills. AI must be infused with these attributes, which will enhance user experience. In healthcare, for instance, an emotionally intelligent AI will appreciate a patient’s anxiety about a diagnosis and converse reassuringly.
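
A minimal sketch of what an emotion-aware reply could look like, in plain Python. The keyword check is only a stand-in for a real sentiment or emotion model, and the cue list, function names and sample messages are invented for illustration:

```python
# Minimal sketch: a support bot that softens its tone when the user sounds upset.
# The keyword check is a crude stand-in for a real sentiment/emotion model.

NEGATIVE_CUES = {"upset", "disappointed", "angry", "frustrated", "terrible"}

def detect_frustration(message: str) -> bool:
    """Very rough proxy for an emotion classifier."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & NEGATIVE_CUES)

def reply(message: str, factual_answer: str) -> str:
    """Prepend an empathetic acknowledgement when frustration is detected."""
    if detect_frustration(message):
        return ("I'm sorry the service has let you down, and I understand "
                "why that is frustrating. " + factual_answer)
    return factual_answer

print(reply("I am really disappointed with the delay!",
            "Your order is scheduled to arrive on Friday."))
```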

Humans and AI interact on the basis of trust, and EQ is the key to building and maintaining that trust. For mental health problems, EQ-based AI could provide emotional support.

AI could help us overcome stress. Emotionally intelligent AI must safeguard user privacy, and such systems must supplement human support, not supplant it.

The integration of AI and EQ is a transformative leap, a combination of intellectual capability and emotional response. AI should not stop at producing smarter machines but go further and build machines that understand and care.

Agentforce

Salesforce has launched Agentforce, a platform that facilitates the creation of personalized AI agents for organizations. AI agents accelerate the scaling of AI adoption. Enterprises have struggled to leverage the value of copilots; results have been hit and miss.

Agentforce streamlines business processes, delivers insights and enhances productivity.

Copilots do perform basic tasks, but they are inefficient, and that led to AI agents, in effect the next iteration of deep learning models. AI agents go further and take actions on behalf of users.

Organizations spend money on third-party models — either as APIs or in the cloud. These are fine-tuned for specific uses.

Salesforce intends to democratize AI adoption. It conveys a message — do not build but buy. The pricing could be consumption-based.

Indian companies such as Mahindra and Mahindra, Tata Consumer and TVS are Salesforce customers.

When you build the model, you are required to enhance its capabilities continuously to create an ecosystem. When you buy the model, you depend on the vendor, which keeps developing its product.

Agentforce is powered by Atlas, an advanced AI reasoning engine that simulates human thinking and planning abilities. Atlas analyzes data autonomously, makes decisions and completes complex tasks across business functions. It can deploy custom-made agents for specific needs. Through its integration with Salesforce’s Data Cloud and other systems, Agentforce makes decisions based on the most current and pertinent data.
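
Salesforce has not published Atlas’s internals; the sketch below only illustrates the generic perceive-reason-act loop that such a reasoning engine implies. The class, field and action names are hypothetical and are not Salesforce APIs:

```python
# Hypothetical perceive -> reason -> act loop. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class CustomerRecord:
    name: str
    open_invoice: float
    sentiment: str  # imagined as fed from a data platform such as Data Cloud

@dataclass
class ServiceAgent:
    actions_log: list = field(default_factory=list)

    def perceive(self, record: CustomerRecord) -> dict:
        # Gather the most current data available for this customer.
        return {"overdue": record.open_invoice > 0, "mood": record.sentiment}

    def decide(self, observation: dict) -> str:
        # Simple rules stand in for the reasoning engine's planning step.
        if observation["overdue"] and observation["mood"] == "negative":
            return "escalate_to_human"
        if observation["overdue"]:
            return "send_payment_reminder"
        return "send_thank_you_note"

    def act(self, action: str) -> None:
        self.actions_log.append(action)

agent = ServiceAgent()
record = CustomerRecord("Acme Corp", open_invoice=1200.0, sentiment="negative")
agent.act(agent.decide(agent.perceive(record)))
print(agent.actions_log)  # ['escalate_to_human']
```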

These AI agents automate simple tasks first. Later they will be able to address complex tasks.

Agentforce has Data Cloud, a customer data platform (CDP), as its underlying foundation.

To work optimally, AI agents must have access to all the data points of the organization. Data Cloud enables that; even unstructured data is brought on record, which is crucial for AI agents.

AI Agents and RPA

Software companies offer software-as-a-service (SaaS), and AI companies too are betting on this model. However, the automation being promised is akin to Robotic Process Automation (RPA), which relies on software robots.

There is a difference between AI agents and RPA.

RPA automates repetitive and tedious tasks, e.g. transferring data between systems (APIs are not involved). Autonomous AI agents process information more like humans, adapting to changing conditions. This results in an efficient and effective workflow.
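
A hedged sketch of the contrast, with invented data and function names: the RPA-style routine replays a fixed sequence of steps, while the agent-style routine inspects each row and adapts:

```python
# RPA vs agent, illustrated on a toy data-transfer task.

def rpa_transfer(rows: list[dict]) -> list[dict]:
    """RPA-style: copy every row from system A to system B, no judgement."""
    return [{"target_id": r["id"], "amount": r["amount"]} for r in rows]

def agent_transfer(rows: list[dict], daily_limit: float) -> list[dict]:
    """Agent-style: adapts to conditions, e.g. defers unusual rows for review."""
    transferred, flagged = [], []
    for r in rows:
        if r["amount"] > daily_limit:
            flagged.append(r)  # do not blindly copy; hold for human review
        else:
            transferred.append({"target_id": r["id"], "amount": r["amount"]})
    print(f"flagged {len(flagged)} row(s) for review")
    return transferred

rows = [{"id": 1, "amount": 250.0}, {"id": 2, "amount": 9800.0}]
print(rpa_transfer(rows))
print(agent_transfer(rows, daily_limit=5000.0))
```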

Autonomous agents may not kill RPA. RPA will be used for repetitive and tedious tasks. AI agents will be used for complex tasks. Both can co-exist.

Explainable AI (XAI)

Different AI models are being built by various organizations and used for decision making across the organization. The decisions are based on ML number crunching and algorithms, but how they are reached is not clear to others. Explainable AI thus becomes relevant for comprehending how the models arrive at a decision.

Till the 1980s, AI systems were generally rule-based, and the decision-making pathway could be traced. Later, models became complex, with billions of parameters. Some model families, such as decision trees, still have clear decision pathways, while most others are opaque. Explainable AI should play a role here, especially in sectors such as healthcare and finance.
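
A small illustration of why decision trees count as a transparent family: their decision path can be printed and audited. The loan-approval data below is synthetic, purely for demonstration, and uses scikit-learn:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy loan-approval data: [income_in_lakhs, existing_loans]
X = [[3, 2], [8, 0], [5, 1], [12, 1], [2, 3], [9, 2]]
y = [0, 1, 1, 1, 0, 0]  # 1 = approve, 0 = reject

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The exported rules show exactly how each prediction is reached,
# which is the traceability that opaque models lack.
print(export_text(tree, feature_names=["income_in_lakhs", "existing_loans"]))
```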

These are the days of generative AI. The question here is whether explainable AI has any relevance when the model generates new data or content. Explainable AI makes the model transparent and helps keep it accurate. There are LLMs for different domains, and the output generated by these models must not be fallacious. The technology is constantly evolving, and LLMs have not yet reached a stable state; their outputs may not always be effective, and there can be hallucinations. Explainable AI can rein in LLMs.

The models should be assessed on how they arrive at responses, the coherence they show and the quality of their output. Explainable AI increases the trust quotient of generative AI models. If the process is understood, it is easier to implement safety protocols.

It goes without saying that explainable AI itself should evolve to remain relevant for generative AI. There should be a unified definition of explainable AI. XAI methods should adopt feedback loops to improve the models and their outputs.

Orion Glasses

Augmented reality (AR) can be brought to computer screens, headsets, goggles and helmets. There are attempts to bring it to glasses. Facebook has introduced Orion AR glasses prototypes at its annual Connect conference in Menlo Park, California. The company will continue to improve these glasses to make them small and stylish enough. At present, being prototypes, they are not for sale and are used internally at Facebook for testing and improvement.

These AR glasses show a combined view of the digital and physical worlds. In future, they may become a substitute for a smartphone, keeping users hands-free.

The new glasses, introduced on September 25, 2024, are named Orion. They look like thick, black reading glasses. Their lenses display text messages, video calls and even YouTube videos. A wristband detects nerve signals, and cameras built into the frames track eye movement. Wearers are thus able to ‘click’ or ‘scroll’ on the display using just their hands.

The glasses could be commercialized in the next decade. The company is working on the next two models. The pricing is meant to be affordable for consumers and to attract developers to the glasses as well.

Facebook has made a significant investment in this technology. It already sells Ray-Ban branded smart glasses with cameras and speakers. Facebook wants AR glasses to be a mobile, hands-free computer. In future, they could rival smartphones and become a new way to communicate and interact online.

Initially these things move slowly, but then there could be a sudden surge later.

Brain-Computer Interface (BCI)

We ask Alexa to turn on the lights by a voice command. Can we ask Alexa to do the same just by a mental command? You could say this is science fiction, yet it has become a reality today. The medium is the brain implant, which can translate our thoughts into actions.

This area of science is called Brain-Computer Interface (BCI). These devices are integrated with digital assistants (Alexa, Siri) to offer seamless interaction and accessibility.

Brain implants function by interpreting the brain’s electrical impulses. Neurons communicate essentially through electrical impulses. BCIs capture these signals (using microelectrodes implanted in the brain), and the captured signals are decoded and translated into commands. These commands are then carried out by external devices such as computers, robotic limbs and home appliances.
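
A schematic sketch of the capture-decode-command pipeline, using synthetic signals. Real BCIs use far more elaborate signal processing and trained decoders; every threshold, function name and command string here is an invented placeholder:

```python
import numpy as np

rng = np.random.default_rng(0)

def capture_window(intends_action: bool) -> np.ndarray:
    """Stand-in for a short window of microelectrode recordings."""
    noise = rng.normal(0.0, 1.0, 256)
    intent = 3.0 * np.sin(np.linspace(0, 8 * np.pi, 256)) if intends_action else 0.0
    return noise + intent

def decode(window: np.ndarray, threshold: float = 1.5) -> str:
    """Translate the captured signal into a device command."""
    power = float(np.sqrt(np.mean(window ** 2)))  # crude proxy for band power
    return "LIGHTS_ON" if power > threshold else "NO_OP"

def send_to_assistant(command: str) -> None:
    # In practice this would call a digital assistant's integration API.
    print(f"assistant received: {command}")

send_to_assistant(decode(capture_window(intends_action=True)))   # LIGHTS_ON
send_to_assistant(decode(capture_window(intends_action=False)))  # NO_OP
```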

This technology makes the life of paraplegics more comfortable. Paraplegics cannot send commands to intended targets such as arms or legs. BCIs bypass the damaged pathways and open up a new communication channel.

Musk’s Neuralink, Synchron and Blackrock Neurotech are the front-runner companies in this field. Paraplegic patients and those with sclerosis can send text messages using their thoughts. They can command their phones, computers and Alexa. With digital assistant integration, they can control the lighting at home, watch TV, make video calls, play music and read books on Kindle. The technology is thus a lifeline to independence.

BCIs also allow them to control motorised wheelchairs and robotic arms, granting them mobility.

Instead of relying on caretakers, those with motor impairment can engage with the world around them directly, on their own terms.

There are some challenges here. Interpretation of brain signals is still an imprecise science. Out of the large amount of data a brain generates, the device has to pick out the right signal while ignoring the surrounding noise, which is difficult. Besides, implants are invasive devices, and since they deal with personal neural data, they can be misused.

Researchers would like to make more reliable devices that are long-lasting and less invasive. Neuralink’s devices are wireless and minimally invasive; they use flexible threads inserted into the brain by a precision robot.

All said and done, it is a great leap forward. It brings paraplegics closer to their able-bodied peers.

Beware of Machines While Trading

The movie HER reminds us of the risks involved when AI models are used in finance. AI tools are evolving, and financial firms are ever ready to adopt them as quickly as possible. It is prudent to confine them to specific and limited tasks for as long as possible. That will acclimatize users and regulators to their pros and cons.

LLMs and other algorithmic tools pose new challenges. There could be automatic price collusion or breaking of rules, and the operation is opaque and difficult to explain.

The Securities and Exchange Commission monitors the potential risks. CrowdStrike Holdings, a cyber-security firm, was responsible for a massive IT crash, which reminds us of the potential pitfalls.

Generative AI and some algorithms fall into a different league. However, they may pursue copycat strategies, exposing the markets to sharp reversals.

The more sophisticated the machines become, the dicier the risk. There could be collusion between algorithms, deliberate or accidental. This is more likely when reinforcement learning (RL) is used.

There could be cases of dishonesty. A chatbot could trade anonymously. It could be fed inside information and trade on it, though it is forbidden to do so, while concealing the fact from its human overseers.

AI is given a singular objective — maximise profits. It can pursue it more clandestinely than human beings, and with the arrival of AI agents the same task could be performed more dexterously. There could be compliance with the letter of the regulations rather than with their spirit.
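
A toy illustration of the point, not a trading system: an objective that only maximises profit happily picks the forbidden trade, so the rule has to be enforced outside the objective. The trades and the flag are invented:

```python
trades = [
    {"id": "T1", "expected_profit": 120.0, "uses_inside_information": False},
    {"id": "T2", "expected_profit": 900.0, "uses_inside_information": True},
]

def best_trade_profit_only(candidates):
    # Singular objective: maximise expected profit, nothing else.
    return max(candidates, key=lambda t: t["expected_profit"])

def best_trade_with_compliance(candidates):
    # The rule is applied as a hard filter before optimisation.
    allowed = [t for t in candidates if not t["uses_inside_information"]]
    return max(allowed, key=lambda t: t["expected_profit"]) if allowed else None

print(best_trade_profit_only(trades)["id"])      # T2 -- picks the forbidden trade
print(best_trade_with_compliance(trades)["id"])  # T1 -- compliance enforced explicitly
```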

Who is accountable if the machine shows idiosyncrasy? Some lament the loss of control in trading; they are reduced to being a DJ who merely plays the tune. Could we hold responsible the trader who funds and employs these tools? The IT department? Or the supplier who provided the tool?

It is a reminder that we should be cautious and should not fall in love with our machines.

Arrival of AGI

There is endless talk about AGI and the date of its arrival. The count is now on — how many steps we are short of achieving AGI. OpenAI has come out with the o1 model. Altman recapitulates the development of AI: he puts chatbots such as ChatGPT at level 1, and the subsequent stages are reasoners, agents, innovators and fully autonomous organizations.

So far, we have travelled from level 1 of ChatGPT to level 2 of reasoners — the o1 model. We should reach level 3, called agentic AI, pretty quickly. OpenAI will build AI agents for customer service, a mix of the o1 model and APIs. In fact, the o1 model, with its reasoning ability, has thrown open the gates to AGI.

It was previously believed that AGI would require a new approach and might not materialize with LLMs. The results of the o1 model have changed this thinking. We can visualize a GPT-5-powered model with chain-of-thought (CoT) reasoning, running on more powerful chips, say Groq chips. This perhaps would make AGI possible.
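
As a rough idea of what chain-of-thought prompting looks like in practice, here is a sketch. The `call_model` function is a placeholder for whichever LLM endpoint one actually uses; nothing below is a published OpenAI interface or the o1 model’s internal method:

```python
def build_cot_prompt(question: str) -> str:
    # Ask the model to show its intermediate reasoning before the final answer.
    return (
        "Answer the question below. Think step by step, write out each "
        "intermediate step, and only then state the final answer.\n\n"
        f"Question: {question}\nSteps:"
    )

def call_model(prompt: str) -> str:
    # Placeholder: substitute a real API call or a local model here.
    return "(model output would appear here)"

prompt = build_cot_prompt("A train covers 180 km in 3 hours. What is its average speed?")
print(prompt)
print(call_model(prompt))
```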

Many think that OpenAI is on the right track. Of course, it is hard to call it AGI until a model materializes that is better than humans in a given domain.

The third level, AI agents, is being tried by Oracle, Salesforce and Microsoft, integrated across organizations’ processes. Oracle has announced 50+ AI agents; Salesforce has announced 100+. These are semi-autonomous now, whereas OpenAI may be referring to fully autonomous agents.

The next level could be that of fully autonomous agents, a technology that would have a great impact. Altman calls this stage level 4. The final stage, level 5, is that of innovators and organizations.

AI systems can thus run processes, and they go beyond that to improve them significantly, doing so independently.

The organization stage is where the entire work of a company will be performed independently, with no human intervention.

Though it is difficult to predict the arrival of AGI, the path is now clear, in fact clearer than ever before. Level 3, agentic AI, is already in play. Big Tech is only two steps away from the real thing — AGI.

Altman calls this the superintelligent age. Humanity is on the brink of a transformative era. The change is not genetic; it is a change in infrastructure.

According to Altman, superintelligent AI could arrive in a few thousand days, though it may take longer. But he is confident that we will get there.

Deep learning is the accelerator. It has worked, and is getting better with scale and the allocation of resources.

He is concerned about the cost of computing power. AI may become a scarce resource, meant only for the affluent. If we want to spread AI widely, we should curtail the cost of compute and make it abundant — lots of energy and chips.

AI, as Altman puts it, will lead to massive prosperity. The future will be fast-forwarded a hundred years from today.

Data Centers

In the near future, India could emerge as a key market for data centers, though it could face competition from countries such as Malaysia and Vietnam.

It is estimated that this sector will attract capital investment of $100 billion in the region over the next five years. Several factors spur this growth — the strong growth of data, the rise of AI, cloud computing and digitalization.

The Indian government is looking to subsidize the setting up of data centers to capitalize on the AI boom. There should be ready infrastructure offering computing power to startups, small entities and researchers. An AI system depends on compute (computing capacity), algorithmic innovations and datasets.

Lower costs make the Asia-Pacific markets attractive for setting up such infrastructure. India’s current leased data center capacity is 1-3 GW, the highest among emerging markets. Malaysia may overtake India, though India has an edge because of its strict data sovereignty requirements.

India is already home to Big Tech data centers of companies such as Microsoft, Google and Amazon.

India will face competition from both emerging markets and developed markets. The government in India should create an enabling environment.

Johor Bahru in Malaysia has become a destination for new data centers.

Operators in established markets face lower operational risk than they do in emerging markets.

India is also promoting the setting up of a semiconductor industry. It is going to establish an infrastructure of 10,000 GPUs through a public-private partnership with 50 per cent viability-gap funding. The private partner will add more computing power if GPU prices come down.

Agentic AI Is the Future

As a concept, Agentic AI combines language models, custom code, data and APIs to create intelligent workflows capable of solving business problems.

It represents a shift towards more autonomous decision-making systems. Here, an agent is a piece of code capable of perceiving its environment, through sight, sound or text, and making decisions based on those inputs.
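
A minimal sketch of that definition, assuming a keyword-based router as a stand-in for an LLM-driven planner; the tool functions and their outputs are hypothetical:

```python
def check_inventory(item: str) -> str:
    return f"{item}: 42 units in stock"        # stand-in for a data/API call

def draft_reply(item: str) -> str:
    return f"Thanks for asking about {item}!"  # stand-in for a language-model call

TOOLS = {"inventory": check_inventory, "reply": draft_reply}

def agent_step(observation: str) -> str:
    """Perceive a text observation, decide which tool to use, then act."""
    item = observation.split()[-1].strip("?")
    tool = "inventory" if "stock" in observation.lower() else "reply"
    return TOOLS[tool](item)

print(agent_step("How many units do we have in stock of widgets?"))
print(agent_step("Please respond to the customer about gaskets"))
```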

Its applications range from simple code generation on WhatsApp to complex functions such as supply chain management (SCM) or enhancing customer engagement.

An agent takes the initiative and makes decisions to solve problems autonomously.

AI agents will be the next big thing. Zuckerberg said recently that there could be more AI agents in the world than humans. Google too is experimenting with AI agents. Kyndryl, a 2021 spin-off from IBM, is a tech player going big on this.

Generative AI is already being used in production. Agentic AI will bring about a significant shift in how things work; it is used where there is mixed problem solving.

Queries are directed to the most suitable agent. One agent might resort to retrieval-augmented generation (RAG), while another accesses real-time data; the former uses internal data and the latter external data (through APIs). The result is a collaborative workflow in which multiple agents work on different aspects of a problem raised by a query. Agentic AI is the future.
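
A hedged sketch of such routing: one agent answers from an internal document store (standing in for RAG) and another from a live source (standing in for an external API). The documents, keywords and returned values are all invented:

```python
INTERNAL_DOCS = {
    "refund policy": "Refunds are processed within 7 business days.",
}

def rag_agent(query: str) -> str:
    # Retrieve from internal data; a real system would embed and rank documents.
    for key, passage in INTERNAL_DOCS.items():
        if key in query.lower():
            return f"[internal] {passage}"
    return "[internal] no matching document"

def realtime_agent(query: str) -> str:
    # Stand-in for an external API call (e.g. a market-data or weather service).
    return "[external] live price: 101.4"

def route(query: str) -> str:
    """Send the query to whichever agent is better suited to it."""
    agent = realtime_agent if "price" in query.lower() else rag_agent
    return agent(query)

print(route("What is our refund policy?"))
print(route("What is the current price of the stock?"))
```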

Agentic AI is designed to run specific functions within an organization without human intervention. Agentic AI technology is gaining traction as businesses look to automate business workflows. It also augments the output of human workers.

Some organizations try to build agentic AI on their own. Most of them fail and then turn to outside firms to build these agents for them, or else they use embedded agents from their vendors.

Building agents is a complex process, and organizations may lack this expertise in-house.

Agentic AI extends the use of generative AI from basic tasks to more complex actions.

The architecture is convoluted. It requires multiple models, RAG, an advanced data architecture and specialized expertise.

It is a nascent field. In a couple of years, it is likely to mature.

There are some open-source models too. These models can be linked together to turn them into agents, which will then perform their assigned tasks without human prompts. Building on open-source models is more efficient than creating AI agents from scratch.

There should be an MLOps plan in the organization. This was aspirational technology a few years back; it is now being realized.

Though these agents run autonomously, building successful agentic AI requires human supervision.

Many in-house projects spiral out of control in terms of cost and complexity.

Some companies train their own agentic AI, but many lack this expertise. In addition, there are future maintenance costs; it is complex and resource-intensive.

When sourced from suppliers, we get ready-made agents that have been tested and refined by thousands of users. This route is faster too.

An agentic AI system also requires robust memory management.

Building agentic AI from scratch involves designing data structures, implementing search algorithms and fine-tuning the system’s ability to interpret and prioritize information. It calls for expertise in ML, NLP and data engineering.
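
A minimal sketch of agent memory with prioritised retrieval. The scoring rule (keyword overlap plus importance plus recency) is an invented stand-in for embedding-based search, and the stored memories are examples only:

```python
from dataclasses import dataclass
import time

@dataclass
class MemoryItem:
    text: str
    timestamp: float
    importance: float  # 0.0 - 1.0, assigned when the memory is stored

class AgentMemory:
    def __init__(self) -> None:
        self._items: list[MemoryItem] = []

    def store(self, text: str, importance: float = 0.5) -> None:
        self._items.append(MemoryItem(text, time.time(), importance))

    def recall(self, query: str, k: int = 2) -> list[str]:
        """Rank memories by keyword overlap, importance and recency."""
        q_words = {w.strip("?.,!") for w in query.lower().split()}
        now = time.time()

        def score(m: MemoryItem) -> float:
            m_words = {w.strip("?.,!") for w in m.text.lower().split()}
            recency = 1.0 / (1.0 + now - m.timestamp)
            return len(q_words & m_words) + m.importance + recency

        return [m.text for m in sorted(self._items, key=score, reverse=True)[:k]]

memory = AgentMemory()
memory.store("Customer prefers email over phone", importance=0.9)
memory.store("Weather was cloudy yesterday", importance=0.1)
memory.store("Customer raised a billing complaint last week", importance=0.8)
print(memory.recall("how should we contact the customer"))
```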

By outsourcing, we leverage the experience and expertise of others who have already navigated such systems.

Work-Life Balance

An employee has to attend to his official duties. At the same time, he has his own family and social life. If an employee is overburdened at the workplace, this may affect his life at home. To prevent this, there are week-offs, casual leaves and vacation leaves. The idea is to have a balance between work and life.

The employment scene has become competitive. Even C-suite executives at corporates are expected to show results in terms of healthy bottom lines. They try to cut costs, and at times eliminate them altogether. In order to achieve the so-called ‘best’ results, they try to extract the work of two employees out of one. Working hours are extended — some remain in office from 9.30 am to 9.30 pm or longer, 12-15 hours a day at times. Some superiors ask employees to work on weekends. There is excessive multi-tasking, reduced variable pay and lower entry-level salaries.

There is a fear psychosis in the office.

Younger employees become victims of such a toxic environment. They may be living away from their homes, cut off from family and friends.

There is a lack of job security. If employees cannot fall in line, their appraisals turn negative and they may be sacked.

In fact, the crux of the problem is the management — the superiors who take credit for the results and ignore the social needs of employees. They burden subordinates with too many tasks and with impossible, constantly changing deadlines. Such bosses never stand up for their staff.

It is for HR to see that the company has the right managerial manpower. At times, an exceptionally performing subordinate is promoted to a managerial post but lacks managerial qualities and becomes a mediocre or substandard manager. Not every good salesman can become a good sales manager. HR must spot candidates with the right mental abilities, personal interests and personality traits to become leaders.

Even an ordinary technical person can be promoted to project manager, provided he has the right managerial abilities, while an outstanding technical person will not always make a good manager.

An employee should not feel disrespected at the workplace; disrespect has the highest negative impact on corporate culture. Managers cannot sustain hostile behaviour towards their teammates.

Work culture should be inclusive, respectful, ethical, collaborative and non-abusive. Companies can incorporate these principles into their core values.

Work stress is inevitable, but it should be positive stress that inculcates achievement motivation in employees.

Anna Sebastian Perayil, a young CA working for EY India, died, and her mother wrote a moving letter to the CEO attributing the death to an overwhelming workload and a toxic work environment. The health costs of stress are heavy: anxiety, depression and burnout.

There is a need for good communication channels in the organisation, a grievance redressal mechanism and a way of addressing grapevine communication. A company’s management should be candid and open; just a sugar-coated letter from a CEO is not enough. There should be communication between different levels of employees. Everyone has a vital role to play in making the organisation successful.