Both OpenAI and Google have taken steps to enhance the emotional intelligence of their generative AI models, which are large language models (LLMs). These models are known to make factual errors, and when they are used for search, the cost of those errors is high. Yet when a model also offers emotional companionship, its factual lapses may do less to erode the trust users place in it.
LLMs make mistakes; the tendency is inherent in the data they are trained on, and their designers rank fluency above accuracy. These same models are nonetheless good at mimicking empathy, having learned from text scraped from the web, including emotive reactions on social media. Other training inputs include TV show scripts, dialogue from novels, and research papers on emotional intelligence. Together, these make the models sound empathetic.
Of course, this empathy is synthetic, but it still counts. It can comfort a child who is afraid of the dark, assuring them that there is no cause for worry and that there are plenty of things they can do to feel safe and comfortable. For people who need support, AI can adequately fill a void.
AI cannot compensate for a comforting touch, or for knowing when to speak and when to listen in real interactions; it is no panacea for loneliness. Still, AI has reached a stage where it can show better emotional skill than grasp of facts.