Liquid Neural Networks

Neural networks are ML algorithms that mimic the structure of the human brain. Interconnected neurons in the network process and transmit information. They can recognise patterns and perform tasks such as natural language understanding, facial recognition and predictive analysis.

Neural networks are powerful AI tools, but they have limitations. They must be fed huge amounts of labelled training data, and they process data non-sequentially, which makes them inefficient at handling real-time data.

To overcome this, researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) introduced liquid neural networks (LNNs), which continue to learn on the job and not only while they are being trained.

LNNs

An LNN is a time-continuous recurrent neural network (RNN). It processes data sequentially, retains a memory of past inputs, and adjusts its behaviour in the light of new inputs. It can also handle inputs of variable length, which enhances the task-understanding capability of neural networks (NNs).

Architecture-wise, LNNs are different because they deal with the continuous processing of data over time (time series). LNNs can also change the number of neurons and connections per layer.
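To make the idea of time-continuous processing concrete, the hidden state of each liquid neuron is governed by an ordinary differential equation. A simplified sketch of the liquid time-constant formulation (the notation below is a generic rendering, not the exact published form) is:

\frac{dx(t)}{dt} = -\frac{x(t)}{\tau} + f\big(x(t), I(t); \theta\big)\,\big(A - x(t)\big)

Here x(t) is the neuron's hidden state, \tau is a base time constant, I(t) is the input, f is a small neural network with parameters \theta, and A is a bias vector. Because f depends on the input, the effective time constant of each neuron changes with the data it sees, which is what makes the network 'liquid'.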

LNNs were pioneered by Ramin Hasani, Mathias Lechner and others. They were inspired by the microscopic nematode C. elegans, a 1mm long worm with a peculiar nervous system that is capable of performing complex tasks. With just 302 neurons, it handles complex dynamics.

LNNs are fashioned after the interlinked electrical connections (impulses) of the worm, which helps in predicting its behaviour. The state of the system can be expressed at any given moment in time. This is a departure from the traditional neural network, where the system state corresponds only to specific points in time.
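To show how such a continuous-time state update might look in code, here is a minimal NumPy sketch of a liquid time-constant style cell integrated with a simple forward-Euler solver. This is an illustrative approximation; the class and parameter names (LiquidCell, step, hidden_size, and so on) are invented for this example, and published implementations use more careful ODE solvers and learn all the weights by gradient descent.

import numpy as np

class LiquidCell:
    """A toy liquid time-constant style cell: dx/dt = -x/tau + f(x, u) * (A - x)."""

    def __init__(self, input_size, hidden_size, tau=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.1, (hidden_size, input_size))    # input weights
        self.W_rec = rng.normal(0.0, 0.1, (hidden_size, hidden_size))  # recurrent weights
        self.b = np.zeros(hidden_size)                                 # bias
        self.A = np.ones(hidden_size)                                  # target state vector
        self.tau = tau                                                 # base time constant

    def f(self, x, u):
        # Small network whose output gates the state dynamics.
        return np.tanh(self.W_rec @ x + self.W_in @ u + self.b)

    def step(self, x, u, dt=0.1):
        # One forward-Euler step of the ODE; the effective time constant
        # depends on the current input, which is what makes the cell 'liquid'.
        gate = self.f(x, u)
        dxdt = -x / self.tau + gate * (self.A - x)
        return x + dt * dxdt

# Drive the cell with an irregularly sampled toy time series.
cell = LiquidCell(input_size=3, hidden_size=8)
x = np.zeros(8)
t = 0.0
for _ in range(50):
    dt = np.random.uniform(0.05, 0.2)   # possibly irregular time gaps between samples
    u = np.sin(t) * np.ones(3)          # toy input signal
    x = cell.step(x, u, dt)
    t += dt
print("Final hidden state:", np.round(x, 3))

In a trained LNN the fixed random weights above would be learned parameters, but the structure of the update, a state that evolves continuously and at a rate modulated by the input, stays the same.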

LNNs are thus dynamic, and learn and adapt continually. They do not need vast amounts of labelled training data to generate accurate results.

LNN models are also smaller, which makes it easier for researchers to explain how a decision is made. Fewer computations are needed, so the model is scalable. These are resilient models.

LNNs are used for time series data processing and forecasting, for image and video processing, and for natural language understanding.

The limitations of LNNs include the vanishing gradient problem, the challenge of parameter tuning, and the lack of literature on how they work.
