Meta-Learning in Machine Learning
In this blog, we will talk about meta-learning in machine learning, a subfield of artificial intelligence. This post covers the basics of meta-learning, so let's start with the definition of metadata.
We have all heard of metadata, which means "data about data". The same idea applies in artificial intelligence: in machine learning there is a related concept called "meta-learning", and that is our main topic. So let's jump into it.
Meta-learning is one of the most useful concepts in machine learning. A meta-learning algorithm is an algorithm that learns how to learn: its purpose is to improve the learning process itself, typically by using one model to improve the training of another. The types of meta-learning are as follows:
Types of Meta-Learning
1. Optimizer - Meta-learning is commonly used to optimize the performance of an already existing neural network. Optimizer meta-learning methods usually work by tweaking the hyperparameters of a second neural network in order to improve the performance of the base neural network. The result is that the target network should become better at performing the task it is being trained on. One example of a meta-learning optimizer is the use of a network to improve gradient-descent results.
2. Metric - Metric-based meta-learning uses neural networks to determine whether a metric is being used effectively and whether the network or networks are hitting the target metric. In some metric meta-learning approaches, the networks are trained to learn a metric space. The same metric is used across different domains, and if the networks diverge from the metric they are considered to be failing.
3. Recurrent Model - Recurrent-model meta-learning is the application of meta-learning techniques to recurrent neural networks (RNNs) and similar long short-term memory (LSTM) networks. This method works by training the RNN/LSTM model to sequentially learn a dataset and then using this trained model as a basis for another learner. The meta-learner takes on board the specific optimization algorithm that was used to train the initial model. The inherited parameterization of the meta-learner allows it to initialize and converge quickly, while still being able to update for new scenarios.
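The metric-based idea above can be sketched with a toy example in the style of prototypical networks. This is a minimal illustration, not a full implementation: the "embeddings" are just 2-D points, and the function names and data are invented for this sketch. Each class in the support set is summarized by a prototype (its mean embedding), and queries are classified by the shared Euclidean metric.

```python
import numpy as np

def prototypes(support_x, support_y, n_classes):
    """Mean embedding (prototype) per class, computed from the support set."""
    return np.stack([support_x[support_y == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_x, protos):
    """Assign each query point to the nearest prototype under the shared
    Euclidean metric."""
    dists = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy 2-D "embeddings": two classes clustered around different centers.
rng = np.random.default_rng(0)
support_x = np.concatenate([rng.normal(0, 0.1, (5, 2)),
                            rng.normal(3, 0.1, (5, 2))])
support_y = np.array([0] * 5 + [1] * 5)
protos = prototypes(support_x, support_y, n_classes=2)

query = np.array([[0.1, -0.1], [2.9, 3.1]])
print(classify(query, protos))  # → [0 1]
```

Because the metric itself is fixed and shared, adapting to a new task only requires computing new prototypes from a handful of examples.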
Working of Meta-Learning
Now let's see how meta-learning works. Meta-learning tasks involve passing the parameters of the primary network into a second network, the optimizer.
There are two training processes in meta-learning. The meta-learning model is typically trained after several steps of training on the base model have been carried out. For example, after three or four steps of training on the base model, a meta-loss is computed. Once the meta-loss is computed, gradients are calculated for each meta-parameter, and the meta-parameters in the optimizer are then updated.
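The loop just described can be sketched on a deliberately tiny problem. This is an illustrative sketch only, with made-up numbers: the "base model" is a single weight minimizing (w - 2)^2, the meta-parameter is the learning rate, and the meta-gradient is estimated by finite differences rather than by backpropagating through the inner steps, to keep the code short.

```python
def inner_train(w, lr, steps=4):
    """Run a few inner steps of gradient descent on f(w) = (w - 2)^2."""
    for _ in range(steps):
        w = w - lr * 2 * (w - 2)   # gradient of (w - 2)^2 is 2(w - 2)
    return w

def meta_loss(lr):
    """Loss of the base model after the inner-loop steps."""
    w = inner_train(w=0.0, lr=lr)
    return (w - 2) ** 2

# Outer loop: estimate the gradient of the meta-loss with respect to the
# meta-parameter (here, the learning rate) and update it.
lr, eps, meta_lr = 0.05, 1e-4, 0.01
for _ in range(200):
    grad = (meta_loss(lr + eps) - meta_loss(lr - eps)) / (2 * eps)
    lr -= meta_lr * grad

print(round(lr, 3), round(meta_loss(lr), 6))
```

After the outer loop, the learned learning rate produces a much smaller meta-loss than the initial one, which is exactly the "learning to learn" effect described above.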
One possibility for computing the meta-loss is to complete the forward training pass of the initial model and then combine the losses that have already been computed. The meta-optimizer could itself be another meta-learner, though at some point a concrete optimizer like Adam or SGD must be used.
Many deep learning models have hundreds of thousands or even millions of parameters. Creating a meta-learner with a completely separate set of parameters for each of them would be computationally expensive, so a technique known as coordinate-sharing is often used. Coordinate-sharing means designing the meta-learner so that it learns a single update rule for one parameter of the base model and simply reuses that rule in place of a separate rule for every other parameter. The result is that the number of parameters the optimizer possesses does not depend on the number of parameters in the model.
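Coordinate-sharing can be sketched as follows. This is a toy sketch, not a real learned optimizer: the class name is invented, and the single shared meta-parameter is just a step-size multiplier. The point is that the same per-coordinate rule is reused element-wise, so the optimizer's own parameter count stays constant no matter how large the model is.

```python
import numpy as np

class SharedCoordinateOptimizer:
    """Toy optimizer with coordinate-sharing: one scalar meta-parameter
    is applied element-wise to every parameter tensor, so the optimizer's
    size is independent of the model's size."""

    def __init__(self, scale=0.1):
        self.scale = scale  # the single shared meta-parameter

    def step(self, params, grads):
        # The same update rule is reused for every coordinate of every tensor.
        return [p - self.scale * g for p, g in zip(params, grads)]

# The same optimizer works unchanged for models of any size:
opt = SharedCoordinateOptimizer(scale=0.1)
small = opt.step([np.ones(3)], [np.ones(3)])
large = opt.step([np.ones((100, 100))], [np.ones((100, 100))])
print(small[0])          # each entry moves from 1.0 toward the target
print(large[0].shape)
```

A real coordinate-shared meta-learner (e.g. an LSTM optimizer) would replace the scalar multiplier with a small learned network, but the sharing pattern is the same.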
Thanks & Regards,