How Can Transfer Learning Help You in 2020?
INTRODUCTION
Data mining and machine learning technology have already achieved remarkable success in many knowledge engineering areas, including classification, regression, and clustering. However, most machine learning methods work well only under a standard assumption: the training and test data are drawn from the same feature space and the same distribution. When the distribution changes, most models need to be rebuilt from scratch using newly collected training data. In many real-world applications, it is expensive or impossible to re-collect the required training data and rebuild the models. It would be desirable to reduce the need and effort to collect new training data. In such cases, transfer learning between task domains becomes attractive.
What is Transfer Learning?
In this blog, we give an overview of transfer learning for classification, regression, and clustering problems in machine learning. There has been a considerable amount of work on transfer learning for reinforcement learning in the machine learning literature; however, we focus only on transfer learning for classification, regression, and clustering problems, which are more closely related to data mining tasks. Through this blog, we hope to provide a useful resource for the data mining and machine learning community. Research on transfer learning has attracted growing attention since 1995 under different names: learning to learn, life-long learning, knowledge transfer, inductive transfer, multi-task learning, knowledge consolidation, context-sensitive learning, knowledge-based inductive bias, meta-learning, and incremental/cumulative learning.
What is The Need for Transfer Learning?
The need for transfer learning may arise when the data become outdated: data collected in one time period may not follow the same distribution in a later period. Consider, for example, indoor WiFi localization, which aims to detect a user's current location based on previously collected WiFi data. Calibrating WiFi data to build localization models in a large-scale environment is costly, because a user needs to label a large collection of WiFi signal data at each location. However, WiFi signal-strength measurements may be a function of time, device, or other dynamic factors, so a model trained in one time period or on one device may perform poorly when used for location estimation in another period or on another device. To reduce the re-calibration effort, we might wish to adapt the localization model trained in one time period (the source domain) to a new time period (the target domain), or to adapt the localization model trained on one mobile device (the source domain) to a new mobile device (the target domain).
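To make this concrete, here is a minimal sketch (in Python with scikit-learn, not taken from any real localization system) of the parameter-transfer idea: a classifier trained on source-period WiFi signal strengths keeps its learned weights and is fine-tuned with a handful of labeled target-period samples instead of being rebuilt from scratch. The access-point readings and locations are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_wifi_data(n, shift=0.0):
    """Synthetic WiFi signal strengths (3 access points) for 2 locations.
    `shift` mimics signal drift between time periods or devices."""
    X, y = [], []
    for label, center in enumerate([(-40, -70, -60), (-65, -45, -55)]):
        pts = rng.normal(loc=np.array(center) + shift, scale=3.0, size=(n, 3))
        X.append(pts)
        y.append(np.full(n, label))
    return np.vstack(X), np.concatenate(y)

# Source domain: plenty of labeled data from an earlier time period.
X_src, y_src = make_wifi_data(500, shift=0.0)
# Target domain: the signal distribution has drifted; labels are scarce.
X_tgt_small, y_tgt_small = make_wifi_data(10, shift=8.0)
X_tgt_test, y_tgt_test = make_wifi_data(200, shift=8.0)

clf = SGDClassifier(random_state=0)
clf.fit(X_src, y_src)
print("source-only accuracy on target:",
      accuracy_score(y_tgt_test, clf.predict(X_tgt_test)))

# Parameter transfer: keep the learned weights and take a few extra
# gradient steps on the small labeled target set instead of retraining.
for _ in range(20):
    clf.partial_fit(X_tgt_small, y_tgt_small)
print("after fine-tuning accuracy:",
      accuracy_score(y_tgt_test, clf.predict(X_tgt_test)))
```

The same pattern applies when the source model comes from a different device rather than a different time period.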
Conclusion
In this blog, we have reviewed several current trends in transfer learning. Transfer learning settings fall into three categories: inductive transfer learning, transductive transfer learning, and unsupervised transfer learning. Unsupervised transfer learning may attract more attention in the future.
Furthermore, the approaches to transferring knowledge can be classified into four contexts, based on "what to transfer": the instance-transfer approach, the feature-representation-transfer approach, the parameter-transfer approach, and the relational-knowledge-transfer approach. The first three contexts make an i.i.d. assumption on the data, while the last deals with transfer learning on relational data. Most of these approaches assume that the selected source domain is related to the target domain.
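As a rough illustration of the first of these, the instance-transfer approach, the sketch below reweights labeled source examples by how target-like they appear, using a simple logistic-regression domain classifier to estimate the weights, and then trains the task model with those weights. This is only one common way of realizing instance transfer, the data are synthetic, and all names in the snippet are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic covariate shift: source and target inputs come from shifted Gaussians.
X_src = rng.normal(loc=0.0, scale=1.0, size=(400, 2))
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)   # labels exist only in the source
X_tgt = rng.normal(loc=1.0, scale=1.0, size=(400, 2))  # unlabeled target inputs

# Domain classifier distinguishing source (0) from target (1) inputs.
X_dom = np.vstack([X_src, X_tgt])
d_dom = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])
dom_clf = LogisticRegression().fit(X_dom, d_dom)

# Importance weight for each source instance: p(target | x) / p(source | x),
# clipped so a few extreme ratios do not dominate training.
p_tgt = dom_clf.predict_proba(X_src)[:, 1]
weights = np.clip(p_tgt / (1.0 - p_tgt), 0.0, 20.0)

# Instance transfer: train the task model on source labels, but emphasize
# the source examples that most resemble the target domain.
task_clf = LogisticRegression().fit(X_src, y_src, sample_weight=weights)
```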
In the future, several important research problems remain to be addressed. First, how to avoid negative transfer is an open problem. As mentioned above, many proposed transfer learning algorithms implicitly assume that the source and target domains are related to each other in some sense.
Natural Language Processing:
Natural Language Processing (NLP) is a branch of AI that deals with the interaction between computers and humans using natural language. NLP aims to read, decipher, understand, and make sense of human languages for a specific purpose. This is made possible by relying on machine learning.
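A widely cited example of transfer learning in NLP, independent of any particular vendor's tooling, is reusing a language model pre-trained on large text corpora rather than training one from scratch. Assuming the Hugging Face transformers library is installed, a minimal sketch looks like this:

```python
# Minimal sketch: reuse a pre-trained NLP model instead of training from scratch.
# Assumes the Hugging Face `transformers` package is installed; the default
# model fetched by the "sentiment-analysis" pipeline is only an example.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("Transfer learning saves a lot of labelling effort."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```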
ONPASSIVE, the AI-powered smart business solution ecosystem, has been designing an NLP tool that can support multiple human languages across the machines that use it. If a pattern has to be drawn between machines and humans using NLP