This research contributes to the field of dynamic learning and classification in both stationary and non-stationary environments. The goal of this PhD is to define a new classification framework that can cope with a very small learning dataset at the beginning of the process and adjust itself to the variability of the data arriving in a stream. To that end, we propose a solution based on a combination of independent one-class SVM classifiers, each with its own incremental learning procedure. As a consequence, no classifier is sensitive to cross-influences arising from the configuration of the other classifiers' models. The originality of our proposal lies in reusing the former knowledge kept in the SVM models (represented by all the support vectors found so far) and combining it with the new data arriving incrementally from the stream.

The proposed classification model (mOC-iSVM) comes in three variants that differ in how the existing models are used at each time step: mOC-iSVM.AP selects the previous support vectors according to their age; mOC-iSVM.EP selects them according to their efficiency; and mOC-iSVM.nB selects vectors from the n best models in the history.

Our contribution stands out in a state of the art where no existing solution simultaneously handles concept drift, the addition or deletion of concepts, and the fusion or division of concepts, while also offering a convenient way to interact with the user. Experiments in both stationary and non-stationary environments yield classification scores close to, or even better than, those obtained with the currently most successful incremental classifiers. Furthermore, unlike our method, most of the other dynamic approaches are applicable only to particular environments.
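The core mechanism described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' implementation): one independent one-class SVM per class, where each classifier's "memory" is the set of support vectors from its last fit, combined with each new batch from the stream. All class and parameter names here are illustrative assumptions.

```python
# Hedged sketch of the multi one-class incremental SVM idea:
# one independent OneClassSVM per class; incremental learning keeps
# only the support vectors as memory of past data.
import numpy as np
from sklearn.svm import OneClassSVM


class IncrementalOneClassSVM:
    """One class's model; its memory is the last fit's support vectors."""

    def __init__(self, nu=0.1, gamma="scale"):
        self.nu, self.gamma = nu, gamma
        self.memory = None   # support vectors kept from previous fits
        self.model = None

    def partial_fit(self, X_new):
        # Combine former knowledge (support vectors) with the new batch.
        X = X_new if self.memory is None else np.vstack([self.memory, X_new])
        self.model = OneClassSVM(nu=self.nu, gamma=self.gamma).fit(X)
        self.memory = self.model.support_vectors_  # retain support vectors only
        return self

    def score(self, X):
        # Signed distance to the one-class boundary (higher = more inlier).
        return self.model.decision_function(X)


class MultiOneClassSVM:
    """Ensemble of independent per-class one-class SVMs (mOC-iSVM-style)."""

    def __init__(self, **svm_params):
        self.classifiers = {}
        self.svm_params = svm_params

    def partial_fit(self, X, y):
        # Each class's classifier is updated independently, so no
        # cross-influence between class models.
        for label in np.unique(y):
            clf = self.classifiers.setdefault(
                label, IncrementalOneClassSVM(**self.svm_params))
            clf.partial_fit(X[y == label])
        return self

    def predict(self, X):
        labels = list(self.classifiers)
        scores = np.column_stack([self.classifiers[l].score(X) for l in labels])
        return np.asarray(labels)[scores.argmax(axis=1)]
```

Note that the thesis's AP/EP/nB variants would differ in *which* past support vectors are kept in `memory` (by age, by efficiency, or from the n best historical models); the sketch above keeps all of them, which is the simplest baseline.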