
Monday, April 10, 2017

Effective sparsity control in deep belief networks using normal regularization term

Abstract

Deep network architectures are now widespread in machine learning. Deep belief networks (DBNs) use such architectures to build a powerful generative model from training data, and they can be applied to classification and feature learning. A DBN can be trained unsupervised, after which the learned features are suitable for a simple classifier (such as a linear classifier) trained on only a few labeled examples. Moreover, research shows that imposing sparsity on DBNs yields useful low-level feature representations from unlabeled data. Sparse representations have the property that the learned features are interpretable, i.e., they correspond to meaningful aspects of the input and capture factors of variation in the data. Several methods have been proposed for building sparse DBNs. In this paper, we propose a new method whose behavior varies with the deviation of the hidden-unit activations from a fixed (low) target value. In addition, the proposed regularization term has a variance parameter that controls how strongly sparsity is enforced. The results show that the new method achieves the best recognition accuracy on the test sets of datasets from different application domains (image, speech, and text), and it performs strongly across varying numbers of training samples, especially when only a few labeled samples are available.
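The abstract describes the regularization term's behavior but does not give its formula. The sketch below is one plausible reading, not the paper's actual formulation: a Gaussian-shaped penalty on the deviation of each hidden unit's mean activation from a low target p, whose variance-like parameter sigma sets how forcefully activations are pushed toward p. The function name normal_sparsity_penalty and the default values of p, sigma, and lam are illustrative assumptions.

import numpy as np

def normal_sparsity_penalty(hidden_probs, p=0.05, sigma=0.2, lam=0.1):
    """Hypothetical Gaussian-shaped sparsity regularizer (illustrative only).

    hidden_probs: (batch, n_hidden) activation probabilities of hidden units.
    p:     target (low) mean activation per hidden unit.
    sigma: variance-like parameter controlling the force of the sparsity push.
    lam:   overall regularization weight.

    Returns the penalty value and its gradient w.r.t. the mean activations;
    in an RBM this gradient would be chained through the hidden sigmoid and
    added to the contrastive-divergence weight update.
    """
    q = hidden_probs.mean(axis=0)        # mean activation of each hidden unit
    dev = q - p
    # Penalty rises as activations leave the neighborhood of p. Its gradient,
    # (dev / sigma^2) * exp(-dev^2 / (2 sigma^2)), peaks at |dev| = sigma and
    # decays for larger deviations, so the force depends on the deviation,
    # unlike the quadratic (p - q)^2 penalty of earlier sparse RBMs.
    penalty = lam * np.sum(1.0 - np.exp(-dev**2 / (2 * sigma**2)))
    grad_q = lam * (dev / sigma**2) * np.exp(-dev**2 / (2 * sigma**2))
    return penalty, grad_q

# Toy usage: mean activations far from p receive a deviation-dependent force.
rng = np.random.default_rng(0)
probs = rng.uniform(size=(64, 100))      # batch of 64, 100 hidden units
penalty, grad_q = normal_sparsity_penalty(probs)
print(penalty, grad_q.shape)             # scalar penalty, (100,) gradient

Under this reading, sigma directly tunes the "force degree" mentioned in the abstract: a small sigma concentrates a strong push on units whose mean activation is near the target, while units already far off target are pressed more gently than a quadratic penalty would press them.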



