Efficient Learning on a Stochastic Neural Network – Recurrent neural network (RNN) encoders are a popular way to learn rich representations of visual objects and to generate large amounts of data. However, deep neural networks (DNNs) still do not directly capture the object representation. In this paper, we show how to generate a deep RNN by transforming an existing one into a model of the object representation. We further show that this transformation can be used to train a model, leveraging the fact that a deep DNN can be trained so that its training volume is comparable to that of the input image or the corresponding dataset. We evaluate on the MNIST dataset and show that our model produces better results than an existing deep DNN model.
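The abstract describes training an RNN on MNIST. As a hedged illustration of the general setup (not the paper's transformation method), a vanilla RNN can classify a 28x28 digit by treating each pixel row as one time step; the weight names below (Wxh, Whh, Why) are generic placeholders, not notation from the paper:

```python
import numpy as np

def rnn_forward(image, Wxh, Whh, Why, bh, by):
    """Run a vanilla RNN over an image row by row and return class logits.

    Each 28-pixel row is one time step, so a 28x28 digit becomes a
    length-28 sequence. This is a generic sketch, not the paper's model.
    """
    h = np.zeros(Whh.shape[0])          # hidden state, one entry per hidden unit
    for row in image:                   # one time step per image row
        h = np.tanh(Wxh @ row + Whh @ h + bh)
    return Why @ h + by                 # unnormalized scores for the 10 digit classes

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 28, 16, 10         # row width, hidden size, digit classes
params = (rng.normal(0, 0.1, (n_hid, n_in)),    # Wxh: input-to-hidden
          rng.normal(0, 0.1, (n_hid, n_hid)),   # Whh: hidden-to-hidden
          rng.normal(0, 0.1, (n_out, n_hid)),   # Why: hidden-to-output
          np.zeros(n_hid), np.zeros(n_out))     # biases

logits = rnn_forward(rng.random((28, 28)), *params)
print(logits.shape)  # (10,)
```

In practice the weights would be learned by backpropagation through time; the forward pass above is only the inference skeleton.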
In this paper, we tackle multi-class, multi-task classification. We first propose a classification method that handles unseen classes, and then extend it to problems with nonlinearities. The proposed method is based on optimizing the model parameters, and our main goal is to provide efficient learning methods for classification. Our main result is improved classification performance on a large dataset collected using ImageNet. Furthermore, we use an adaptive hashing algorithm to estimate the hash-function parameters with minimal loss, and we use it to reduce variance. Experimental results demonstrate the effectiveness of our approach.
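The abstract mentions a hashing algorithm for parameter estimation but does not specify it. As a hedged point of reference, the standard (non-adaptive) "hashing trick" maps arbitrary feature names into a fixed-size vector; an adaptive variant such as the one described would additionally tune the hash-function parameters to minimize loss, which this baseline does not do:

```python
import hashlib

def hash_features(tokens, n_buckets=32):
    """Standard hashing trick: bucket arbitrary string features into a
    fixed-size vector. A signed hash flips the contribution of roughly
    half the features, which keeps collisions unbiased on average.
    This is a generic baseline, not the paper's adaptive algorithm.
    """
    vec = [0.0] * n_buckets
    for tok in tokens:
        digest = hashlib.md5(tok.encode()).digest()
        idx = int.from_bytes(digest[:4], "little") % n_buckets  # bucket index
        sign = 1.0 if digest[4] % 2 == 0 else -1.0              # sign hash
        vec[idx] += sign
    return vec

v = hash_features(["cat", "dog", "cat"])
print(len(v))  # 32
```

Because the mapping is deterministic, the same feature always lands in the same bucket, so the hashed vectors can feed a linear classifier without storing a vocabulary.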
An FFT-Based Approach to Automatic Calibration on HPs
A Novel Approach to Automatic Seizure Detection
Efficient Learning on a Stochastic Neural Network
Makeshift Dictionary Learning on Discrete-valued Texture Pairings
Tensor-Based Transfer Learning for Image Recognition