Hierarchical Constraint Programming with Constraint Reasoning – This paper proposes a new method for extracting feature representations from probabilistic models. It assumes that the model is parametrized and that the input data is modeled as a probabilistic data structure. We show that, given a strong inference structure, we obtain a probabilistic representation of the model, and that this representation can be used to produce natural visualizations such as semantic annotations and informative representations. The method is efficient and applicable to image classification and image captioning. Experimental results show that our method outperforms state-of-the-art classification methods, reaching over 70% accuracy.
The recently proposed feature learning method for the Gaussian process (GP) achieves much higher accuracy than previous gradient-descent-based GP methods. This paper presents a first step towards a general GP methodology and shows that the GP can be applied efficiently to the MNIST data set. The GP learns a new image using a sparse matrix and a vectorial model. The first layer of the GP consists of three components: a deep convolutional neural network, a matrix representing the input, and a dictionary representation of the image. The output of the generator is then sampled from the dictionary representation as a weighted linear combination of the input and the dictionary representation. The first layer is trained on an initial MNIST dataset with a loss function that estimates the loss of the generator's gradient. Then, in a two-step learning procedure, the GP learns a new MNIST dataset in which the generator is sampled from the dictionary representation, and the gradient of the generator is computed as a weighted sum of the data and the dictionary representation. Finally, the feature learning method is applied to MNIST for its classification task.
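The pipeline above is only loosely specified; a minimal NumPy sketch of one plausible reading — an output formed as a weighted linear combination of the input image and a dictionary reconstruction — might look like the following. All shapes, the encoding rule, and the mixing factor alpha are assumptions, not details given by the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 28x28 MNIST images flattened to 784-dim vectors,
# with a dictionary of 64 atoms.
n_pixels, n_atoms = 784, 64

x = rng.random(n_pixels)             # input image vector
D = rng.random((n_pixels, n_atoms))  # dictionary representation

# Encode: a weight per dictionary atom (here, a simple scaled projection;
# the abstract does not say how the weights are obtained).
w = D.T @ x / n_pixels

# Output sampled as a weighted linear combination of the input and the
# dictionary reconstruction, mixed by a hypothetical factor alpha.
alpha = 0.5
output = alpha * x + (1 - alpha) * (D @ w)

# "Gradient" of the generator as a weighted sum of the data and the
# dictionary reconstruction, per the two-step description above.
grad = 0.5 * x + 0.5 * (D @ w)

print(output.shape)  # (784,)
```

In practice the dictionary D and the weights w would be learned (e.g. by minimizing the stated loss over MNIST), rather than drawn at random as in this sketch.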
Learning and Visualizing the Construction of Visual Features from Image Data
Learning from non-deterministic examples
Hierarchical Constraint Programming with Constraint Reasoning
A Semantics-Driven Approach to Evaluation of Sports Teams’ Ratings from Draft Descriptions
Learning to rank with hidden measures