Structured Multi-Label Learning for Text Classification – This paper proposes a new method for pairwise multi-label classification of images. The proposed learning model, named Label-Label Multi-Label Learning (LML), encodes the visual features of each image together with its labels. The main objective is to learn which labels are relevant to a given input. To this end, the LML model takes the labels as additional inputs and is trained with a joint ranking objective. Since the relative importance of labels matters for classification, we design a pairwise multi-label learning method. We instantiate two LML variants on multi-label ImageNet data, combining a deep CNN (VGGNet) with a deep latent-space model. The two networks are connected through a dual manifold and jointly optimized. Through simulation experiments, we demonstrate that the proposed network considerably outperforms prior state-of-the-art approaches, including those based on fully supervised learning.
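As a rough illustration of the pairwise label-ranking idea, the sketch below embeds inputs into a space shared with learnable label embeddings and trains with a hinge-style ranking loss that pushes every relevant label above every irrelevant one. The class name `PairwiseMultiLabelNet`, the dimensions, and the specific loss are illustrative assumptions, not the abstract's actual LML formulation.

```python
# Minimal sketch of pairwise multi-label ranking (assumed layout, not the paper's exact LML model).
import torch
import torch.nn as nn


class PairwiseMultiLabelNet(nn.Module):
    """Embeds inputs into a space shared with learnable label embeddings and scores each label."""

    def __init__(self, in_dim: int, num_labels: int, embed_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim)
        )
        # One learnable embedding per label (the "label" side of the model).
        self.label_embed = nn.Parameter(torch.randn(num_labels, embed_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scores: higher means the label is judged more relevant to the input.
        return self.encoder(x) @ self.label_embed.t()


def pairwise_ranking_loss(scores: torch.Tensor, targets: torch.Tensor, margin: float = 1.0):
    """Hinge loss that pushes every positive label's score above every negative label's score."""
    pos = targets.bool()
    losses = []
    for s, p in zip(scores, pos):
        if p.any() and (~p).any():
            # All positive-negative score differences for this example.
            diff = margin - (s[p].unsqueeze(1) - s[~p].unsqueeze(0))
            losses.append(torch.clamp(diff, min=0).mean())
    return torch.stack(losses).mean() if losses else scores.sum() * 0.0


# Toy usage: 8 examples, 32-dimensional features, 5 labels.
model = PairwiseMultiLabelNet(in_dim=32, num_labels=5)
x = torch.randn(8, 32)
y = (torch.rand(8, 5) > 0.5).float()
loss = pairwise_ranking_loss(model(x), y)
loss.backward()
```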
We propose a novel method for automatically learning low-dimensional feature representations from high-resolution video. Estimating such feature representations is a challenging research problem. We present a novel method that jointly estimates features and classifies high-resolution videos using deep Convolutional Neural Network (CNN)-based and deep LSTM-based components. The CNNs are trained in a semi-supervised manner, as in many recent works. The LSTM forms a separate learning layer, which can also learn features from low-resolution video. To our knowledge, this is the first work to learn features for new low-resolution videos. We first develop a simple yet efficient feature classifier for low-resolution video based on deep CNNs. Next, we formulate this as a multi-class learning problem; our approach is able to learn features for both short and long-duration videos. Our formulation improves upon state-of-the-art supervised learning results on both datasets.
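For concreteness, a minimal CNN + LSTM video classifier along these lines might look as follows. The layer sizes, the frame resolution, and the use of the LSTM's last hidden state are assumptions made for illustration; the semi-supervised training scheme mentioned in the abstract is omitted.

```python
# Minimal sketch of a CNN + LSTM video classifier (assumed architecture, not the abstract's exact model).
import torch
import torch.nn as nn


class CNNLSTMClassifier(nn.Module):
    """Per-frame CNN features fed to an LSTM; the clip is classified from the last hidden state."""

    def __init__(self, num_classes: int, feat_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, time, channels, height, width)
        b, t, c, h, w = video.shape
        feats = self.cnn(video.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(feats)   # last hidden state summarizes the clip
        return self.head(h_n[-1])        # per-clip class logits


# Toy usage: 2 clips of 8 low-resolution frames (32x32), 10 classes.
model = CNNLSTMClassifier(num_classes=10)
logits = model(torch.randn(2, 8, 3, 32, 32))
loss = nn.CrossEntropyLoss()(logits, torch.tensor([1, 4]))
loss.backward()
```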
Convolutional Neural Networks for Image Analysis
Efficient Scene-Space Merging with Recurrent Loop Regression
Structured Multi-Label Learning for Text Classification
An Efficient Sparse Inference Method for Spatiotemporal Data
A New Approach to Online Multi-Camera Tracking