An Expectation-Propagation Based Approach for Transfer Learning of Reinforcement Learning Agents – The problem of finding an appropriate strategy from inputs that exhibit a goal is one of the most studied in reinforcement learning. This paper proposes a novel, fully automatic framework for learning strategy representations from such goal-exhibiting inputs without explicitly modeling the strategy itself. The framework is applied to two well-established examples: reward-based (Barelli-Perez) reinforcement learning, and reinforcement learning with reward-based policies. In the Barelli-Perez example, the reward signal is learned by a reinforcement learning algorithm that executes a reward-based policy; the agent can therefore act both as a reward-based policy maker and as a strategy maker. The framework rests on a probabilistic model of reward and a probabilistic model of strategy, inferred from the agent's actions via Expectation Propagation and illustrated on a randomized reinforcement learning problem.
A Multi-level Non-Rigid Image Registration Using Anisotropic Diffusion
The R Package K-Nearest Neighbor for Image Matching
Video Frame Interpolation with Deep Adaptive Networks
Neural Hashing Network for Code-Mixed Neural Word Sorting – The L1-Word Markov Model (MLM) is a powerful word-representation model. In this paper, we propose a multiple-word L1-Word Representation for Code-Mixed Neural Word Sorting (NWS) to solve the word-level optimization problem. Because the MLM applies to the code-level optimization problem, NWS can likewise be applied to a code-level optimization problem enriched with higher-level knowledge. We also test a new method that learns the optimal number of samples from the code-level task. The proposed method is implemented on top of the MLM for the code-level optimization problem. Experimental results show that the proposed model outperforms the state-of-the-art MNIST L1-Word Mixture Model trained on the same code-level optimization problem.