Adaptive Neighbors and Neighbors by Nonconvex Surrogate Optimization – This work addresses a question that has received much interest in recent years: how can multiple independent variables be used to find an optimal learning policy for each variable? Unfortunately, the solution to this problem is difficult to generalize to an arbitrary fixed model given only the data set, and such problems are hard to solve in practice. In this paper we present an algorithm for efficiently learning to solve problems with multiple independent variables, such as learning from a single continuous variable, learning to predict the future, and learning to predict the past. Our algorithm applies to any continuous-variable model, including models over a single random variable. We demonstrate that it can be applied to a wide class of continuous variables, for example a multilevel function, a family of random variables such as a Markov random field, and a model-free continuous-variable model that learns to predict future outcomes from a continuous variable. Our algorithm is much faster than traditional multilevel algorithms, and we show that it is well suited to predicting the past from multiple independent variables.
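The abstract does not specify the surrogate it optimizes, so the sketch below is only a hedged illustration of what nonconvex surrogate (majorize-minimize) optimization over several independent variables might look like: a nonconvex per-variable loss is majorized by a convex quadratic, which is then minimized in closed form. The Welsch robust-regression loss, the curvature bound, and every name here are assumptions for illustration, not the paper's method.

```python
# A minimal, hypothetical sketch of majorize-minimize surrogate optimization
# over multiple independent variables.  Objective, step rule, and data are
# illustrative assumptions, not the paper's algorithm.
import numpy as np

def nonconvex_loss(theta, x, y):
    # Example nonconvex objective: robust (Welsch) regression loss.
    r = x @ theta - y
    return np.sum(1.0 - np.exp(-r ** 2))

def surrogate_step(theta, x, y):
    # Quadratic majorizer at the current iterate:
    #   g(t) = f(theta) + grad^T (t - theta) + (L/2) ||t - theta||^2,
    # minimized at t = theta - grad / L.  Since the per-residual second
    # derivative of the Welsch loss is at most 2, L = 2 * sigma_max(x)^2
    # is a valid curvature bound, so g >= f and each step decreases f.
    r = x @ theta - y
    grad = x.T @ (2.0 * r * np.exp(-r ** 2))
    L = 2.0 * np.linalg.norm(x, ord=2) ** 2
    return theta - grad / L

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 5))            # five independent variables
y = x @ np.array([1.0, -2.0, 0.0, 3.0, 0.5]) + 0.1 * rng.normal(size=200)

theta = np.zeros(5)
for _ in range(500):
    theta = surrogate_step(theta, x, y)
print(nonconvex_loss(theta, x, y))       # loss after surrogate descent
```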
Deep Neural Networks on Text: Few-Shot Learning Requires Convolutional Neural Networks
On the Runtime and Fusion of Two Generative Adversarial Networks
Tensor-Based Transfer Learning for Image Recognition
Convex Optimization for Scalable Dictionary Learning – We propose an efficient algorithm for inference of high-dimensional vectorized kernels. This enables kernel density estimation to be performed effectively without requiring any knowledge of the underlying structure of the data. We describe our method, which efficiently samples sparse solutions in a latent space and a kernel distribution in a deep architecture, and we describe a practical application to large-scale data analysis.
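Since the abstract gives no algorithmic detail, here is a minimal, hedged sketch of the baseline it builds on: plain Gaussian kernel density estimation in NumPy. The bandwidth rule, toy data, and function name are illustrative assumptions; the sparse latent-space sampling and deep kernel architecture the abstract mentions are not reconstructed here.

```python
# A minimal sketch of standard Gaussian kernel density estimation,
# the baseline the abstract's method extends.  Not the paper's algorithm.
import numpy as np

def gaussian_kde(data, query, bandwidth=None):
    """Evaluate a Gaussian KDE fit on `data` (n, d) at `query` points (m, d)."""
    n, d = data.shape
    # Scott's-rule factor; assumes roughly unit-variance data.
    h = bandwidth or n ** (-1.0 / (d + 4))
    # Pairwise squared distances between query and data points, shape (m, n).
    sq = ((query[:, None, :] - data[None, :, :]) ** 2).sum(-1)
    # Normalizer: n * (2*pi)^(d/2) * h^d for an isotropic Gaussian kernel.
    norm = n * (np.sqrt(2 * np.pi) * h) ** d
    return np.exp(-0.5 * sq / h ** 2).sum(axis=1) / norm

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2))      # toy 2-D standard-normal sample
query = np.zeros((1, 2))
print(gaussian_kde(data, query))      # should be near 1/(2*pi) ~ 0.159
```

A vectorized implementation like this is what "vectorized kernels with high dimensions" plausibly refers to: the (m, n) distance matrix is computed with broadcasting rather than explicit loops, which is the usual scalability bottleneck that sparse sampling schemes aim to relieve.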