Active Learning and Sparsity Constraints over Sparse Mixture Terms – We report the first evaluation of a convolutional neural network on a classification problem arising in a real-world clinical scenario. Predicting a patient's clinical outcome involves several subtasks (classifying the subject and detecting a disease), and the accuracy of each subtask typically depends on the type of prediction. To improve the overall effectiveness of the system, we propose a novel and flexible feature-vector representation of the task-related information and use it to learn an efficient discriminant analysis for this task. Classification accuracy is evaluated on four real-world data sets. Results show that the proposed method outperforms the state of the art in predicting the presence and severity of disease in the disease-prepared dataset, achieving a classification accuracy of 73% on that data set.
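The abstract does not specify which discriminant is learned over the feature vectors; as a minimal sketch under that assumption, a two-class linear discriminant analysis (shared covariance, per-class means) can be written in NumPy as follows. All names here (`fit_lda`, `predict_lda`) are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def fit_lda(X, y):
    """Fit a linear discriminant: per-class means plus a pooled covariance."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    # Pooled within-class covariance, lightly regularized for stability.
    cov = sum(np.cov(X[y == c], rowvar=False) * (np.sum(y == c) - 1)
              for c in classes)
    cov = cov / (len(y) - len(classes)) + 1e-6 * np.eye(X.shape[1])
    return classes, means, np.linalg.inv(cov)

def predict_lda(model, X):
    """Assign each row of X to the class with the highest linear score."""
    classes, means, inv = model
    # Score for class c: x^T S^-1 mu_c - 0.5 * mu_c^T S^-1 mu_c
    scores = np.stack(
        [X @ inv @ means[c] - 0.5 * means[c] @ inv @ means[c] for c in classes],
        axis=1)
    return classes[np.argmax(scores, axis=1)]
```

On well-separated feature vectors this linear rule recovers the class labels; the 73% figure reported above would correspond to a harder, overlapping clinical feature space.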

We propose a theoretical framework for the problem of maximizing the expected payoff over actions. The framework is based on a non-parametric setting in which a decision probability distribution is derived from a set of outcomes of actions, each with an expected reward function. The goal is to optimize this reward distribution given the outcomes of a single action, such as a click or a response, and then to derive a new optimal utility function, termed optimal max(1).
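The abstract leaves the decision rule unspecified; assuming the non-parametric estimate is simply the empirical mean reward per action, the selection step can be sketched as below. The function name `optimal_max` and the dict-based interface are illustrative assumptions, not the paper's definition of optimal max(1).

```python
import numpy as np

def optimal_max(outcomes):
    """Pick the action with the highest empirical expected payoff.

    outcomes: dict mapping each action to its list of observed rewards,
    e.g. 1.0 for a click/response and 0.0 otherwise (assumed encoding).
    """
    expected = {a: float(np.mean(r)) for a, r in outcomes.items()}
    best = max(expected, key=expected.get)
    return best, expected
```

For example, `optimal_max({"a": [0, 1, 1], "b": [0, 0, 1]})` selects action `"a"`, whose empirical expected reward (2/3) exceeds that of `"b"` (1/3).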

Fault Tolerant Boolean Computation and Randomness

Dictionary Learning, Super-Resolution and Texture Matching with Hashing Algorithm


The Power of Linear-Graphs: Learning Conditionally in Stochastic Graphical Models

A Simple Analysis of the Max Entropy Distribution