American Statistical Association

Support vector machines were developed in the late 1990s for solving the general classification problem, in which the competing nonparametric hypotheses are described by independent and identically distributed sampled data. Support vector machines are based on statistical learning (Vapnik-Chervonenkis) theory, which describes the necessary and sufficient conditions for learning algorithms, i.e. the minimization of the empirical risk, to converge to the minimum risk, and which gives performance bounds for finite samples. Support vector machines generalize the optimal hyperplane decision rule by nonlinearly transforming the data into a high-dimensional Hilbert space and describing the solution in terms of the kernel of that space.
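As a minimal sketch of the kernel idea, the SVM decision rule can be written as a kernel expansion over the support vectors, f(x) = Σᵢ αᵢ yᵢ k(xᵢ, x) + b, so the high-dimensional transformation never has to be computed explicitly. The support vectors, multipliers, and bias below are hand-picked toy values for illustration, not the output of an actual training procedure:

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian (RBF) kernel: k(x, z) = exp(-gamma * ||x - z||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)

def svm_decision(x, support_vectors, labels, alphas, bias):
    """Kernel expansion of the SVM decision function:
    f(x) = sum_i alpha_i * y_i * k(x_i, x) + b.
    The sign of f(x) gives the predicted class."""
    return sum(a * y * rbf_kernel(sv, x)
               for a, y, sv in zip(alphas, labels, support_vectors)) + bias

# Toy example: two hand-picked "support vectors" with opposite labels.
svs = [(0.0, 0.0), (2.0, 2.0)]
ys = [+1, -1]
alphas = [1.0, 1.0]
print(svm_decision((0.1, 0.0), svs, ys, alphas, bias=0.0) > 0)  # True: near the +1 vector
```

Only kernel evaluations between pairs of points appear in the decision function; this is what lets the method operate implicitly in the Hilbert space induced by the kernel.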
Gaussian process classifiers are Bayesian nonparametric machine-learning tools for solving the same discriminative classification problem. They assume that a Gaussian process prior describes the latent classification function; the observations then shape this prior to yield a posterior probability estimate for each sample. Gaussian process classifiers, like support vector machines, produce nonlinear decision functions using nonlinear parametric covariance functions, whose parameters can be learnt by maximum likelihood or marginalized out.
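The two ingredients can be sketched concretely: a covariance function encodes the prior over latent functions (nearby inputs covary strongly, favouring smooth functions), and a sigmoid link squashes a latent value into a class probability. This is only an illustrative fragment with assumed hyperparameter values, not a full classifier, which would also require approximating the posterior (e.g. by the Laplace approximation or expectation propagation):

```python
import math

def rbf_cov(x, z, variance=1.0, lengthscale=1.0):
    """Squared-exponential covariance k(x, z) = s^2 * exp(-(x - z)^2 / (2 l^2)).
    variance and lengthscale are the hyperparameters that can be learnt
    by maximizing the marginal likelihood, or marginalized out."""
    return variance * math.exp(-((x - z) ** 2) / (2 * lengthscale ** 2))

def sigmoid(f):
    """Logistic link squashing a latent GP value into a probability."""
    return 1.0 / (1.0 + math.exp(-f))

# Prior covariance matrix of the latent function at three inputs:
# closer inputs are more strongly correlated under the prior.
xs = [0.0, 0.5, 3.0]
K = [[rbf_cov(a, b) for b in xs] for a in xs]
print(K[0][1] > K[0][2])  # True: 0.0 and 0.5 covary more than 0.0 and 3.0
print(sigmoid(2.0))       # a latent value of 2.0 maps to probability ~0.88
```

The probability output, rather than a bare class label, is what distinguishes the Gaussian process classifier from the SVM decision function.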
In this talk, we present both discriminative machine-learning procedures. We first describe support vector machines and how they implement the structural risk minimization principle. We then introduce Gaussian process classifiers from their estimation counterpart and show how they produce accurate posterior probability estimates. We complete our presentation with some applications that have popularized these discriminative learning methods within the engineering and computer science communities.
Date:  Tuesday, September 8, 2009 

Time:  3:00-4:00 P.M. 
Location: 
New York State Psychiatric Institute
1051 Riverside Drive, 6th Floor Multipurpose Room (6602), New York, New York