Abstract:
This thesis develops a new vantage point for attacking longstanding problems in machine learning: we use tools from computational complexity theory to make progress on problems in computational learning theory. Our methods yield the fastest and most expressive algorithms to date for learning several fundamental concept classes:

* We show that any s-term DNF over n variables can be computed by a polynomial threshold function of degree O(n^(1/3) log s). As an immediate consequence we obtain the fastest known DNF learning algorithm, which runs in time 2^(O(n^(1/3))).

* We give the first polynomial-time algorithm for learning an intersection of a constant number of halfspaces under the uniform distribution to within any constant error parameter. We also give the first quasipolynomial-time algorithm for learning any function of a constant number of halfspaces with polynomially bounded weights under any distribution.

* We give an algorithm for learning constant-depth polynomial-size circuits augmented with majority gates under the uniform distribution, using random examples only. For circuits containing a polylogarithmic number of majority gates the algorithm runs in quasipolynomial time. Under a suitable cryptographic assumption, we show that these are the most expressive circuits that admit a non-trivial learning algorithm.

Our approach relies heavily on giving novel representations of well-known concept classes via complexity-theoretic reductions. We exploit the fact that many results in computational learning theory have a complexity-theoretic analogue or implication. As such, we also obtain new results in computational complexity, including (1) a proof that the 30-year-old lower bound due to Minsky and Papert [88] on the degree of a perceptron computing a DNF formula is tight, and (2) improved constructions of pseudorandom generators, mathematical objects which play a fundamental role in cryptography and derandomization.
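The first result concerns sign-representing DNF formulas by low-degree polynomials. As a minimal illustration of the idea (a toy case, not the O(n^(1/3) log s) construction from the thesis): the 2-term DNF (x1 AND x2) OR (x3 AND x4) is computed by the degree-2 polynomial threshold function sign(x1*x2 + x3*x4 - 1/2) over {0,1}^4, which the sketch below verifies by brute force.

```python
from itertools import product

def dnf(x1, x2, x3, x4):
    # 2-term DNF: (x1 AND x2) OR (x3 AND x4)
    return (x1 and x2) or (x3 and x4)

def ptf(x1, x2, x3, x4):
    # Degree-2 polynomial threshold function sign-representing the DNF above:
    # p(x) = x1*x2 + x3*x4 - 1/2; output True iff p(x) > 0.
    # If some term is satisfied, its product contributes 1, so p >= 1/2 > 0;
    # if no term is satisfied, every product is 0 and p = -1/2 < 0.
    return x1 * x2 + x3 * x4 - 0.5 > 0

# Exhaustively check agreement on all 16 Boolean inputs.
assert all(bool(dnf(*x)) == ptf(*x) for x in product([0, 1], repeat=4))
```

Once a class is known to have degree-d polynomial threshold functions, learning it reduces to finding a consistent halfspace over the roughly n^O(d) monomials of degree at most d (e.g. by linear programming), which is where running times of the form 2^(O(n^(1/3))) come from when d = O(n^(1/3) log s).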

Description:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 2002. Includes bibliographical references (leaves 127-138).