Predicting a previously unseen example from training examples is unsolvable without additional assumptions about the nature of the task at hand. A learner’s performance depends crucially on how its internal assumptions, or inductive biases, align with the task. I will present a theory that describes the inductive biases of neural networks using kernel methods and statistical mechanics. This theory elucidates an inductive bias to explain data with "simple functions", which are identified by solving a related kernel eigenfunction problem on the data distribution. This notion of simplicity allows us to characterize whether a network is compatible with a learning task, facilitating good generalization performance from a small number of training examples. I will present applications of this theory to artificial and biological neural systems, and to real datasets.
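The kernel eigenfunction problem mentioned above can be sketched numerically: drawing samples from the data distribution, forming the empirical kernel matrix, and eigendecomposing it approximates the kernel's eigenfunctions and eigenvalues (the Nyström method). This is a minimal illustrative sketch, not the specific analysis in the talk; the RBF kernel and Gaussian data are stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))  # samples from an assumed data distribution

def rbf_kernel(A, B, length_scale=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * length_scale ** 2))

# Empirical kernel matrix; dividing by the sample count gives a
# Monte Carlo approximation of the kernel eigenvalue problem on
# the data distribution.
K = rbf_kernel(X, X)
evals, evecs = np.linalg.eigh(K / len(X))
evals, evecs = evals[::-1], evecs[:, ::-1]  # sort eigenvalues descending

# Large-eigenvalue modes play the role of the "simple functions":
# a target that projects mostly onto them is learnable from few examples.
print(evals[:5])
```

In this picture, compatibility between a network and a task corresponds to the target function having most of its weight on the top (large-eigenvalue) kernel modes.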