AGI systems must learn from experience, build models of their environment from the acquired knowledge, and use these models for prediction (and action). In philosophy this is called inductive inference; in statistics, estimation and prediction; in computer science it is addressed by machine learning. I will first review unsuccessful attempts at and unsuitable approaches toward a general theory of uncertainty and induction, including Popper's denial of induction, frequentist statistics, much of statistical learning theory, subjective Bayesianism, Carnap's confirmation theory, the data paradigm, eliminative induction, pluralism, and deductive and other approaches. I will then argue that Solomonoff's formal, general, complete, consistent, and essentially unique theory provably solves most of the issues that have plagued the other approaches. Some theoretical properties, extensions to (re)active learning agents, and practical approximations are mentioned in passing, but they are not the focus of this talk. I will conclude with some general advice to philosophers and AGI researchers.