Andrew Critch, a research fellow at the Machine Intelligence Research Institute, describes a new model of deductively limited reasoning developed by Scott Garrabrant (https://intelligence.org/?p=14538).
Consider a setting where a reasoner observes a deductive process (such as a community of mathematicians and computer programmers) and waits for proofs of various logical claims (such as the abc conjecture, or “this computer program has a bug in it”), while making guesses about which claims will turn out to be true. Roughly speaking, “Logical Induction” presents a computable (though inefficient) algorithm that outpaces deduction, assigning high subjective probabilities to provable conjectures and low probabilities to disprovable conjectures long before the proofs can be produced.
This talk was given at a 2016 Effective Altruism Global workshop on theoretical frameworks for thinking about general-purpose AI. For a shortened version of the talk that omits some of the motivation and directions for future research, see https://youtu.be/QF-eCscwf38.
Paper announcement and summary: https://intelligence.org/?p=14538
Paper preprint: https://intelligence.org/files/Logica...
Abridged paper outline: https://intelligence.org/files/Logica...
Slides: https://intelligence.org/files/Logica... and https://intelligence.org/files/Logica...