Kevin B. Korb - Philosophy of Science, Bayesian Reasoning

Published on Mar 28, 2012

Interview with Kevin B. Korb on Philosophy of Science, Popper & Falsificationism, Confirmation Theory, Bayesian Reasoning, Testing for Pseudoscience & The Demarcation Question, Statistics: Classical vs Bayesian Approaches, Heuristics & Biases, Philosophical Theories of Method, and approaches to AI.

Kevin's research is in artificial intelligence, the philosophy of science, and the interrelation between the two, especially the automation of scientific induction, or causal discovery. He is also co-founder of Psyche: An Interdisciplinary Journal of Research on Consciousness.

Recent presentations: The Philosophy of Computer Simulation, an invited talk at the 13th International Congress of Logic, Methodology and Philosophy of Science, Beijing, 9-15 August 2007.
Two Technical Reports

Kevin B. Korb, Carlo Kopp and Lloyd Allison (1997) A Statement on Higher Education Policy in Australia. Dept Computer Science, Monash University, Melbourne, 1997. This is our submission to the West Higher Education Review Committee.

Kevin B. Korb (1998) Research Writing in Computer Science. This is an updated (1998) version of Technical Report 97/308, Dept Computer Science, Monash University, Melbourne, 1997. It explains some of what goes into good research writing, including argument analysis and an understanding of cognitive errors that people are prone to make. It also discusses research ethics.
"The Ethics of AI"

Kevin's paper on the subject is available in PDF format: Kevin Korb -- The Ethics of AI (IEEE).

Kevin gave a presentation at the Singularity Summit AU 2010.
Abstract: "There are two questions about the ethics of artificial intelligence (AI) which are central:
* How can we build an ethical AI?
* Can we build an AI ethically?
The first question concerns the kinds of AI we might achieve — moral, immoral or amoral. The second concerns the ethics of our achieving such an AI. They are more closely related than a first glance might reveal. For much of technology, the National Rifle Association's neutrality argument might conceivably apply: "guns don't kill people, people kill people." But if we build a genuine, autonomous AI, we arguably will have to have built an artificial moral agent, an agent capable of both ethical and unethical behavior. The possibility of one of our artifacts behaving unethically raises moral problems for their development that no other technology can. Both questions presume a positive answer to a prior question: Can we build an AI at all? We shall begin our review there."
