Eliezer Yudkowsky is one of the world's foremost researchers on Friendly AI and recursive self-improvement. He created the Friendly AI approach to AGI, which emphasizes the structure of an ethical optimization process and its supergoal, in contrast to the common approach of seeking the right fixed enumeration of ethical rules for a moral agent to follow. At the 2007 Singularity Summit, he introduced the three schools of thought currently associated with the word "Singularity," presenting their core arguments and bolder conjectures while noting where they support or contradict one another.
From http://www.singinst.org/media/singula...
Transcript: http://www.acceleratingfuture.com/peo...