Department Seminar Series: Richard Sutton, Ph.D., "Mind and Data: Learning to Predict Long-term Consequences Efficiently"

Tuesday, 21 January 2014, 4:00 PM
3725 BBB (Bob and Betty Beyster Building)

Speaker: Richard S. Sutton, Professor and iCORE Chair, Department of Computing Science, University of Alberta
http://incompleteideas.net/sutton/

Abstract: For some time now I have been exploring the idea that Artificial Intelligence can be viewed as a Big Data problem, in the sense that it involves continually processing large amounts of sensorimotor data in real time, and that what is learned from the data is usefully characterized as predictions about future data. This perspective is appealing because it reduces the abstract ideas of knowledge and truth to the clearer ideas of prediction and predictive accuracy, and because it enables learning from data without human intervention. Nevertheless, it is a radical idea, and it is not immediately clear how to make progress pursuing it.

A good example of simple predictive knowledge is that people and other animals continually make and learn many predictions about their sensory input stream, a phenomenon that psychologists call "nexting" or "Pavlovian conditioning." In my laboratory we have recently built a robot capable of nexting: every tenth of a second it makes and learns 6000 long-term predictions about its sensors, each a function of 6000 sensory features. Doing this is computationally challenging and taxes the abilities of modern laptop computers. I argue that it also strongly constrains the learning algorithms: linear computational complexity is critical for scaling to large numbers of features, and temporal-difference learning is critical to handling long-term predictions efficiently. This, then, is the strategy we pursue for making progress on the Big Data view of AI: we focus on the search for the few special algorithms that can meet the demanding computational constraints of learning long-term predictions efficiently.
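To make the computational argument in the abstract concrete, here is a minimal sketch of one such long-term predictor: linear temporal-difference learning, TD(lambda), whose per-step cost is linear in the number of features. It is an illustration under stated assumptions, not the talk's actual implementation; the class name, parameter values, and dense-NumPy representation are all hypothetical choices made for readability.

    # Illustrative sketch: one "nexting"-style predictor learned by
    # linear TD(lambda). Per-step cost is O(n) in the feature count.
    # Names and parameter values are hypothetical, not from the talk.
    import numpy as np

    class TDLambdaPredictor:
        """Learns a discounted long-term prediction of a scalar signal
        as a linear function of a feature vector x."""

        def __init__(self, n_features, gamma=0.9875, lam=0.9, alpha=0.1):
            self.gamma = gamma  # discount; horizon ~ 1/(1-gamma) time steps
            self.lam = lam      # eligibility-trace decay
            self.alpha = alpha  # step size
            self.w = np.zeros(n_features)  # learned weights
            self.z = np.zeros(n_features)  # eligibility trace

        def predict(self, x):
            # Linear prediction of the discounted future signal: O(n)
            return self.w @ x

        def update(self, x, signal, x_next):
            # TD error: signal received this step, plus the discounted
            # next prediction, minus the current prediction
            delta = signal + self.gamma * (self.w @ x_next) - (self.w @ x)
            self.z = self.gamma * self.lam * self.z + x  # accumulate trace
            self.w += self.alpha * delta * self.z        # O(n) update

Two properties of this sketch connect to the abstract's claims. First, many predictors of this kind can share one feature vector, so thousands of predictions per tenth of a second amount to thousands of O(n) updates. Second, with sparse binary features (e.g., tile coding, as is common in this line of work) each update touches only the active components, which is what makes the linear-complexity constraint practical on a laptop.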