Peter Norvig: Artificial Intelligence: A Modern Approach | Lex Fridman Podcast #42
the following is a conversation with peter norvig he's a director of research at google and the co-author with stuart russell of the book artificial intelligence a modern approach that educated and inspired a whole generation of researchers including myself to get into the field of artificial intelligence this is the artificial intelligence podcast if you enjoy it subscribe on youtube give it five stars on itunes support on patreon or simply connect with me on twitter at lex friedman spelled f-r-i-d-m-a-n and now here's my conversation with peter norvig most researchers in the ai community including myself own all three editions red green and blue of the uh artificial intelligence a modern approach it's a field-defining textbook as many people are aware that you wrote with stuart russell how has the book changed and how have you changed in relation to it from the first edition to the second to the third and now fourth edition as you work on it yeah so it's been a lot of years a lot of changes one of the things changing from the first to maybe the second or third was just the rise of uh computing power right so i think in the in the first edition we said uh here's propositional logic but uh that only goes so far because pretty soon you have millions of uh short little propositional expressions and they couldn't possibly fit in memory uh so we're going to use first order logic that's more concise and then we quickly realized oh propositional logic is pretty nice because there are really fast sat solvers and other things and look there's only millions of expressions and that fits easily into memory or maybe even billions fit into memory now so that was a change of the type of technology we needed just because the hardware expanded even to the second edition resource constraints were loosened significantly yeah yeah and that was the early 2000s second edition right so 95 was the first and then 2000 2001 or so and then moving on from there i think we're starting to see that again 
with the gpus and then more specific type of machinery like the tpus and using custom asics and so on for deep learning so we're seeing another advance in terms of the hardware then i think another thing that we especially notice this time around is in all three of the first editions we kind of said well we're going to define ai as maximizing expected utility and you tell me your utility function and now we've got 27 chapters worth of cool techniques for how to optimize that i think in this edition we're saying more you know what maybe that optimization part is the easy part and the hard part is deciding what is my utility function what do i want and if i'm a collection of agents or a society what do we want as a whole so you touch that topic in this edition you get a little bit more into utility yeah that's really interesting on a technical level we're almost pushing the philosophical i guess it is philosophical right so we we've always had a philosophy chapter which which i was uh glad that we were supporting and now it's less kind of the uh you know chinese room type argument and more of these uh ethical and societal type issues so we get into uh the issues of fairness and bias and uh and just the issue of aggregating utilities so how do you encode human values into a utility function is is this something that you can do purely through data in a learned way or is there some systematic obviously there's no good answers yet there's just uh beginnings to this uh to even opening doors so there is no one answer yes there are techniques uh to try to learn that so we talk about inverse reinforcement learning right so reinforcement learning uh you take some actions you get some rewards and you figure out what actions you should take in inverse reinforcement learning you observe somebody taking actions and you figure out uh well that this must be what they were trying to do if they did this action it must be because they wanted it of course there's restrictions to that 
right so lots of people take actions that are self-destructive uh where they're they're suboptimal in certain ways so you don't want to learn that right you want to uh somehow learn the uh the perfect actions uh rather than the ones they actually take so so that's a challenge uh for that field then another big part of it is just kind of uh theoretical of saying uh what can we accomplish and so you look at like this this work on the uh programs to uh predict recidivism and decide uh you know who should get parole or who should get bail or whatever uh and how are you gonna evaluate that and one of the big issues is fairness across protected classes protected classes being things like uh sex and race and so on and uh so two things you want is you want to say well if i get a score of say uh six out of ten then i want that to mean the same no matter what race i'm in yes right so i want to have a 60 percent chance of reoffending uh regardless uh and the makers of the one of the makers of a commercial program to do that says that's what we're trying to optimize and look we achieved that we've we've reached that kind of balance and then on the other side you also want to say well if if it makes mistakes i want that to affect both sides of the protected class equally and it turns out they don't do that right so they're they're twice as likely to make a mistake that would harm a black person over a white person so that seems unfair so you'd like to say well i want to achieve both those goals and then it turns out you do the analysis and it's theoretically impossible to achieve both those goals so you have to trade them off one against the other so that analysis is really helpful to know what you can aim for and how much you can get that you can't have everything but the analysis certainly can't tell you where should we make that trade-off point but nevertheless then we can uh as humans deliberate where that trade-off should be yeah so at least we now we're we're 
arguing an informed way we're not asking for something impossible we're saying uh here's where we are and and here's what we aim for and this strategy is better than that strategy so that's i would argue is a really powerful and really important first step but it's a doable one sort of removing uh undesirable degrees of bias in uh in systems in terms of protected classes and then there's something i listened to your uh commencement speech or there's some fuzzier things like you mentioned angry birds yeah do you want do you want to create systems that feed the dopamine enjoyment that feed that optimize for you returning to the system enjoying the moment of playing the game of getting likes or whatever this kind of thing or some kind of long-term improvement right is if are you even thinking about that that's ex that's really going to the philosophical area i think that's a really important issue too certainly thinking about that i i don't think about that as a as an ai issue as much but as you say you know the point is we've built this society in this infrastructure where we say we have a marketplace for attention and uh we've decided as a society that we like things that are free and so we want all uh apps on our phone to be free uh and that means they're all competing for your attention and then eventually they they make some money some way through uh ads or in-game sales or whatever but they can only win by defeating all the other apps by stealing your attention and we build a marketplace where it seems like they're working against you rather than working with you and i'd like to find a way where we can change the playing field so we feel more like well these things are on my side yes they're letting me have some fun in the short term but they're also helping me in the long term rather than competing against me and those aren't necessarily conflicting objectives they're just the incentives the direct current incentives as we try to figure out this whole new world 
seem to be on uh the easier part of that which is feeding the dopamine the rush right but uh let me take a quick step back at the beginning of the writing of the artificial intelligence a modern approach book so here you are in the 90s when you first sat down with stuart to write the book to cover an entire field which is one of the only books that's successfully done that for ai and actually in a lot of other computer science fields you know it's a dif it's a it's a huge undertaking so it must have been quite daunting what was that process like did you envision that you would be trying to cover the entire field was there a systematic approach to it that was more step by step how did it feel so i guess it came about you know we'd go to lunch with the other ai faculty at berkeley and we'd say uh you know the field is changing seems like the current books are a little bit behind nobody's come out with a new book recently we should do that and everybody said yeah yeah that's a great thing to do and we never did anything right and then i ended up heading off to uh industry i went to uh sun labs so i thought well that's the end of my possible academic publishing career but i met stuart again at a conference like a year later and said you know that book we were always talking about you guys must be half done with it by now right he said well we keep talking we never do anything so i said well you know we should do it and i think the reason is that we all felt it was a time where the field was changing and that was in two ways so you know the good old-fashioned ai was based uh primarily on boolean logic you had a few tricks to deal with uncertainty and it was based primarily on knowledge engineering then the way you got something done is you went out you interviewed an expert and you wrote down by hand everything they knew and we saw in in 95 that the field was changing in in two ways one we're moving more towards probability rather than boolean logic and we're moving more towards 
machine learning rather than knowledge engineering uh and the other books uh hadn't caught that wave they were still in the uh more in the in the old school although so certainly they had part of that on the way but we said if we start now completely taking that point of view we can have a different kind of book and we were able to put that together and uh what was literally the process if you remember did you start writing a chapter did you outline yeah i guess i guess we did an outline and then we sort of assigned chapters to each person at the time uh i had moved to boston and stuart was in berkeley so basically uh we did it uh uh over the internet and uh you know that wasn't the same as doing it today it meant you know dial-up lines and telnetting in and you know you you telnetted into one shell and you type cat file name and you hoped it was captured at the other end and certainly you're not sending uh images and figures back and forth right right that didn't work but you know did you anticipate where the field would go from that day from from the 90s did you see the growth into learning-based methods into data-driven methods that followed in the future decades we certainly thought that learning was important i guess we we missed it as uh being as important as it as it is today we missed this idea of big data we missed that uh uh the idea of deep learning hadn't been invented yet we could have uh taken the book from a complete uh machine learning point of view right from the start we chose to do it more from a point of view of we're going to first develop different types of representations and we're going to talk about different types of environments of is it fully observable or partially observable and is it deterministic or stochastic and so on and we made it more complex along those axes rather than focusing on the machine learning axis first do you think you know there's some sense in which the deep learning craze is extremely successful for a 
particular set of problems and you know eventually it's going to in the general case hit challenges so in terms of the difference between perception systems and robots that have to act in the world do you think uh we're going to return to ai modern approach type breadth in editions five and six yeah in uh in future decades do you think deep learning will take its place as a chapter in this bigger uh view of ai yeah i think we don't know yet how it's all going to play out so uh in the new edition uh we have a chapter on deep learning uh we got ian goodfellow to be the guest author for that chapter so he said he could condense his whole deep learning book into one chapter i think he did a great job we were also encouraged that he's you know we gave him the old neural net chapter and said have fun with it modernize that and he said you know half of that was okay that certainly there's lots of new things that have been developed but some of the core was still the same so i think we'll gain a better understanding of what you can do there i think we'll need to incorporate all the things we can do with the other technologies right so deep learning started out with convolutional networks and very close to perception uh and has since moved to be uh to be able to do more with actions and some degree of longer term planning but we need to do a better job with representation and reasoning and one-shot learning and so on and well i think we don't know yet how that's going to play out so do you think looking at the some success but certainly uh eventual demise the partial demise of expert systems and symbolic systems in the 80s do you think there are kernels of wisdom in the work that was done there with logic and reasoning and so on that will rise again in your view so certainly i think the idea of representation and reasoning is crucial that you know sometimes you just don't have enough data about the world to learn de novo so you've got to have some idea of representation whether 
that was programmed in or told or whatever and then be able to take uh steps of reasoning i i think the problem uh with the you know the good old-fashioned ai was uh one we tried to base everything on these uh symbols that were atomic and that's great if you're like trying to define the properties of a triangle right because they have necessary and sufficient conditions uh but things in the real world don't the real world is is messy and doesn't have sharp edges but atomic symbols do so that was a poor match and then the other aspect was that the reasoning was universal and applied anywhere which in some sense is good but it also means there's no guidance as to where to apply it and so you you know you started getting these paradoxes like uh well if i have a mountain and i remove one grain of sand then it's still a mountain and but if i do that repeatedly at some point it's not right and with logic you know there's nothing to stop you from applying things uh repeatedly but maybe with something like deep learning and i don't really know what the right name for it is we could separate out those ideas so one we could say uh you know a mountain isn't just an atomic notion it it's some sort of something like uh word embedding that uh uh has a a more complex representation yeah and secondly we could somehow learn yeah there's this rule that you can remove one grain of sand and you can do that a bunch of times but you can't do it a near infinite amount of times but on the other hand when you're doing induction on the integers sure then it's fine to do it an infinite number of times and if we could uh somehow we have to learn when these strategies are applicable rather than having the strategies be completely neutral and available everywhere anytime you use neural networks anytime you learn from data or form representation from data in an automated way it's not very explainable as to or it's not introspective to us humans in terms of uh how this neural network sees the world 
where why does it succeed so brilliantly on so many in so many cases and fail so miserably in surprising ways and small so what do you think is this is uh the future there can simply more data better data more organized data solve that problem or are there elements of symbolic systems that need to be brought in which are a little bit more explainable yeah so i prefer to talk about trust and uh validation and verification rather than just about explainability and then i think uh explanations are one tool that you use towards those goals and i think it is an important issue that we don't want to use these systems unless we trust them and we want to understand where they work and where they don't work and an explanation can be part of that right so i apply for a loan and i get denied i want some explanation of why and you have in europe we have the gdpr that says you're required to be able to get that but on the other hand explanation alone is not enough right so you know we're used to dealing with people and with the organizations and corporations and so on and they can give you an explanation then you have no guarantee that that explanation relates to reality right right so the bank can tell me well you didn't get the loan because you didn't have enough collateral and that may be true or it may be true that they just didn't like my religion or or something else i can't tell from the explanation and that's that's true whether the decision was made by computer or by a person so i want more i do want to have the explanations and i want to be able to uh have a conversation to go back and forth and said well you gave this explanation but what about this and what would have happened if this had happened and uh what would i need to change that so i think a conversation is is a better way to think about it than just an explanation as a single output and i think we need testing of various kinds right so in order to know was the decision really based on my collateral or was it 
based on my uh religion or skin color or whatever i can't tell if i'm only looking at my case but if i look across all the cases then i can detect a pattern all right right so you want to have that kind of capability uh you want to have this adversarial testing right so we thought we were doing pretty good at object recognition in images we said look we're at sort of pretty close to human level performance on imagenet and so on and then you start seeing these adversarial images and you say wait a minute that part is nothing like human performance okay you can mess with it really easily you can mess with it really easily right and uh yeah you could do that to humans too right so in a different way perhaps right humans don't know what color the dress was right and so they're vulnerable to certain attacks that are different than the attacks on the on the machines but the you know the attacks on the machines are so striking uh they really change the way you think about what we've done right and the the way i think about it is i think part of the problem is we're seduced by uh our low dimensional metaphors right yeah so you know you look like that phrase you look in uh in a textbook and you say okay now we've mapped out the space and you know uh cat is here and dog is here and maybe there's a tiny little spot in the middle where you can't tell the difference but mostly we've got it all covered and if you believe that metaphor uh then you say well we're nearly there and uh you know there's only going to be a couple adversarial images but i think that's the wrong metaphor and what you should really say is it's not a 2d flat space that we've got mostly covered it's a million dimension space and a cat is this string that goes out in this crazy path and if you step a little bit off the path in any direction you're in no man's land and you don't know what's going to happen and so i think that's where we are and now we've got to deal with that so uh it wasn't so much an 
explanation but it was an understanding of what the models are and what they're doing and now we can start exploring how do you fix that yeah validating the robustness of the system so on but take it back to the this uh this word trust uh do you think we're a little too hard on our robots in terms of uh the standards we apply so you know of uh there's a dance there's a there's a there's a dance of nonverbal and verbal communication between humans you know if we apply the same kind of standard in terms of humans you know we trust each other pretty quickly uh you know you and i haven't met before and there's some degree of trust right that nothing's gonna go crazy wrong and yet to ai when we look at ai systems where we seem to approach uh through skepticism always always and it's like they have to prove through a lot of hard work that they're even worthy of uh even an inkling of our trust do it what do you what do you think about that how how do we break that barrier close that gap i think that's right i think that's a big issue uh just listening uh my friend uh mark moffett is a naturalist and he says uh the most amazing thing about humans is that you can walk into a coffee shop or a a busy street in a city and there's lots of people around you that you've never met before and you don't kill each other yeah he says chimpanzees cannot do that yeah right right if the chimpanzee's in a situation where here's some that aren't from my tribe bad things happen especially in a coffee shop there's delicious food around you know yeah yeah but but we humans have figured that out yeah right uh and you know for the most part for the most part we still go to war we still do terrible things uh but for the most part we've learned to trust each other and live together uh so that's going to be important for our uh our ai systems as well and i th also i think uh you know a lot of the emphasis is on ai but in many cases ai is part of the technology but isn't really the main thing so a lot 
of what we've seen is more due to communications technology than ai ai technology yeah you want to make these good decisions but the reason we're able to have any kind of system at all is we've got the communication so that we're collecting the data and so that we can reach lots of people around the world i think that's a bigger change that we're dealing with speaking of reaching a lot of people around the world on the side of education you've uh one of the many things in terms of education you've done you taught the intro to artificial intelligence course that signed up 160 000 students it's one of the first successful examples of a mooc a massive open online course what did you learn from that experience what do you think is the future of moocs of education online yeah it was great fun doing it particularly uh being right at the start just because it was exciting and new but it also meant that we had less competition right so uh one of the things you hear about uh well the problem with moocs is uh the completion rates are are so low so there must be a failure and and i gotta admit i'm a prime contributor right i've probably started 50 different courses that i haven't finished but i got exactly what i wanted out of them because i had never intended to finish them i just wanted to dabble in a little bit either to see the topic matter or just to see the pedagogy of how are they doing this class so i guess the main thing i learned is when i came in i thought the challenge was information saying if i just take the stuff i want you to know and i'm very clear and explain it well then my job is done and good things are going to happen and then in doing the course i learned well yeah you got to have the information but really the motivation is the most important thing that if students don't stick with it it doesn't matter how good the content is and i think being one of the first classes we were helped by uh sort of exterior motivation so we tried to do a 
good job of making it enticing and setting up ways for uh you know the community to work with each other to make it more motivating but really a lot of it was hey this is a new thing and i'm really excited to be part of a new thing and so the students brought their own motivation and so i think this is great because there's lots of people around the world who have never had this before you know it would never have the opportunity to go to stanford and take a class or go to mit or go to one of the other schools but now we can bring that to them and if they bring their own motivation they can be successful in a way they couldn't before but that's really just the top tier of people that are ready to do that the rest of the people just don't see or don't have their motivation and don't see how if they push through and were able to do it what advantage that would get them so i think we've got a long way to go before we're able to do that and i think it'll be some of it is based on technology but more of it's based on the idea of community that you got to actually get people together some of the getting together can be done online i think some of it really has to be done in person to be able in order to build that type of community and trust you know there's an intentional mechanism that we've developed uh a short attention span especially younger people uh because sort of shorter and shorter videos online uh there's a whatever the the way the brain is developing now with people that have grown up with the internet they have a quite a short attention span so and i would say i had the same when i was growing up too probably for different reasons so i probably wouldn't have learned as much as i have if i wasn't forced to sit in a physical classroom sort of bored sometimes falling asleep but sort of forcing myself through that process to sometimes extremely difficult computer science courses what's the difference in your view between in-person education experience which you 
first of all yourself had and you yourself taught and online education and how do we close that gap if it's even possible yeah so i think there's two issues one is whether it's in person or online so it's sort of the physical location and then the other is kind of the affiliation right so you stuck with it in part because you were in the classroom and you saw everybody else was suffering right the same way you were but also because you were enrolled you had paid tuition sort of everybody was expecting you to stick with it society parents yeah peers right and so those are two separate things i mean you could certainly imagine i pay a huge amount of tuition and everybody signed up and says yes you're doing this uh but then i'm in my room and my classmates are in are in different rooms right we could have things set up that way so it's not just the online versus offline i think what's more important is the commitment that you've made and certainly it is important to have that kind of informal you know i meet people outside of class we talk together because we're all in it together i think that's uh really important both in keeping your motivation and also that's where some of the most important learning goes on so you want to have that maybe you know especially now we start getting into higher bandwidths and augmented reality and virtual reality you might be able to get that without being in the same physical place do you think it's possible we'll see a course at stanford for example that for students enrolled students is only online in the near future who are literally sort of that's part of the curriculum and there is no yeah so you're starting to see that i know uh georgia tech has a master's that's done that way oftentimes it's sort of they're creeping in in terms of a master's program or sort of um further education considering the constraints of students and so on but i mean literally is it possible that we just you know stanford mit berkeley all these places go 
online only in uh in the next few decades yeah probably not because you know they've got a big commitment to a physical campus sure right there's a momentum that's both financial and culturally right and and then there are certain things that just hard to do uh virtually right so you know we're in a field uh where uh if you have your own computer and your own paper and so on uh you can do the work anywhere uh but if you're in a biology lab or something uh you know you don't have all the right stuff at home right so our field programming you've also done a lot of you've done a lot of programming yourself in 2001 you wrote a great article about programming called teach yourself programming in 10 years sort of response to all the books that say teach yourself programming in 21 days so if you're giving advice to someone getting into programming today this is a few years since you've written that article what's the best way to undertake that journey i think there's lots of different ways and i think programming means more things now and i guess you know when i wrote that article i was thinking more about becoming a professional software engineer and i thought that's a you know a sort of a career-long field of study but i think there's lots of things now that people can do where programming is a part of solving what they want to solve without achieving that professional level status right so i'm not going to be going and writing a million lines of code but you know i'm a biologist or a physicist or something or even a historian and i've got some data and i want to ask a question of that data and i think for that you don't need 10 years right so there are many shortcuts to being able to answer those kinds of questions and and you know you see today a lot of emphasis on learning to code teaching kids how to code uh i think that's great uh but i wish they would change the message a little bit right so i think code isn't the main thing i don't really care if you know the 
syntax of javascript or if you can connect these blocks together in this visual language but what i do care about is that you can analyze a problem you can think of a solution you can carry out you know make a model run that model test the model see the results verify that they're reasonable ask questions and answer them right so it's more modeling and problem solving and you use coding in order to do that but it's not just learning coding for its own sake that's really interesting so it's actually almost in many cases it's learning to work with data to extract something useful out of data so when you say problem solving you really mean taking some kind of maybe collecting some kind of data set cleaning it up and saying something interesting about it which is useful in all kinds of domains and you know and i see myself being stuck sometimes in kind of the the old ways right so you know be working on a project maybe with a younger employee and we say oh well here's this new package that could help solve this problem and i'll go and i'll start reading the manuals and you know i'll be two hours into reading the manuals and then my colleague comes back and says i'm done yeah you know i downloaded the package i installed it i tried calling some things the first one didn't work the second one work now i'm done yeah and i say but i have 100 questions about how does this work and how does that work and they say who cares right i don't need to understand the whole thing i unders i answered my question it's a big complicated package i don't understand the rest of it but i got the right answer and i'm just it's hard for me to get into that mindset i want to understand the whole thing and you know if they wrote a manual i should probably read it and but that's not necessarily the right way i think i have to get used to dealing with more being more comfortable with uncertainty and not knowing everything yeah so i struggle with the same instead of the the spectrum between donald 
knuth yeah he was kind of the very you know before he can say anything about a problem he really has to get down to the machine code assembly yeah versus exactly what you said i have several students in my group that uh you know 20 years old and they can solve almost any problem within a few hours that would take me probably weeks because i would try to as you said read the manual so do you think the nature of mastery you're mentioning biology sort of outside disciplines applying programming but computer scientists so over time there's higher and higher levels of abstraction available now so uh this week there's the tensorflow summit right so if you're not particularly into deep learning but you're still a computer scientist you can accomplish an incredible amount with tensorflow without really knowing any fundamental internals of machine learning do you think the nature of mastery is changing even for computer scientists like what it means to be an expert programmer yeah i think that's true you know we never really should have focused on programmer right because it's the skill and what we really want to focus on is the result so we built this ecosystem where the way you can get stuff done is by programming it yourself at least when i started you know library functions meant you had square root and that was about it right everything else you built from scratch and then we built up an ecosystem where a lot of times you can download a lot of stuff that does a big part of what you need and so now it's more a question of assembly rather than manufacturing and that's a different way of looking at problems from another perspective in terms of mastery and looking at programmers or people that reason about problems in a computational way so google you know from the hiring perspective from the perspective of hiring or building a team of programmers how do you determine if someone's a good
programmer or if somebody again yeah i want to move away from the word programmer but somebody who can solve problems of large-scale data and so on how do you build a team like that through the interviewing process yeah and i think as a company grows you get more expansive in the types of people you're looking for right so i think you know in the early days we'd interview people and the question we were trying to ask is how close are they to jeff dean and most people were pretty far away but we'd take the ones that were you know not that far away and so we got kind of a homogeneous group of people who were really great programmers then as a company grows you say well we don't want everybody to be the same to have the same skill set and so now we're hiring biologists in our health areas and we're hiring physicists we're hiring mechanical engineers we're hiring you know social scientists and ethnographers and people with different backgrounds who bring different skills so you have mentioned that you still may partake in code reviews given that you have a wealth of experience as you've also mentioned what errors do you often see and tend to highlight in the code of junior developers of people coming up now given your background from lisp to a couple decades of programming yeah that's a great question you know sometimes i try to look at the flexibility of the design of yes you know this api solves this problem but where is it going to go in the future who else is going to want to call this and you know are you making it easier for them to do that is it a matter of design is it documentation is it sort of an amorphous thing you can't really put a name to it's just how it feels if you put yourself in the shoes of a developer would you use this kind of thing i think it is how you feel right and so yeah documentation is good but it's more a design question right if you get the design
right then people will figure it out whether the documentation is good or not and and if the design is wrong then it'll be harder to use how have uh you yourself changed as a programmer over the years as in in a way we already started to say sort of you want to read the manual you want to understand the core of the syntax to the how the language is supposed to be used and so on but what's the evolution been like from the 80s 90s to today i guess one thing is you don't have to worry about the small details of efficiency as much as you used to right so like i remember uh i did my list book in the 90s and one of the things i wanted to do was say uh here's how you do an object system and uh basically we're going to make it so each object is a hash table and you look up the methods and here's how it works and then i said of course the real common lisp object system is much more complicated it's got all these efficiency type issues and this is just a toy nobody would do this in real life and it turns out python pretty much did exactly what i said yeah and said uh objects are just dictionaries and yeah they have a few little uh tricks as well but mostly you know the thing that would have been 100 times too slow in the 80s is now plenty fast for most everything so you had to as a programmer let go of perhaps an obsession that i remember coming up with of trying to write efficient code yeah that to say you know what really matters is the total time it takes to get the project done and most of that's going to be the programmer time so if you're a little bit less efficient but it makes it easier to understand and modify then that's the right trade-off so you've written quite a bit about lisp your book on programming is in lisp you you have a lot of code out there that's in lisp so myself and people who don't know what lisp is should look it up it's my favorite language for many ai researchers it is a favorite language the favorite language they never use these days so what 
part of lisp do you find most beautiful and powerful so i think the beautiful part is the simplicity that in half a page you can define the whole language and other languages don't have that so you feel like you can hold everything in your head and then you know a lot of people say well then that's too simple you know here's all these things i want to do and you know my java or python or whatever has 100 or 200 or 300 different syntax rules and don't i need all those and lisp's answer was no we're only going to give you eight or so syntax rules but we're going to allow you to define your own and so that was a very powerful idea and i think this idea of saying i can start with my problem and with my data and then i can build the language i want for that problem and for that data and then i can make lisp define that language so you're sort of mixing levels and saying i'm simultaneously a programmer in a language and a language designer and that allows a better match between your problem and your eventual code and i think lisp had done that better than other languages yeah it's a very elegant implementation of functional programming but why do you think lisp has not had the mass adoption and success of languages like python is it the parentheses is it all the parentheses yeah so i think a couple of things so one was i think it was designed for a single programmer or a small team and a skilled programmer who had the good taste to say well i am doing language design and i have to make good choices and if you make good choices that's great if you make bad choices you can hurt yourself and it can be hard for other people on the team to understand it so i think there was a limit to the scale of the size of a project in terms of number of people that lisp was good for and as an industry we kind of grew beyond that i think it is in part the parentheses you know one of the jokes is the acronym for lisp is lots of irritating silly parentheses my acronym
was lisp is syntactically pure saying all you need is parentheses and atoms but i remember you know so we had the ai textbook and because we did it in the 90s we had pseudocode in the book but then we said well we'll have lisp online because that's the language of ai at the time and i remember some of the students complaining because they hadn't had lisp before and they didn't quite understand what was going on and i remember one student complained i don't understand how this pseudocode corresponds to this lisp and there was a one-to-one correspondence between the symbols in the pseudocode and the code and the only difference was the parentheses so i said it must be that for some people a certain number of left parentheses shuts off their brain yeah it's very possible in that sense then python just goes the other way and so that was the point at which i said okay we can't have only lisp as the language because you know you only got 10 or 12 or 15 weeks or whatever it is to teach ai and i don't want to waste two weeks of that teaching lisp so i said i've got to have another language java was the most popular language at the time i started doing that and then i said it's really hard to have a one-to-one correspondence between the pseudocode and the java because java's so verbose so then i said i'm going to do a survey and find the language that's most like my pseudocode and it turned out python basically was my pseudocode somehow i had channeled guido and designed a pseudocode that was the same as python although i hadn't heard of python at that point and from then on that's what i've been using because it's been a good match so what's the story behind pytudes your github repository with puzzles and exercises in python it's pretty fun yeah it just seems like fun you know i like doing puzzles and i like being an educator i did a class with udacity cs 212 i think it
was it was basically problem solving using python and looking at different problems does pytudes feed that class in terms of the exercises i was wondering yeah so the class came first some of the stuff that's in pytudes was write-ups of what was in the class and then some of it was just continuing to work on new problems so what's the organizing madness of pytudes is it just a collection of cool exercises just whatever i thought was fun okay awesome so you were the director of search quality at google from 2001 to 2005 in the early days when there were just a few employees and when the company was growing like crazy right so i mean google revolutionized the way we discover share and aggregate knowledge so this is one of the fundamental aspects of civilization right is information being shared and there's different mechanisms throughout history but google just 10x improved that right and you're a part of that right people discovering that information so what were some of the challenges on the philosophical or the technical level in those early days it definitely was an exciting time and as you say we were doubling in size every year and the challenges were we wanted to get the right answers right and we had to figure out what that meant we had to implement that and we had to make it all efficient and we had to keep on testing and seeing if we were delivering good answers and now when you say good answers it means whatever people are typing in in terms of keywords in terms of that kind of thing that the results they get are ordered by the desirability for them of those results like the first thing they click on will likely be the thing that they were actually looking for right one of the metrics we had was focused on the first thing some of it was focused on the whole page some was focused on you know the top three or so so we looked at a lot of different metrics for
how well we were doing and we broke it down into subclasses of you know maybe here's a type of query that we're not doing well on and then we'd try to fix that early on we started to realize that we were in an adversarial position right so we started thinking well we're kind of like the card catalog in the library right so the books are here and we're off to the side and we're just reflecting what's there and then we realized every time we make a change the webmasters make a change and it's game theoretic and so we had to think not only of is this the right move for us to make now but also if we make this move what's the counter move going to be is that going to get us into a worse place in which case we won't make that move we'll make a different move and did you find i mean i assume with the popularity and the growth of the internet that people were creating new content so you're almost helping guide the creation yeah so that's certainly true right so we definitely changed the structure of the network right so if you think back you know in the very early days larry and sergey had the pagerank paper and jon kleinberg had this hubs and authorities model which says the web is made out of these hubs which will be my page of cool links about dogs or whatever and people would just list links and then there'd be authorities which were the ones the page about dogs that most people link to that doesn't happen anymore people don't bother to say my page of cool links because we took over that function right so we changed the way that worked did you imagine back then that the internet would be as massively vibrant as it is today i mean it was already growing quickly but i don't know if you ever if today you sit back and just look at the internet and wonder at the amount of content that's just constantly being created constantly being shared and uploaded yeah it's always been surprising to me i guess
i'm not very good at predicting the future okay and i remember you know being a graduate student in 1980 or so and you know we had the arpanet and then there was this proposal to commercialize it and have this internet and this crazy senator gore thought that might be a good idea yeah and i remember thinking oh come on you can't expect a commercial company to understand this technology they'll never be able to do it yeah okay we can have this dot-com domain but it won't go anywhere so i was wrong al gore was right at the same time the nature of what it means to be a commercial company has changed too so google at its founding is different than you know what companies were before i think right so there's all these business models that are so different than what was possible back then so in terms of predicting the future what do you think it takes to build a system that approaches human level intelligence you've talked about of course that we shouldn't be so obsessed about creating human level intelligence just create systems that are very useful for humans but what do you think it takes to approach that level right so certainly i don't think human level intelligence is one thing right so i think there's lots of different tasks lots of different capabilities i also don't think that should be the goal right so you know i wouldn't want to create a calculator that could do multiplication at human levels right that would be a step backwards and so for many things we should be aiming far beyond human level for other things maybe human level is a good level to aim at and for others we say well let's not bother doing this because we already have humans who can take on those tasks so as you say i like to focus on what's a useful tool and in some cases being at human level is an important part of crossing that threshold to make the tool useful so we see in things like these
personal assistants now that you get either on your phone or on a speaker that sits on the table you want to be able to have a conversation with those and and i think as an industry we haven't quite figured out what the right model is for what these things can do and we're aiming towards well you just have a conversation with them the way you can with the person right but we haven't delivered on that model yet right so you can ask it what's the weather you can ask it play some nice songs uh and uh you know five or six other things and then you run out of stuff that it can do in terms of a deep meaningful connection so you've mentioned the movie her as one of your favorite ai movies do you think it's possible for a human being to fall in love with an ai system ai assistant as you mentioned so taking this big leap from uh what's the weather to you know having a deep connection yeah i i think uh as people that's what we love to do and uh i was at a a showing of her where we had a panel discussion and and somebody asked me uh what other movie do you think her is similar to and my answer was uh life of brian which which is not a science fiction movie uh but both movies are about wanting to believe in something that's not necessarily real yeah by the way for people don't know it's monty python yeah yeah that's been brilliantly put right so i mean i think that's just the way we are we we want to trust we want to believe we want to fall in love and uh it doesn't necessarily take that much right so you know my kids uh fell in love with their teddy bear right and the teddy bear was not very interactive right so that's all us yeah pushing our feelings onto our devices and our things and i think that that's what we like to do so we'll continue to do that so yeah as human beings will long for that connection and just ai has to uh do a little bit of work to uh to catch us in the other end yeah and certainly you know if you can get to uh dog level a lot of people have invested a 
lot of love in their pets and their pets some people as i've been told in working with autonomous vehicles have invested a lot of love into their inanimate cars yeah so it really doesn't take much so what is a good test to linger on a topic that may be silly or a little bit philosophical what is a good test of intelligence in your view is natural conversation like in the turing test a good test put another way what would impress you if you saw a computer do it these days yeah i mean i get impressed all the time right but like really impressive you know go playing starcraft playing those are all pretty cool you know and i think sure conversation is important i think you know we sometimes have these tests where it's easy to fool the system where you can have a chatbot that can have a conversation but it never gets into a situation where it has to be deep enough that it really reveals itself as being intelligent or not i think you know turing suggested that but i think if he were alive he'd say you know i didn't really mean that seriously right yeah and i think and you know this is just my opinion but i think turing's point was not that this test of conversation is a good test i think his point was having a test is the right thing so rather than having the philosopher say oh no ai is impossible you should say well we'll just have a test and then the result of that will tell us the answer and it doesn't necessarily have to be a conversation test that's right and coming up with a new better test as the technology evolves is probably the right way do you worry as a lot of the general public does about not a lot but some vocal part of the general public about the existential threat of artificial intelligence so looking farther into the future as you said most of us are not able to predict much so when shrouded in such mystery there's a concern of well you start thinking about worst cases is that
something that occupies your mind space much so i certainly think about threats i think about dangers and i think any new technology has positives and negatives and if it's a powerful technology it can be used for bad as well as for good so i'm certainly not worried about the robot apocalypse the terminator type scenarios i am worried about change in employment and are we going to be able to react fast enough to deal with that i think we're already seeing it today where a lot of people are disgruntled about the way income inequality is working and automation could help accelerate those kinds of problems i see powerful technologies can always be used as weapons whether they're robots or drones or whatever some of that we're seeing due to ai a lot of it you don't need ai and i don't know what's a worse threat if it's an autonomous drone or it's crispr technology becoming available we have lots of threats to face and some of them involve ai and some of them don't so with the threats that technology presents are you for the most part optimistic about technology also alleviating those threats so creating new opportunities or protecting us from the more detrimental effects of these things yeah i don't know again it's hard to predict the future and yes as a society so far we've survived nuclear weapons and other things of course only the societies that have survived are having this conversation so maybe there's a survivorship bias there yeah what problem stands out to you as exciting challenging impactful to work on in the near future for yourself for the community broadly so you know we talked about these assistants and conversation i think that's a great area i think combining common sense reasoning with the power of data is a great area in which application in conversation or just broadly in general yeah as a programmer i'm interested in programming
tools both in terms of you know the current systems we have today with tensorflow and so on can we make them much easier to use for a broader class of people and also can we apply machine learning to the more traditional type of programming right so you know when you go to google and you type in a query and you spell something wrong it says did you mean and the reason we're able to do that is because lots of other people made a similar error and then they corrected it we should be able to go into our code bases and our bug databases and when i type a line of code it should be able to say did you mean such and such if you typed this today you're probably going to type in this bug fix tomorrow yeah that's a really exciting application of almost an assistant for the coding programming experience yeah at every level so i think i can safely speak for the entire ai community in thanking you for the amazing work you've done certainly for the amazing work you've done with the ai a modern approach book i think we're all looking forward very much to the fourth edition and then the fifth edition and so on so peter thank you so much for talking today yeah thank you it's been a pleasure
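the did you mean idea norvig describes above can be sketched as a minimal edit-distance corrector in python -- this is an illustrative toy that assumes a known set of past queries, not google's actual system or the code-suggestion tool he imagines:

```python
# toy "did you mean": suggest a known query one edit away from the typo
# (real search-engine correction also uses query logs and language models)

def edits1(word):
    """all strings one edit (delete, transpose, replace, insert) from word"""
    letters = 'abcdefghijklmnopqrstuvwxyz'
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [left + right[1:] for left, right in splits if right]
    transposes = [left + right[1] + right[0] + right[2:]
                  for left, right in splits if len(right) > 1]
    replaces = [left + c + right[1:]
                for left, right in splits if right for c in letters]
    inserts = [left + c + right for left, right in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def did_you_mean(query, known_queries):
    """return query if known, else a known query one edit away, else None"""
    if query in known_queries:
        return query
    candidates = edits1(query) & known_queries
    return min(candidates) if candidates else None

known = {'python', 'lisp', 'tensorflow'}
print(did_you_mean('pythn', known))  # -> python
```

taking the alphabetically smallest candidate is an arbitrary tie-break; a production system would rank candidates by how often each correction was actually made.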
Leonard Susskind: Quantum Mechanics, String Theory and Black Holes | Lex Fridman Podcast #41
the following is a conversation with leonard susskind he's a professor of theoretical physics at stanford university and founding director of the stanford institute for theoretical physics he's widely regarded as one of the fathers of string theory and in general is one of the greatest physicists of our time both as a researcher and an educator this is the artificial intelligence podcast perhaps you noticed that the people i've been speaking with are not just computer scientists but philosophers mathematicians writers psychologists physicists and soon other disciplines to me ai is much bigger than deep learning bigger than computing it is our civilization's journey into understanding the human mind and creating echoes of it in the machine if you enjoy the podcast subscribe on youtube give it five stars on itunes support it on patreon or simply connect with me on twitter at lex friedman spelled f-r-i-d-m-a-n and now here's my conversation with leonard susskind you were good friends with richard feynman how has he influenced you changed you as a physicist and thinker i think what i saw was somebody who could do physics in this deeply intuitive way his style was almost to close his eyes and visualize the phenomena that he was thinking about and through visualization outflank the highly mathematical and very very sophisticated technical arguments that people would use i think that was also natural to me but i saw somebody who was actually successful at it who could do physics in a way that i regarded as simpler more direct more intuitive and while i don't think he changed my way of thinking i do think he validated it he made me look at it and say yeah that's something you can do and get away with so do you find yourself whether you're thinking about quantum mechanics or black holes or string theory using intuition as a first step or step throughout using visualization yeah very much so very much so i tend not
to think about the equations i tend not to think about the symbols i tend to try to visualize the phenomena themselves and then when i get an insight that i think is valid i might try to convert it to mathematics but i'm not a natural mathematician i'm good enough at it but i'm not a great mathematician so for me the way of thinking about physics is first intuitive first visualization scribble a few equations maybe but then try to convert it to mathematics experience says that other people are better at converting it into mathematics than i am and yet you've worked on very counterintuitive ideas so is working on something counterintuitive a matter of rewiring your brain in new ways yeah quantum mechanics is not intuitive very little of modern physics is intuitive what does intuitive mean it means the ability to think about it with basic classical physics the physics that we evolved with throwing stones splashing water or whatever it happens to be quantum physics general relativity quantum field theory are deeply unintuitive in that way but you know after time and getting familiar with these things you develop new intuitions as i always say you rewire and it's to the point where me and many of my friends can think more easily quantum mechanically than we can classically we've gotten so used to it i mean yes our neural wiring in our brain is such that we understand rocks and stones and water and so on we sort of evolved for that do you think it's possible to create a wiring of neuron-like devices that more naturally understand quantum mechanics understand the wave function understand these weird things well i'm not sure i think many of us have evolved the ability to think quantum mechanically to some extent but that doesn't mean you can think like an electron that doesn't mean another example forget for a minute quantum mechanics just visualizing four dimensional space or five
dimensional space or six dimensional space i think we're fundamentally wired to visualize three dimensions i can't even visualize two dimensions or one dimension without thinking about it as embedded in three-dimensional space if i want to visualize a line i think of the line as being a line in three dimensions i think of the line as being a line on a piece of paper with the piece of paper being in three dimensions i never seem to be able to in some abstract and pure way visualize in my head the one dimension the two dimensions the four dimensions the five dimensions and i don't think that's ever gonna happen the reason is i think our neural wiring is just not set up for that on the other hand we do learn ways to think about five six seven dimensions we learn mathematical ways and we learn ways to visualize them but they're different and so yeah i think we do rewire ourselves whether we can ever completely rewire ourselves to be completely comfortable with these concepts i doubt so that it's never completely natural it's never completely natural so i'm sure there's some you could argue creatures that live in two-dimensional space yeah and well it's romanticizing the notion of course we're all living as far as we know in three-dimensional space but how do those creatures imagine 3d space well probably the way we imagine 4d by using some mathematics and some equations and some tricks okay so jumping back to feynman just for a second he had a little bit of an ego yes do you think ego is powerful or dangerous in science i think both i think you have to have both arrogance and humility you have to have the arrogance to say i can do this nature is difficult nature is very very hard i'm smart enough i can do it i can win the battle with nature on the other hand i think you also have to have the humility to know that you're very likely to be wrong on any given occasion
everything you're thinking could suddenly change young people can come along and say things you won't understand and you'll be lost and flabbergasted so i think it's a combination of both you better recognize that you're very limited and you better be able to say to yourself i'm not so limited that i can't win this battle with nature it takes a special kind of person who can manage both of those i would say and i would say there's echoes of that in your own work a little bit of ego a little bit of outside-of-the-box humble thinking i hope so so was there a time where you looked at yourself and asked am i completely wrong about this oh yeah about the whole thing or about specific things the whole thing you mean me and my ability to do this thing oh those kinds of doubts first of all did you have those kinds of doubts no i had different kinds of doubts i came from a very working-class background and i was uncomfortable in academia for a long time but they weren't doubts about my ability they were just the discomfort of being in an environment that my family hadn't participated in that i knew nothing about as a young person i didn't learn that there was such a thing called physics until i was almost 20 years old so i did have certain kinds of doubts but not about my ability i don't think i was too worried about whether i would succeed or not i never felt this insecurity am i ever gonna get a job it never occurred to me that i wouldn't maybe you could speak a little bit to this sense of what is academia for because i do feel a bit uncomfortable in it there's something i can't quite put into words if we call it music you play a different kind of music than a lot of academia how have you joined this orchestra how do you think about it i don't know that i thought about it as much as i just felt it yeah you know thinking is one thing feeling is another thing
i felt like an outsider until a certain age when i suddenly found myself the ultimate insider in academic physics and that was a sharp transition i wasn't a young man i was probably 50 years old so it was a phase transition you were never quite comfortable in the middle yeah that's right i always felt a little bit of an outsider at the beginning a lot of an outsider my way of thinking was different my approach to mathematics was different but also the social background that i came from was different now these days half the young people i meet their parents were professors that was not my case so yeah but then all of a sudden at some point i found myself very much at the center maybe not the only one at the center but certainly one of the people at the center of a certain kind of physics and all that went away i mean it went away in a flash so maybe a little bit with feynman but in general how do you develop ideas do you work through ideas alone do you brainstorm with others oh both very definitely both when i was younger i spent more time with myself now because i'm at stanford because i have a lot of ex-students and you know people who are interested in the same things i am i spend a good deal of time almost on a daily basis interacting brainstorming as you said it's a very important part i spend less time completely self-focused just sitting there staring at the paper what are your hopes for quantum computers so machines that have some elements of leveraging quantum mechanical ideas yeah it's not just leveraging quantum mechanical ideas you can simulate quantum systems on a classical computer simulate them means solve the schrodinger equation for them or solve the equations of quantum mechanics on a classical computer but the classical computer is not a quantum mechanical system itself of course it is in the sense that
everything is made of quantum mechanics but it's not functioning as a quantum system it's just solving equations the quantum computer is truly a quantum system which is actually doing the things that you're programming it to do you want to program a quantum field theory if you do it in classical physics that program is not actually functioning in the computer as a quantum field theory it's just solving some equations physically it's not doing the things that the quantum system would do the quantum computer is really a quantum mechanical system which is actually carrying out the quantum operations you can measure it at the end it intrinsically satisfies the uncertainty principle it is limited in the same way that quantum systems are limited by uncertainty and so forth and it really is a quantum system that means that when you program something for a quantum system you're actually building a real version of the system the limits of a classical computer classical computers are enormously limited when it comes to quantum systems enormously limited because you've probably heard this before but in order to store the amount of information that's in a quantum state of 400 spins that's not very many I could put 400 pennies in my pocket to simulate the quantum state of 400 elementary quantum systems qubits we call them would take more information than can possibly be stored in the entire universe if it were packed so tightly that you couldn't pack any more in 400 qubits on the other hand if your quantum computer is composed of 400 qubits it can do everything 400 qubits can do if you just intuitively think about the space of algorithms so there's a whole complexity theory around classical computers measuring the running time of things P NP and so on what kind of algorithms intuitively do you think that unlocks for us
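Susskind's 400-spin claim above can be checked with quick arithmetic: an exact classical representation needs one complex amplitude per basis state, so the count grows as two to the number of qubits. A back-of-the-envelope sketch (the 10^80 atom count is a common order-of-magnitude estimate, not a figure from the conversation):

```python
# Number of complex amplitudes a classical computer must store to
# represent an n-qubit quantum state exactly: 2**n.

def amplitudes(n_qubits: int) -> int:
    """Count of complex amplitudes in a general n-qubit state."""
    return 2 ** n_qubits

# Common order-of-magnitude estimate, used here only for comparison.
ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80

print(amplitudes(400))                                 # ~2.6 * 10**120
print(amplitudes(400) > ATOMS_IN_OBSERVABLE_UNIVERSE)  # True
```

Even one bit per atom in the observable universe falls short by some forty orders of magnitude, which is the sense in which 400 qubits outrun any classical memory.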
okay so we know that there are a handful of algorithms where quantum computers can seriously beat classical computers where they can have exponentially more power and this is a mathematical statement nobody's exhibited this in the laboratory it's a mathematical statement we know is true but it also seems more and more that the number of such things is very limited only very very special problems exhibit that much advantage for a quantum computer not the standard problems to my mind as far as I can tell the great power of quantum computers will actually be to simulate quantum systems if you're interested in a certain quantum system and it's too hard to simulate classically you simply build a version of the same system you build a model of it that's actually functioning as the system you run it and then you do the same things you would do to the quantum system you make measurements on it quantum measurements on it the advantage is you can run it much slower you could say why bother why not just use the real system why not just do experiments on the real system well real systems are kind of limited you can't change them you can't slow them down so that you can poke into them you can't modify them in arbitrary kinds of ways to see what would happen if I changed the system a little bit so I think that quantum computers will be extremely valuable in understanding quantum systems at the level of the fundamental laws they're actually satisfying the same laws as the systems that they're simulating that's right okay so on the one hand you have things like factoring and factoring large numbers is the great feat of quantum computers but that doesn't seem to have much to do with quantum mechanics it seems to be almost a fluke that a quantum computer can solve the factoring problem in a short time and those problems seem to be extremely special rare and it's not clear to me that there's
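The fluke he's alluding to is Shor's algorithm, which reduces factoring N to finding the period of a^x mod N; the exponential speedup is entirely in the period finding, but the reduction itself is classical and can be sketched in a few lines. An illustrative toy with a tiny N and brute-force period search (not how a quantum computer does it):

```python
from math import gcd

def order(a: int, n: int) -> int:
    """Smallest r > 0 with a**r = 1 (mod n), found by brute force."""
    r, v = 1, a % n
    while v != 1:
        v = (v * a) % n
        r += 1
    return r

def factor_via_period(n: int, a: int) -> int:
    """Shor-style classical reduction: get a factor of n from the period of a."""
    r = order(a, n)
    if r % 2:
        raise ValueError("odd period, try a different a")
    return gcd(pow(a, r // 2, n) - 1, n)

print(factor_via_period(15, 2))   # 3, since order(2, 15) = 4
```

On a quantum computer the `order` step runs in polynomial time, which is exactly what makes this one problem so exceptional.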
gonna be a lot of them on the other hand there are a lot of quantum systems there's chemistry there's solid-state physics there's material science there's quantum gravity there's all kinds of quantum field theory and some of these are actually turning out to be applied sciences as well as very fundamental sciences so we probably will run out of the ability to solve equations for these things you know solve the equations by the standard methods of pencil and paper and solve the equations by the method of classical computers and so what we'll do is we'll build versions of these systems run them under controlled circumstances where we can change them manipulate them make measurements on them and find out all the things we want to know so in finding out the things we want to know about very small systems is there something we can also find out about the macro level about the function forgive me of our brain biological systems the stuff that's about one meter in size versus much much smaller well the thing the excitement is about among the people that I interact with is understanding black holes but black holes are big things black holes are big things there are many many degrees of freedom a black hole is another kind of quantum system that is big it's like a large quantum computer and one of the things we've learned is that the physics of large quantum computers is in some ways similar to the physics of large black holes and we're using that relationship now you didn't ask about quantum computers or systems you didn't ask about black holes you asked about brains yeah about stuff that's in the middle of the two it's different but there's something fundamental about black holes that feels very different from the brain yes and they also function in a very quantum mechanical way right okay it is first of all unclear to me but of course it's unclear to me I'm not a neuroscientist I don't even have
very many friends who are neuroscientists I would like to have more friends who are neuroscientists I just don't run into them very often among the few neuroscientists I've ever talked to about this they are pretty convinced that the brain functions classically that it is not intrinsically a quantum mechanical system and doesn't make use of the special features entanglement coherent superposition are they right I don't know I sort of hope they're wrong just because I like the romantic idea that the brain is a quantum system but I think probably not the other thing is big systems can be composed of lots of little systems the materials that we work with and so forth are large systems a large piece of material but they're made out of quantum systems now one of the things that's been happening over the last good number of years is we've been discovering materials and quantum systems which function much more quantum mechanically than we imagined topological insulators that kind of thing those are macroscopic systems those are macroscopic systems superconductors superconductors have a lot of quantum mechanics in them you can have a large chunk of superconductor so it's a big piece of material on the other hand its functioning and its properties depend very very strongly on quantum mechanics and to analyze them you need the tools of quantum mechanics if we can go on to black holes and looking at the universe as an information processing system as a computer as a giant computer what's the power of thinking of the universe as an information processing system what is perhaps its use besides the mathematical use of discussing black holes and your famous debates and ideas around that to human beings or life in general as information processing systems well all systems are information processing systems you poke them they change a little bit they evolve all systems are information processing systems there's no
extra magic to us humans it certainly feels like there is consciousness intelligence feels like magic sure so where does it emerge from if we look at information processing what are the emergent phenomena that come from viewing the world as an information processing system here is what I think my thoughts are not worth much on this if you ask me about physics my thoughts may be worth something if you ask me about this I'm not sure my thoughts are worth anything but as I said earlier I think when we do introspection when we imagine doing introspection and try to figure out what it is we do when we're thinking I think we get it wrong I'm pretty sure we get it wrong everything I've heard about the way the brain functions is so counterintuitive for example you have neurons which detect vertical lines you have different neurons which detect lines at 45 degrees you have different neurons I never imagined that there were whole circuits which were devoted to vertical lines in the brain that doesn't seem to be the way my brain works when I put my finger up vertically or if I put it horizontally or if I put it this way or that way it seems to me it's the same circuits but that's not the way it works the way the brain is compartmentalized seems to be very very different than what I would have imagined if I were just doing psychological introspection about how things work my conclusion is that we won't get it right that way but how will we get it right I think maybe computer scientists will get it right eventually I don't think they're anywhere near it I don't even think they're thinking about it but eventually we will build machines perhaps which are complicated enough and partly engineered partly evolved maybe evolved by machine learning and so forth this machine learning is very interesting by machine learning we will evolve systems and we may start to discover mechanisms that have implications for how we think and for
what this consciousness thing is all about and we'll be able to do experiments on them and perhaps answer questions that we can't possibly answer by introspection so that's a really interesting point in many cases if you look even at string theory when you first think about a system it seems really complicated like the human brain and through some basic reasoning and trying to discover the fundamental low-level behavior of the system you find out that it's actually much simpler is that generally the process and do you have that hope for biological systems as well for all the kinds of stuff we're studying at the human level of course physics always begins by trying to find the simplest version of something and analyze it yeah I mean there are lots of examples where physics has taken very complicated systems analyzed them and found simplicity in them for sure I said superconductors before it's an obvious one a superconductor seems like a monstrously complicated thing with all sorts of crazy electrical properties magnetic properties and so forth and when it finally is boiled down to its simplest elements it's a very simple quantum mechanical phenomenon called spontaneous symmetry breaking which in other contexts we learned about and are very familiar with so yes we do take complicated things and make them simple but what we don't want to do is take things which are intrinsically complicated and fool ourselves into thinking that we can make them simple I don't know who said this but we don't want to make them simpler than they really are right okay is the brain a thing which ultimately functions by some simple rules or is it just complicated in terms of artificial intelligence nobody really knows what are the limits of our current approaches you mentioned machine learning how do we create human level intelligence it seems that there's a lot of very smart physicists who perhaps
oversimplify the nature of intelligence and think of it as information processing and therefore that there doesn't seem to be any theoretical reason why we can't artificially create a human level or superhuman level intelligence in fact the reasoning goes if you create human level intelligence the same approach you used to create human level intelligence should allow you to create superhuman level intelligence very easily exponentially so what do you think of that way of thinking that comes from physicists I wish I knew but there's a particular reason why I wish I knew I have a second job I consult for Google not for Google for Google X I am the senior academic advisor to a group of machine learning physicists there now that sounds crazy because I know nothing about the subject I know very little about the subject on the other hand I'm good at giving advice so I give them advice on things anyway I see these young physicists who are approaching the machine learning problem there is a real machine learning problem namely why does it work as well as it does nobody really seems to understand why it is capable of doing the kind of generalizations that it does and so forth and there are three groups of people who have thought about this there are the engineers the engineers are incredibly smart but they tend not to think as hard about why the thing is working as much as they do about how to use it obviously they provided a lot of data and it is they who demonstrated that machine learning can work much better than you have any right to expect the machine learning systems are not too different from the kind of systems that physicists have studied there's not all that much difference in the structure of the mathematics between a tensor network designed to describe a quantum system on the one hand and the kind of networks that are used in machine learning so
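The structural kinship he's pointing at can be seen in a toy example: a matrix product state (a simple tensor network) computes an amplitude by passing a vector through a chain of small tensors, much as a layered network passes an activation through its layers. An illustrative sketch with random tensors, where the names and dimensions are made up for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, phys_dim, bond_dim = 6, 2, 2          # toy sizes
cores = [rng.standard_normal((bond_dim, phys_dim, bond_dim))
         for _ in range(n_sites)]

def mps_amplitude(bits):
    """Contract the tensor chain left to right, one 'layer' per site."""
    v = np.ones(bond_dim)
    for core, b in zip(cores, bits):
        v = v @ core[:, b, :]                  # matrix selected by this site's bit
    return float(v @ np.ones(bond_dim))

# 6 small tensors (48 numbers) stand in for the full 2**6 = 64 amplitudes;
# for large systems the compression becomes astronomical.
print(sum(c.size for c in cores), 2 ** n_sites)   # 48 64
mps_amplitude([0, 1, 0, 1, 0, 1])
```

The chain-of-contractions structure is the mathematical overlap with deep networks he describes, even though the physics and the training objectives differ.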
more and more I think young physicists are being drawn to this field of machine learning some very very good ones I work with a number of very good ones not on machine learning but over lunch yeah and I can tell you they are super smart they don't seem to be so arrogant about their physics backgrounds that they think they can do things that nobody else can do but the physics way of thinking I think will bring great value to machine learning I believe it will and I think it already has at what time scale do you think predicting the future becomes useless in your long experience of being surprised at new discoveries sometimes a day sometimes 20 years there are things which I thought we were very far from understanding which practically in a snap of the fingers or a blink of the eye suddenly became understood completely surprising to me there are other things which I looked at and I said we're not gonna understand these things for 500 years in particular quantum gravity the scale for that was 20 years 25 years and we understand a lot we don't understand it completely now by any means but I thought it would be 500 years before we made any progress and it turned out to be very very far from that it turned out to be more like 20 or 25 years from the time when I thought it was 500 years so can we jump around to quantum gravity some basic ideas in physics what is the dream of string theory mathematically what is the hope where does it come from what problems is it trying to solve I don't think the dream of string theory is any different than the dream of fundamental theoretical physics altogether understanding a unified theory of everything I don't like thinking of string theory as a subject unto itself with people called string theorists who are the practitioners of this thing called string theory I much prefer to think of them as theoretical physicists trying to answer deep fundamental questions about nature in
particular gravity in particular gravity and its connection with quantum mechanics and who at the present time find string theory a useful tool rather than saying there's a subject called string theory I don't like being referred to as a string theorist yes but as a tool is it useful to think about our nature in multiple dimensions the strings vibrating I believe it is useful I'll tell you what the main use of it has been up till now well it has had a number of main uses originally string theory was invented and I know I was there right at the spot where it was being invented literally and it was being invented to understand hadrons hadrons are sub-nuclear particles protons neutrons mesons and at that time the late 60s early 70s it was clear from experiment that these particles called hadrons could vibrate could rotate could do all the things that a little closed string can do and it was and is a valid and correct theory of these hadrons it's been experimentally tested and that is a done deal it had a second life as a theory of gravity the same basic mathematics except on a very very much smaller distance scale the objects of gravitation are nineteen orders of magnitude smaller than a proton but the same mathematics turned up what has been its value its value is that it's mathematically rigorous in many ways and enabled us to find mathematical structures which have both quantum mechanics and gravity with rigor we can test out ideas we can't test them in the laboratory they're nineteen orders of magnitude too small the things that we're interested in but we can test them out mathematically and analyze their internal consistency by now forty years ago thirty five years ago and so forth people very very much questioned the consistency between gravity and quantum mechanics Stephen Hawking was very famous for it rightly so now nobody questions that consistency anymore they don't
because we have mathematically precise string theories which contain both gravity and quantum mechanics in a consistent way so it's provided the certainty that quantum mechanics and gravity can coexist that's not a small thing that's a huge thing it's a huge thing Einstein would be proud Einstein he might be appalled I don't know he might like it very much yeah he would certainly be struck by it yeah I think that maybe at this time its biggest contribution to physics is in illustrating almost definitively that quantum mechanics and gravity are very closely related and not inconsistent with each other is there a possibility of something deeper more profound that still is consistent with string theory but is deeper that is to be found well you could ask the same thing of quantum mechanics is there something exactly yeah yeah I think string theory is just an example of a quantum mechanical system that contains both gravitation and quantum mechanics so is there something underlying quantum mechanics perhaps something deterministic ah something deterministic my friend Gerard 't Hooft whose name you may know he's a very famous physicist Dutch not as famous as he should be it's hard to say his name but it's easy to spell he's the only person I know whose name begins with an apostrophe and he's one of my heroes in physics he's a little younger than me but he's nonetheless one of my heroes 't Hooft believes that there is some substructure to the world which is classical in character deterministic in character which somehow by some mechanism that he has a hard time spelling out emerges as quantum mechanics the wavefunction is somehow emergent the wavefunction and not just the wavefunction but the whole thing that goes with quantum mechanics uncertainty entanglement all these things are emergent do you think quantum mechanics is the bottom of the well here I think is
where you have to be humble here's where humility comes in I don't think anybody should say anything is the bottom of the well at this time I think we can reasonably say I can reasonably say when I look into the well I can't see past quantum mechanics I don't see any reason for it to be anything beyond quantum mechanics I think 't Hooft is asking very interesting and deep questions I don't like his answers well again let me ask if we look at the deepest nature of reality whether it's deterministic or unobserved as probabilistic what does that mean for our human level ideas of free will is there any connection whatsoever from this perception perhaps illusion of free will that we have to the fundamental nature of reality the only thing I can say is I am puzzled by that as much as you are the illusion of it the illusion of consciousness the illusion of free will the illusion of self how can a physical system do that and I am as puzzled as anybody there's echoes of it in the observer effect so do you understand what it means to be an observer I understand it at a technical level an observer is a system with enough degrees of freedom that it can record information and which can become entangled with the thing that it's measuring entanglement is the key when a system which we call an apparatus or an observer same thing interacts with the system that it's observing it doesn't just look at it it becomes physically entangled with it and it's that entanglement which we call an observation or a measurement now does that satisfy me personally as an observer yes and no I find it very satisfying that we have a mathematical representation of what it means to observe a system you are observing stuff right now at the conscious level do you think there's echoes of that kind of entanglement at our macro scale yes absolutely for sure we're entangled with quantum mechanically entangled with everything in this room if we weren't
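The technical picture he describes, an apparatus becoming entangled with the system it measures, can be written down in a few lines of linear algebra: a CNOT gate lets an 'apparatus' qubit copy the basis state of a 'system' qubit, so a superposed input comes out as an entangled Bell pair. A standard textbook sketch, not Susskind's own notation:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)        # system starts in a superposition

# CNOT: the apparatus (second qubit) flips when the system (first) is |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

state = CNOT @ np.kron(plus, ket0)       # the apparatus 'observes' the system
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print(np.allclose(state, bell))          # True: system and apparatus entangled
```

After the interaction neither qubit has a state of its own; the correlation between them is what the formalism calls a measurement record.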
well we wouldn't be observing it but on the other hand you can ask am I really comfortable with it and I'm uncomfortable with it in the same way that I can never get comfortable with five dimensions my brain isn't wired for it are you comfortable with four dimensions a little bit more because I can always imagine the fourth dimension as time so the arrow of time are you comfortable with that arrow do you think time is an emergent phenomenon or is it fundamental to nature that is a big question in physics right now all the physicists that I'm comfortable with talking to my friends we all ask the same question that you just asked space we have a pretty good idea is emergent and it emerges out of entanglement and other things time always seems to be built into our equations as just what Newton pretty much what Newton modified a little bit by Einstein would have called time and mostly in our equations it is not emergent time in physics is completely symmetric forward and backward so you don't really need to think about the arrow of time for most physical phenomena for the most microscopic phenomena no it's only when the phenomena involve systems which are big enough for thermodynamics to become important for entropy to become important for a small system entropy is not a good concept entropy is something which emerges out of large numbers it's a probabilistic idea it's a statistical idea and it's a thermodynamic idea thermodynamics requires lots and lots of little substructures okay so it's not until you emerge at the thermodynamic level that there's an arrow of time do we understand it yeah I think we understand it better than most people think most people say they think we understand it I think we understand it it's just a statistical idea you mean like the second law of thermodynamics entropy and so on yeah
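The statistical character of the second law he's describing is visible in simple counting: the entropy of a shuffled deck is just the log of the number of arrangements, and ordered arrangements are a vanishing fraction of them. A toy illustration, not anything from the conversation:

```python
import math
import random

def log_microstates(n_cards: int) -> float:
    """Boltzmann-style entropy (in nats) of a fully shuffled n-card deck: log(n!)."""
    return math.lgamma(n_cards + 1)      # lgamma(n+1) == log(n!)

print(round(log_microstates(52), 1))     # ~156.4 nats, i.e. 52! arrangements

random.seed(0)
deck = list(range(52))
random.shuffle(deck)
# Shuffling takes the deck from simple to random; coming back out sorted
# has probability 1/52!, which is why it never happens in practice.
print(deck == sorted(deck))              # False
```

Nothing in the microscopic rules forbids the ordered outcome; it is only overwhelmingly outnumbered, which is the whole content of the arrow of time at this level.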
take the pack of cards and you're flinging it in the air and you look what happens to it yeah but what's random we understand it it doesn't go from random to simple it goes from simple to random but do you think it ever breaks down what I think you can do is in a laboratory setting you can take a system which is somewhere intermediate between being small and being large and make it go backward a thing which looks like it only wants to go forward because of statistical mechanical reasons because of the second law you can very very carefully manipulate it to make it run backward I don't think you can take an egg Humpty Dumpty who fell on the floor and reverse that but in a very controlled situation you can take systems which appear to be evolving statistically toward randomness stop them reverse them and make them go back what's the intuition behind that how do we reverse it you're saying a closed system yeah pretty much a closed system yes did you just say that time travel is possible no I didn't say time travel is possible I said you can make a system go backward in time and you don't like calling it going back you can make it reverse its steps you can make it reverse its trajectory yeah how do we do it what's the intuition there is it just a fluke thing that we can do at a small scale in the lab what I'm saying is you can do it on a little bit better than a small scale you can certainly do it with a simple small system small systems don't have any sense of the arrow of time atoms have no sense of the arrow of time they're completely reversible it's only when you have you know the second law of thermodynamics is the law of large numbers so you can break the law it isn't that you really break it but it's hard it requires great care the bigger the system is the more care the harder it is you have to overcome what's called chaos and that's hard and it requires more and more
precision for 10 particles you might be able to do it with some effort for 400 particles it's really hard for a thousand or a million particles forget it but not for any fundamental reason just because it's technologically too hard to make the system go backward so no time travel for engineering reasons no no what is time travel time travel to the future that's easy you just close your eyes go to sleep and you wake up in the future yeah a good nap gets you there a good nap gets you there right but reversing the second law of thermodynamics is a very difficult engineering effort I wouldn't call that time travel because it gets too mixed up with what science fiction calls time travel this is just the ability to reverse a system you take the system and you reverse the direction of motion of every molecule in it you can do it with one molecule if you find a particle moving in a certain direction let's not say a molecule let's say a baseball you stop it dead and then you simply reverse its motion in principle that's not too hard and it'll go back along its trajectory in the backward direction just running the program backward running the program backward yeah okay if you have two baseballs colliding well you can do it but you have to be very very careful to get it just right now ten baseballs really really hard better yet ten billiard balls on an idealized frictionless billiard table okay so you start the balls all in a triangle right and you whack them yep depending on the game you're playing you whack them you're really careful but you whack them and they go flying off in all possible directions okay try to reverse that imagine trying to take every billiard ball stopping it dead at some point and reversing its motion so it was going in the opposite direction if you did that with tremendous care it would reassemble itself back into the triangle okay
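The reversal recipe he describes can be demoed for the easy case of non-interacting particles: run forward, stop everything dead, flip every velocity, run the same number of steps, and the system retraces its path back to the start. A toy sketch; real collisions add the chaos that makes large systems practically irreversible:

```python
import random

random.seed(1)
n, dt, steps = 10, 0.01, 1000
x = [random.uniform(-1, 1) for _ in range(n)]   # positions
v = [random.uniform(-1, 1) for _ in range(n)]   # velocities
x0 = list(x)                                    # remember the 'triangle'

def advance(x, v):
    """One free-flight time step for every particle."""
    return [xi + vi * dt for xi, vi in zip(x, v)]

for _ in range(steps):
    x = advance(x, v)
v = [-vi for vi in v]        # stop every particle dead and reverse its motion
for _ in range(steps):
    x = advance(x, v)

error = max(abs(a - b) for a, b in zip(x, x0))
print(error < 1e-9)          # True: it reassembled itself
```

With interactions the same procedure works in principle, but any tiny error in the reversed velocities gets amplified at each collision, which is the precision barrier he goes on to describe.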
that is a fact and you can probably do it with two billiard balls maybe with three billiard balls if you're really lucky but what happens is as the system gets more and more complicated you have to be more and more precise not to make the tiniest error because the tiniest errors will get magnified and you'll simply not be able to do the reversal so you could do that but I wouldn't call that time travel yeah that's something else but it just made me think if we think of the unrolling of state that's happening as a program if we look at the world the idea of looking at the world as a simulation as a computer but it's not a computer it's just a single program a question arises that might be useful how hard is it to have a computer that runs the universe okay so there are mathematical universes that we know about one of them is called anti-de sitter space and its quantum mechanics well I think we could simulate it in a computer a quantum computer a quantum computer with a classical computer all you can do is solve its equations you can't make it work like the real system if we could build a quantum computer a big enough one a robust enough one we could probably simulate a universe a small version of an anti-de sitter universe and anti-de sitter is a kind of cosmology so I think we know how to do that the trouble is the universe that we live in is not the anti-de sitter geometry it's the de sitter geometry and we don't really understand its quantum mechanics at all so at the present time I would say we wouldn't have the vaguest idea how to simulate a universe similar to our own we could ask could we build in the laboratory a small version a quantum mechanical version a collection of quantum computers entangled and coupled together which would reproduce the phenomena that go on in the universe even on a small scale yes if it were anti-de sitter space no if it's de sitter space can you briefly describe de sitter space and anti-de sitter space yeah what are the geometric properties how are they different they differ by the sign of a single constant called the cosmological constant one of them is negatively curved the other is positively curved the anti-de sitter space which is the negatively curved one you can think of as an isolated system in a box with reflecting walls you can think of it as a quantum mechanical system isolated in an isolated environment de sitter space is the one we really live in and that's the one that's exponentially expanding exponential expansion dark energy whatever you want to call it and we don't understand that mathematically do we understand not everybody would agree with me but I don't understand it they would agree with me they definitely would agree with me that I don't understand it what about an understanding of the birth the origin no the big bang well there are theories there are theories my favorite is the one called eternal inflation the infinity can be on both sides on one of the sides or none of the sides what's my real opinion okay infinity on both sides oh boy yeah yeah why is that your favorite because it's the most mind-blowing no because we want a beginning no why do we want a beginning in practice there was a beginning of course in practice there was a beginning but could it have been a random fluctuation in an otherwise infinite time maybe in any case the eternal inflation theory I think if correctly understood would be infinite in both directions how do you think about infinity oh god okay of course you can think about it mathematically I just finished this discussion with my friend Sergey Brin yes how do you think about infinity I say well Sergey Brin is infinitely rich how do you test that hypothesis okay that's a good line all right yeah so there's really no way to visualize some of these things you know this is a very good question does physics have
any place in physics does infinity have any place in physics right and all I can say is it's a very good question so what do you think of the recent first image of a black hole visualized from the event horizon telescope it's an incredible triumph of science in itself the fact that there are black holes which collide is not a surprise and they seem to work exactly the way they're supposed to work will we learn a great deal from it I don't know we might but the kind of things we learn won't really be about black holes why there are black holes in nature of that particular mass scale and why they're so common may tell us something about the evolution of structure in the universe but I don't think it's going to tell us anything new about black holes but it's a triumph in the sense that you go back a hundred years and it was a continuous development general relativity the discovery of black holes LIGO the incredible technology that went into LIGO it is something that I never would have believed was gonna happen 30 40 years ago and I think it's a magnificent thing this evolution of general relativity LIGO the high precision ability to measure things on a scale of 10 to the minus 21 it's just astonishing so is the picture different you've thought a lot about black holes how did you visualize them in your mind and is the picture different than you visualized no it simply confirmed it you know it's a magnificent triumph to have confirmed a direct observation that Einstein's theory of gravity at the level of black hole collisions actually works it's awesome it's really awesome you know I know some of the people who were involved in that they're just ordinary people and the idea that they could carry this out I'm just shocked yeah just these little homo sapiens yeah just these little monkeys got together and took a
picture of a black hole. slightly advanced lemurs, I think. what kind of questions can science not currently answer but you hope might be able to soon? well, you've already addressed them — what is consciousness, for example. do you think that's within the reach of science? I think it's somewhat within the reach of science, but I think that now it's in the hands of the computer scientists and the neuroscientists. not the physicists? perhaps at some point, but I think physicists will try to simplify it down to something that they can use their methods on, and maybe those are not appropriate. maybe we simply need to do more machine learning on bigger scales — evolve machines, machines that not only learn but evolve their own architecture as part of the process of learning. evolve their architecture not under our control — only partially under our control, but under the control of machine learning. I'll tell you another thing that I find awesome. you know this Google thing, that they taught the computers how to play chess? yeah. okay, they taught the computers how to play chess not by teaching them how to play chess, but just by having them play against each other. against itself, against each other. this is a form of evolution. these machines evolved — they evolved an intelligence. they evolved an intelligence without anybody telling them how to do it. they weren't engineered — they just played against each other and got better and better and better. that makes me think that machines can evolve intelligence. what exact kind of intelligence, I don't know, but in understanding that better and better, maybe we'll get better clues as to what goes on in human intelligence. last question: what kind of questions can science not currently answer and may never be able to answer? is there an intelligence out there that underlies the whole thing? you can call it with the G word if you want. I can say, are we a computer simulation with a purpose? is there an agent, an intelligent
agent, that underlies or is responsible for the whole thing? does that intelligent agent satisfy the laws of physics? does it satisfy the laws of quantum mechanics? is it made of atoms and molecules? yeah, there's a lot of questions, and it seems to me a real question. is it an answerable question? well, do the questions have to be answerable to be real? some philosophers would say that a question is not a question unless it's answerable. this question doesn't seem to me answerable by any known method, but it seems to me real. there's no better place to end it. Leonard, thank you so much for talking today. okay.
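the self-play point made above — machines getting better at chess purely by playing against each other, with nobody telling them how to play — can be illustrated with a toy sketch. this is not the Google/DeepMind system being discussed (that uses deep networks and tree search); it is a minimal stand-in, tabular learning on the game of Nim, where two copies of the same learner improve only from the outcomes of their own games. the game choice, hyperparameters, and function names are all illustrative assumptions:

```python
import random

def train_self_play(start=7, episodes=30000, alpha=0.1, eps=0.2, seed=0):
    """Two copies of the same tabular learner play Nim against each other.

    Nim here: players alternate taking 1 or 2 sticks from a pile;
    whoever takes the last stick wins. No strategy is ever encoded --
    the value table is learned purely from self-play outcomes.
    """
    rng = random.Random(seed)
    Q = {}  # (sticks_remaining, move) -> value estimate for the player to move
    for _ in range(episodes):
        sticks, history = start, []
        while sticks > 0:
            moves = [m for m in (1, 2) if m <= sticks]
            if rng.random() < eps:          # explore occasionally
                move = rng.choice(moves)
            else:                           # otherwise play greedily
                move = max(moves, key=lambda m: Q.get((sticks, m), 0.0))
            history.append((sticks, move))
            sticks -= move
        # The player who took the last stick won (+1); credit alternates
        # sign walking backwards through the game, since turns alternate.
        ret = 1.0
        for state_move in reversed(history):
            old = Q.get(state_move, 0.0)
            Q[state_move] = old + alpha * (ret - old)
            ret = -ret
    return Q

def best_move(Q, sticks):
    # Greedy policy read off the learned table.
    moves = [m for m in (1, 2) if m <= sticks]
    return max(moves, key=lambda m: Q.get((sticks, m), 0.0))
```

after enough games, the greedy policy should converge toward the textbook Nim strategy (leave your opponent a multiple of 3), even though that strategy was never programmed in — which is the "evolution by self-play" point being made.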
Regina Barzilay: Deep Learning for Cancer Diagnosis and Treatment | Lex Fridman Podcast #40
the following is a conversation with Regina Barzilay. she's a professor at MIT and a world-class researcher in natural language processing and applications of deep learning to chemistry and oncology, or the use of deep learning for early diagnosis, prevention, and treatment of cancer. she has also been recognized for teaching several successful AI-related courses at MIT, including the popular introduction to machine learning course. this is the artificial intelligence podcast. if you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on patreon, or simply connect with me on Twitter at Lex Friedman, spelled F-R-I-D-M-A-N. and now, here's my conversation with Regina Barzilay. in an interview you've mentioned that if there's one course you would take, it would be a literature course with a friend of yours that teaches it. just out of curiosity, because I couldn't find anything on it, are there books or ideas that had a profound impact on your life journey — books and ideas perhaps outside of computer science and the technical fields? I think because I'm spending a lot of my time at MIT, and previously in other institutions where I was a student, I have limited ability to interact with people, so a lot of what I know about the world actually comes from books, and there were quite a number of books that had a profound impact on me and how I view the world. let me just give you one example of such a book. maybe a year ago I read a book called The Emperor of All Maladies. it's kind of a history of science book on how the treatments and drugs for cancer were developed, and that book, despite the fact that I am in the business of science, really opened my eyes on how imprecise and imperfect the discovery process is, how imperfect our current solutions are, and what makes science succeed and be implemented — sometimes it's actually not the strength of the idea but the devotion of the person who wants to see it implemented. so this is one of the books that,
you know, at least for the last year, quite changed the way I'm thinking about the scientific process, just from the historical perspective, and what I need to do to make my ideas really implemented. let me give you an example of another book, which is a fiction book, a book called Americanah. this is a book about a young female student who comes from Africa to study in the United States, and it describes her path in her studies and her life transformation in a new country, her adaptation to a new culture. when I read this book I saw myself in many different points of it, but it also gave me a lens on different events, some events that I never actually paid attention to. one of the funny stories in this book is how she arrives at her new college and she starts speaking in English — she had this beautiful British accent, because that's how she was educated in her country, and this is not my case — and then she notices that the person who talks to her talks in a very funny way, in a very slow way, and she's thinking that this woman is disabled and is trying to accommodate her. and then, after a while, when she finishes her discussion with this officer from her college, she sees how the officer interacts with the other students, with American students, and she discovers that actually she talked to her this way because she thought that she doesn't understand English. and I said, wow, what a funny experience. and literally within a few weeks I went to LA, to a conference, and I asked somebody in the airport, you know, how to find a cab or something, and then I noticed this person is talking in a very strange way, and my first thought was that this person has some pronunciation issues or something, and I'm trying to talk very slowly to him. I was with another professor, Fraenkel, and he's laughing, because it's funny that I don't get that the guy is talking this way
because he thinks that I cannot speak. so it was really kind of a mirroring experience, and it made me think a lot about my own experiences moving between different countries. so I think that books play a big role in my understanding of the world. on the science question — you mentioned that it made you discover that personalities of human beings are more important than perhaps ideas. is that what I heard? it's not necessarily that they are more important than ideas, but I think that ideas on their own are not sufficient, and many times, at least at the local horizon, it's the personalities and their devotion to their ideas that really, locally, changes the landscape. if you're looking at AI, let's say 30 years ago — the dark ages of AI or whatever, the symbolic times — now we're looking at a lot of that work and we're kind of thinking this is maybe not really relevant work, but you can see that some people managed to take it, make it so shiny and dominate the academic world, and make it the standard. if you look at the area of natural language processing, it is a well-known fact that the reason statistics in NLP took such a long time to become mainstream is because there were quite a number of personalities which didn't believe in this idea, and it stopped research progress in this area. so I do not think that — asymptotically, maybe, personalities don't matter — but I think locally it does make quite a bit of impact, and generally speeds up, or slows down, the rate of adoption of new ideas. yeah. and the other interesting question is, in the early days of a particular discipline — I think you mentioned, in that book, which is ultimately a book about cancer, it's called The Emperor of All Maladies — it was actually centered on how people were curing cancer. like, for me it was
really a discovery — how people... what was the science of chemistry behind drug development? it actually grew out of the dyeing, the coloring industry: the people who developed chemistry in the 19th century in Germany and Britain to make new dyes looked at the molecules and identified that they do certain things to cells, and from there the process started, the histology staining and so on. yeah, it's fascinating that they managed to make the connection, look under the microscope, and do all this discovery. but as you continue reading about it, and you read about how chemotherapy drugs were actually developed in Boston — some of them were developed by Dr. Farber, of Dana-Farber — you see how the experiments were done: there were some miscalculations, let's put it this way, and they tried it on the patients — and those were children with leukemia — and they died, and then they tried another modification. you look at the process and how imperfect it is — well, again, looking back sixty, seventy years you can kind of understand it — but some of the stories in this book which were really shocking to me were happening, you know, maybe decades ago, and we still don't have a vehicle to do it much faster, more effectively, and more scientifically, taking a computer science kind of scientific approach. so from the perspective of computer science — you've gotten a chance to work on applications to cancer and to medicine in general — from the perspective of an engineer and a computer scientist, how far along are we from understanding the human body, the biology, and being able to manipulate it in a way that we can cure some of the maladies, some of the diseases? this is a very interesting question, and if you're thinking as a computer scientist about this problem, I think one of the reasons that we succeeded — we as computer scientists — in the areas where we succeeded is because we are not trying to understand, in some ways. like, if you're
thinking about e-commerce, Amazon: Amazon doesn't really understand you, and that's why it recommends you certain books or certain products. correct. traditionally, when people were thinking about marketing, they divided the population into different kinds of subgroups, identified the features of each subgroup, and came up with a strategy which is specific to that subgroup. if you're looking at recommendation systems, they're not claiming to understand somebody; they're just managing, from the patterns of your behavior, to recommend you a product. now, if you look at traditional biology — obviously I wouldn't say that I am in any way, you know, educated in this field — what I see there is really a lot of emphasis on mechanistic understanding, and it was very surprising to me, coming from computer science, how much emphasis is on this understanding. and given the complexity of the system, maybe the deterministic, full understanding of these processes is beyond our capacity. in the same way as in computer science, when we're doing recognition, when we do recommendation, and in many other areas, it's just a probabilistic matching process — and in some way, maybe in certain cases, we shouldn't even attempt to understand. or we can attempt to understand, but in parallel we can actually do this kind of matching that would help us to find a cure or to do early diagnostics and so on. and I know that in these communities it's really important to understand, but I am sometimes wondering, what exactly does it mean to understand here? well, there's stuff that works, but that can be, like you said, separate from this deep human desire to uncover the mysteries of the universe, of science, of the way the body works, the way the mind works. it's the dream of symbolic AI, of being able to reduce human knowledge into logic and be able to play with that logic in a way that's very explainable and understandable for us humans. I mean, that's a beautiful dream, so I understand it, but it
seems that what works today — we'll talk about it more — is, as much as possible, to reduce stuff into data, reduce whatever problem you're interested in to data, and try to apply statistical methods, machine learning, to that. on a personal note, you were diagnosed with breast cancer in 2014. what did facing your mortality make you think about? how did it change you? you know, this is a great question, and I think that I was interviewed many times and nobody actually asked me this question. I think I was 43 at the time, and it was the first time I realized in my life that I may die, and I never thought about it before. and there was a long time from when you're diagnosed until you know what you actually have and how severe your disease is — for me it was maybe two and a half months — and I didn't know where I was during this time, because I was getting different tests, and one would say it's bad, and then another would say no, it is not. so until I knew where I was, I was really thinking about all these different possible outcomes. were you imagining the worst, or were you trying to be optimistic? I don't really remember what my thinking was. it was really a mixture with many components, speaking, you know, in our terms. one thing that I remember: every test comes, and you're saying, oh, it could be this, or it may not be this, and you're hopeful, then you're desperate — there's a whole slew of emotions that goes through you. but what I remember is that when I came back to MIT — I was kind of coming the whole time during the treatment to MIT, but my brain was not really there — when I came back, I finished the semester that I was teaching and everything, and I looked at what my group was doing, what other groups were doing, and I saw these trivialities. it's like people are building their careers on improving some part by two or three percent or whatever. I was like, seriously? I did work on how to decipher Ugaritic, a language
nobody speaks — and whatever, what is the significance? when I was sick, you know, I walked out of MIT, which is a world where people really do care, you know, what happened to your paper, what is your next publication at ACL — into the world where you see a lot of suffering, which I'm kind of totally sheltered from on a daily basis. it was the first time I'd seen, like, real life and real suffering, and I was thinking, why are we trying to improve the parser or deal with some trivialities when we have the capacity to really make a change? and it was really challenging to me, because on one hand, you know, my graduate students really want to do their papers and their work, and they want to continue to do what they were doing, which was great, and then it was me who really re-evaluated what is important. and also at that point, because I had to take some break, I looked back at my years in science, and I was thinking: you know, like ten years ago, this was the biggest thing — I don't know, topic models; we have like millions of papers on topic models and variational topic models — and now it's really, like, irrelevant. and you start looking at what you perceive as important at different points in time, and how it fades over time. and since we have limited time — all of us have limited time — it's really important to prioritize things that really matter to you, maybe matter to you at that particular point. but it is important to take some time and understand what matters to you, which may not necessarily be the same as what matters to the rest of your scientific community, and pursue that vision. so that moment, did it make you cognizant — you mentioned suffering — of just the general amount of suffering in the world? is that what you're referring to? so as opposed to topic models and specific detailed problems in NLP, did you start to think about other people who have been diagnosed with cancer? is that the way
you started to see the world, perhaps? oh, absolutely. and it actually changes you, because, for instance, there are parts of the treatment where you need to go to the hospital every day, and you see the community of people there — many of them were much worse off than I was at the time — and you all sit and see it all: people who are happy some day just because they feel better. and for people who are in their normal everyday life, you take it totally for granted that you feel well, that if you decide to go running you can go running, that you're pretty much free to do whatever you want with your body. I saw — like, my community became those people. and I remember one of my friends, Dina Katabi, took me to the Prudential to buy me a gift for my birthday, and it was like the first time in months that I went out to see other people, and I was like, wow — first of all, these people, they're happy and they're laughing, and they're very different from these others, my people; and second, I'm thinking, they're totally crazy — they're laughing and wasting their money on some stupid gifts, and they may die, they may already have cancer, and they don't understand it. so you can really see how the mind changes. you could say: didn't you know before that you're going to die? of course I knew, but it was kind of a theoretical notion, it wasn't something concrete. and at that point, when you really see it, and see how little means the system sometimes has to help, you really feel that we need to take a lot of the brilliance that we have here at MIT and translate it into something useful. yeah. and you can have a lot of definitions, but of course alleviating suffering, trying to cure cancer, is a beautiful mission. so I of course knew theoretically the notion of cancer, but just reading more and more about it — it's 1.7 million new cancer cases in the United States every year,
600,000 cancer-related deaths every year. so this has a huge impact, in the United States and globally. broadly, before we talk about how machine learning, how MIT can help: when do you think we as a civilization will cure cancer? how hard of a problem is it, from everything you've learned about it recently? I cannot really assess it. what I do believe will happen, with the advancement in machine learning, is that a lot of types of cancer we will be able to predict way earlier, and more effectively utilize existing treatments. I think — I hope, at least — that with all the advancements in AI and drug discovery, we will be able to much faster find relevant molecules. what I'm not sure about is how long it will take the medical establishment and regulatory bodies to catch up and implement it, and I think this is a very big piece of the puzzle that is currently not addressed. that's a really interesting question. so first, a small detail — I think the answer is yes, but: is cancer one of the diseases that, when detected earlier, significantly improves the outcomes? because, as we will talk about, there's the cure and then there is detection, and I think where machine learning can really help is earlier detection. so detection, prediction, is crucial. for instance, the vast majority of pancreatic cancer patients are detected at the stage where they are incurable — that's why they have such a terrible survival rate, it's like just a few percent over five years; it's pretty much, today, a death sentence. but if you can discover this disease early, there are mechanisms to treat it. and in fact, I know a number of people who were diagnosed and saved just because they had food poisoning — they had terrible food poisoning, they went to the ER, and they got a scan; there were early signs on the scan, and that saved their lives, but this was really an accidental case. so as we become better, we will be able to help many more people who are likely to develop diseases. and I just
want to say that, as I got more into this field, I realized that cancer is of course a terrible disease, but there's really a whole slew of terrible diseases out there, like neurodegenerative diseases and others. a lot of us are fixated on cancer just because it's so prevalent in our society and you see these people, but there are a lot of patients with neurodegenerative diseases and those kinds of aging diseases that we still don't have a good solution for. and I felt, as computer scientists, we kind of decided that it's other people's job to treat these diseases, because traditionally it's people in biology or in chemistry who are thinking about it. after I started paying attention, I think that's really a wrong assumption, and we all need to join the battle. so it seems like, in cancer specifically, there's a lot of ways that machine learning can help. what's the role of machine learning in the diagnosis of cancer? for many cancers today, we really don't know what is your likelihood to get cancer, and for the vast majority of patients, especially the younger patients, it really comes as a surprise. for instance, for breast cancer, 80% of the patients are the first in their families — it's like me: I never thought that I had any increased risk, because nobody had it in my family, and for some reason in my head it was kind of an inherited disease. but even if I had paid attention, the models — very simplistic statistical models — that are currently used in clinical practice really don't give you an answer, so you don't know. and the same for pancreatic cancer, the same is true for non-smoking lung cancer and many others. so what machine learning can do here is utilize all this data to tell us early who is likely to be susceptible, using all the information that is already there — be it imaging, be it your other tests, and eventually liquid biopsies and
others — where the signal itself is not sufficiently strong for the human eye to do good discrimination. the signal may be weak, but by combining many sources, a machine trained on large volumes of data can really detect it early. that's what we've seen with breast cancer, and people are reporting it in other diseases as well. that really boils down to data, right? and the different kinds of sources of data. and you mentioned regulatory challenges — so what are the challenges in gathering large data sets in this space? again, a great question. it took me, after I decided that I wanted to work on it, two years to get access to data. right now in this country there is no publicly available data set of modern mammograms that you can just go on your computer, sign a document, and get. it just doesn't exist. obviously every hospital has its own collection of mammograms, and there are data sets that came out of clinical trials. what we're talking about here is a computer scientist who just wants to run his or her model and see how it works — this data, like ImageNet, doesn't exist. there is a data set, which is called, like, the Florida data set, which is film mammograms from the 90s — totally not representative of current developments; whatever you're learning on them doesn't scale up. that is the only resource that is available. and today there are many agencies that govern access to data: the hospital holds your data, and the hospital decides whether they will give it to a researcher to work with. an individual hospital? yeah. I mean, let's say you're doing research with the hospital — you can submit, there is an appropriate approval process guided by an IRB, and if you go through all the processes you can eventually get access to the data. but in the AI community, not that many people actually have access to data, because it's a very challenging process. and sorry,
just a quick comment: at MGH or any kind of hospital, are they scanning the data, are they digitally storing it? oh, it is already digitally stored; you don't need to do any extra processing steps, it's already there, in the right format. now, there are a lot of issues that govern access to the data, because the hospital is legally responsible for the data, and they have a lot to lose if they give the data to the wrong person, but they may not have a lot to gain if they give it — as a hospital, as a legal entity — to you. what I imagine happening in the future is the same thing that happens when you're getting your driving license: you can decide whether you want to donate your organs. you can imagine that whenever a person goes to the hospital, it should be easy for them to donate their data for research, and it can be of different kinds — do they give only their test results, or only the mammogram, only imaging data, or the whole medical record — because at the end, we will all benefit from all these insights. and it's not like you say, I want to keep my data private but I would really love to get it from other people — because other people think in the same way. so if there is a mechanism to do this donation, and the patient has the ability to say how they want their data used for research, it would really be a game-changer. when people think about this problem — it depends on the population and the demographics — there are some privacy concerns, generally; not just medical data, any kind of data. like you said, it's my data, it should kind of belong to me; I'm worried how it's going to be misused. how do we alleviate those concerns? it seems like that problem of trust, of transparency, needs to be solved before we build large data sets that help detect cancer, help save those very people in the future. so there are several things that could be
done: there are technical solutions and there are societal solutions. on the technical end, we today have the ability to improve de-identification — for imaging, for instance, you can do it pretty well. what is de-identification? it's removing the identification, removing the names of the people. for other data, like free text, you cannot really achieve 99.9 percent. and then there are all these techniques — some of them developed at MIT — where you can do learning on encoded data: you locally encode the image, you train a network which only works on the encoded images, and then you send the outcome back to the hospital, where it can be opened up. those are the technical solutions, and there are a lot of people working in this space, where the learning happens in the encoded form. we're still early, but this is an interesting research area. where I think we will make more progress: there is a lot of work in the natural language processing community on how to do de-identification better. but even today there is already a lot of data which can be de-identified perfectly — like your test data, for instance, where you can just remove the name of the patient and extract the part with the numbers. the big problem here is, again, that hospitals don't see much incentive to give this data away, on one hand, and then there is the general concern. now, when I'm talking about societal solutions, it's about education: the public needs to understand. I think there are situations — and I still remember myself when I really needed an answer; I had to make a choice, there was no information to make the choice, you're just guessing — and at that moment you feel that your life is at stake, but you just don't have the information to make the choice. and many times, when I give talks, I get emails from women who say, you know, I'm in this situation, can you please run the statistics and see what the outcomes are. almost every week, a mammogram comes
by mail to my office at MIT — I'm serious — that people ask us to run, because they need to make, you know, life-changing decisions. and of course, I'm not planning to open a clinic here, but we do run it and give them the results, for their doctors. the point that I'm trying to make is that we all, at some point — or our loved ones — will be in a situation where you need information to make the best choice, and if this information is not available, you will feel vulnerable and unprotected. and then the question is, what do I care more about — because at the end, everything is a trade-off. correct, yeah, exactly. just out of curiosity — it seems like one possible solution; I'd like to see what you think of it, based on what you just said, based on wanting to know answers for anyone, yourself included, in that situation: is it possible for patients to own their data, as opposed to hospitals owning their data? of course, theoretically, I guess patients own their data, but can you walk out of there with a USB stick containing everything, or upload it to the cloud? or a company — I remember Microsoft had a service, and Google Health was there; I tried it and I was really excited about it — basically companies helping you upload your data to the cloud, so that you can move from hospital to hospital, from doctor to doctor. do you see promise in that kind of possibility? I absolutely think this is the right way to exchange the data. I don't know who the biggest player in this field is now, but I can clearly see that — even for totally selfish health reasons — when you are going to a new facility, and many of us are sent to some specialized treatment, they don't easily have access to your data. and today, if you want to send this mammogram, you need to go to the hospital, find some small office which gives you a CD, and they ship the CD. you can imagine — we're looking at a kind of decades-old mechanism of data exchange. so I
definitely think this is an area where, hopefully, all the right regulatory and technical forces will align, and we will see it actually implemented. it's sad, because unfortunately — and I need to research why that happened — I'm pretty sure Google Health and Microsoft's HealthVault, or whatever it's called, both closed down, which means that there was either regulatory pressure, or there's not a business case, or there are challenges from hospitals — which is very disappointing. so when you say you don't know who the biggest players are: the two biggest that I was aware of closed their doors. so I'm hoping — I'd love to see why, and I'd love to see who else can step up. it seems like one of those Elon Musk-style problems that are obvious, that need to be solved, and somebody needs to step up and actually do this large-scale data collection. I know there is an initiative in Massachusetts — I think led by the governor — to try to create this kind of health exchange system, or at least to help people when you show up in the emergency room and there is no information about your allergies and other things. I don't know how far it will go. but another thing you said, and I find it very interesting, is: who are the successful players in this space, and the whole implementation — how does it go? to me, from the anthropological perspective, it's almost more fascinating than the AI — how AI today goes into healthcare. we've seen so many attempts and so very few successes, and it's interesting to understand — though I by no means have the knowledge to assess why we are in the position where we are. yeah, it's interesting, because data is really fuel for a lot of successful applications, and when that data requires regulatory approval, like the FDA or any kind of approval, it seems that the computer scientists are not quite there yet in being able to play the regulatory game, understanding the fundamentals of it. I think that in many cases, even when people do have data, we
still don't know what exactly you need to demonstrate to change the standard of care. let me give you an example related to my breast cancer research. in traditional breast cancer risk assessment there is something called density, which determines the likelihood of a woman to get cancer, and this is pretty much how much white you see on the mammogram: the whiter it is, the more likely the tissue is dense. the idea behind density is not a bad idea: in 1967 a radiologist called Wolfe decided to look back at women who were diagnosed and see what was special in their images, whether you could look back and say who is likely to develop cancer. so he came up with some patterns; it was the best that his human eye could identify. then it was formalized and coded into four categories, and that is what we are using today. and today this density assessment is actually part of a federal law from 2019, approved by President Trump and the previous FDA commissioner, where women are supposed to be advised by their providers if they have high density, putting them into a high-risk category, and in some states you can actually get supplementary screening paid by your insurance because you are in this category. now you can ask how much science we have behind it, whatever biological or epidemiological evidence. it turns out that between 40 and 50 percent of women have dense breasts, so about 40 percent of patients come out of their screening and somebody tells them: you are at high risk. now what exactly does that mean, if half of the population is at high risk? is it real risk? what do I really need to do with it? the system doesn't provide me a lot of solutions, because there are so many people like me; we cannot really provide very expensive solutions for all of them. and the reason this whole density became such a big deal is that it was actually advocated by the patients, who felt very unprotected, because many women did the
mammograms, which were normal, and then it turned out that they already had cancer, quite developed cancer, so they didn't have a way to know who is really at risk, and what the likelihood is that when the doctor tells you you're okay, you are not okay. at the time, and it was 15 years ago, this maybe was the best piece of science that we had, and it took quite a while, 15 or 16 years, to make it federal law. but now this is the standard. now, with the deep learning model, we can so much more accurately predict who is going to develop breast cancer, just because you train on the outcome: instead of describing how much white and what kind of white, the machine can systematically identify the patterns. that was the original idea behind density, and the machines can do it much more systematically and predict the risk, when you train the machine to look at the image and predict the risk in one to five years. now you can ask me how long it will take to substitute this density measure, which is broadly used across the country and really isn't helping, with these new models. and I would say it's not a matter of the algorithm; the algorithm is already orders of magnitude better than what is currently in practice. I think it's really the question of who you need to convince, how many hospitals you need to run the experiment in, all this mechanism of adoption, and how you explain to patients and to women across the country that this is really a better measure. and again, I don't think it's an AI question. we can work more and make the algorithm even better, but I don't think that is the current barrier; the barrier is really this other piece, which for some reason is not really explored. it's like anthropological research. and coming back to the question about books, there is a book that I am reading called An American Sickness, by Elisabeth Rosenthal, and I got this book from my clinical collaborator, Dr.
Connie Lehman. and I said, I know everything that I need to know about the American health system, but every page doesn't fail to surprise me, and I think there are a lot of interesting and really deep lessons for people like us, coming from computer science into this field, to really understand how complex the system of incentives is, and how you really need to play to drive adoption. you just said it's complex, but if we're trying to simplify it: who do you think would most likely lead to success if we push on that group of people? is it the doctors, is it the hospitals, is it the government, the policymakers, is it the individual patients, the consumers? who needs to be inspired to most likely lead to adoption, or is there no simple answer? there's no simple answer, but I think there are a lot of good people in the medical system who do want to make a change, and I think a lot of power will come from us as consumers, because we all are consumers, or future consumers, of healthcare services. and I think we can do so much more in explaining the potential, not in hype terms, not claiming that we've now cured all the diseases; I'm really sick of reading the kind of articles which make these claims. but really showing, with some examples, what this implementation does and how it changes the care, because I can't imagine, it doesn't matter what kind of politician it is, we all are susceptible to these diseases. there is no one who is free, and eventually we all are humans, and we're looking for ways to alleviate the suffering, and this is one possible way which we are currently underutilizing, and which I think can help. so it sounds like the biggest problems are outside of AI, in terms of the biggest impact at this point. but are there any open problems in the application of ML to oncology in general, improving the detection, or any other creative methods, whether it's on the detection and segmentation, the vision perception side, or some other
clever methods of inference? what, in general, is your view of the open problems in this space? yeah, so I just want to mention, beside detection, another area where I am quite active, and I think it's a really increasingly important area in healthcare: drug design. it's fine if you detect something early, but you still need to get drugs, new drugs, for these conditions, and today ML in drug design is essentially nonexistent: we don't have any drug that was developed by an ML model, or even one where an ML model played a significant role. I think this area, with all the new ability to generate molecules with desired properties, to do in silico screening, is really a big open area. and to be totally honest with you, when we are doing diagnostics and imaging, we are primarily taking the ideas developed for other areas and applying them with some adaptation, whereas the area of drug design is a very technically interesting and exciting area: you need to work a lot with graphs and capture various 3D properties. there are lots and lots of opportunities to be technically creative, and I think there are a lot of open questions in this area. we're already seeing a lot of successes, even with the first generation of these models, but there are many more new, creative things you can do, and what's very nice to see is that the more powerful, more interesting models actually do better. so there is a place to innovate in machine learning in this area, and some of these techniques are really unique to, let's say, graph generation and other things. so, just to interrupt quickly, I'm sorry: graph generation, or graphs, drug discovery in general, how do you discover a drug? is this chemistry, is this trying to predict different chemical reactions, or is it some kind of, how are graphs even represented in this space, and what's a drug? okay,
so there are many different types of drugs, but let's talk about small molecules, because I think today the majority of drugs are small molecules. a small molecule is a graph: a node in the graph is an atom, and then you have the bonds. so it's really a graph representation, if you look at it in 2D. correct, you can do it in 3D, but let's keep it simple and stick to 2D. so, pretty much, my understanding of how it is done today at scale in the companies, without machine learning: you have high-throughput screening. you know that you are interested in getting a certain biological activity from the compounds, so you scan a lot of compounds, maybe hundreds of thousands, some really big number of compounds, you identify some compounds which have the right activity, and at this point the chemists come in, and they try to optimize this original hit for the different properties that you want it to have: maybe you want it to be soluble, you want to decrease toxicity, you want to decrease the side effects, and so on. can that be done in simulation, or just by looking at the molecules, or do you need to actually run reactions in real labs? so, when you do high-throughput screening, you really do screening; it's really lab screening, you screen the molecules. correct, in this screening you just check them for a certain property, like, in the physical space, in the physical world; there's probably a machine that's actually running the reactions. yeah, so there is a process where you can run these assays in high throughput; it has become cheaper and faster to do it on very big numbers of molecules. you run the screening, you identify potential good starting points, and then the chemists come in, who have done it many times, and they can try to look at it and say, how can I change the molecule to get the desired
profile in terms of all the other properties, so maybe how do you make it more bioactive, and so on. and the creativity of the chemists is really what determines the success of this design, because they have a lot of domain knowledge of what works, how you decrease the toxicity, and so on, and that's what they do. so all the drugs that are currently FDA approved, or drugs that are in clinical trials, were designed using these domain experts, who go through this combinatorial space of molecular graphs, or whatever, and find the right ones, and adjust them to be the right ones. sounds like the breast density heuristic from '67; the same echoes. is it unnecessary? no, it's really driven by deep understanding; it's not that they just observe it. I mean, they do deeply understand chemistry, and they do understand how different groups change the properties, so there is a lot of science that goes into it, and a lot of simulation of how you want it to behave. it is very, very complex. and they're quite effective at this? is it effective? yeah, we have drugs, like aspirin. how do you measure effectiveness? if you measure it in terms of cost, it's prohibitive. if you measure it in terms of the diseases we can treat, we have lots of diseases for which we don't have any drugs and we don't even know how to approach them, not to mention the few drugs for neurodegenerative diseases, drugs that fail. there are lots of trials that fail in later stages, which is really catastrophic from the financial perspective. so is it the most effective mechanism? absolutely not, but this is the only one that currently works. and I was closely interacting with people in the pharmaceutical industry, and I was really fascinated by how sharp they are and what a deep understanding of the domain they have. it's not observation-driven; there is really a lot of science behind
what they do. but if you ask me, can machine learning change it? I firmly believe yes, because even the most experienced chemist cannot hold in their memory and understanding everything that a machine can learn from millions of molecules and reactions. and this space of graphs is a totally new space; it's a really interesting space for machine learning to explore, graph generation. yeah, so there are a lot of things that you can do here, and we do a lot of work. the first tool that we started with was a tool that can predict properties of molecules: you give it the molecule and a property, it can be a bioactivity property or some other property, you train on molecules, and you can then take a new molecule and predict this property. when people started working in this area, they used something very simple: the existing fingerprints, which are kind of handcrafted features of the molecule, where you break the graph into substructures, and then you run them through a feed-forward neural network. and what was interesting to see is that, clearly, this was not the most effective way to proceed, and you need much more complex models that can induce the representation, that can translate this graph into the embeddings and do these predictions. so this is one direction. and another direction, which is kind of related, is not only to stop at looking at the embedding itself, but to actually modify it to produce better molecules. you can think about it as machine translation: you start with a molecule, and then there is an improved version of the molecule, and you can, again with an encoder-decoder, translate it into the hidden space and learn how to modify it to improve the molecule in some ways. so that's really exciting. we've already seen that the property prediction works pretty well, and now we are generating molecules, and there are actually labs which are manufacturing these molecules.
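as a rough illustration of the "fingerprint" representation described here, breaking the molecular graph into substructures and hashing them into a fixed-length vector that a feed-forward network could consume, a toy, Morgan/ECFP-flavored sketch might look like this; the atom and bond encoding, hash, radius, and bit-vector size are all illustrative choices, not any real cheminformatics library's implementation:

```python
import zlib

# Toy molecule-as-graph fingerprint: nodes are atoms, edges are bonds, and
# ever-larger atom neighborhoods are hashed into a fixed-length bit vector.
# Illustrative only; real pipelines use libraries such as RDKit.
def fingerprint(atoms, bonds, radius=2, nbits=64):
    # Build an adjacency list from the bond pairs.
    adj = {i: [] for i in range(len(atoms))}
    for a, b in bonds:
        adj[a].append(b)
        adj[b].append(a)
    ids = list(atoms)  # initial substructure id = the element symbol itself
    bits = [0] * nbits
    for _ in range(radius + 1):
        for ident in ids:
            # record each substructure seen so far as one bit
            bits[zlib.crc32(ident.encode()) % nbits] = 1
        # grow each id by appending its (sorted) neighbors' current ids
        ids = [f"{ids[i]}({','.join(sorted(ids[j] for j in adj[i]))})"
               for i in range(len(atoms))]
    return bits

# ethanol sketched as C-C-O: three atoms, two bonds
fp = fingerprint(["C", "C", "O"], [(0, 1), (1, 2)])
```

a feed-forward network would then take `fp` as input; the learned-embedding models described above instead replace this fixed hashing with trainable neighborhood-aggregation functions.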
we'll see where it will get us. okay, that's really exciting, so there's a lot of promise. speaking of machine translation and embeddings, you have done a lot of really great research in NLP, natural language processing. can you tell me about your journey through NLP? what ideas, problems, approaches were you working on, were you fascinated with, did you explore, before this magic of deep learning re-emerged, and after? so when I started working in NLP, it was 1997. this was a very interesting time; it was exactly the time that I came to ACL, and the time I could barely understand English. and it was exactly the transition point, because half of the papers were really rule-based approaches, where people took more heavy linguistic approaches for small domains and tried to build up from there, and then there was the first generation of papers which were corpus-based papers, and they were very simple, in our terms: you collect some statistics and do prediction based on them. and I found it really fascinating that one community can think so very differently about the problem. and I remember the first paper that I wrote didn't have a single formula, it didn't have evaluation, it just had examples of outputs, and this was the standard of the field at the time. people were maybe just starting to emphasize empirical evaluation, but for many applications, like summarization, you just showed some examples of outputs. and then, increasingly, you could see how the statistical approaches dominated the field, and we've seen increased performance across many basic tasks. the sad part of the story, maybe, is that if you look again through this journey, we see that the role of linguistics in some ways greatly diminished, and I think you really need to look through a whole proceedings today to find more than a couple of papers which make some interesting linguistic references; today it's a very different
activity. does that go against our conversation about human understanding of language, which I guess is what linguistics would be, structured parse trees, representing language in a way that's human-explainable, understandable? is that missing? well, what is explainable and understandable? in the end, we perform functions, and it's okay to have a machine which performs a function. think about your calculator: correct, your calculator does calculation very differently from how you would do it, but it's very effective. and this is fine: if we can achieve certain tasks with high accuracy, it doesn't necessarily mean the machine has to understand them the same way we understand them. in some ways it's even naive to request that, because you have so many other sources of information that are absent when you are training your system. so it's okay if it delivers. let me tell you about one application, this is really fascinating: in '97, when I came to ACL, there were some papers on machine translation, and they were primitive; people were trying really, really simple things. and my feeling was that to make a real machine translation system is like flying to the moon, building a house there in the garden, and living happily ever after; I mean, it's impossible. I could never imagine that within ten years we would already see the system working, and now nobody is even surprised to use the system on a daily basis. so this was huge, huge progress, on something that people for a very long time tried to solve using other mechanisms, and they were unable to solve it. and that's why, coming back to the question about linguistics: people tried to go this way, tried to write the syntactic trees, to abstract it, to find the right representation, and they couldn't get very far with this understanding, while these models, using other sources, were actually capable of making a lot of progress. now, I'm not
naive enough to think that we are in some paradise place in NLP. it shows: when we slightly change the domain, or when we decrease the amount of training data, it can do really bizarre and funny things. but I think it's just a matter of improving generalization capacity, which is just a technical question. wow, so that's the question: how much of language understanding can be solved with deep neural networks, in your intuition? I mean, it's unknown, I suppose, but as we start to creep towards romantic notions of the spirit of the Turing test, and conversation and dialogue, something that, maybe to me, or to us silly humans, feels like it needs real understanding, how much can be achieved with these neural networks or statistical methods? so I guess I am very much driven by the outcomes: can we achieve the performance which will be satisfactory for us, for different tasks? now, if you again look at machine translation systems, which are trained on large amounts of data, they really can do a remarkable job relative to where they were a few years ago, and if you project into the future, if it continues at the same speed of improvement, this is great. now, does it bother me that it's not doing the same translation as we are doing? if you go to cognitive science, we still don't really understand what we are doing. there are a lot of theories, and there is obviously a lot of progress in understanding, but our understanding of what exactly goes on in our brains when we process language is still not crystal clear and precise enough that we can translate it into machines. what does bother me is that machines can be extremely brittle when you go out of their comfort zone, when there is a distributional shift between training and testing. and it has been years and years: every year when I teach the NLP class, I show my students some examples of translation from some newspaper in Hebrew, whatever, and it is perfect, and
then I have a recipe that a friend of mine sent me a while ago, written in Finnish, for Karelian pies, and it's just a terrible translation; you cannot understand anything of what it says. it's not like some syntactic mistakes, it's just terrible, and year after year I try to translate it, and year after year it does this terrible work, because I guess recipes are not a big part of their training repertoire. but in terms of outcomes, that's a really clean, good way to look at it. I guess the question I was asking is: do you think, if you imagine the future, the current approaches can pass the Turing test, in the best possible formulation of the Turing test, which is: would you want to have a conversation with a neural network for an hour? oh god, no. but there are some people in this world, alive or not, that you would like to talk to for an hour; could a neural network achieve that outcome? so I think it would be really hard to create a successful training set which would enable it to have a truly contextual conversation for an hour. it's a problem of data? I think in some ways it's a problem both of data and of the way we are training our systems: their ability to truly generalize, to be compositional, is in some ways limited in the current capacity. we can translate well, we can find information well, we can extract information, so there are many capacities in which it's doing very well, and you can ask me, would I trust the machine to translate for me and use it as a source? I would say absolutely, especially if we are talking about newspapers or other data which is in the realm of its training. but having conversations with the machine is not something that I would choose to do. but I will tell you something: talking about the Turing test, and about all these Eliza conversations, I remember visiting Tencent in
China, and they have this chatbot, and they claim that a really humongous number of the local population talks for hours to this chatbot. to me it was: I cannot believe it. but apparently it's documented that there are some people who enjoy these conversations. and it brought to mind another MIT story, about Eliza and Weizenbaum. I don't know if you are familiar with the story: Weizenbaum was a professor at MIT, and he developed this Eliza, which was just doing string matching, very trivial, like restating what you said, with very few rules, no syntax. apparently there were secretaries at MIT that would sit for hours and converse with this trivial thing, and at the time there were no beautiful interfaces, so you actually needed to go through the pain of communicating. and Weizenbaum himself was so horrified by this phenomenon, that people can attribute so much to the machine, that you just need to give them a hint that the machine understands you and they complete the rest, that he stopped this research and went into trying to understand what this artificial intelligence can do to our brains. so my point is, it's not how good the technology is, it's how ready we are to believe that it delivers the goods that we are trying to get. that's a really beautiful way to put it. I, by the way, am not horrified by that possibility, but inspired by it, because human connection, whether it's through language or through love, seems like it's very amenable to machine learning, and the rest is just challenges of psychology. like you said, the secretaries who enjoyed spending hours: I would describe most of our lives as enjoying spending hours with those we love, for very silly reasons. all we're doing is keyword matching as well, so I'm not sure how much intelligence we exhibit to each other with the people we love, that we're close with. so it's a very interesting point of what it means to pass the Turing test.
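the Eliza mechanism described here, trivial string matching that restates the user's words with a few rules and pronoun swaps, can be sketched in a few lines; the rules below are invented for illustration, not Weizenbaum's original script:

```python
import random
import re

# Pronoun swaps so the restatement reads back naturally ("my" -> "your", ...).
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# A handful of keyword rules; each captures the rest of the utterance and
# restates it as a question. These rules are made up for this sketch.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
]
DEFAULTS = ["Please tell me more.", "How does that make you feel?"]

def reflect(text):
    # Swap first/second person, word by word.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(reflect(match.group(1)).rstrip(".!?"))
    return random.choice(DEFAULTS)  # no keyword matched: generic prompt
```

for example, `respond("I need my coffee")` restates the input as "Why do you need your coffee?"; there is no syntax and no understanding, just the matching and reflection above.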
well, with language. I think you're right: in terms of conversation, machine translation has very clear performance and improvement, right? what it means to have a fulfilling conversation is very person-dependent and context-dependent, and so on. yeah, it's very well put. so, in your view, what's a benchmark, a natural language test, that's just out of reach right now but that we might be able to reach, that's exciting? is it perfecting machine translation, is there something else, is it summarization, what's out there? it goes across specific applications. it's more about the ability to learn from few examples for real, what we call few-shot learning, and all these cases. because the way we publish these papers today, we say: naively we get 55, but now we had a few examples and we can move to 65. none of these methods are actually realistically doing anything useful; you cannot use them today. and the ability to generalize, to move fast, to be autonomous in finding the data that you need to learn from, to be able to perform new tasks or handle a new language, this is an area where I think we really need to move forward, and we are not yet there. are you at all excited, curious, about the possibility of creating human-level intelligence? because you've been very pragmatic in this discussion: if we look at oncology, you're trying to use machine learning to help the world in terms of alleviating suffering; if you look at natural language processing, you focus on the outcomes, on improving practical things like machine translation. but human-level intelligence is the thing our civilization is dreaming about creating, superhuman-level intelligence. do you think about this, do you think it's at all within our reach? so, as you said yourself, talking about how you perceive our communications with each other, that we're matching keywords and certain behaviors, and so on: in the end, whenever one assesses,
let's say, relations with another person, you have a separate kind of measurements and outcomes inside you that determine what the status of the relation is. so this is the classical dilemma of what intelligence is: is it the fact that we now do things the same way a human is doing them, when we don't even understand what the human is doing? or is it that we now have the ability to deliver these outcomes, not in one area, not in one narrow area, not just to translate or just answer questions, but in many, many areas where we can achieve the functionality that humans can achieve, with the ability to learn and do other things? I think this is something where we can actually measure how far we are, and that's what makes me excited: that in my lifetime, at least so far, what we've seen is tremendous progress across these different functionalities, and I think it will be really exciting to see where we will be. and again, one way to think about it is that there are machines which are improving their functionality; another is to think about us, with our brains, which are imperfect, and how they can be accelerated by this technology as it becomes stronger and stronger. coming back to another book that I love, Flowers for Algernon. have you read this book? yes. so there is this point where the patient gets this miracle cure which changes his brain, and all of a sudden he sees life in a different way and can do certain things better, but certain things much worse. so you can imagine this kind of computer-augmented cognition, where, in the same way as cars enable us to get to places where we've never been before, can we think differently, can we think faster? and we already see a lot of it happening, in how it impacts us, but I think we have a long way to go there. so that's artificial intelligence and technology affecting, augmenting, our intelligence as humans. yesterday a company called Neuralink announced, they did this whole demonstration. I
don't know if you saw it; they demonstrated a brain-computer, brain-machine interface, where there's like a sewing machine for the brain. a lot of that is quite out there, in terms of things that some people would say are impossible, but they're dreamers and want to engineer systems like that. do you see, based on what you just said, hope for that more direct interaction with the brain? I think there are different ways. one is a direct interaction with the brain, and again, there are lots of companies that work in this space, and I think there will be a lot of developments. but I'm just saying that many times we are not aware of our feelings, of our motivation, of what drives us. let me give you a trivial example: our attention. there are a lot of studies that demonstrate that it takes a while for a person to understand that they are not attentive anymore, and we know that there are people who really have a strong capacity to hold attention, and, at another end of the spectrum, people with ADD and other issues who have problems regulating their attention. imagine you have a cognitive aid that just alerts you, based on your gaze, that your attention is now not on what you are doing: you're sort of writing a paper, and you're now dreaming of what you're going to do in the evening. even simple measurement things like this, how they can change us. and I see it even in these simple ways with myself. I have my MyZone app, with which I go to the MIT gym; it records how much you run, and you have points, and you can get some status, whatever. I thought, what is this ridiculous thing, who would ever care about some status in some app? guess what: to maintain the status you have to get a certain number of points every month, and not only do I do it, I've done it every single month for the last 18 months. it went to the point that I was running, and then I was injured, and when I could run again, within two days I did some humongous amount of running to
complete the points. it was really not safe; I was just like, I'm not gonna lose my status. so you can already see what this direct measurement and feedback does. we look at video games and see the addiction aspect of it, but you can imagine that the same idea can be expanded to many other areas of our life, where we really can get feedback. and imagine, in your case, in relations, when we are doing keyword matching: imagine that the person who is generating the keywords gets direct feedback before the whole thing explodes, saying, maybe at this point we are going in the wrong direction. a really behavior-modifying moment. so yeah, it's relationship management too. so yeah, that's a fascinating whole area of psychology, actually, as well: seeing how our behavior has changed, with basically all human relations now having other, non-human entities helping us out. so, you teach a huge machine learning course here at MIT. I could ask you a million questions, but you've seen a lot of students: what ideas do students struggle with the most as they first enter this world of machine learning? actually, this year was the first time I started teaching a small machine learning class, and it came as a result of what I saw in my big machine learning class, the one Tommi Jaakkola and I built maybe six years ago. what we've seen is that as this area becomes more and more popular, more and more people at MIT want to take this class, and while we designed it for computer science majors, there were a lot of people who were really interested in learning it, but unfortunately their background was not enabling them to do well in the class, and many of them came to associate machine learning with struggle and failure, primarily the non-majors. and that's why we started the new class, which we call Machine Learning from Algorithms to Modeling, which emphasizes more the modeling aspects of it, and takes both majors and non-majors. so we kind of try to extract the relevant parts and make
it more accessible, because the fact that we're teaching 20 classifiers in a standard machine learning class, it's really a big question whether we need that. and it was interesting to see, from this first generation of students, when they came back from their internships and from their jobs, what different and exciting things they can do; I would never have thought that you could even apply machine learning to some of them, like matching, you know, relations. that actually brings up an interesting point about computer science in general. it almost seems, maybe I'm crazy, but it almost seems like everybody needs to learn how to program these days. if you're 20 years old, or if you're starting school, even if you're an English major, it seems like programming unlocks so much possibility in this world. so when you interacted with those non-majors, are there skills that they were simply lacking at the time, that you wish they had, that they could have learned in high school and so on? how should education change in this computerized world that we live in? interestingly, because they knew there is a Python component in the class, their Python skills were okay, and the class is not really heavy on programming; they primarily had to add parts to existing programs. I think it was more the mathematical barriers. the class, which again was designed for majors, was using notation like big O for complexity and others, and people who come from different backgrounds just don't have it in their lexicon. it's not a really challenging notion, but they were just not aware of it. so I think that linear algebra and probability, the basics, calculus, multivariate calculus, are the things that can help. what advice would you give to students interested in machine learning, interested in, you've talked about detecting and curing cancer, drug design; if they want to get into that field, what should they do to get into it and succeed, as researchers and
entrepreneurs? The first good piece of news is that right now there are lots of resources, you know, created at different levels that you can find online — classes which are more mathematical, more applied, and so on. So you can find kind of an entry point which speaks your own language, where you can enter the field, and you can make many different types of contribution depending on, you know, what your strengths are. And the second point, I think, is that it's really important to find some area which you really care about, and it can motivate your learning. It can be, for somebody, curing cancer, or doing self-driving cars, or whatever — but to find an area where, you know, there is data, where you believe there are strong patterns, and where we should be doing it and we're still not doing it, or you can do it better — and just start there and see where it can bring you. So you've been very successful in many directions in life, but you also mentioned Flowers for Algernon, and I think I've read or listened to you mention somewhere that researchers often get lost in the details of their work — this is per our earlier discussion about cancer and so on — and don't look at the bigger picture, the bigger questions of meaning and so on. So let me ask you the impossible question: what's the meaning of this thing — of life, of your life, of research? Why do you think we descendants of great apes are here on this spinning ball? You know, I don't think that I have really a global answer — you know, maybe that's why I didn't go into the humanities and didn't take humanities classes in undergrad. But the way I'm thinking about it is that each one of us has inside of us our own set of, you know, things that we believe are important, and it just happens that we are busy with achieving various goals, or busy listening to others and kind of trying to conform and to be part of the crowd, and we don't listen to that part. And, you know, we all should find some time to understand what our own individual missions are, and we
may have very different missions — and to make sure that while we are running after ten thousand things, we are not, you know, missing out, and we're putting all the resources toward satisfying our own mission. And if I look over my time, when I was younger, most of these missions, you know — I was primarily driven by external stimulus, you know, to achieve this or to be that. And now a lot of what I do is driven by really thinking about what is important for me to achieve, independently of the external recognition. And, you know, I don't mind being viewed in certain ways — the most important thing for me is to be true to myself, to what I think is right. How long did it take, how hard was it, to find the you you have to be true to? So it takes time, and even now, sometimes, you know, the vanity and the triviality can take over. Yeah, it can, everywhere. You know, the vanity is different — the vanity is different in different places, but we all have our piece of vanity. But I think, actually, for me, many times the place to get back to it is, you know, when I'm alone, and also when I read — and I think by selecting the right books, you can get the right questions and learn from what you read. But again, it's not perfect — sometimes vanity still dominates. Well, that's a beautiful way to end it. Thank you so much for talking today. Yeah, it was fun.
Colin Angle: iRobot CEO | Lex Fridman Podcast #39
the following is a conversation with Colin Angle. He's the CEO and co-founder of iRobot, a robotics company that for 29 years has been creating robots that operate successfully in the real world — not as a demo or on a scale of dozens, but on a scale of thousands and millions. As of this year, iRobot has sold more than 25 million robots to consumers, including the Roomba vacuum-cleaning robot, the Braava floor-mopping robot, and soon the Terra lawn-mowing robot. 25 million robots successfully operating autonomously in real people's homes, to me, is an incredible accomplishment of science, engineering, logistics, and all kinds of general entrepreneurial innovation. Most robotics companies fail; iRobot has survived and succeeded for 29 years. I spent all day at iRobot, including a long tour and conversation with Colin about the history of iRobot, and then sat down for this podcast conversation that would have been much longer if I hadn't spent all day learning about and playing with the various robots in the company's history. I'll release the video of the tour separately. Colin, iRobot, its founding team, its current team, and its mission have been and continue to be an inspiration to me and thousands of engineers who are working hard to create AI systems that help real people. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N.
and now, here's my conversation with Colin Angle. In his 1942 short story "Runaround," from his I, Robot collection, Isaac Asimov proposed the Three Laws of Robotics — in order: don't harm humans, obey orders, protect yourself. So two questions: first, does the Roomba follow these three laws? And also, more seriously, what role do you hope to see robots take in modern society and in the future world? So, the Three Laws are very thought-provoking and require such a profound understanding of the world a robot lives in, the ramifications of its actions, and its own sense of self, that it's not a relevant bar — at least, it won't be a relevant bar for decades to come. And so, if Roomba follows the Three Laws — and I believe it does — you know, it is designed to help humans, not hurt them, it's designed to be inherently safe, and we design it to last a long time. It's not through any AI or intent on the robot's part; it's because following the Three Laws is aligned with being a good robot product. So I guess it does, but not by explicit design. So then, the bigger picture: what role do you hope to see robots take in what's currently mostly a world of humans? We need robots to help us continue to improve our standard of living. We need robots because the average age of humanity is increasing very quickly, and simply the number of people young enough and spry enough to care for the growing elderly demographic is inadequate. And so, what is the role of robots today? The role is to make our lives a little easier, a little cleaner, maybe a little healthier. But in time, robots are going to be the difference between real, gut-wrenching declines in our ability to live independently and maintain our standard of living, and a future that is a bright one — where we have more control of our lives and can spend more of our time focused on activities we choose. And so I'm honored and excited to be playing a role in that journey. So you gave me a tour, you showed me some of the long history — now 29 years — that iRobot has
been at it, creating some incredible robots. You showed me PackBot, you showed me a bunch of other stuff that led up to Roomba, that led to Braava and Terra. So let's skip that incredible history in the interest of time, because we already talked about it, and I'll show this incredible footage separately. You mentioned the elderly, and robotics in society — I think the home is a fascinating place for robots to be. So where do you see robots in the home currently? I would say, once again, probably most homes in the world don't have a robot — so how do you see that changing? What do you think is the big initial value-add that robots can bring? So iRobot has sort of, over the years, narrowed in on the home — the consumer's home — as the place where we want to innovate and deliver tools that will help a home be a more automatically maintained place, a healthier place, a safer place, and perhaps even a more efficient place to be. And, you know, today we vacuum, we mop, soon we'll be mowing your lawn. But where things are going is: when do we get to the point where the home — not just the robots that live in your home, but the home itself — becomes part of a system that maintains itself and plays an active role in caring for and helping the people who live in that home? And I see everything that we're doing as steps along the path toward that future. So what are the steps? If we can summarize some of the history of Roomba — you've mentioned it, and maybe you can elaborate — but you mentioned that the early days were really about taking a robot from something that works either in the lab, or something that works in the field and helps soldiers do the difficult work they do, to actually being in the hands of consumers: tens of thousands, hundreds of thousands of robots that don't break down — that people love — over months of very extensive use. So that was the big first step. And then the second big step was the ability to sense the environment, to build a map, to localize, to be able to build a picture of the home that the human
can then attach labels to, in terms of, you know, giving some semantic knowledge to the robot about its environment. OK, so those are like two huge steps. Maybe you can comment on them, but also: what is the next step of making a robot part of the home? Sure. So the goal is to make a home that takes care of itself, takes care of the people in the home, and gives the user an experience of just living their life in the home, while the home is somehow doing the right thing — lights off when you leave, cleaning up the environment. And we went from robots that were great in the lab but were both too expensive and not sufficiently capable to ever do an acceptable job of anything other than being a toy or curio in your home, to something that was both affordable and sufficiently effective to cross a threshold and drive purchase intent. Now we've disrupted the entire vacuuming industry: the number-one-selling vacuums, for example in the US, are Roombas. So not robot vacuums — vacuums. And that's really crazy and weird, and we need to pause on that — it's incredible that a robot is the number one selling thing that does something as essential as vacuuming. Yep. It's still kind of fun to say, because this was a crazy idea that just started, you know, in a room here, where we were like, do you think we can do this? Hey, let's give it a try. But now the robots are starting to understand their environment, and if you think about the next step, there are two dimensions. I've been working so hard since the beginning of iRobot to make robots autonomous — you know, so that they're smart enough and understand their task well enough that they can just go do it without human involvement. Now what I'm really excited about, and working on, is how do I make them less autonomous — meaning that the robot is supposed to be your partner, not this automaton that just goes and does what a robot does. And so if you tell it, hey, I just dropped some
flour by the fridge in the kitchen, can you deal with it? — wouldn't it be awesome if the right thing just happened based on that utterance? And to some extent, that's less autonomous, because it's actually listening to you, understanding the context and intent of the sentence, mapping it against its understanding of the home it lives in, and knowing what to do. And so that's an area of research; it's an area where we're starting to roll out features. You can now tell your robot to clean up the kitchen, and it knows what the kitchen is and can do that. And that's sort of 1.0 of where we're going. The other cool thing is that we're starting to know where stuff is. And why is that important? Well, robots are supposed to have arms, right? Data had an arm, Rosie had an arm, Robby the Robot had arms — I mean, you know, they are physical things that move around in an environment; they're supposed to do work. And if you think about it, if a robot doesn't know where anything is, why should it have an arm? But with this new dawn of home understanding that we're starting to enjoy — I know where the kitchen is, I might in the future know where the refrigerator is, I might, if I had an arm, be able to find the handle, open it, and even get myself a beer. Obviously that's one of the true dreams of robotics: to have robots bringing us a beer while we watch television. But, you know, I think that that new category of tasks, where physical manipulation — robot arms — comes in, is just a potpourri of new opportunity and excitement. And you see humans as a crucial part of that. You kind of mentioned that, and I personally find that a really compelling idea — I think full autonomy can only take us so far, especially in the home. So you see humans as helping the robot understand, or give deeper meaning to, the spatial information? Right, it's a partnership. The robot is supposed to operate according to descriptors that you would use to describe your own home. The robot is supposed to, in lieu of better
direction, kind of go about its routine, which ought to be basically right and lead to a home maintained in a way that it's learned you like — but also be perpetually ready to take direction that would activate a different set of behaviors or actions to meet a current need, to the extent it could actually perform that task. So I've got to ask you — I think this is a fundamental and fascinating question, because iRobot has been a successful company, and a rare successful robotics company. Anki, Jibo, Mayfield Robotics with their robot Kuri, CyPhy Works, Rethink Robotics — these were robotics companies that were founded and run by brilliant people, but all, very unfortunately, at least for us roboticists, went out of business recently. So why do you think they didn't last longer? Why do you think it is so hard to keep a robotics company alive? You know, I say this only partially in jest: back in the day, before Roomba, you know, I was a high-tech entrepreneur building robots, but it wasn't until I became a vacuum cleaner salesman that we had any success. I mean, the point is, technology alone doesn't equal a successful business. We need to go and find the compelling need, where the robot that we're creating can deliver clearly more value to the end user than it costs. And this is not a marginal thing, where you're looking at it and saying, it's close, maybe we can hold our breath and make it work — it's clearly more value than the cost of the robot, you know, in the store. And I think the challenge has been finding those businesses where that's true in a sustainable fashion. You know, when you get into entertainment-style things, you could be the cat's meow one year, but 85% of toys, regardless of their merit, fail to make it to their second season. It's just super hard to do. And so that's just a tough business, and there has been a lot of experimentation around what is the right type of social companion, what is the right robot in the home
that is doing something other than the tasks people do every week that they'd rather not do — and I'm not sure we've got it all figured out yet. And so you get brilliant roboticists with super interesting robots that ultimately don't quite have that magical user experience, and thus that value-benefit equation remains ambiguous. So, as somebody who dreams of robots, you know, changing the world — what's your estimate, how big is the space of applications that fit the criteria you just described, where you can really demonstrate an obvious, significant value over the alternative, non-robot solution? Well, I think that we're just about none of the way to achieving the potential of robotics at home, but we have to do it in a really eyes-wide-open, honest fashion. So another way to put that is, the potential is infinite — because we did take a few steps, but you're saying those steps are just very initial steps. The Roomba is a hugely successful product, but you're saying that's just the very, very beginning. It's the foot in the door. And, you know, I think I was lucky that in the early days of robotics, people would ask me, when are you going to clean my floor? It was something where I grew up saying, I've got all these really good ideas, but everyone seems to want their floors cleaned — so maybe we should do that, and then your good ideas earn the right to do the next thing after that. So the good ideas have to match with the desire of the people, and then the actual cost — the business, the financial aspect — has to all mesh together. Yeah. During our partnership back a number of years ago with Johnson Wax, they would explain to me that they would go into homes and just watch how people lived, and try to figure out: what were they doing that they really didn't like to do, but had to do frequently enough that it was top of mind and understood as a burden? Hey, let's make a product, or come up with a solution, to make that pain point less
challenging. And sometimes we carry certain burdens so often as a society that we actually don't even realize it — like, it's actually hard to see that that burden is something that could be removed. So it does require just going into the home and staring at it: wait, how do I actually live life, what are the pain points? Yeah, and getting those insights is a lot harder than it would seem it should be in retrospect. So, on that point — I mean, one of the big challenges of robotics is driving the cost down to something that consumers can afford. People would be less likely to buy a Roomba if it cost $500,000, right, which is probably sort of what a Roomba would have cost several decades ago. So how do you drive — which, as I mentioned, is very difficult — how do you drive the cost of a Roomba, or a robot, down such that people would want to buy it? When I started building robots, the cost of the robot had a lot to do with the amount of time it took to build it. And so we built our robots out of aluminum; I would go spend my time in the machine shop, on the milling machine, cutting out the parts, and so forth. And then, when we got into the toy industry, I realized that if we were building at scale, I could determine the cost of the robot not by adding up all the hours to mill out the parts, but by weighing it. And that's liberating. You can say, wow, the world has just changed — I can think about construction in a different way. The 3D CAD tools that are available to us today, operating at scale where I can do tooling and injection-mold an arbitrarily complicated part, and the cost is going to be basically the weight of the plastic in that part — that is incredibly exciting and liberating, and opens up all sorts of opportunities. And for the sensing part of it, where we are today is: instead of trying to build skin — which is really hard; for a long time I spent time creating strategies and ideas around how we could duplicate the skin on the human body, because
it's such an amazing sensor — instead of going down that path, why don't we focus on vision, and ask how many of the problems that face a robot trying to do real work could be solved with a cheap camera and a big-ass computer? Yeah. And Moore's Law continues to work; the cell phone industry, the mobile industry, is giving us better and better tools that can run on these embedded computers, and I think we passed an important moment maybe two years ago where you could put machine-vision-capable processors on robots at consumer price points. And I was waiting for it to happen. We avoided putting lasers on our robots to do navigation, and instead spent years researching how to do vision-based navigation, because you could just see where these technology trends were going. And between injection-molded plastic and a camera with a computer capable of running machine learning and visual object recognition, I could build an incredibly affordable, incredibly capable robot — and that's going to be the future. You know, on that point — a small tangent, but I think an important one — the only other industry in which, I would say, automation is actually touching people's lives today is autonomous vehicles. The vision you've described, of using computer vision and cheap camera sensors — there's a debate on that, of lidar versus computer vision, and Elon Musk famously said that lidar is a crutch, that really, in the long term, camera-only is the right solution — which echoes some of the ideas you're expressing. Of course, the safety criticality is different, but what do you think about that approach in the autonomous vehicle space? And in general, do you see a connection between the incredible real-world challenges you have to solve in the home with Roomba — and I saw a demonstration of some of those corner cases, literally — and autonomous vehicles? So there's absolutely a tremendous overlap between both the
problems, you know, that a robot vacuum and an autonomous vehicle are trying to solve, and the tools and the types of sensors that are being applied in the pursuit of the solutions. In my world, my environment is actually much harder than the environment of automobile travel. We don't have roads; we have t-shirts, we have steps, we have a near-infinite number of patterns and colors and surface textures on the floor. Especially from a visual perspective? Yeah, the way my world looks — it's really tough, it's infinitely variable. On the other hand, safety is way easier on my side. My robots aren't very heavy, they're not very fast; if they bump into your foot, you think it's funny. And, you know, autonomous vehicles kind of have the inverse problem. And so, for me, saying vision is the future — I can say that without reservation. For autonomous vehicles, I believe what, you know, I'm saying about the future: it's ultimately going to be vision. Maybe if we put a cheap lidar on there as a backup sensor, it might not be the worst idea in the world. The stakes are so much higher there, so you have to be much more careful in thinking through how far away that future is. Right, right. But I think that the primary environmental-understanding sensor is going to be a visual system. So, on that point, let me ask: do you hope there's an iRobot robot in every home in the world one day? I expect there to be at least one iRobot robot in every home. You know, we've sold 25 million robots, so we're in about 10 percent of US homes, which is a great start. But I think that when we think about the number of things that robots can do — you know, today we can vacuum your floor, mop your floor, soon we'll be able to cut your lawn — there are more things that we could do in the home, and I hope that, continuing to use the techniques I described around exploiting computer vision and low-cost manufacturing, we'll be able to create these solutions at affordable price points.
So let me ask, on that point of a robot in every home — that's my dream as well; I'd love to see that. You know, I think the possibilities there are indeed infinite, positive possibilities. But in our current culture — no thanks to science fiction and so on — there's a serious kind of hesitation, anxiety, concern about robots, and also a concern about privacy. And it's a fascinating question to me why that concern is, amongst a certain group of people, as intense as it is. So you have to think about it, because it's a serious concern, but I wonder how you address it best. From the perspective of a vision sensor — robots that move about the home and sense the world — how do you alleviate people's privacy concerns? How do you make sure that they can trust iRobot and the robots that they share their home with? I think that's a great question, and we've really leaned way forward on this, because, given our vision as to the role the company intends to play in the home, really, for us, make-or-break is whether our approach can be trusted to protect the data and the privacy of the people who have our robots. And so we've gone out publicly with a privacy manifesto stating we'll never sell your data. We've adopted GDPR, not just where GDPR is required, but globally. We have ensured that images don't leave the robot — so processing data from the visual sensors happens locally on the robot, and only semantic knowledge of the home, with the consumer's consent, is sent up. We show you what we know, and we are trying to use data as an enabler for the performance of the robots, with the informed consent and understanding of the people who own those robots. And, you know, we take it very seriously, and ultimately we think that by showing a customer that, you know, if you let us build a semantic map of your home and know where the rooms are, well, then you can say "clean the kitchen." If you don't want the robot to do that, don't make the map — it'll do its best job cleaning your home, but it won't be able
to do that. And if you ever want us to forget that we know it's your kitchen, you can have confidence that we will do that for you. So we're trying to be sort of a data-2.0-perspective company, where we treat the data that the robot gathers in the consumer's home as if it were the consumer's data, and they have rights to it. We think that by being the good guys on this front, we can build the trust and thus be entrusted to enable robots to do more things that are thoughtful. Do you think people's worries will diminish over time? As a society, broadly speaking, do you think you can win over trust — not just for the company, but just the comfort people have with AI in their home enriching their lives in some way? I think we're in an interesting place today, where it's less about winning them over and more about finding a way to talk about privacy in a way that more people can understand. I would tell you that today, when there's a privacy breach, people get very upset — and then go to the store and buy the cheapest thing, paying no attention to whether or not the products that they're buying honor privacy standards. In fact, if I put on the package of my Roomba the privacy commitments that we have, I would sell fewer than I would if I did nothing at all, and that needs to change. So it's not a question about earning trust — I think that's necessary but not sufficient. We need to figure out how to have a comfortable set of — what is the grade-A meat standard applied to privacy? — that customers can trust and understand, and then use in their buying decisions. That will reward companies for good behavior, and that will ultimately be how this moves forward. And maybe it becomes part of the conversation between regular people about what privacy means — if you have some standards, you can start talking about who's following them and who's not. Because most people are actually quite clueless about all aspects of artificial intelligence, of data collection and so on, it
would be nice to change that — for people to understand the good that AI can do, and that it's not some system that's trying to steal all their most sensitive data. Yep. Do you dream of a Roomba with human-level intelligence one day? You've mentioned a very successful localization and mapping of the environment, being able to do some basic communication — to say, go clean the kitchen. Do you see, in your maybe more bored moments, once you get that beer, sitting back with the beer and having a chat on a Friday night with the Roomba about how your day went? To your latter question, absolutely; to your former question, as to whether a robot can have human-level intelligence — not in my lifetime. I think you can have a great conversation, a meaningful conversation, with a robot without it having anything that resembles human-level intelligence. And I think that as long as you realize the conversation is not about the robot and making the robot feel good — the conversation is about you learning interesting things, things that make you feel like the conversation you had with the robot is a pretty awesome way of learning something. And it could be about what kind of day your pet had; it could be about, you know, how can I make my home more energy-efficient; it could be about, if I'm thinking about climbing Mount Everest, what should I know. And that's a very doable thing. But if I think that the point of the conversation I'm going to have with the robot is that I'm going to be rewarded by making the robot happy — well, I could have just put a button on the robot that you could push and the robot would smile, and that sort of thing. So I think you need to think about the question in the right way, and robots can be awesomely effective at helping people feel less isolated, learn more about the home that they live in, and fill some of those lonely gaps when we wish we were engaged, learning cool stuff about our world. Last question: if you could hang out for a day with a robot from science fiction
movies or books, and safely pick its brain for that day, who would you pick? Data. Data from Star Trek. I think that Data is really smart; Data's been through a lot, trying to go and save the galaxy. And I'm really interested, actually, in emotion and robotics, and I think he'd have a lot to say about that. Because I believe, actually, that emotion plays an incredibly useful role in doing reasonable things in situations where we have imperfect understanding of what's going on — in social situations where there's imperfect information, and also in competitive or dangerous situations. We have emotion for a reason, and so ultimately — this is my theory — as robots get smarter and smarter, they're actually going to get more emotional, because you can't actually survive on pure logic: only a very tiny fraction of the situations we find ourselves in can be reasonably resolved with logic. And so I think Data would have a lot to say about that, and I could find out whether he agrees. What if you could ask Data one question that you would get a deep, honest answer to — what would you ask? What's Captain Picard really like? OK, I think that's the perfect way to end it, Colin. Thank you so much for talking today, I really appreciate it. My pleasure.
François Chollet: Keras, Deep Learning, and the Progress of AI | Lex Fridman Podcast #38
the following is a conversation with François Chollet. He's the creator of Keras, which is an open-source deep learning library designed to enable fast, user-friendly experimentation with deep neural networks. It serves as an interface to several deep learning libraries, the most popular of which is TensorFlow, and it was integrated into the TensorFlow main codebase a while ago — meaning, if you want to create, train, and use neural networks, probably the easiest and most popular option is to use Keras inside TensorFlow. Aside from creating an exceptionally useful and popular library, François is also a world-class AI researcher and software engineer at Google, and he's definitely an outspoken, if not controversial, personality in the AI world, especially in the realm of ideas around the future of artificial intelligence. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N.
and now, here's my conversation with François Chollet. You're known for not sugarcoating your opinions and speaking your mind about ideas in AI, especially on Twitter — it's one of my favorite Twitter accounts. So what's one of the more controversial ideas you've expressed online and gotten some heat for? How do I pick? Yeah, no, I think if you go through the trouble of maintaining a Twitter account, you might as well speak your mind, you know. Otherwise, what's even the point of having a Twitter account? It's like having a nice car and just leaving it in the garage. So, one thing for which I got a lot of pushback: perhaps, you know, that time I wrote something about the idea of intelligence explosion, and I was questioning the idea and the reasoning behind it. And I got pushback on that — got a lot of flak for it. So yeah, intelligence explosion — I'm sure you're familiar with the idea — it's the idea that if you were to build general AI problem-solving algorithms, well, the problem of building such an AI is itself a problem that could be solved by your AI, and maybe it could be solved better than what humans can do. So your AI could start tweaking its own algorithm, could start being a better version of itself, and so on, iteratively, in a recursive fashion. And so you would end up with an AI with exponentially increasing intelligence. And I was basically questioning this idea — first of all, because the notion of intelligence explosion uses an implicit definition of intelligence that doesn't sound quite right to me. It considers intelligence as a property of a brain that you can consider in isolation, like the height of a building, for instance. But that's not really what intelligence is. Intelligence emerges from the interaction between a brain, a body — like embodied intelligence — and an environment, and if you're missing one of these pieces, then you can't actually define intelligence. So just tweaking a brain to make it smarter and
smarter doesn't actually make any sense to me. So first of all, you're crushing the dreams of many people, right? So there's a little bit, like, say, Sam Harris, a lot of physicists, Max Tegmark, people who think, you know, the universe is an information processing system, our brain is kind of an information processing system, so what's the theoretical limit? Like, it doesn't make sense that there should be one; it seems naive to think that our own brain is somehow the limit of the capabilities. And this information, I'm just playing devil's advocate here, this information processing system, and then if you just scale it, if you're able to build something that's on par with the brain, the process that builds it just continues, and it will improve exponentially. So that's the logic that's used, actually, by almost everybody that is worried about superhuman intelligence. Yeah. So you're trying to make, so most people who are skeptical of that are kind of like, this doesn't, their thought process, this doesn't feel right. Like, that's for me as well. So I'm more like, the whole thing is shrouded in mystery, where you can't really say anything concrete, but you could say, this doesn't feel right, this doesn't feel like that's how the brain works. And you're trying, with your blog post, to make it a little more explicit. So one idea is that the brain doesn't exist alone; it exists within the environment. So you can't exponentially, you have to somehow exponentially improve the environment and the brain together, almost, in order to create something that's much smarter, in some kind of, of course we don't have a definition of intelligence. That's correct. I don't think, if you look at very smart people today, even humans, not even talking about AIs, I don't think their brain and the performance of their brain is the bottleneck to their expressed intelligence, to their achievements. You cannot just tweak one part of this system, one part of this brain-body-
environment system, and expect capabilities, like what emerges out of this system, to just, you know, explode exponentially. Because anytime you improve one part of a system with many interdependencies like this, there's a new bottleneck that arises, right? And I don't think even today, for very smart people, their brain is the bottleneck to the sort of problems they can solve, right? In fact, many very smart people today, you know, they are not actually solving any big scientific problems. They're not Einstein. They're like Einstein in, you know, the patent clerk days. Like, Einstein became Einstein because this was a meeting of a genius with a big problem at the right time, right? But maybe this meeting could have never happened, and then Einstein would have just been a patent clerk. And in fact, many people today are probably, like, genius-level smart, but you wouldn't know, because they're not really expressing any of that. Wow, that's brilliant. So we can think of the world, Earth, but also the universe, as just a space of problems. So all these problems and tasks are roaming it, of various difficulty, and there's agents, creatures like ourselves and animals and so on, that are also roaming it, and then you get coupled with a problem, and then you solve it. But without that coupling, you can't demonstrate your quote-unquote intelligence. Exactly. Intelligence is the meeting of great problem-solving capabilities with a great problem, and if you don't have the problem, you don't really express any intelligence. All you're left with is potential intelligence, like the performance of your brain, or, you know, your IQ, which in itself is just a number, right? So you mentioned problem-solving capacity. Yeah. What do you think of as problem-solving? Can you try to define intelligence? Like, what does it mean to be more or less intelligent? Is it completely coupled to a particular problem, or is there something a little bit more universal? Yeah, I do believe all intelligence is specialized
intelligence. Even human intelligence has some degree of generality. Well, all intelligence systems have some degree of generality, but they're always specialized in one category of problems. So human intelligence is specialized in the human experience, and that shows at various levels. That shows in some prior knowledge that's innate, that we have at birth: knowledge about things like agents, goal-driven behavior, visual priors about what makes an object, priors about time, and so on. That shows also in the way we learn. For instance, it's very, very easy for us to pick up language, it's very, very easy for us to learn certain things, because we are basically hard-coded to learn them. And we are specialized in solving certain kinds of problems, and we are quite useless when it comes to other kinds of problems. For instance, we are not really designed to handle very long-term problems. We have no capability of seeing the very long term. We don't have much working memory, you know. So how do you think about long-term, using long-term planning? We're talking about a scale of years, millennia, what do you mean by long-term we're not very good at? Well, human intelligence is specialized in the human experience, and human experience is very short. Like, one lifetime is short. Even within one lifetime, we have a very hard time envisioning, you know, things on a scale of years. Like, it's very difficult to project yourself at the scale of five years, at the scale of ten years, and so on, right? We can solve only fairly narrowly scoped problems. So when it comes to solving bigger problems, larger-scale problems, we are not actually doing it on an individual level. So it's not actually our brain doing it. We have this thing called civilization, right, which is itself a sort of problem-solving system, a sort of artificially intelligent system, right? And it's not running on one brain; it's running on a network of brains. In fact, it's running on much more than a network of brains; it's running on a lot of infrastructure, like books
and computers and the internet and human institutions and so on. And that is capable of handling problems on a much greater scale than any individual human. If you look at computer science, for instance, that's an institution that solves problems, and it is superhuman, right? It operates on a greater scale; it can solve much bigger problems than an individual human could. And science itself, science as a system, as an institution, is a kind of artificially intelligent problem-solving algorithm that is superhuman. Yes, it's, computer science is like a theorem prover at a scale of thousands, maybe hundreds of thousands, of human beings. At that scale, what do you think is an intelligent agent? So there's us humans, at the individual level; there is millions, maybe billions, of bacteria on our skin, that's at the smaller scale; you can even go to the particle level, as systems that behave, you could say, intelligently in some ways; and then you can look at the Earth as a single organism, you can look at our galaxy, and even the universe, as a single organism. How do you think about scale in defining intelligent systems? And we're here at Google; there is millions of devices doing computation, just in a distributed way. How do you think about intelligence versus scale? You can always characterize anything as a system. I think people who talk about things like intelligence explosion tend to focus on one agent, which is basically one brain, like one brain considered in isolation, like a brain in a jar, that's controlling a body in a very, like, top-to-bottom kind of fashion, and that body is pursuing goals in an environment. So it's a very hierarchical view: you have the brain at the top of the pyramid, then you have the body just plainly receiving orders, and then the body is manipulating objects in the environment, and so on. So everything is subordinate to this one thing, this epicenter, which is the brain. But in real life, intelligent agents don't really work like this, right?
There is no strong delimitation between the brain and the body to start with. You have to look not just at the brain, but at the nervous system. But then the nervous system and the body are not really two separate entities, so you have to look at an entire animal as one agent. But then you start realizing, as you observe an animal over any length of time, that a lot of the intelligence of an animal is actually externalized. That's especially true for humans. A lot of our intelligence is externalized. When you write down some notes, that is externalized intelligence. When you write a computer program, you are externalizing cognition. So intelligence is externalized in books, it's externalized in computers, the internet, in other humans, it's externalized in language, and so on. So there is no, like, hard delimitation of what makes an intelligent agent; it's all about context. Okay, but AlphaGo is better at Go than the best human player. You know, there's levels of skill here. So do you think there is such a thing, such a concept, as an intelligence explosion in a specific task? And then, well, yeah, do you think it's possible to have a category of tasks on which you do have something like an exponential growth of ability to solve that particular problem? I think if you consider a specific vertical, it's probably possible to some extent. I also don't think we have to speculate about it, because we have real-world examples of recursively self-improving intelligent systems. For instance, science is a problem-solving system, a knowledge-generation system, like a system that experiences the world in some sense, and then gradually understands it and can act on it. And that system is superhuman, and it is clearly recursively self-improving, because science feeds into technology; technology can be used to build better tools, better computers, better instrumentation, and so on, which in turn can make science faster, right? So science is probably the closest thing we have today to a recursively self-improving superhuman AI. And you can just
observe, you know, is science, is scientific progress today exploding? Which, you know, is an interesting question. You can use that as a basis to try to understand what will happen with a superhuman AI that has a science-like behavior. Let me linger on it a little bit more. What is your intuition why an intelligence explosion is not possible? Like, taking the scientific, all the scientific revolutions, why can't we slightly accelerate that process? So you can absolutely accelerate any problem-solving process, so recursive self-improvement is absolutely a real thing. But what happens with a recursively self-improving system is typically not explosion, because no system exists in isolation, and so tweaking one part of the system means that suddenly another part of the system becomes a bottleneck. And if you look at science, for instance, which is clearly recursively self-improving, clearly a problem-solving system, scientific progress is not actually exploding. If you look at science, what you see is the picture of a system that is consuming an exponentially increasing amount of resources, but it's having a linear output in terms of scientific progress. And maybe that will seem like a very strong claim. Many people are actually saying that, you know, scientific progress is exponential, but when they're claiming this, they're actually looking at indicators of resource consumption, resource consumption by science: the number of papers being published, the number of patents being filed, and so on, which are just completely correlated with how many people are working on science today. Yeah, right. So it's actually an indicator of resource consumption. But what you should look at is the output: progress in terms of the knowledge that science generates, in terms of the scope and significance of the problems that we solve. And some people have actually been trying to measure that, like Michael Nielsen, for instance. He had a very nice paper, I think that was last year,
about it. So his approach to measure scientific progress was to look at the timeline of scientific discoveries over the past, you know, 100, 150 years, and for each major discovery, ask a panel of experts to rate the significance of the discovery. And if the output of science as an institution were exponential, you would expect the temporal density of significance to go up exponentially, maybe because there's a faster rate of discoveries, maybe because the discoveries are, you know, increasingly more important. And what actually happens, if you plot this temporal density of significance measured in this way, is that you see very much a flat graph. You see a flat graph across all disciplines: across physics, biology, medicine, and so on. And it actually makes a lot of sense if you think about it, because think about the progress of physics 110 years ago, right? It was a time of crazy change. Think about the progress of technology, you know, 160 years ago, when we started, you know, replacing horses with cars, when we started having electricity, and so on. It was a time of incredible change, and today is also a time of very fast change. But it would be an unfair characterization to say that today technology and science are moving way faster than they did 50 years ago, 100 years ago. And if you do try to rigorously plot the temporal density of significance, of significant discoveries, sorry, you do see very flat curves. That's fascinating. And you can check out the paper that Michael Nielsen had about this idea. And so the way I interpret it is, as you make progress in a given field, in a given subfield of science, it becomes exponentially more difficult to make further progress, unlike the very first person to work on information theory. If you enter a new field, and it's still the very early years, there's a lot of low-hanging fruit you can pick. That's right, yeah. But the next generation of researchers is going to have to dig much harder, actually, to make smaller discoveries, probably a
larger number of smaller discoveries, and to achieve the same amount of impact, you're going to need a much greater headcount. And that's exactly the picture you're seeing with science: the number of scientists and engineers is in fact increasing exponentially, the amount of computational resources that are available to science is increasing exponentially, and so on. So the resource consumption of science is exponential, but the output in terms of progress, in terms of significance, is linear. And the reason why is because, even though science is recursively self-improving, meaning that scientific progress turns into technological progress, which in turn helps science. If you look at computers, for instance, they are a product of science, and computers are tremendously useful in speeding up science. The internet, same thing; the internet is a technology that's made possible by recent scientific advances, and itself, because it enables, you know, scientists to network, to communicate, to exchange papers and ideas much faster, it is a way to speed up scientific progress. So even though you're looking at a recursively self-improving system, it is consuming exponentially more resources to produce the same amount of problem-solving. So that's a fascinating way to paint it, and certainly that holds for the deep learning community, right? If you look at the temporal, what did you call it, the temporal density of significant ideas? If you look at it in deep learning, I think, I'd have to think about that, but if you really look at significant ideas in deep learning, they might even be decreasing. So I do believe the per-paper significance is decreasing, but the amount of papers is still today exponentially increasing. So I think, if you look at the aggregate, my guess is that you would see linear progress. If you were to sum the significance of all papers, you would see roughly linear progress. And in my opinion, it is not a coincidence that you're seeing linear progress in
science despite exponential resource consumption. I think the resource consumption is dynamically adjusting itself to maintain linear progress, because we, as a community, expect linear progress, meaning that if we start investing less and seeing less progress, it means that suddenly there are some low-hanging fruits that become available, and someone is going to step up and pick them, right? So it's very much like a market for discoveries and ideas. But there's another fundamental part which you're highlighting, which is a hypothesis: that science, or, like, the space of ideas, any one path you travel down, it gets exponentially more difficult to develop new ideas. Yes. And your sense is that that's going to hold across our mysterious universe. Yes. Well, exponential progress triggers exponential friction, so that if you tweak one part of a system, suddenly some other part becomes a bottleneck. For instance, let's say you develop some device that measures its own acceleration, and then it has some engine, and it outputs even more acceleration in proportion to its own acceleration, and you drop it somewhere. It's not going to reach infinite speed, because it exists in a certain context, so the air around it is going to generate friction; it's going to, you know, block it at some top speed. And even if you were to consider the broader context and lift the bottleneck there, like the bottleneck of friction, then some other part of the system would start stepping in and creating exponential friction, maybe the speed of light, or, you know, whatever. And it's definitely very true when you look at the problem-solving algorithm that is being run by science as an institution, science as a system. As you make more and more progress, despite having this recursive self-improvement component, you are encountering exponential friction. Like, the more researchers you have working on different ideas, the more overhead you have in communication across researchers.
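The falling-device analogy he gives can be made concrete with a toy simulation. This is my own illustrative sketch, not anything from the conversation: the names and constants are invented. A device whose thrust grows with its own speed (the "recursive self-improvement") still settles at a finite top speed once quadratic drag (the "exponential friction") catches up.

```python
import math

def simulate(base_thrust=10.0, gain=1.0, drag_coef=0.5, dt=0.01, steps=20_000):
    """Euler integration of a self-amplifying accelerator in a medium with drag.

    Thrust grows linearly with current speed (positive feedback), but the
    quadratic drag term grows faster still, so speed is capped anyway.
    """
    v = 0.0
    for _ in range(steps):
        thrust = base_thrust + gain * v   # feedback: going faster adds thrust
        drag = drag_coef * v * v          # friction grows faster than thrust
        v += (thrust - drag) * dt
    return v

# Analytic terminal speed: drag_coef*v^2 - gain*v - base_thrust = 0
v_terminal = (1.0 + math.sqrt(1.0 + 4 * 0.5 * 10.0)) / (2 * 0.5)
print(round(simulate(), 3), round(v_terminal, 3))  # both ≈ 5.583
```

Despite the positive feedback loop, the simulated speed converges to the same terminal value the drag equation predicts, rather than diverging.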
you look at, you were mentioning quantum mechanics, right? Well, if you want to start making significant discoveries today, significant progress in quantum mechanics, there is an amount of knowledge you have to ingest which is huge. So there is a very large overhead to even start to contribute. There is a large amount of overhead to synchronize across researchers, and so on. And of course, the significant practical experiments are going to require exponentially expensive equipment, because the easier ones have already been run, right? So in your sense, is there no way of escaping, is there no way of escaping this kind of friction with artificial intelligence systems? Yeah, no, I think science is a very good way to model what would happen with a superhuman recursively self-improving AI. Yeah, that's the intuition. I mean, that's my intuition too. It's not like a mathematical proof of anything. That's not my point. Like, I'm not trying to prove anything. I'm just trying to make an argument to question the narrative of intelligence explosion, which is quite a dominant narrative, and you do get a lot of pushback if you go against it. Because, so for many people, right, AI is not just a subfield of computer science; it's more like a belief system, this belief that the world is headed towards an event, the singularity, past which, you know, AI will go exponential, very much, and the world will be transformed, and humans will become obsolete. And if you go against this narrative, because it is not really a scientific argument, but more of a belief system, it is part of the identity of many people. If you go against this narrative, it's like you're attacking the identity of people who believe in it. It's almost like saying God doesn't exist, or something, right? So you do get a lot of pushback if you try to question these ideas. First of all, I believe most people, they might not be as eloquent or explicit as you're being, but most people in computer science, and most
people who actually have built anything that you could call AI, quote-unquote, would agree with you. They might not be describing it in the same kind of way. It's more, so the pushback you're getting is from people who get attached to the narrative, not from a place of science, but from a place of imagination. Yes, correct, that's correct. So why do you think that's so appealing? Because the usual dreams that people have, when you create a superintelligent system past a singularity, what people imagine is somehow always destructive. If you were to put on your psychology hat, why is it so appealing to imagine the ways that all of human civilization will be destroyed? I think it's a good story, you know. It's a good story. And very interestingly, it mirrors religious stories, right, religious mythology. If you look at the mythology of most civilizations, it's about the world being headed towards some final event, in which the world will be destroyed, and some new world order will arise, that will be mostly spiritual, like the apocalypse followed by a paradise, probably, right? It's a very appealing story on a fundamental level, and we all need stories. We need stories to structure the way we see the world, especially at timescales that are beyond our ability to make predictions, right? So on a more serious, non-exponential-explosion question: do you think there will be a time when we'll create something like human-level intelligence, or intelligent systems that will make you sit back and be just surprised at, damn, how smart this thing is? That doesn't require exponential growth, an exponential improvement. But what's your sense of the timeline, and so on, where you'll be really surprised at certain capabilities? And we'll talk about limitations in deep learning. So when do you think, in your lifetime, you'll be really damn surprised? Around 2013, 2014, I was many times surprised by the capabilities of deep learning, actually. That was before we had assessed exactly what deep learning
could do and could not do, and it felt like a time of immense potential. And then we started, you know, narrowing it down. But I was very surprised, so it has already happened. Was there a moment, there must have been a day in there, where your surprise was almost bordering on the belief of the narrative that we just discussed? Was there a moment, because you've written quite eloquently about the limits of deep learning, was there a moment that you thought that maybe deep learning is limitless? No, I don't think I've ever believed this. What was really shocking is that it worked. That it worked at all, yes. But there's a big jump between being able to do really good computer vision and human-level intelligence. So I don't think at any point I was under the impression that the results we got in computer vision meant that we were very close to human-level intelligence. I don't think we're very close to human-level intelligence. I do believe that there's no reason why we won't achieve it at some point. I also believe that, you know, the problem with talking about human-level intelligence is that implicitly you're considering, like, an axis of intelligence with different levels, but that's not really how intelligence works. Intelligence is very multi-dimensional, and so there's the question of capabilities, but there's also the question of being human-like, and those are two very different things. Like, you can build potentially very advanced intelligent agents that are not human-like at all, and you can also build very human-like agents, and those are two very different things, right? Right. Let's go from the philosophical to the practical. Can you give me a history of Keras and all the major deep learning frameworks that you kind of remember, in relation to Keras, and in general, TensorFlow, Theano, the old days? Can you give a brief overview, Wikipedia-style history, and your role in it, before we return to AGI discussions? Yeah, that's a broad topic. So I started working on Keras,
though it wasn't the name Keras at the time; I actually picked the name, like, just the day I was going to release it. So I started working on it in February 2015, and at the time there weren't too many people working on deep learning, maybe, like, fewer than 10,000. The software tooling was not really developed. The main deep learning library was Caffe, which was mostly C++. Why do you say Caffe was the main one? Caffe was vastly more popular than Theano in late 2014, early 2015. Caffe was the one library that everyone was using for computer vision, and computer vision was the most popular problem in deep learning at the time. Absolutely, like, convnets was, like, the subfield of deep learning that everyone was working on. So myself, in late 2014, I was actually interested in RNNs, in recurrent neural networks, which was a very niche topic at the time, right? It really took off around 2016. And so I was looking for good tools. I had used Torch7, I had used Theano, used Theano a lot in Kaggle competitions, I had used Caffe, and there was no, like, good solution for RNNs at the time. Like, there was no reusable open-source implementation of an LSTM, for instance. So I decided to build my own, and at first, the pitch for that was, it was going to be mostly around LSTM recurrent neural networks. It was going to be in Python. An important decision at the time, that was kind of not obvious, is that the models would be defined via Python code, which was kind of, like, going against the mainstream at the time, because Caffe, Pylearn2, and so on, like, all the big libraries were actually going with the approach of having static configuration files in YAML to define models. So some libraries were using code to define models, like Torch7; obviously, that was not Python. Lasagne was, like, a Theano-based, very early library that was, I think, developed, I don't remember exactly, probably late 2014. And it was Python as well? It's Python as well. It was, like, on top of Theano. And so I started working on something, and the value proposition at the time was that not only did it offer what I think
was the first reusable open-source implementation of LSTM; you could also combine RNNs and convnets with the same library, which was not really possible before. Like, Caffe was only doing convnets. And it was kind of easy to use, because, so before I was using Theano, I was actually using scikit-learn, and I loved scikit-learn for its usability, so I drew a lot of inspiration from scikit-learn when I made Keras. It's almost, like, scikit-learn for neural networks. Yeah, the fit function. Exactly, the fit function, like, reducing a complex training loop to a single function call, right? And of course, you know, some people will say this is hiding a lot of details, but that's exactly the point, right? The magic is the point, right? So it's magical, but in a good way; it's magical in the sense that it's delightful, yeah. Right, yeah. I'm actually quite surprised; I didn't know that it was born out of a desire to implement RNNs and LSTMs. That's fascinating. So you were actually one of the first people to really try to attempt to get the major architectures together. And it's also interesting, you made me realize that that was a design decision at all, defining the model in code. Just, I'm putting myself in your shoes, whether to go with the YAML, especially since Caffe was the most popular, it was the most popular by far. If I were you, I don't know, I didn't like the YAML thing, but it makes more sense that you would put in a configuration file the definition of a model. That's an interesting, gutsy move, to stick with defining it in code. Just, if you look back, other libraries were doing it as well, but it was definitely the more niche option. Yeah. Okay, Keras, and then? So I released Keras in March 2015, and it got users pretty much from the start. So the deep learning community was very small at the time. Lots of people were starting to be interested in LSTM, so it was sort of released at the right time, because it was offering an easy-to-use LSTM implementation exactly at the time where lots of people started to be intrigued by the capabilities of RNNs.
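The "complex training loop reduced to a single function call" idea can be sketched in a few lines of plain Python. This is my own toy illustration, not actual Keras code: the dict-based model, the `fit` signature, and the learning rate are all invented for the example.

```python
def fit(model, xs, ys, epochs=100, lr=0.01):
    """Toy scikit-learn/Keras-style fit(): the whole gradient-descent
    training loop for a 1-D linear model y = w*x + b, hidden behind
    one call. A sketch of the design idea, not the real library."""
    w, b = model["w"], model["b"]
    n = len(xs)
    for _ in range(epochs):
        # gradients of mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    model["w"], model["b"] = w, b
    return model

# The user-facing experience: one call, no visible loop. Data is y = 2x + 1.
model = fit({"w": 0.0, "b": 0.0}, xs=[0, 1, 2, 3], ys=[1, 3, 5, 7], epochs=5000)
print(round(model["w"], 2), round(model["b"], 2))  # ≈ 2.0 and 1.0
```

All the epoch iteration, gradient computation, and parameter updates live inside `fit`; the caller only sees data in, trained model out, which is exactly the "magic" being described.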
RNNs for NLP, right. So it grew from there. Then I joined Google about six months later, and that was actually completely unrelated to Keras. I actually joined a research team working on image classification, mostly, like, computer vision. So I was doing computer vision research at Google initially, and immediately when I joined Google, I was exposed to the early internal version of TensorFlow. And the way it appealed to me at the time, and that was definitely the way it was at the time, is that this was an improved version of Theano. So I immediately knew I had to port Keras to this new TensorFlow thing. And I was actually very busy as a Noogler, as a new Googler, so I had no time to work on that. But then in November, I think it was November 2015, TensorFlow got released, and it was kind of, like, my wake-up call that, hey, I had to actually, you know, go and make it happen. So in December, I ported Keras to run on top of TensorFlow. But it was not exactly a port; it was more, like, a refactoring, where I was abstracting away all the backend functionality into one module, so that the same codebase could run on top of multiple backends, right? So on top of TensorFlow or Theano. And for the next year, Theano, you know, stayed as the default option. It was, you know, easier to use, somewhat less buggy, it was much faster, especially when it came to RNNs. But eventually, you know, TensorFlow overtook it. Right, and the early TensorFlow had similar architectural decisions as Theano, right? Yeah. So it was a natural transition. Yeah, absolutely. So, I mean, at this point, Keras is still a side, almost fun project, right? Yeah, so it was not my job assignment. I was doing it on the side. And even though it grew to have, you know, a lot of users for a deep learning library at the time, like, throughout 2016, I wasn't doing it as my main job. So things started changing in, I think it was maybe October 2016, so one year later.
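The backend refactoring he describes, one module that hides which numeric library is underneath so the same codebase runs on multiple backends, can be sketched like this. This is a hypothetical toy, not the real Keras backend module; the two "backends" here are just two pure-Python implementations of the same primitive.

```python
import functools
import operator

# Each "backend" exposes the same primitive ops under the same names.
def _sum_loop(xs):
    """Stand-in for one backend's implementation of a primitive."""
    total = 0.0
    for x in xs:
        total += x
    return total

def _sum_reduce(xs):
    """Stand-in for a second backend's implementation of the same primitive."""
    return functools.reduce(operator.add, xs, 0.0)

_BACKENDS = {"loop": {"sum": _sum_loop}, "reduce": {"sum": _sum_reduce}}
_active = _BACKENDS["loop"]

def set_backend(name):
    """Swap the backend; user-facing code below never changes."""
    global _active
    _active = _BACKENDS[name]

def mean(xs):
    # User-facing code only calls the abstraction, never a backend directly.
    return _active["sum"](xs) / len(xs)

print(mean([1, 2, 3]))   # 2.0
set_backend("reduce")
print(mean([1, 2, 3]))   # 2.0, same result through the other backend
```

The point of the pattern is that everything above `_BACKENDS` could be swapped for Theano or TensorFlow calls without touching any code written against `mean`.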
So Rajat, who was the lead on TensorFlow, basically showed up one day in our building, where I was doing, like, so I was doing research on things like, so I did a lot of computer vision research, also collaborations with Christian Szegedy, and deep learning for theorem proving. It was a really interesting research topic. And so Rajat was saying, hey, we saw Keras, we like it, we saw that you're at Google, why don't you come over for, like, a quarter and work with us? And I was like, yeah, that sounds like a great opportunity, let's do it. And so I started working on integrating the Keras API into TensorFlow more tightly. So what followed up is a sort of, like, temporary TensorFlow-only version of Keras that was in tf.contrib for a while, and that finally moved to TensorFlow core. And, you know, I've never actually gotten back to my old team doing research. Well, it's kind of funny that somebody like you, who dreams of, or at least sees the power of, AI systems that reason, and theorem proving, which we'll talk about, has also created a system that makes the most basic kind of Lego building, that is deep learning, super accessible, super easy, so beautifully so. That's the funny irony, that you're responsible for both things. But so, TensorFlow 2.0, there's a sprint, I don't know how long it'll take, but there's a sprint towards the finish. What are you working on these days? What are you excited about in 2.0? I mean, eager execution, there's so many things that just make it a lot easier to work. What are you excited about, and what's also really hard? What are the problems you have to kind of solve? So I've spent the past year and a half working on TensorFlow 2.0, and it's been a long journey. I'm actually extremely excited about it. I think it's a great product. It's a delightful product compared to TensorFlow 1.0; we've made huge progress. So on the Keras side, what I'm really excited about is that, so, you know, previously
Keras has been this very easy-to-use, high-level interface to do deep learning. But if you wanted a lot of flexibility, the Keras framework, you know, was probably not the optimal way to do things, compared to just writing everything from scratch. So in some way, the framework was getting in the way. And in TensorFlow 2.0, you don't have this at all, actually: you have the usability of the high-level interface, but you have the flexibility of the lower-level interface, and you have this spectrum of workflows where you can get more or less usability and flexibility trade-offs, depending on your needs, right? You can write everything from scratch, and you get a lot of help doing so by, you know, subclassing models and writing some train loops using eager execution. It's very flexible, it's very easy to debug, it's very powerful, but all of this integrates seamlessly with higher-level features, up to, you know, the classic Keras workflows, which are very scikit-learn-like, and, you know, are ideal for a data scientist, machine learning engineer type of profile. So now you can have the same framework offering the same set of APIs that enable a spectrum of workflows that are more or less low-level, more or less high-level, that are suitable for, you know, profiles ranging from researchers to data scientists and everything in between. Yeah, so that's super exciting. I mean, it's not just that; it's connected to all kinds of tooling. You can go on mobile with TensorFlow Lite, you can go in the cloud, or serving, and so on, and it's all connected together. Now, some of the best software ever written is often done by one person, sometimes two. So with Google, you're now seeing sort of Keras having to be integrated with TensorFlow, and I'm sure it has a ton of engineers working on it, and I'm sure there are a lot of tricky design decisions to be made. How does that process usually happen, from at least your perspective? What are the debates like?
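The "spectrum of workflows" idea, the same building blocks usable through a one-call `fit` or through a hand-written training loop, can be sketched in plain Python. This is a hypothetical toy, not TensorFlow code; `TinyModel` and its methods are invented for illustration of the design, not the actual API.

```python
class TinyModel:
    """Toy stand-in for a Keras-style model: y = w * x."""

    def __init__(self):
        self.w = 0.0

    def __call__(self, x):
        return self.w * x

    def grad_step(self, x, y, lr):
        # One squared-error gradient step: the low-level building block
        # that both workflows below share.
        self.w -= lr * 2 * (self(x) - y) * x

    def fit(self, xs, ys, epochs, lr=0.05):
        # High-level workflow: the loop is hidden behind one call.
        for _ in range(epochs):
            for x, y in zip(xs, ys):
                self.grad_step(x, y, lr)

# High-level, scikit-learn-like usage (data follows y = 3x):
m1 = TinyModel()
m1.fit([1, 2], [3, 6], epochs=200)

# Low-level usage: the user writes the loop, same building blocks:
m2 = TinyModel()
for _ in range(200):
    for x, y in zip([1, 2], [3, 6]):
        m2.grad_step(x, y, lr=0.05)

print(round(m1.w, 3), round(m2.w, 3))  # both ≈ 3.0
```

Because both paths bottom out in the same primitives, a user can start with the one-call workflow and drop down to a custom loop only where they need the flexibility, which is the trade-off spectrum being described.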
options, and so on? Yes, so a lot of the time I spend at Google is actually discussing design, right: writing design docs, participating in design review meetings, and so on. This is, you know, as important as actually writing the code. Right, well, there's a lot of thought and a lot of care that is taken in coming up with these decisions, and taking into account all of our users, because TensorFlow has this extremely diverse user base, right? It's not like just one user segment where everyone has the same needs: we have small-scale production users, large-scale production users, we have startups, we have researchers, you know, it's all over the place, and we have to cater to all of their needs. If I just look at the standards debates of C++ or Python, there's some heated debate; do you have those at Google? I mean, they're not heated in terms of emotion, but there are probably multiple ways to do it, right? So how do you arrive, through those design meetings, at the best way to do it, especially in deep learning, where the field is evolving as you're doing it? Is there some magic to it, some magic to the process? I don't know if there's magic to the process, but there definitely is a process. So making design decisions is about satisfying a set of constraints, but also trying to do so in the simplest way possible, because this is what can be maintained, and this is what can be extended in the future. So you don't want to naively satisfy the constraints by just, you know, for each capability you need, coming up with one new argument, one new API, and so on. You want to design APIs that are modular and hierarchical, so that they have an API surface that is as small as possible, right? And you want this modular, hierarchical architecture to reflect the way that domain experts think about the problem, because, as a domain expert, when you're reading about a new API, you're reading a tutorial or some docs pages, you already have a way that you're
thinking about the problem, you already have certain concepts in mind, and you're thinking about how they relate together, and when you're reading docs, you're trying to build as quickly as possible a mapping between the concepts featured in the new API and the concepts in your mind. So you are trying to map your mental model as a domain expert to the way things work in the API, so you need an API and an underlying implementation that are reflecting the way people think about these things. So minimizing the time it takes them to do this mapping? Yes, minimizing the time, the cognitive load there is in ingesting this new knowledge about your API. An API should not be self-referential, or referring to implementation details; it should only be referring to domain-specific concepts that people already understand. Brilliant. So what does the future of Keras and TensorFlow look like? What does TF 3.0 look like? So that's kind of too far in the future for me to answer, especially since I'm now not even the one making these decisions. Okay. But so, from my perspective, which is, you know, just one perspective among many different perspectives on the TensorFlow team: I'm really excited by developing even higher-level APIs, higher level than Keras. I'm really excited by hyperparameter tuning, by automated machine learning, AutoML. I think the future is not just, you know, defining a model like you were assembling Lego blocks and then clicking fit on it; it's more like an automagical model that would just look at your data and optimize the objective you're after, right? So that's what I'm looking forward to. Yeah, so you put the baby into a room with the problem and come back a few hours later with a fully solved problem? Exactly. It's not like a box of Legos, right, it's more like the combination of a kid that's really good at Legos and a box of Legos, and it's just building the thing on its own. Very nice. So that's an exciting future, and I think there's a huge amount of applications and revolutions to be had
under the constraints of the discussion we previously had. But what do you think of the current limits of deep learning, if we look specifically at these function approximators that try to generalize from data? You've talked about local versus extreme generalization; you mentioned that neural networks don't generalize well and humans do, so there's this gap. And you've also mentioned that extreme generalization requires something like reasoning to fill those gaps. So how can we start trying to build systems like that? Right, yes, so this is by design, right? Deep learning models are like huge parametric models, differentiable, so continuous, that go from an input space to an output space, and they're trained with gradient descent, so they're trained pretty much point by point. They are learning a continuous geometric morphing from an input vector space to an output vector space, right? And because this is done point by point, a deep neural network can only make sense of points in experience space that are very close to things that it has already seen in the training data. At best, it can do interpolation across points. But that means, you know, in order to train your network, you need a dense sampling of the input-cross-output space, almost a point-by-point sampling, which can be very expensive if you're dealing with complex real-world problems like autonomous driving, for instance, or robotics. It's doable if you're looking at a subset of the visual space, but even then, it's still fairly expensive; you still need millions of examples, and it's only going to be able to make sense of things that are very close to what it has seen before. And in contrast to that, well, of course we have human intelligence, but even if you're not looking at human intelligence, you can look at very simple rules, algorithms. If you have a symbolic rule, it can actually apply to a very, very large set of inputs, because it is abstract; it is not obtained by doing a point-by-point mapping.
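To make that contrast concrete, here is a toy sketch (an editor's illustration, not from the conversation): a "model" that memorizes (x, y) samples of y = 2x and predicts by nearest neighbor, a crude stand-in for the local interpolation a trained network does. It behaves fine near its training samples and fails far outside them, while the abstract symbolic rule applies to any input.

```python
# Point-by-point "model": memorize samples of y = 2x densely on [0, 10]
# and predict by nearest neighbor (a crude stand-in for interpolation).
train = [(x / 10.0, 2 * x / 10.0) for x in range(101)]

def nearest_neighbor(x):
    # predict the label of the closest memorized training point
    return min(train, key=lambda p: abs(p[0] - x))[1]

def abstract_rule(x):
    # the symbolic rule itself: valid for any input, no sampling needed
    return 2 * x

print(nearest_neighbor(4.23))  # 8.4: nearest sample inside the dense region
print(nearest_neighbor(1000))  # 20.0: stuck at the edge of its experience
print(abstract_rule(1000))     # 2000: "extreme" generalization, for free
```

Local generalization gets the in-distribution query roughly right; the out-of-distribution query exposes that nothing was learned beyond the sampled region.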
For instance, if you try to learn a sorting algorithm using a deep neural network, well, you're very much limited to learning point by point what the sorted representation of a specific list looks like. But instead, you could have a very simple sorting algorithm written in a few lines; maybe it's just, you know, two nested loops, and it can process any list at all, because it is abstract, because it is a set of rules. So deep learning is really like point-by-point geometric morphings, trained with gradient descent, and meanwhile, abstract rules can generalize much better, and I think the future is really to combine the two. So how do you think we combine the two? How do we combine good point-by-point functions with programs, which is what the symbolic AI type systems offer? At which levels does the combination happen? And, you know, obviously we're jumping into a realm where there are no good answers, just kind of ideas and intuitions and so on. Well, if you look at the really successful AI systems today, I think they are already hybrid systems that are combining symbolic AI with deep learning. For instance, successful robotics systems are already mostly model-based, rule-based things, like planning algorithms and so on; at the same time, they're using deep learning as perception modules. Sometimes they're using deep learning as a way to inject fuzzy intuition into a rule-based process. If you look at a system like a self-driving car, it's not just one big end-to-end neural network, you know, that wouldn't work at all, precisely because in order to train that, you would need a dense sampling of experience space when it comes to driving, which is completely unrealistic, obviously. Instead, the self-driving car is mostly symbolic, you know, it's software, it's programmed by hand, it's mostly based on explicit models, in this case mostly 3D models of the environment around the car, but it's interfacing with the real world using deep learning modules, right? Right, so the deep learning there serves as the way to convert the raw
sensory information into something usable by symbolic systems. Okay, well, let's linger on that a little more. So dense sampling from input to output, you said it's obviously very difficult; is it possible... In the case of self-driving, you mean? Let's say self-driving, yeah. Self-driving, for example... but let's not even talk about self-driving, let's talk about steering, so staying inside the lane lines, lane following. Yeah, it's definitely a problem you can solve with an end-to-end deep learning model, but that's like one small subset... Hold on a second, yeah, I don't like how you're jumping between the extremes so easily, because I disagree with you on that. I think, well, it's not obvious to me that you can solve lane following. No, it's not obvious. I think it's doable. I think in general, you know, there are no hard limitations to what you can learn with a deep neural network, as long as the search space is rich enough, is flexible enough, and as long as you have this dense sampling of the input-cross-output space. The problem is that, you know, this dense sampling could mean anything from 10,000 examples to, like, trillions and trillions. So that's my question: what's your intuition? And if you could just linger on it: what kind of problems can be solved by getting a huge amount of data and thereby creating a dense mapping? So let's think about natural language dialogue, the Turing test. Do you think the Turing test can be solved with a neural network alone? Well, the Turing test is all about tricking people into believing they're talking to a human, and I don't think that's actually very difficult, because it's more about exploiting human perception and not so much about intelligence. There's a big difference between mimicking intelligent behavior and actual intelligent behavior. So, okay, let's look at maybe the Alexa Prize and so on, the different formulations of natural language conversation that are less about mimicking and more about maintaining a fun conversation that lasts for 20 minutes.
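The earlier remark that "dense sampling could mean anything from 10,000 examples to trillions" can be made concrete with a back-of-the-envelope calculation (the scenarios below are an editor's illustration, not from the conversation): grid-sampling an input space at k points per dimension needs k^d examples, which explodes with the dimensionality d.

```python
def grid_samples(points_per_dim, dims):
    # examples needed to cover an input space with a regular grid
    # at the given resolution: k^d
    return points_per_dim ** dims

# steering from a handful of hand-picked features: manageable
print(grid_samples(10, 4))   # 10,000 examples

# a crude 8x8 grayscale "camera" input, 10 intensity levels per pixel
print(grid_samples(10, 64))  # 10^64: hopelessly beyond any dataset
```

This is why dense sampling is plausible for a low-dimensional subproblem like lane keeping from engineered features, and completely unrealistic for raw, high-dimensional driving experience.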
Mm-hmm. That's a little less about mimicking, and that's more about... I mean, it's still mimicking, but it's more about being able to carry forward a conversation with all the tangents that happen in dialogue and so on. Do you think that problem is learnable with this kind of, well, neural network that does the point-to-point mapping? So I think it would be very, very challenging to do this with deep learning; I don't think it's out of the question either. I wouldn't rule it out. What's the space of problems that can be solved with a large neural network? What's your sense about the space of those problems, so useful problems for us? In theory, it's infinite, right? You can solve any problem. In practice, well, deep learning is a great fit for perception problems, in general any problem which is not amenable to explicit handcrafted rules, or rules that you can generate by exhaustive search over some program space. So perception, artificial intuition, as long as you have a sufficient training dataset. And that's the question: I mean, perception, there's interpretation and understanding of the scene, which seems to be outside the reach of current perception systems. So do you think larger networks will be able to start to understand the physics of the scene, the three-dimensional structure and relationships of objects in the scene, and so on, or is that where symbolic AI has to step in? Well, it's always possible to solve these problems with deep learning; it's just extremely inefficient. An explicit rule-based abstract model would be a far more efficient, far more compressed representation of physics than learning just this mapping between: in this situation, this thing happens; if you change the situation slightly, then this other thing happens, and so on. Do you think it's possible to automatically generate the programs that would do that kind of reasoning, or does it have to... so the way expert systems failed: there are so many facts about the world that had to be
hand-coded. Is it possible to learn those logical statements that are true about the world and their relationships? Do you think... I mean, that's kind of what theorem proving at a basic level is trying to do, right? Yeah, except it's much harder to formulate statements about the world compared to formulating mathematical statements; statements about the world, you know, tend to be subjective. So can you learn rule-based models? Yes, definitely. That's the field of program synthesis. However, today we just don't really know how to do it, so it's very much a research problem, and we are limited to, you know, the sort of discrete search, grid search algorithms that we have today. Personally, I think genetic algorithms are very promising. So almost like genetic programming? Genetic programming, exactly. Can you discuss the field of program synthesis: how many people are working and thinking about it, where we are in the history of program synthesis, and what are your hopes for it? Well, if we compare it to deep learning, this is like the 90s, meaning that we already have existing solutions, we are starting to have some basic understanding of what this is about, but it's still a field in its infancy. There are very few people working on it; there are very few real-world applications. The one real-world application I'm aware of is Flash Fill in Excel: it's a way to automatically learn very simple programs to format cells in an Excel spreadsheet from a few examples, for instance transforming a date, things like that. Oh, that's fascinating, you know, okay, that's a fascinating topic: I always wonder, when I provide a few samples to Excel, what it's able to figure out, like, just giving it a few dates, what is it able to figure out from the pattern I just gave it? It's just a fascinating question, and it's fascinating whether those are learnable patterns, and you're saying they're working on that. Yeah. How big is the
toolbox currently? Are we completely in the dark? So, if you mean in terms of program synthesis, I would say so, and comparing it to the 90s of deep learning is maybe even too optimistic, because by the 90s, you know, we already understood backprop, we already understood the engine of deep learning, even though we couldn't realize its potential quite yet. Today, I don't think we've found the engine of program synthesis, so we're in the winter before backprop. Yeah. Anyway. Yes, so I do believe program synthesis, and in general discrete search over rule-based models, is going to be a cornerstone of AI research in the next century, right? And that doesn't mean, like, we're going to drop deep learning. Deep learning is immensely useful: being able to learn these very flexible, adaptable parametric models with gradient descent is actually immensely useful. Like, all it's doing is pattern recognition, but being good at pattern recognition, given lots of data, is just extremely powerful. So we are still going to be working on deep learning, we are going to be working on program synthesis, and we're going to be combining the two in increasingly automated ways. Mm-hmm. So let's talk a little bit about data. You've tweeted: about 10,000 deep learning papers have been written about hard-coding priors about a specific task in a neural network architecture; it works better than a lack of a prior. Basically summarizing all these efforts: they put a name to an architecture, but really what they're doing is hard-coding some priors that improve the performance. Yes, but to get straight to the point, it's probably true. And so you say that you can always buy performance, in quotes "performance," by either training on more data, better data, or by injecting task information into the architecture or the pre-processing; however, this is uninformative about the generalization power of the techniques used, the fundamental ability to generalize. Do you think we can go far by coming up with better methods for this kind of cheating, for better methods of large-scale annotation
of data? So, building better priors? Yes, and if you automate it, it's not cheating anymore, right? I'm talking about the cheating, but at large scale. So basically I'm asking about something that hasn't, from my perspective, been researched too much: exponential improvement in the annotation of data. Do you often think about it? I mean, it's actually being researched quite a bit; you just don't see publications about it, because, you know, people who publish papers are going to publish about known benchmarks; sometimes they'll introduce a new benchmark. People who actually have real-world, large-scale problems, they're going to spend a lot of resources on data annotation and good data annotation pipelines, but you don't see papers about it. That's interesting. So there are certainly resources, but do you think there's innovation happening? Oh yeah. To clarify the point in the tweet: so machine learning in general is the science of generalization. You want to generate knowledge that can be reused across different datasets, across different tasks, and if instead you are looking at one dataset and then you are hard-coding knowledge about this task into your architecture, this is no more useful than training a network and then saying, "oh, I found these weight values that perform well," right? So David Ha, I don't know if you know David, published a paper the other day about weight-agnostic neural networks, and this is a very interesting paper, because it really illustrates the fact that an architecture, even without weights, an architecture is knowledge about a task: it encodes knowledge. And when it comes to architectures that are handcrafted by researchers, in some cases it is very, very clear that all they are doing is artificially re-encoding the template that corresponds to the proper way to solve a task encoding a given dataset. For instance, I don't know if you've looked at the bAbI dataset, which is about natural language question answering: it is generated by an algorithm, so
the question-answer pairs are generated by an algorithm, and the algorithm is following a certain template. Turns out, if you craft a network that literally encodes this template, you can solve this dataset with nearly 100% accuracy, but that doesn't actually tell you anything about how to solve question answering in general, which is the point. You know, the question, just to linger on it, whether it's from the data side or from the size of the network: I don't know if you've read the blog post by Rich Sutton, "The Bitter Lesson"? Yeah. Where he says the biggest lesson that we can read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective. So as opposed to figuring out methods that can generalize effectively, do you think we can get pretty far by just having something that leverages computation and the improvement of computation? Yes, so I think Rich is making a very good point, which is that a lot of these papers, which are actually all about manually hard-coding prior knowledge about a task into some system (it doesn't have to be a deep learning architecture, but into some system, right), these papers are not actually making any impact. Instead, what's making real long-term impact is very simple, very general systems that are agnostic to all these tricks, because these tricks do not generalize. And of course, the one general and simple thing that you should focus on is that which leverages computation, because computation, the availability of large-scale computation, has been, you know, increasing exponentially, following Moore's law, so if your algorithm is all about exploiting this, then your algorithm is suddenly exponentially improving, right? So I think Rich is definitely right about the past 70 years; he's describing the past 70 years. I am not sure that this assessment will still hold true for the next 70 years. It might, to some extent; I suspect it will not, because the truth of his assessment is a function of the
context, right, in which this research took place, and the context is changing. Like, Moore's law might not be applicable anymore, for instance, in the future, and I do believe that, you know, when you tweak one aspect of a system, when you exploit one aspect of a system, some other aspect starts becoming the bottleneck. Let's say you have unlimited computation: well, then data is the bottleneck, and I think we are already starting to be in a regime where our systems are so large in scale and so data-hungry that data today, and the quality of data and the scale of data, is the bottleneck, and in this environment, the bitter lesson from Rich is not going to be true anymore, right? So I think we are going to move from a focus on scale of computation to a focus on data efficiency. Data efficiency. So that's getting to the question of symbolic AI, but to linger on the deep learning approaches: do you have hope for either unsupervised learning or reinforcement learning, which are ways of being more data-efficient in terms of the amount of data they need that requires human annotation? So unsupervised learning and reinforcement learning are frameworks for learning, but they are not like any specific technique. Usually, when people say reinforcement learning, what they really mean is deep reinforcement learning, which is like one approach, which is actually very questionable. The question I was asking was about unsupervised learning with deep neural networks and deep reinforcement learning. Well, these are not really data-efficient, because you're still leveraging, you know, these huge parametric models trained point by point with gradient descent. They are more efficient in terms of the number of annotations, the density of annotations you need, so the idea being to learn the latent space on which the data is organized and then map the sparse annotations into it. And sure, I mean, that's clearly a very good idea. It's not really a topic I would be working on, but it's
a really good idea. So it would get us to solve some problems, it will get us to incremental improvements in labeled data efficiency. Do you have concerns about short-term or long-term threats from AI, from artificial intelligence? Yes, definitely, to some extent. And what is the shape of those concerns? This is actually something I've briefly written about, but the capabilities of deep learning technology can be used in many ways that are concerning, from, you know, mass surveillance with things like facial recognition, in general, you know, tracking lots of data about everyone and then being able to make sense of this data, to do identification, to do prediction. That's concerning; that's something that's being very aggressively pursued by totalitarian states like, you know, China. One thing I am very much concerned about is that, you know, our lives are increasingly online, are increasingly digital, made of information, made of information consumption and information production, our digital footprint, I would say. And if you absorb all of this data, and you are in control of where people consume information, you know, social networks and so on, recommendation engines, then you can build a sort of reinforcement loop for human behavior: you can observe the state of someone's mind at time t, you can predict how they would react to different pieces of content, how to get them to move their mind, you know, in a certain direction, and then you can feed them the specific piece of content that would move them in a specific direction. And you can do this at scale, you know, at scale in terms of doing it continuously in real time; you can also do it at scale in terms of scaling this to many, many people, to entire populations. So potentially, artificial intelligence, even in its current state, if you combine it with the internet, with the fact that all of our lives are moving to digital devices and digital information consumption and creation, what you get is the possibility to
achieve mass manipulation of behavior and mass psychological control, and this is a very real possibility. Yeah, so you're talking about any kind of recommender system. Let's look at the YouTube algorithm, Facebook, anything that recommends content you should watch next. Yeah. And it's fascinating to think that there are some aspects of human behavior where you can, you know, pose a problem of: does this person hold Republican beliefs or Democratic beliefs? And that's a trivial... that's an objective function, and you can optimize, and you can measure, and you can turn everybody into a Republican or everybody into a Democrat. Absolutely, yeah, I do believe it's true. So the human mind is very... if you look at the human mind as a kind of computer program, it has a very large exploit surface, right: it has many, many vulnerabilities, ways you can control it. For instance, when it comes to your political beliefs, this is very much tied to your identity. So, for instance, if I'm in control of your newsfeed on your favorite social media platform, this is actually where you're getting your news from, and of course I can choose to only show you news that will make you see the world in a specific way, right. But I can also, you know, create incentives for you to post about some political beliefs, and then, when I get you to express a statement, if it's a statement that I, as the controller, want to reinforce, I can just show it to people who will agree, and they will like it, and that will reinforce the statement in your mind. If this is a statement I want you to abandon, I can, on the other hand, show it to opponents, right, who will attack you, and because they attack you, at the very least, next time you will think twice about posting it, but maybe you will even, you know, stop believing this, because you got pushback, right. So there are many ways in which social media platforms can potentially control your opinions, and today all of these things are already being
controlled by AI algorithms. These algorithms do not have any explicit political goal today. While potentially they could, like, if some totalitarian government takes over, you know, social media platforms, and decides that, you know, now we are going to use this, not just for mass surveillance, but also for mass opinion control and behavior control, very bad things could happen. But what's really fascinating, and actually quite concerning, is that even without an explicit intent to manipulate, you're already seeing very dangerous dynamics in terms of how these content recommendation algorithms behave, because right now the goal, the objective function of these algorithms, is to maximize engagement, right, which seems very innocuous at first, right. However, it is not, because content that will maximally engage people, you know, get people to react in an emotional way, get people to click on something, is very often content that, you know, is not healthy to the public discourse. For instance, fake news is far more likely to get you to click on it than real news, simply because it is not constrained to reality, so it can be as outrageous, as surprising, as good a story as you want, because it's artificial, right. Yeah, to me, that's an exciting world, because so much good can come from it: so there's an opportunity to educate people, you can balance people's worldviews with other ideas. So there are so many objective functions; the space of objective functions that create better civilizations is large, arguably infinite. But there's also a large space that creates division and destruction, civil war, a lot of bad stuff, and the worry is, naturally, probably that space is bigger, first of all. And if we don't explicitly think about what kind of effects are going to be observed from different objective functions, then we can get into trouble. But the question is: how do we get into rooms and have discussions, so inside Google, inside Facebook, inside Twitter, and think about, okay, how can we drive up
engagement and, at the same time, create a good society? Is it even possible to have that kind of philosophical discussion? I think you can definitely try. So, from my perspective, I would feel rather uncomfortable with companies that are in control of these newsfeed algorithms making explicit decisions to manipulate people's opinions or behaviors, even if the intent is good, because that's a very totalitarian mindset. So instead, what I would like to see, it's probably never going to happen, because it's not super realistic, but that's actually something I care about: I would like all these algorithms to present configuration settings to their users, so that the users can actually make the decision about how they want to be impacted by these information recommendation, content recommendation algorithms. For instance, as a user of something like YouTube or Twitter, maybe I want to maximize learning about a specific topic, right, so I want the algorithm to feed my curiosity, right, which is in itself a very interesting problem. So instead of maximizing my engagement, it will maximize how fast and how much I'm learning, and it will also take into account the accuracy, hopefully, you know, of the information I'm learning. So, yeah, the user should be able to determine exactly how these algorithms are affecting their lives. I don't actually want any entity making decisions about which direction they're going to try to manipulate me, right. I want technology... so AI, these algorithms, are increasingly going to be our interface to a world that is increasingly made of information, right, and I want everyone to be in control of this interface, to interface with the world on their own terms. So if someone wants these algorithms to serve, you know, their own personal growth goals, they should be able to configure these algorithms in such a way. Yeah, but so, I know it's painful to have explicit decisions, but there are underlying explicit decisions, which is some of the most beautiful
fundamental philosophy that we have before us, which is personal growth. If I want to watch videos from which I can learn, what does that mean? So if I have a checkbox that says emphasize learning, there's still an algorithm with explicit decisions in it that would promote learning. What does that mean for me? Like, for example, I've watched a documentary on flat earth theory, I guess... it was... like, I learned a lot, I'm really glad I watched it. It was a friend who recommended it to me, not... I don't have such an allergic reaction to crazy people as my fellow colleagues do, but it was very eye-opening, and for others, it might not be; for others, they might just get turned off by that, same with the Republican and Democrat divide. And it's a non-trivial problem. And, first of all, if it's done well, I don't think it's something that wouldn't happen, that YouTube wouldn't be promoting, or Twitter wouldn't be; it's just a really difficult problem, how to give people control. Well, it's mostly an interface design problem, right. The way I see it, you want to create technology that's like a mentor, or a coach, or an assistant, so that it's not your boss, right. You are in control of it, you are telling it what to do for you, and if you feel like it's manipulating you, it's not actually doing what you want; you should be able to switch to a different algorithm, you know. So with that fine-tuned control, you kind of learn to trust the human-machine collaboration, and that's how I see autonomous vehicles, too: giving as much information as possible, and you learn that dance yourself. Yeah, like Adobe, I don't know if you use Adobe products, like Photoshop. Yeah. They're trying to see if they can inject YouTube into their interface, basically allow it to show you all these videos, because everybody's confused about what to do with the features, so basically teach people by linking to videos, and that way it's an assistant that uses videos as a basic element of information. Yeah. Okay, so what
practically should people do to try to fight against abuses of these algorithms, or algorithms that manipulate us? It's a very, very difficult problem, because to start with, there is very little public awareness of these issues. Very few people would think there's anything wrong with their newsfeed algorithm, even though there is actually something wrong already, which is that it's trying to maximize engagement most of the time, which has very negative side effects, right. So, ideally... so the very first thing is to stop trying to purely maximize engagement, stop trying to propagate content based on popularity, right; instead, take into account the goals and the profiles of each user. So one example is, for instance, when I look at topic recommendations on Twitter, like, you know, they have this news tab with recommendations: it's always the worst garbage, because it's content that appeals to the smallest common denominator of all Twitter users, because they are trying to optimize... they're purely trying to optimize popularity, they're purely trying to optimize engagement. But that's not what I want, so they should put me in control of some setting, so that I define what's the objective function that Twitter is going to be following to show me this content. And, honestly, so this is all about interface design, and it's not realistic to give users control of a bunch of knobs that define an algorithm; instead, we should put them in charge of defining the objective function: let the user tell us what they want to achieve, how they want this algorithm to impact their lives. So do you think it is that, or do they provide an individual, article-by-article reward structure, where you give a signal: I'm glad I saw this, or I'm glad I didn't? So, like a Spotify-type feedback mechanism? It works to some extent; I'm kind of skeptical about it, because the only way... the algorithm will attempt to relate your choices with the
choices of everyone else, which might, you know... if you have an average profile, that works fine. I'm sure Spotify recommendations work fine if you just like mainstream stuff. If you don't, it's not optimal, actually. It will be an inefficient search for the part of the Spotify world that represents you. So it's a tough problem, but do note that even a feedback system like what Spotify has does not give me control over what the algorithm is trying to optimize for. Well, public awareness, which is what we're doing now, is a good place to start. Do you have concerns about long-term existential threats of artificial intelligence? Well, as I was saying, our world is increasingly made of information. AI algorithms are increasingly going to be our interface to this world of information, and somebody will be in control of these algorithms, and that puts us in any kind of a bad situation, right. It has risks. It has risks coming from potentially large companies wanting to optimize their own goals, maybe profit, maybe something else. Also from governments who might want to use these algorithms as a means of control of populations. Do you think there's existential threat that could arise from that? So, kind of existential threats? So maybe you're referring to the singularity narrative, where robots just take over? Well, I don't... not Terminator robots, and I don't believe it has to be a singularity. We're just talking about, just like you said, the algorithm controlling masses of populations, the existential threat being hurting ourselves, much like a nuclear war would hurt ourselves, that kind of thing. I don't think that requires a singularity. That requires a loss of control over AI algorithms. Yes. So I do agree there are concerning trends. Honestly, I wouldn't want to make any long-term predictions. I don't think today we really have the capability to see what the dangers are going to be in 50 years, in a hundred years. I do see that we are already faced with
concrete and present dangers surrounding the negative side effects of content recommendation systems, of newsfeed algorithms, concerning algorithmic bias as well. So we are delegating more and more decision processes to algorithms. Some of these algorithms are handcrafted, some are learned from data, but we are delegating control. Sometimes it's a good thing, sometimes not so much, and there is in general very little supervision of this process, right. So we are still in this period of very fast change, even chaos, where society is restructuring itself, turning into an information society, which itself is turning into an increasingly automated information processing society. And, well, yeah, I think the best we can do today is try to raise awareness around some of these issues, and I think we're actually making good progress. If you look at algorithmic bias, for instance, three years ago, even, very few people were talking about it, and now all the big companies are talking about it. Often not in a very serious way, but at least it is part of the public discourse. You see people in Congress talking about it. And it all started from raising awareness, right. So, in terms of the alignment problem, trying to teach, as we allow algorithms, just even recommender systems on Twitter, encoding human values and morals, decisions that touch on ethics, how hard do you think that problem is? How do we have loss functions in neural networks that have some component, some fuzzy components, of human morals? Well, I think this is really all about objective function engineering, which is probably going to be increasingly a topic of concern in the future. For now, we are just using very naive loss functions, because the hard part is not actually what you're trying to minimize, it's everything else. But as the everything else is going to be increasingly automated, we're going to be focusing our human attention on increasingly high-level components, like what's actually driving
the whole learning system, like the objective function. So loss function engineering is going to be... loss function engineer is probably going to be a job title in the future. And then the tooling you're creating with Keras essentially takes care of all the details underneath, and basically the human expert is needed for exactly that, loss function engineering. Yeah, so Keras is the interface between the data you're collecting and the business goals, and your job as an engineer is going to be to express your business goals and your understanding of your business or your product, your system, as a kind of loss function or a kind of set of constraints. Does the possibility of creating an AGI system excite you, or scare you, or bore you? So, intelligence can never be general. You know, at best it can have some degree of generality, like human intelligence. It also always has some specialization, in the same way that human intelligence is specialized in a certain category of problems, is specialized in the human experience. And when people talk about AGI, I'm never quite sure if they're talking about very, very smart AI, so smart that it's even smarter than humans, or they're talking about human-like intelligence, because these are actually different things. Let's say, presumably I'm impressing you today with my humanness. So imagine that I was in fact a robot. So what does that mean? I'm impressing you with natural language processing. Maybe if you weren't able to see me, maybe this is a phone call. Yes... okay. So, a companion. So that's very much about building human-like AI, and you're asking me, you know, is this an exciting perspective? Yes, I think so, yes. Not so much because of what artificial human-like intelligence could do, but, you know, from an intellectual perspective, I think if you could build truly human-like intelligence, that means you could actually understand human intelligence, which is fascinating, right. Yeah. Human-like intelligence is going to require emotions, it's going to require consciousness, which
are not things that would normally be required by an intelligent system. If you look at, you know, we were mentioning earlier, science as a superhuman problem-solving agent or system, it does not have consciousness, it doesn't have emotions. In general, so... emotions, and I see consciousness as being on the same spectrum as emotions, it is a component of the subjective experience that is meant very much to guide behavior generation, right. It's meant to guide your behavior. In general, human intelligence and animal intelligence has evolved for the purpose of behavior generation, right, including in a social context. So that's why we actually need emotions, that's why we need consciousness. An artificial intelligence system developed in a different context may well never need them, may well never be conscious, like science. At that point, I would argue it's possible to imagine that there are echoes of consciousness in science, when viewed as an organism, that science is conscious. So, I mean, how would you go about testing this hypothesis? How do you probe the subjective experience of an abstract system like science? Well, the point is, probing any subjective experience is impossible, because I'm not science, I'm Lex, so I can't probe another entity's... it's no more than, say, bacteria on my skin. Lex, I can ask you questions about your subjective experience, and you can answer me, and that's how I know you're conscious. Yes, but that's because we speak the same language. Perhaps we have to speak the language of science. As I said, I don't think consciousness, just like emotions of pain and pleasure, is something that inevitably arises from any sort of sufficiently intelligent information processing. It is a feature of the mind, and if you've not implemented it explicitly, it is not there. So you think it's a feature, an emergent feature, of a particular architecture. So do you think it's a feature in that sense? So, again, the subjective experience is all about guiding behavior. If the problems you're
trying to solve don't really involve embodied agents, maybe in a social context, generating behavior and pursuing goals like this, and if you look at science, that's not really what's happening, even though it is a form of artificial intelligence, in the sense that it is solving problems, it is accumulating knowledge, creating solutions and so on. So if you're not explicitly implementing a subjective experience, implementing certain emotions and implementing consciousness, it's not going to just spontaneously emerge. Yeah. But for a system like a human-like intelligence system that has consciousness, do you think it needs to have a body? Yes, definitely. I mean, it doesn't have to be a physical body, right. There's not that much difference between a realistic simulation and the real world. So there has to be something you have to preserve, kind of thing? Yes, but human-like intelligence can only arise in a human-like context. Intelligence in other humans, in order for you to demonstrate that you have human-like intelligence, essentially. Yes. So what kind of test and demonstration would be sufficient for you to demonstrate human-like intelligence? Just out of curiosity. You talked about it in terms of theorem proving and program synthesis. I think you've written about that there's no good benchmarks for this. Yeah, that's one of the problems. So let's talk program synthesis. What do you imagine is a good... I think they're related questions, for human-like intelligence and for program synthesis. What's a good benchmark for either or both? Right. So, I mean, you're actually asking two questions. One is about quantifying intelligence and comparing the intelligence of an artificial system to the intelligence of a human, and the other is about the degree to which this intelligence is human-like. They're actually two different questions. So if you look... you mentioned earlier the Turing test. Well, I actually don't like the Turing test, because
it's very lazy. It's all about completely bypassing the problem of defining and measuring intelligence, right, and instead delegating it to a human judge or a panel of human judges. So it's a total cop-out, right. If you want to measure how human-like an agent is, I think you have to make it interact with other humans. Maybe it's not necessarily a good idea to have these other humans be the judges. Maybe you should just observe behavior and compare it to what a human would actually have done. When it comes to measuring how smart or clever an agent is, and comparing that to the degree of human intelligence, so we're already talking about two things, right: the degree, kind of like the magnitude, of an intelligence, and its direction. Right, like the norm of a vector. Right, and its direction. And the direction is like human-likeness, and the magnitude, the norm, is intelligence. You could call it intelligence, right. So the direction, your sense is the space of directions that are human-like is very narrow. Yeah. So, the way you would measure the magnitude of intelligence in a system, in a way that also enables you to compare it to that of a human... well, if you look at different benchmarks for intelligence today, they're all too focused on skill at a given task: skill at playing chess, skill at playing Go, skill at playing Dota. And I think that's not the right way to go about it, because you can always beat a human at one specific task. The reason why our skill at playing Go, or juggling, or anything is impressive is because we are expressing this skill within a certain set of constraints. If you remove the constraints, the constraints that we have one lifetime, that we have this body and so on, if you remove the context, if you have unlimited training data, if you can have access to, you know... for instance, if you look at juggling, if you have no restriction on the hardware, then achieving arbitrary levels of skill is not very
interesting, and it says nothing about the amount of intelligence you've achieved. So if you want to measure intelligence, you need to rigorously define what intelligence is, which in itself, you know, is a very challenging problem. And do you think that's possible, to define intelligence? Yes, absolutely. I mean, you can provide... many people have provided, you know, some definition. I have my own definition. Where does your definition begin, if it doesn't end? Well, I think intelligence is essentially the efficiency with which you turn experience into generalizable programs. So what that means is, it's the efficiency with which you turn a sampling of experience space into the ability to process a larger chunk of experience space. So measuring skill can be one proxy, skill across many different tasks can be one proxy for measuring intelligence, but if you want to only measure skill, you should control for two things: you should control for the amount of experience that your system has, and the priors that your system has. But if you control... if you look at two agents, and you give them the same priors, and you give them the same amount of experience, there is one of the agents that is going to learn programs, representations, something, a model, that will perform well on the larger chunk of experience space, and that is the smarter agent. Yes. So if you fix the experience, which one generates better programs, better meaning more generalizable. That's really interesting. That's a very nice, clean definition of... Oh, by the way, in this definition it is already very obvious that intelligence has to be specialized, because you're talking about experience space, and you're talking about segments of experience space. You're talking about priors, and you're talking about experience. All of these things define the context in which intelligence emerges, and you can never look at the totality of experience space, right. So intelligence has to be specialized. But it can be sufficiently large,
the experience space, even though specialized. There's a certain point when the experience space is large enough to where it might as well be general. It feels general, it looks general. Sure. I mean, it's a matter of degree. For instance, many people would say human intelligence is general. In fact, it is quite specialized. You know, we can definitely build systems that start from the same innate priors as what humans have at birth, because we already understand fairly well what sort of priors we have as humans. Like, many people have worked on this problem, most notably Elizabeth Spelke from Harvard. I don't know if you know her work on what she calls core knowledge, and it is very much about trying to determine and describe what priors we are born with. Like language skills and so on, all that kind of stuff? Exactly. So we have some pretty good understanding of what priors we are born with, so we could... so I've actually been working on a benchmark for the past couple of years, you know, on and off. I hope to be able to release it at some point. The idea is to measure the intelligence of systems by controlling for priors, controlling for amount of experience, and by assuming the same priors as what humans are born with, so that you can actually compare these scores to human intelligence, and you can actually have humans pass the same test, in a way that's fair. Yeah. And so, importantly, such a benchmark should be such that any amount of practicing does not increase your score. So try to picture a game where no matter how much you play this game, that does not change your skill at the game. Can you picture that? As a person who deeply appreciates practice, I cannot, actually. Yeah, there's actually a very simple trick. So, in order to come up with a task... so the only thing you can measure is skill at a task. Yes. All tasks are going to involve priors. The trick is to know what they are and to describe that, and then you make sure that this is the same set of priors as
what humans start with. So you create a task that assumes these priors, that exactly documents these priors, so the priors are made explicit and there are no other priors involved, and then you generate a certain number of samples in experience space for this task, right. And this, for one task, assuming that the task is new for the agent passing it, that's one test of this definition of intelligence that we set up. And now you can scale that to many different tasks, where each, you know... each task should be new to the agent, essentially. So the tasks should be human-interpretable, and the set of tasks also lets you... you can actually have a human pass the same test, and then you can compare the score of your machine and the score of your human. Which could be a lot of... they could even start with a task like MNIST, just as long as you start with the same set of priors? Yes. So the problem with MNIST is, humans are already trained to recognize digits, right. But let's say we're considering objects that are not digits, some completely arbitrary patterns. Well, humans already come with visual priors about how to process that. So in order to make the game fair, you would have to isolate these priors and describe them, and then express them as computational rules. Having worked a lot with vision science people, that's an exceptionally difficult process. There's been a lot of good tests, and basically reducing all of human vision into some good priors, and we're still probably far away from doing that perfectly, but as a start for a benchmark, that's an exciting possibility. Yeah. So Elizabeth Spelke actually lists objectness as one of the core knowledge priors. Objectness, cool. Yeah. So we have priors about objectness, like about the visual space, about time, about agents, about goal-oriented behavior. We have many different priors. But what's interesting is that, sure, we have, you know, this pretty diverse and rich set of priors, but it's also not that diverse, right. We are not born into this world with a ton of
knowledge about the world, with only a small set of core knowledge. Yeah. Do you have a sense of... it feels to us humans that that set is not that large, but just even the nature of time, that we kind of integrate pretty effectively through all of our perception, all of our reasoning. Maybe, you know, do you have a sense of how easy it is to encode those priors? Maybe it requires building a universe and the human brain in order to encode those priors. Or do you have a hope that it can be listed, like an axiomatic set? I don't think so. So you have to keep in mind that any knowledge about the world that we are born with is something that has to have been encoded into our DNA by evolution at some point, right. And DNA is a very, very low-bandwidth medium. Like, it's extremely long and expensive to encode anything into DNA, because, first of all, you need some sort of evolutionary pressure to guide this writing process, and then, you know, the higher-level the information you're trying to write, the longer it's going to take, and the thing in the environment that you are trying to encode knowledge about has to be stable over this duration. So you can only encode into DNA things that constitute an evolutionary advantage. So this is actually a very small subset of all possible knowledge about the world. You can only encode things that are stable, that are true, over very, very long periods of time, typically millions of years. For instance, we might have some visual prior about the shape of snakes, right. But what makes a face, what's the difference between a face and a non-face? Consider this interesting question: do we have any innate sense of the visual difference between a male face and a female face? What do you think? For a human, I mean. I would have to look back into evolutionary history, when the genders emerged, but, yeah, most... I mean, the faces of humans are quite different from the faces of great apes, right. Yeah, like you said, you couldn't tell the face of a female chimpanzee from the face
of a male chimpanzee, probably. Yeah, that baffles us humans, all that. So we do have innate knowledge of what makes a face, but it's actually impossible for us to have any DNA-encoded knowledge of the difference between a female human face and a male human face, because that knowledge, that information, came into the world actually very recently, if you look at the slowness of the process of encoding knowledge into DNA. Yeah, so that's interesting. That's a really powerful argument, that DNA is low bandwidth, and it takes a long time to encode. That naturally creates a very efficient encoding. But yeah, one important consequence of this is that, so, yes, we are born into this world with a bunch of knowledge, sometimes high-level knowledge about the world, like the rough shape of a snake, the rough shape of a face. But importantly, because this knowledge takes so long to write, almost all of this innate knowledge is shared with our cousins, with great apes, right. So it is not actually this innate knowledge that makes us special. But to throw it right back at you from earlier on in our discussion, that encoding might also include the entirety of the environment of Earth, to some extent. So it can include things that are important to survival and reproduction, for which there is some evolutionary pressure, and things that are stable, constant, over very, very, very long time periods. And, honestly, it's not that much information. There's also, besides the bandwidth constraints and the constraints of the writing process, there's also memory constraints. Like, DNA... the part of DNA that deals with the human brain is actually very small. It's, like, you know, on the order of megabytes, right. There's not that much high-level knowledge about the world you can encode. That's quite brilliant, and hopeful for the benchmark that you're referring to, of encoding priors. I actually look forward to it. I'm skeptical whether you can do it in a couple of years, but
hopefully. I've been working on it, so, honestly, it's a very simple benchmark, and it's not like a big breakthrough or anything. It's more like a fun side project, right. But these fun... so is ImageNet. These fun side projects could launch entire groups of efforts towards creating reasoning systems and so on, and I think... Yeah, that's the goal. It's trying to measure strong generalization, to measure the strength of abstraction, you know, in our minds, right, and in artificially intelligent agents. And if there's anything true about this science organism, it's that its individual cells love competition, so benchmarks encourage competition. So that's, yeah, that's an exciting possibility. Do you think an AI winter is coming, and how do we prevent it? Not really. So an AI winter is something that would occur when there's a big mismatch between how we are selling the capabilities of AI and the actual capabilities of AI. And today, deep learning is creating a lot of value, and it will keep creating a lot of value, in the sense that these models are applicable to a very wide range of problems that are relevant today, and we are only just getting started with applying these algorithms to every problem they could be solving. So deep learning will keep creating a lot of value for the time being. What's concerning, however, is that there's a lot of hype around deep learning and around AI. Lots of people are overselling the capabilities of these systems, not just the capabilities, but also overselling the fact that they might be more or less brain-like, like giving a kind of mystical aspect to these technologies, and also overselling the pace of progress, which, you know, it might look fast in the sense that we have this exponentially increasing number of papers, but, again, that's just a simple consequence of the fact that we have ever more people coming into the field. It doesn't mean the progress is actually exponentially fast. Like, let's say you're trying to raise money for your
startup or your research lab. You might want to tell, you know, grandiose stories to investors about how deep learning is just like the brain, and how it can solve all these incredible problems like self-driving and robotics and so on. And maybe you can tell them that the field is progressing so fast, and we are going to have AGI within 15 years or even 10 years, and none of this is true. And every time you're saying these things, and an investor or, you know, a decision-maker believes them, well, this is like the equivalent of taking on credit card debt, but for trust, right. And maybe this will, you know, this will be what enables you to raise a lot of money, but ultimately you are creating damage, you are damaging the field. That's the concern, is that that's what happens, the AI winters. As the... the concern is... you actually tweeted about this, so-called autonomous vehicles, right. Almost every single company now has promised that they will have full autonomous vehicles by 2021, 2022. Yeah, that's a good example of the consequences of overhyping the capabilities of AI and the pace of progress. Because I work, especially a lot recently, in this area, I have a deep concern of what happens when all these companies, after they've invested billions, have a meeting and say, how much do we actually... first of all, do we have an autonomous vehicle? The answer will definitely be no. And second will be, wait a minute, we've invested one, two, three, four billion dollars into this, and we made no profit. And the reaction to that may be going very hard in another direction, and that might impact even other industries. And that's what we call an AI winter, is when there is backlash, where no one believes any of these promises anymore, because they've turned out to be big lies the first time around. Yeah, and this will definitely happen to some extent for autonomous vehicles, because the public and decision-makers have been convinced that, you know,
around 2015, they've been convinced by these people who were trying to raise money for their startups and so on, that L5 driving was coming, maybe 2016, maybe 2017, maybe 2018. Now in 2019 we're still waiting for it. And so I don't believe we are going to have a full-on AI winter, because we have these technologies that are producing a tremendous amount of real value, right. But there is also too much hype, so there will be some backlash, especially... there will be backlash... so, you know, some startups are trying to sell the dream of AGI, right, and the fact that AGI is going to create infinite value. Like, AGI is like a free lunch. Like, if you can develop an AI system that passes a certain threshold of IQ or something, then suddenly you have infinite value. And, well, there are actually lots of investors buying into this idea, and, you know, they will wait maybe 10, 15 years, and nothing will happen, and the next time around, well, maybe there will be a new generation of investors. No one will care, you know. Human memory is very short, after all. I don't know about you, but because I've spoken about AGI sometimes, poetically, I get a lot of emails from people giving me... they're usually, like, large manifestos... they say to me that they have created an AGI system, or they know how to do it, and there's a long write-up of how to do it. They a little bit feel like they're generated by an AI system, actually, but there's usually no... Recursively self-improving? Exactly. It's like you have a transformer generating crank papers about AGI. Yeah. So the question is about, because you've been such a good... you have a good radar for crank papers, how do we know they're not onto something? How do I... so when you start to talk about AGI, or anything like the reasoning benchmarks and so on, something that doesn't have a benchmark, it's really difficult to know. I mean, I talked to Jeff Hawkins, who's really looking at
neuroscience approaches to AGI, and there's some... there's echoes of really interesting ideas, at least in Jeff's case, which is... how do you usually think about this? Like, preventing yourself from being too narrow-minded and elitist about, you know, deep learning: it has to work on these particular benchmarks, otherwise it's trash. Well, you know, the thing is, intelligence does not exist in the abstract. Intelligence has to be applied. So if you don't have a benchmark, an improvement on some benchmark, maybe it's a new benchmark, right, maybe it's not something we've been looking at before, but you need a problem that you're trying to solve. You're not going to come up with a solution without a problem. So, general intelligence... I mean, you've clearly highlighted generalization. If you want to claim that you have an intelligent system, it should come with a benchmark. It should, yes, it should display capabilities of some kind. It should show that it can create some form of value, even if it's a very artificial form of value. And that's also the reason why you don't actually need to care about telling which papers actually have potential and which do not, because if there is a new technique that's actually creating value, you know, this is going to be brought to light very quickly, because it's actually making a difference. So it's the difference between something that's ineffective and something that is actually useful, and, ultimately, usefulness is our guide, not just in this field, but if you look at science in general. Maybe there are many, many people over the years that have had some really interesting theories of everything, but they were just completely useless, and you don't actually need to tell the interesting theories from the useless theories. All you need is to see, you know, is this actually having an effect on something else, is this actually useful, is this making an impact or not. Beautifully put. I mean, the same applies to quantum mechanics, to string
theory, to the holographic principle. We are doing deep learning because it works. You know, that's like... before it started working, people, you know, considered people working on neural networks as cranks, very much like, you know... no one was working on this anymore. And now it's working, which is what makes it valuable. It's not about being right, it's about being effective. And, nevertheless, the individual entities of this scientific mechanism, just like Yoshua Bengio and Yann LeCun, they, while being called cranks, stuck with it, right. Yeah. And so us individual agents, even if everyone's laughing at us, just stick with it, because if you believe you have something, you should stick with it and see it through. That's a beautiful, inspirational message to end on. First of all, thank you so much for talking today, that was amazing. Thank you.
Vijay Kumar: Flying Robots | Lex Fridman Podcast #37
the following is a conversation with Vijay Kumar. He's one of the top roboticists in the world, a professor at the University of Pennsylvania, the Dean of Penn Engineering, and former director of the GRASP Lab, or the General Robotics, Automation, Sensing and Perception Laboratory at Penn, that was established back in 1979, that's 40 years ago. Vijay is perhaps best known for his work in multi-robot systems, robot swarms, and micro aerial vehicles, robots that elegantly cooperate in flight under all the uncertainty and challenges that real-world conditions present. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at lexfridman, spelled F-R-I-D-M-A-N. And now, here's my conversation with Vijay Kumar. What is the first robot you've ever built, or were a part of building? Way back when I was in graduate school, I was part of a fairly big project that involved building a very large hexapod. It weighed close to 7,000 pounds, and it was powered by hydraulic actuation, or it was actuated by hydraulics, with 18 hydraulic motors, controlled by an Intel 8085 processor and an Intel 8086 coprocessor. And so imagine this huge monster that had 18 joints, each controlled by an independent computer, and there was a 19th computer that actually did the coordination between these 18 joints. So as part of this project, my thesis work was, how do you coordinate the 18 legs, and in particular the pressures in the hydraulic cylinders, to get efficient locomotion? It sounds like a giant mess. So how difficult is it to make all the motors communicate? Presumably you have to send signals hundreds of times a second, or at least... This was not my work, but the folks who worked on this wrote what I believe to be the first multiprocessor operating system. This was in the eighties, and you have to make sure that, obviously, messages got across from one joint to another. You have to remember, the clock speeds on those computers
were about half a megahertz right the eighties so not to romanticize the notion but how did it make you feel to see that robot move it was amazing in hindsight it looks like well we built this thing which really should have been much smaller and of course today's robots are much smaller you look at you know Boston Dynamics or Ghost Robotics a spin-off from Penn but back then you were stuck with the substrate you had the compute you had so things were unnecessarily big but at the same time and this is just human psychology somehow bigger means grander you know people never had the same appreciation for nanotechnology or nano devices as they do for the space shuttle or the Boeing 747 yeah you've actually done quite a good job at illustrating that small is beautiful in terms of robotics so on that topic what is the most beautiful or elegant robot in motion that you've ever seen not to pick favorites or whatever but something that just inspires you that you remember well I think the thing that I'm most proud of that my students have done is really think about small UAVs that can maneuver in constrained spaces and in particular their ability to coordinate with each other and form three-dimensional patterns so once you can do that you can essentially create 3d objects in the sky and you can deform these objects on the fly so in some sense your toolbox of what you can create has suddenly got enhanced and before that we did the two-dimensional version of this so we had ground robots forming patterns and so on that was not as impressive it was not as beautiful but if you do it in 3d suspended in midair and you've got to go back to 2011 when we did this now it's actually pretty standard to do these things eight years later but back then it was a big accomplishment so the distributed cooperation is where the beauty emerges in your eyes I think beauty to an engineer is very different from beauty to you know someone who's looking at robots
from the outside if you will yeah but what I meant there so before we said that grand is associated with size and another way of thinking about this is just the physical shape and the idea that you can get physical shapes in midair and have them deform that's beautiful but the individual components the agility is beautiful too right so then how quickly can you actually manipulate these three-dimensional shapes and the individual components yes right oh by the way you said UAV unmanned aerial vehicle what is a good term for drones UAVs quadcopters is there a term that's been standardized I don't know if there is everybody wants to use the word drones and I've often said drones to me is a pejorative word it signifies something that's dumb preprogrammed that does one little thing and robots are anything but drones so I actually don't like that word but that's what everybody uses you could call it unpiloted but even unpiloted could be radio controlled could be remotely controlled in many different ways and I think the right way of thinking about it is as an aerial robot you also say agile autonomous aerial robot right yeah so agility is an attribute but they don't have to be so what biological systems because you've also drawn a lot of inspiration from those I've seen bees and ants that you've talked about what living creatures have you found to be most inspiring as an engineer instructive in your work in robotics to me ants are really quite incredible creatures right so I mean the individuals arguably are very simple in how they're built and yet they're incredibly resilient as a population and as individuals they're incredibly robust so you know if you take an ant it has six legs you remove one leg it still works just fine and it moves along and I don't know that it even realizes it's lost a leg so that's the robustness at the individual ant level but then you look at this instinct for self-preservation of the colonies and they adapt in
so many amazing ways you know transcending gaps by just chaining themselves together when you have a flood being able to recruit other teammates to carry big morsels of food and then going out in different directions looking for food and then being able to demonstrate consensus even though they don't communicate directly with each other the way we communicate with each other in some sense they also know how to do democracy probably better than what we do yeah somehow even democracy is emergent it seems like all the phenomena that we see is all emergent it seems like there's no centralized communicator is there so I think a lot is made about that word emergent and it means lots of things to different people but you're absolutely right I think as an engineer you think about what elemental behaviors or primitives you could synthesize so that the whole looks incredibly powerful incredibly synergistic the whole definitely being greater than the sum of the parts and ants are living proof of that so when you see these beautiful swarms whether it's biological systems or robots do you sometimes think of them as a single individual living intelligent organism the same as thinking of our human civilization as one organism or do you still as an engineer think about the individual components and all the engineering that went into the individual components oh that's very interesting so again philosophically as engineers what we want to do is to go beyond the individual components the individual units and think about it as a unit as a cohesive unit without worrying about the individual components if you start obsessing about the individual building blocks and what they do you inevitably will find it hard to scale up just mathematically think about the individual things you want to model and if you want to have ten of those then you essentially are taking cartesian products of ten things which makes it really complicated to
do any kind of synthesis or design and that high dimensional space is really hard so the right way to do this is to think about the individuals in a clever way so that at the higher level when you look at lots and lots of them abstractly you can think of them in some low dimensional space so what does that involve for the individual you have to try to make the way they see the world as local as possible and the other thing is do you just have to make them robust to collisions like you said with the ants if something fails the whole swarm doesn't fail right I think as engineers we do this I mean you know think about it we build planes we build iPhones and we know that by taking individual components well engineered components with well specified interfaces that behave in a predictable way you can build complex systems so that's ingrained I would claim in most engineers' thinking and it's true for computer scientists as well I think what's different here is that you want the individuals to be robust in some sense as we do in these other settings but you also want some degree of resiliency for the population and so you really want them to be able to re-establish communication with their neighbors you want them to rethink their strategy for group behavior you want them to reorganize and that's where I think a lot of the challenges lie so just at a high level what does it take for a bunch of what we call flying robots to create a formation just for people unfamiliar with robotics in general how much information is needed how do you even make it happen without a centralized controller so I mean there are a couple of different ways of looking at this if you are a purist you think of it as a way of recreating what nature does so nature forms groups for several reasons but mostly it's because of this instinct that organisms have of preserving their colonies their population which means what you need shelter you need food you need to
procreate and that's basically it so the kinds of interactions you see are all organic they're all local and the only information that they share and mostly it's indirect is to again preserve the herd or the flock or the swarm either by looking for new sources of food or looking for new shelters right as engineers when we build swarms we have a mission and when you think of a mission and it involves mobility most often it's described in some kind of a global coordinate system as a human as an operator as a commander or as a collaborator I have my coordinate system and I want the robots to be consistent with that so I might think of it slightly differently I might want the robots to recognize that coordinate system which means not only do they have to think locally in terms of who their immediate neighbors are but they have to be cognizant of what the global environment looks like so if I say surround this building and protect this from intruders well they're immediately in a building centered coordinate system and I have to tell them where the building is so they're globally collaborating on the map of that building they're maintaining some kind of global not just in the frame of the building but there's information that's ultimately being built up explicitly as opposed to kind of implicitly like nature might correct correct so in some sense nature is very very sophisticated but the tasks that nature solves or needs to solve are very different from the kind of engineered tasks artificial tasks that we are forced to address and again there's nothing preventing us from solving these other problems but ultimately you want these swarms to do something useful and so you're kind of driven into this very unnatural if you will setting unnatural meaning not like how nature does it and it's probably a little bit more expensive to do it the way nature does because nature is less sensitive to the loss of the individual and
cost wise in robotics I think you're more sensitive to losing individuals I think that's true although if you look at the price to performance ratio of robotic components it's coming down dramatically right it continues to come down so I think we're asymptotically approaching the point where the cost of individuals will really become insignificant yeah so let's step back to a high level view the impossible question of as an overview what kind of autonomous flying vehicles are there in general I think the ones that receive a lot of notoriety are obviously the military vehicles military vehicles are controlled by a base station and have a lot of human supervision but have limited autonomy which is the ability to go from point A to point B and even the more sophisticated vehicles can do autonomous takeoff and landing and those usually have wings and they're heavy usually they're winged but then there's nothing preventing us from doing this for helicopters as well so I mean there are many military organizations that have autonomous helicopters in the same vein and by the way if you look at autopilots in airplanes it's actually very similar in fact one interesting question we can ask is if you look at all the air safety violations all the crashes that occurred would they have happened if the plane were truly autonomous and I think you'll find that in many of these cases you know because of pilot error they made silly decisions and so in some sense even in commercial air traffic there's a lot of applications although we only see autonomy being enabled at very high altitudes when the plane is on autopilot there's still a role for the human and that kind of autonomy is you're kind of implying I don't know what the right word is but it's a little dumber than it could be right so in the lab of course we can afford to be a lot more aggressive and the question
we try to ask is can we make robots that will be able to make decisions without any kind of external infrastructure right so what does that mean so the most common piece of infrastructure that airplanes use today is GPS GPS is also the most brittle form of information if you have driven in a city and tried to use GPS navigation you know in tall buildings you immediately lose GPS and so that's not a very sophisticated way of building autonomy I think the second piece of infrastructure they rely on is communications again it's very easy to jam communications in fact if you use Wi-Fi you know that Wi-Fi signals drop out cell signals drop out so to rely on something like that is not good the third form of infrastructure we use and I hate to call it infrastructure but it is that in the sense of robots it's people so you could rely on somebody to pilot you right and so the question you want to ask is if there are no pilots there's no communications to any base station if there's no knowledge of position and if there's no a priori map a priori knowledge of what the environment looks like a priori model of what might happen in the future can robots navigate that is true autonomy right so that's true autonomy and we're talking about you mentioned military applications and drones okay so what else is there you talk about agile autonomous flying robots aerial robots so that's a different kind it's not winged it's not big it's small so I use the word agility mostly or at least we're motivated to do agile robots mostly because robots can operate and should be operating in constrained environments and if you want to operate the way a Global Hawk operates I mean the kinds of conditions in which you operate are very very restrictive if you want to go inside a building for example for search and rescue or to locate an active shooter or you want to navigate under the canopy in an orchard to look at the health of plants or to
count fruits to measure the tree trunks these are things we do by the way the cool agriculture stuff you've shown in the past is really awesome alright so in those kinds of settings you do need that agility and agility does not necessarily mean you break records for the hundred meter dash what it really means is you see the unexpected and you're able to maneuver in a safe way and in a way that gets you the most information about the thing you're trying to do by the way you may be the only person who in a TED talk has used a math equation which is amazing people should go see it actually it's very interesting because the TED curator Chris Anderson told me you can't show math and you know I thought about it but that's who I am and that's our work and so I felt compelled to give the audience a taste for at least some math so on that point simply what does it take to make a thing with four motors fly a quadcopter one of these little flying robots you know how hard is it to make it fly how do you coordinate the four motors how do you convert those motors into actual movement so this is an interesting question we've been trying to do this since 2000 it is a commentary on the sensors that were available back then the computers that were available back then and a number of things happened between 2000 and 2007 one is the advances in computing and so we all know about Moore's law but I think 2007 was a tipping point the year of the iPhone the year of the cloud lots of things happened in 2007 but going back even further inertial measurement units as a sensor really matured again lots of reasons for that certainly there's a lot of federal funding particularly DARPA in the US but they didn't anticipate this boom in IMUs but if you look at what subsequently happened every year every car manufacturer had to put an airbag in which meant you had to have an accelerometer onboard and so that drove down the price to performance
ratio of IMUs I should know this that's very interesting yeah it's very interesting the connection there and that's why research is it's very hard to predict the outcomes and again the federal government spent a ton of money on things that they thought were useful for resonators but it ended up enabling these small UAVs yeah which is great because I could have never raised that much money and sold this project hey we want to build these small UAVs can you actually fund the development of low-cost IMUs so why do you need an IMU I'll come back to that but so in 2007 2008 we were able to build these and then the question you're asking was a good one how do you coordinate the motors to develop this but over the last 10 years everything has been commoditized a high school kid today can pick up a Raspberry Pi kit and build this all the low level functionality is all automated but basically at some level you have to drive the motors at the right rpms the right velocity in order to generate the right amount of thrust in order to position it and orient it in a way that you need to in order to fly the feedback that you get is from onboard sensors and the IMU is an important part of it the IMU tells you what the acceleration is as well as what the angular velocity is and those are important pieces of information in addition to that you need some kind of local position or velocity information for example when we walk we implicitly have this information because we kind of know what our stride length is we also are looking at images flying past our retina if you will and so we can estimate velocity we also have accelerometers in our head and we're able to integrate all these pieces of information to determine where we are as we walk and so robots have to do something very similar you need an IMU you need some kind of a camera or other sensor that's measuring velocity and then you need some kind of a global reference frame
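the sensor fusion just described, integrating the IMU's angular velocity while correcting its drift with an absolute reference like gravity, can be sketched as a minimal complementary filter for a single tilt angle. this is a generic textbook illustration, not the actual estimator used on Penn's robots, and the hover data below is made up:

```python
import math

def complementary_filter(gyro_rates, accels, dt, alpha=0.98):
    """Estimate one tilt angle by fusing gyro and accelerometer data.

    gyro_rates: angular velocity samples (rad/s) about one body axis
    accels: (ax, az) accelerometer samples (m/s^2); gravity's direction
            gives an absolute but noisy tilt reference
    alpha: how much to trust the integrated gyro vs. the accel reference
    """
    angle = 0.0
    history = []
    for w, (ax, az) in zip(gyro_rates, accels):
        gyro_angle = angle + w * dt        # integrate angular velocity (drifts)
        accel_angle = math.atan2(ax, az)   # absolute tilt from gravity (noisy)
        angle = alpha * gyro_angle + (1 - alpha) * accel_angle
        history.append(angle)
    return history

# hypothetical hover at a constant small tilt: the gyro reads ~0
# while the accelerometer sees gravity at an angle
dt = 0.005            # 200 Hz, the kind of update rate discussed above
true_tilt = 0.1       # rad, made-up ground truth
gyro = [0.0] * 400
acc = [(9.81 * math.sin(true_tilt), 9.81 * math.cos(true_tilt))] * 400
est = complementary_filter(gyro, acc, dt)
print(round(est[-1], 3))  # prints 0.1
```

the gyro term tracks fast motion between updates while the accelerometer term slowly pulls the estimate toward gravity's direction, which is why the integrated angle stays bounded instead of drifting.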
if you really want to think about doing something in a world coordinate system and so how do you estimate your position with respect to that global reference frame that's important as well so coordinating the RPMs of the four motors is what allows you to first of all fly and hover and then you can change the orientation and the velocity and so on exactly exactly there are a bunch of degrees of freedom six degrees of freedom but you only have four inputs the four motors and it turns out to be a remarkably versatile configuration you think at first well I only have four motors how do I go sideways but it's not too hard to say well if I tilt myself I can go sideways and then you have four motors pointing up how do I rotate in place about a vertical axis well you rotate them at different speeds and that generates reaction moments and that allows you to turn so it's actually a pretty optimal configuration from an engineering standpoint it's very simple very cleverly done and very versatile so if you could step back to a time so I've always known flying robots as the to me it was natural that the quadcopter should fly but when you first started working with it like how surprised were you that you can do so much with the four motors how surprising is it that this thing flies first of all you can make it hover then you can add control to it firstly the four motor configuration is not ours it has at least a hundred year history and various people tried to get quadrotors to fly without much success as I said we've been working on this since 2000 our first designs were well this is way too complicated why don't we try to get an omnidirectional flying robot and so in our early designs we had eight rotors and these eight rotors were arranged uniformly on a sphere if you will so you can imagine a symmetric configuration and so you should be able to fly anywhere but the real challenge we had is the strength to weight
ratio is not enough and of course we didn't have the sensors and so on so everybody knew or at least the people who worked with rotorcrafts knew four rotors get it done so that was not our idea but it took a while before we could actually do the onboard sensing and the computation that was needed for the kinds of agile maneuvering that we wanted to do in our little aerial robots and that only happened between 2007 and 2009 in our lab yeah and you have to send the signal maybe a hundred times a second so the compute there is everything has to come down in price and what are the steps of getting from point A to point B so you just talked about like local control but for all the kind of cool dancing in the air that I've seen you show how do you make it happen do you have a trajectory make a trajectory first of all okay figure out a trajectory so plan a trajectory and then how do you make that trajectory happen I think planning is a very fundamental problem in robotics I think you know 10 years ago it was an esoteric thing but today with self-driving cars you know everybody can understand this basic idea that a car sees a whole bunch of things and it has to keep a lane or maybe make a right turn or switch lanes it has to plan a trajectory it has to be safe it has to be efficient so everybody's familiar with that that's kind of the first step that you have to think about when you say autonomy and so for us it's about finding smooth motions motions that are safe so we think about these two things one is optimality one is safety clearly you cannot compromise safety so you're looking for safe optimal motions the other thing you have to think about is can you actually compute a reasonable trajectory in a fast manner in a small amount of time because you have a time budget so the optimal becomes suboptimal but in our lab we focus on synthesizing smooth trajectories that satisfy all the constraints in other words don't violate any
safety constraints and is as efficient as possible and when I say efficient it could mean I want to get from point A to point B as quickly as possible or I want to get to it as gracefully as possible or I want to consume as little energy as possible but always staying within the safety constraints but yes always finding a safe trajectory so there's a lot of excitement and progress in the field of machine learning yes and reinforcement learning and the neural network variant of that with deep reinforcement learning do you see a role for machine learning in so a lot of the successful flying robots did not rely on machine learning except for maybe a little bit of the perception the computer vision side on the control side and the planning do you see there's a role in the future for machine learning so let me disagree a little bit with you I think we never perhaps called out in my work called out learning but even this very simple idea of being able to fly through a constrained space the first time you try it you'll invariably you might get it wrong even if the task is challenging and the reason is to get it perfectly right you have to model everything in the environment and flying is notoriously hard to model there are aerodynamic effects that we constantly discover even just before I was talking to you I was talking to a student about how blades flap when they fly and that ends up changing how a rotorcraft is accelerated in the angular direction this is like micro flaps or something it's not micro flaps you assume that each blade is rigid but actually it flaps a little bit oh it bends interesting yeah and so the models rely on an assumption that they're actually rigid but that's not true if you're flying really quickly these effects become significant if you're flying close to the ground you get pushed off by the ground right something which every pilot knows when he tries to land or she tries to land this is called a
ground effect something very few pilots think about is what happens when you go close to a ceiling well you get sucked into the ceiling there are very few aircraft that fly close to any kind of ceiling likewise when you go close to a wall there are these wall effects and if you've gone on a train and you pass another train that's traveling in the opposite direction you feel the buffeting and so these kinds of microclimates affect our UAVs significantly so impossible to model essentially I wouldn't say they're impossible to model but the level of sophistication you would need in the model and the software would be tremendous plus to get everything right would be awfully tedious so the way we do this is over time we figure out how to adapt to these conditions so early on we used a form of learning that we call iterative learning so the idea is if you want to perform a task there are a few things that you need to change and iterate over a few parameters that over time you can figure out so I could call it policy gradient reinforcement learning but actually this is iterative learning and this was way back I think what's interesting is if you look at autonomous vehicles today learning occurs could occur in two pieces one is perception understanding the world second is action taking actions everything that I've seen that is successful is on the perception side of things so in computer vision we've made amazing strides in the last ten years so recognizing objects actually detecting objects classifying them and tagging them in some sense annotating them this is all done through machine learning on the action side on the other hand I don't know of any examples where there are fielded systems where we actually learn the right behavior outside of a single demonstration done successfully you know in the laboratory this is the holy grail can you do end-to-end learning can you go from pixels to motor currents this is
really really hard and I think if you look going forward the right way to think about these things is data driven approaches learning based approaches in concert with model-based approaches which is the traditional way of doing things so I think there's a role for each of these methodologies so what do you think just jumping out of topic since you mentioned autonomous vehicles what do you think are the limits on the perception side so I've talked to Elon Musk and on the perception side they're using primarily computer vision to see the environment in your work because you work with the real world a lot the physical world what are the limits of computer vision do you think you can solve autonomous vehicles on the perception side focusing on vision alone and machine learning so you know we also have a spin-off company Exyn Technologies that works underground in mines you go into mines they're dark they're dirty you fly in a dirty area there's stuff you kick up by the propellers the downwash kicks up dust I challenge you to get a computer vision algorithm to work there yeah so we use lidars in that setting indoors and even outdoors when we fly through fields I think there's a lot of potential for just solving the problem using computer vision alone but I think the bigger question is can you actually solve or can you actually identify all the corner cases using a single sensing modality and using learning alone so what's your intuition there so look if you have a corner case and your algorithm doesn't work your instinct is to go get data about the corner case and patch it up learn how to deal with that corner case but at some point this is going to saturate this approach is not viable so today computer vision algorithms can detect 90% of the objects or can detect objects 90% of the time classify them 90% of the time cats on the Internet they can probably do 95 percent on that but to get from 90% to 99% you need
a lot more data and then I tell you well that's not enough because I have a safety critical application I want to go from 99% to 99.9 percent that's even more data so I think if you look at wanted accuracy on the x-axis and look at the amount of data on the y-axis I believe that curve is an exponential curve wow okay it's hard even if it's linear it's hard if it's linear totally but I think it's exponential and the other thing you have to think about is that this process is a very very power hungry process to run data farms you mean literally power literally power literally power so in 2014 five years ago and I don't have more recent data two percent of US electricity consumption was from data farms so we think about this as an information science and information processing problem actually it is an energy processing problem and so unless we figure out better ways of doing this I don't think this is viable so talking about driving which is a safety critical application and in some aspects flight is safety critical maybe a philosophical question maybe an engineering one what problem do you think is harder to solve autonomous driving or autonomous flight that's a really interesting question I think autonomous flight has several advantages that autonomous driving doesn't have so look if I want to go from point A to point B I have a very very safe trajectory go vertically up to a maximum altitude fly horizontally to just about the destination and then come down vertically this is pre-programmed the equivalent of that is very hard to find in the self-driving car world because you're on the ground you're on a two-dimensional surface and the trajectories on a two-dimensional surface are more likely to encounter obstacles I mean this in an intuitive sense but is it mathematically true that's mathematically as well that's true there's another option in the 2d space of platooning or because there's so many obstacles you can connect with those obstacles and all
these exist in the three-dimensional space as well right so they do so the question also implies how difficult are obstacles in the three-dimensional space in flight so that's the downside I think in three-dimensional space you're modeling the three-dimensional world not just because you want to avoid it but you want to reason about it you want to work in the three-dimensional environment and that's significantly harder so that's one disadvantage I think the second disadvantage is of course anytime you fly you have to put up with the peculiarities of aerodynamics and their complicated environments how do you negotiate that so that's always a problem do you see a time in the future where there is you mentioned there's an agriculture application so there's a lot of applications of flying robots but do you see a time in the future where there are tens of thousands or maybe hundreds of thousands of delivery drones that fill the sky delivery flying robots I think there's a lot of potential for last mile delivery and so in crowded cities I don't know if you go to a place like Hong Kong just crossing the river can take half an hour and while a drone can just do it in five minutes at most I think you look at delivery of supplies to remote villages I work with a nonprofit called WeRobotics they work in the Peruvian Amazon where the only highways are rivers and to get from point A to point B may take five hours while with a drone you can get there in 30 minutes so just delivering drugs retrieving samples for testing vaccines I think there's huge potential here so I think the challenges are not technological the challenge is economical the one thing I'll tell you that nobody thinks about is the fact that we've not made huge strides in battery technology yes it's true batteries are becoming less expensive because we have these mega factories that are coming up but they're all based on lithium based technologies and if you look at
the energy density and the power density those are two fundamentally limiting numbers so power density is important because for a UAV to take off vertically into the air which most drones do they don't have a runway you consume roughly 200 watts per kilo at the small size that's a lot right in contrast the human brain consumes less than 80 watts the whole of the human brain so just imagine just lifting yourself into the air is like two or three lightbulbs which makes no sense to me yes so you're going to have to at scale solve the energy problem then charging the batteries storing the energy and so on and then the storage is the second problem but storage limits the range but you know you have to remember that you have to burn a lot of it for a given time so the burning which is the power which is a power question yes and do you think just your intuition there are there breakthroughs in batteries on the horizon how hard is that problem look there are a lot of companies that are promising flying cars that are autonomous and that are clean I think they're over-promising the autonomy piece is doable the clean piece I don't think so there's another company that I work with called Jetoptera they make small jet engines hmm and they can get up to 50 miles an hour very easily and lift 50 kilos but they're jet engines they're efficient they're a little louder than electric vehicles but they can enable flying cars so your sense is that there's a lot of pieces that have to come together so on this crazy question if you look at companies like Kitty Hawk working on electric so clean I'm talking to Sebastian Thrun right it's a crazy dream you know but you work with flight a lot you've mentioned before that manned flight or carrying a human body is very difficult to do so how crazy is flying cars do you think there will be a day when we have vertical takeoff and landing vehicles that are sufficiently affordable that we're going
to see a huge amount of them and they would look like something like we dream of when we think about flying cars yeah like the Jetsons The Jetsons yeah so look there are a lot of smart people working on this and you never say something is not possible when you have people like Sebastian Thrun working on it so I totally think it's viable I question again the electric piece of it and again for short distances you can do it and there's no reason to suggest that these all just have to be rotorcraft you take off vertically but then you morph into forward flight I think there are a lot of interesting designs the question to me is are these economically viable and if you agree to do this with fossil fuels then it instantly becomes viable that's a real challenge do you think it's possible for robots and humans to collaborate successfully on tasks so a lot of robotics folks that I talk to and work with I mean humans just add a giant mess to the picture so it's best to remove them from consideration when solving specific tasks it's very difficult to model they're just a source of uncertainty in your work with these agile flying robots do you think there's a role for collaboration with humans or is it best to model tasks in a way that doesn't have a human in the picture well I don't think we should ever think about robots without humans in the picture ultimately robots are there because we want them to solve problems for humans right but there is no general solution to this problem I think if you look at human interaction and how humans interact with robots we think of these in three different ways one is the human commanding the robot the second is the human collaborating with the robot so for example we work on how a robot can actually pick up things with a human like carry things that's true collaboration and third we think about humans as bystanders so driving cars what's the human's role and how do
self-driving cars acknowledge the presence of humans so I think all of these things are different scenarios it depends on what kind of humans what kind of tasks and I think it's very difficult to say that there's a general theory that we all have for this but at the same time it's also silly to say that we should think about robots independent of humans so to me human robot interaction is almost a mandatory aspect of everything we do yes so to linger on that with your thoughts if you jump to autonomous vehicles for example there's a big debate between what's called level 2 and level 4 so semi autonomous and autonomous vehicles and the Tesla approach currently at least has a lot of collaboration between human and machine so the human is supposed to actively supervise the operation of the robot part of the safety definition of how safe a robot is in that case is how effective is the human in monitoring it do you think that's ultimately not a good approach in sort of having a human in the picture not as a bystander or part of the infrastructure but really as part of what's required to make the system safe this is harder than it sounds yes I think you know I mean I'm sure you've driven before on highways and so on it's really very hard to have to relinquish control to a machine and then take over when needed so I think Tesla's approach is interesting because it allows you to periodically establish some kind of contact with the car Toyota on the other hand is thinking about shared autonomy collaborative autonomy as a paradigm if I may argue these are very very simple ways of human-robot collaboration because the task is pretty boring you sit in a vehicle you go from point A to point B I think the more interesting thing to me is for example search-and-rescue I've got a human first responder robot first responders I got to do something it's important I have to do it in two minutes the building is
burning there's been an explosion it's collapsed how do I do it I think to me those are the interesting things where it's very very unstructured and what's the role of the human what's the role of the robot clearly there's lots of interesting challenges and as a field I think we're gonna make a lot of progress in this area yeah it's an exciting form of collaboration you're right in autonomous driving the main enemy is just boredom of the human yes as opposed to in rescue operations it's literally life and death and the collaboration enables the effective completion of the mission so it's exciting in some sense you know we're also doing this you think about the human driving a car and almost invariably the human's trying to estimate the state of the car estimate the state of the environment and so on but what if the car were to estimate the state of the human so for example I'm sure you have a smartphone the smartphone tries to figure out what you're doing and send you reminders and oftentimes telling you to drive to a certain place although you have no intention of going there because it thinks that that's where you should be because of some gmail calendar entry or something like that and you know it's trying to constantly figure out who you are what you're doing if a car were to do that maybe that would make the driver safer because the car's trying to figure out is the driver paying attention looking at his or her eyes looking at saccadic movements so I think the potential is there but from the reverse side it's not robot modeling but modeling the human and I think the robots can do a very good job of modeling humans if you really think about the framework that you have a human sitting in a cockpit surrounded by sensors all staring at him in addition to staring outside but also staring at him I think there's a real synergy there yeah I love that problem because it's a new 21st century form of psychology actually AI
enabled psychology a lot of people have sci-fi inspired fears of walking robots like those from Boston Dynamics if you just look at shows on Netflix and so on or flying robots like those you work with how do you think about those fears how would you alleviate those fears do you have inklings echoes of those same concerns you know anytime we develop a technology meant to have positive impact in the world there's always a worry that somebody could subvert those technologies and use it in an adversarial setting and robotics is no exception right so I think it's very easy to weaponize robots I think we talked about swarms one thing I worry a lot about is so you know for us to get swarms to work and do something reliably it's really hard but suppose there's this challenge of trying to destroy something and I have a swarm of robots well only one out of the swarm needs to get to its destination so that suddenly becomes a lot more doable so I worry about this general idea of using autonomy with lots and lots of agents I mean having said that look a lot of this technology is not very mature my favorite saying is that if somebody had to develop this technology wouldn't you rather the good guys do it so the good guys have a good understanding of the technology so they can figure out how this technology is being used in a bad way or could be used in a bad way and try to defend against it so we think a lot about that so we are doing research on how to defend against swarms for example there's in fact a report by the National Academies on counter UAS technologies this is a real threat but we're also thinking about how to defend against this and knowing how swarms work knowing how autonomy works is I think very important so it's not just politicians you think engineers have a role in this discussion absolutely I think the days where politicians can be agnostic to technology are gone I think every politician needs to
be literate in technology and I often say technology is the new liberal art understanding how technology will change your life I think is important and every human being needs to understand that and maybe we can elect some engineers to office as well on the other side what are the biggest open problems in robotics you said we're in the early days in some sense what are the problems we would like to solve in robotics I think there are lots of problems right but I would phrase it in the following way if you look at the robots we're building they're still very much tailored towards doing specific tasks in specific settings I think the question of how do you get them to operate in much broader settings where things can change in unstructured environments is up in the air so think of self-driving cars today we can build a self-driving car in a parking lot we can do level five autonomy in a parking lot but can you do level five autonomy in the streets of Napoli in Italy or Mumbai in India no so in some sense when we think about robotics we have to think about where they're functioning what kind of environment what kind of a task we have no understanding of how to put both those things together so we're in the very early days of AI applied to the physical world and there's levels of difficulty and complexity depending on which area you're applying it to I think so we don't have a systematic way of understanding that you know everybody says just because a computer can now beat a human at any board game we suddenly know something about intelligence that's not true a board game is very very structured it is the equivalent of working in a Henry Ford factory where parts come you assemble move on it's a very very structured setting that's the easiest thing and we know how to do that so you've done a lot of incredible work at the University of Pennsylvania GRASP lab you know Dean of engineering
at UPenn what advice do you have for a new bright-eyed undergrad interested in robotics or AI or engineering well I think there's really three things one is you have to get used to the idea that the world will not be the same in five years or four years whenever you graduate right which is really hard to do so this thing about predicting the future every one of us needs to be trying to predict the future always not because you'll be any good at it but by thinking about it I think you sharpen your senses and you become smarter so that's number one number two is a corollary of the first piece which is you really don't know what's going to be important so this idea that I'm going to specialize in something which will allow me to go in a particular direction it may be interesting but it's important also to have this breadth so you have this jumping-off point I think the third thing and this is where I think Penn excels I mean we teach engineering but it's always in the context of the liberal arts it's always in the context of society as engineers we cannot afford to lose sight of that so I think that's important but I think one thing that people underestimate when they do robotics is the importance of mathematical foundations the importance of representations not everything can just be solved by looking for ROS packages on the internet or trying to find a deep neural network that works I think the representation question is key even to machine learning where if you ever hope to achieve or get to explainable AI somehow there need to be representations that you can understand so if you want to do robotics you should also do mathematics and you said liberal arts a little literature if you want to build AI you should be reading Dostoyevsky I agree with that very good Vijay thank you so much for talking today it was an honor thank you it was an exciting conversation thank you
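As an aside, the power-density figure from earlier in the conversation (roughly 200 watts per kilo to hover at small scale) can be sanity-checked with ideal actuator-disk (momentum) theory. The drone mass, rotor size, and efficiency numbers below are illustrative assumptions, not values from the episode:

```python
import math

# Ideal hover power from momentum (actuator-disk) theory:
#   P_ideal = T**1.5 / sqrt(2 * rho * A),  with hover thrust T = m * g.
def hover_power_watts(mass_kg, rotor_radius_m, n_rotors, rho=1.225):
    thrust = mass_kg * 9.81                      # N, thrust needed to hover
    disk_area = n_rotors * math.pi * rotor_radius_m ** 2
    return thrust ** 1.5 / math.sqrt(2 * rho * disk_area)

m = 1.0                                          # assumed: 1 kg quadrotor
p_ideal = hover_power_watts(m, rotor_radius_m=0.10, n_rotors=4)
p_electrical = p_ideal / (0.6 * 0.75)            # assumed: rotor figure of merit
                                                 # ~0.6, motor/ESC efficiency ~0.75
print(f"ideal aerodynamic power: {p_ideal:.0f} W")
print(f"electrical power draw:   {p_electrical:.0f} W")
# Under these assumptions the draw for 1 kg lands in the low hundreds of watts,
# the same order of magnitude as the ~200 W/kg quoted in the conversation.
```

The exact number moves with rotor area and efficiency, but the order of magnitude is robust, which is the point being made about power density as a fundamental limit.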
Yann LeCun: Deep Learning, ConvNets, and Self-Supervised Learning | Lex Fridman Podcast #36
the following is a conversation with Yann LeCun he's considered to be one of the fathers of deep learning which if you've been hiding under a rock is the recent revolution in AI that's captivated the world with the possibility of what machines can learn from data he's a professor at New York University a vice president and chief AI scientist at Facebook and co-recipient of the Turing Award for his work on deep learning he's probably best known as the founding father of convolutional neural networks in particular their application to optical character recognition and the famed MNIST dataset he is also an outspoken personality unafraid to speak his mind in a distinctive French accent and explore provocative ideas both in the rigorous medium of academic research and the somewhat less rigorous medium of Twitter and Facebook this is the artificial intelligence podcast if you enjoy it subscribe on YouTube give it five stars on iTunes support it on Patreon or simply connect with me on Twitter at Lex Friedman spelled F-R-I-D-M-A-N and now here's my conversation with Yann LeCun you said that 2001 a Space Odyssey is one of your favorite movies Hal 9000 decides to get rid of the astronauts for people who haven't seen the movie spoiler alert because he it she believes that the astronauts will interfere with the mission do you see Hal as flawed in some fundamental way or even evil or did he do the right thing neither there's no notion of evil in that context other than the fact that people die but it was an example of what people call value misalignment right you give an objective to a machine and the machine strives to achieve this objective and if you don't put any constraints on this objective like don't kill people and don't do things like this the machine given the power will do stupid things just to achieve this objective or damaging things to achieve its objective it's a little bit like we are used to this in the context of human society we put in place
laws to prevent people from doing bad things because spontaneously they would do those bad things right so we have to shape their cost function their objective function if you want through laws to kind of correct and education obviously to sort of correct for those so maybe just pushing a little further on that point you know there's a mission there's fuzziness around the ambiguity around what the actual mission is but do you think that there will be a time from a utilitarian perspective for an AI system where it is not misalignment where it is alignment for the greater good of society that an AI system will make decisions that are difficult well that's the trick I mean eventually we'll have to figure out how to do this and again we're not starting from scratch because we've been doing this with humans for millennia so designing objective functions for people is something that we know how to do and we don't do it by you know programming things although the legal code is called code so that tells you something and it's actually the design of an objective function that's really what legal code is right it tells you what you can do and what you can't do if you do it you pay that much that's an objective function so there is this idea somehow that it's a new thing for people to try to design objective functions that are aligned with the common good but no we've been writing laws for millennia and that's exactly what it is so that's where you know the science of lawmaking and computer science will come together so there's nothing special about Hal or AI systems it's just the continuation of tools used to make some of these difficult ethical judgments that laws make yeah and we have systems like this already that make many decisions for ourselves in society that need to be designed in a way that they you know have rules about things that sometimes have bad side effects and we have
to be flexible enough about those rules so that they can be broken when it's obvious that they shouldn't be applied so you don't see this on the camera here but all the decorations in this room are pictures from 2001 a Space Odyssey wow is that by accident or by design it's not by accident it's by design wow so if you were to build HAL 10,000 so an improvement of Hal 9000 what would you improve well first of all I wouldn't ask it to hold secrets and tell lies because that's really what breaks it in the end that's the fact that it's asking itself questions about the purpose of the mission and it pieces things together that it's heard you know all the secrecy of the preparation of the mission and the fact that there was a discovery on the lunar surface that really was kept secret and one part of Hal's memory knows this and the other part does not know it and is supposed to not tell anyone and that creates an internal conflict do you think there never should be a set of things that an AI system should not be allowed like a set of facts that should not be shared with the human operators well I think no I think it should be a bit like in the design of autonomous AI systems there should be the equivalent of you know the oath the Hippocratic oath that doctors sign up to right so there are certain rules that you have to abide by and we can sort of hardwire this into our machines to kind of make sure they don't go so I'm not you know an advocate of the three laws of robotics you know the Asimov kind of thing because I don't think it's practical but you know some level of limits but to be clear these are not questions that are kind of really worth asking today because we just don't have the technology to do this we don't have autonomous intelligent machines we have intelligent machines some intelligent machines that are very
specialized but they don't really sort of satisfy an objective they're just you know kind of trained to do one thing so until we have some idea for a design of a full-fledged autonomous intelligent system asking the question of how we design its objective I think is a little too abstract it's a little too abstract there's useful elements to it in that it helps us understand our own ethical codes humans so even just as a thought experiment if you imagine that an AGI system is here today how would we program it is a kind of nice thought experiment of constructing how we should have a system of laws for us humans it's just a nice practical tool and I think there's echoes of that idea too in the AI systems we have today that don't have to be that intelligent yeah like autonomous vehicles these things start creeping in that are worth thinking about but certainly they shouldn't be framed as Hal yeah looking back what is the most I'm sorry if it's a silly question but what is the most beautiful or surprising idea in deep learning or AI in general that you've ever come across sort of personally when you sat back and just had this kind of wow that's pretty cool moment that's nice well surprising I don't know if it's an idea rather than a sort of empirical fact the fact that you can take gigantic neural nets train them on relatively small amounts of data relatively with stochastic gradient descent and that it actually works breaks everything you read in every textbook right every pre deep learning textbook that told you you need to have fewer parameters than you have data samples you know if you have a non-convex objective function you have no guarantee of convergence you know all the things that you read in textbooks and they tell you stay away from this and they were all wrong huge number of parameters non-convex and somehow with very little data relative to the number of parameters it's able to learn anything right does that surprise you
today well it was kind of obvious to me before I knew anything that this is a good idea and then it became surprising that it worked because I started reading those textbooks okay so okay talk through the intuition of why it was obvious to you if you remember well okay so the intuition was it's sort of like you know those people in the late 19th century who proved that heavier than air flight was impossible right and of course you have birds right they do fly and so on the face of it it's obviously wrong as an empirical question right and so we have the same kind of thing that we know that the brain works we don't know how but we know it works and we know it's a large network of neurons in interaction and that learning takes place by changing the connections so kind of getting this level of inspiration without copying the details but sort of trying to derive basic principles you know that kind of gives you a clue as to which direction to go there's also the idea somehow that I've been convinced of since I was an undergrad that even before that intelligence is inseparable from learning so the idea somehow that you can create an intelligent machine by basically programming for me was a non-starter you know from the start every intelligent entity that we know about arrives at this intelligence through learning so learning you know machine learning was a completely obvious path also because I'm lazy so you know it's automating basically everything and learning is the automation of intelligence right so what is learning then what falls under learning because do you think of reasoning as learning well reasoning is certainly a consequence of learning as well just like other functions of the brain the big question about reasoning is how do you make reasoning compatible with gradient based learning do you think neural networks can be made to reason yes there's no question about that again we have a good example right
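The textbook-breaking empirical fact described above, far more parameters than data points and a non-convex objective that gradient descent nevertheless fits, shows up even in a toy sketch. Plain full-batch gradient descent stands in for SGD here, and the network width, learning rate, and target function are arbitrary illustrative choices, not anything from the conversation:

```python
import numpy as np

rng = np.random.default_rng(0)

# 20 data points but ~600 parameters: the regime textbooks warned against.
X = np.linspace(-3, 3, 20).reshape(-1, 1)
y = np.sin(X)

H = 200                                       # hidden width (arbitrary choice)
W1 = rng.normal(0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1.0 / np.sqrt(H), (H, 1)); b2 = np.zeros(1)

def loss():
    h = np.tanh(X @ W1 + b1)
    return float(np.mean((h @ W2 + b2 - y) ** 2))

initial = loss()
lr = 0.01
for _ in range(10000):                        # plain full-batch gradient descent
    h = np.tanh(X @ W1 + b1)
    g = 2 * (h @ W2 + b2 - y) / len(X)        # dLoss/dOutput
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)            # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"loss: {initial:.3f} -> {loss():.4f}")  # non-convex, overparameterized, still fits
```

The loss drops sharply despite every classical warning applying: nothing here proves why it works, it just reproduces in miniature the empirical surprise being discussed.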
the question is how so the question is how much prior structure you have to put in the neural net so that something like human reasoning will emerge from it you know from learning another question is all of our kind of models of what reasoning is that are based on logic are discrete and are therefore incompatible with gradient based learning and I'm a very strong believer in this idea of gradient based learning I don't believe in other types of learning that don't use kind of gradient information if you want so you don't like discrete mathematics you don't like anything discrete well it's not that I don't like it it's just that it's incompatible with learning and I'm a big fan of learning right so in fact that's perhaps one reason why deep learning has been kind of looked at with suspicion by a lot of computer scientists because the math is very different the math you use for deep learning you know has more to do with you know cybernetics the kind of math you do in electrical engineering than the kind of math you do in computer science and you know nothing in machine learning is exact right computer science is all about sort of you know obsessive compulsive attention to details of like you know every index has to be right and you can prove that an algorithm is correct right machine learning is the science of sloppiness really that's beautiful so okay maybe let's feel around in the dark of what is a neural network that reasons or a system that works with continuous functions that's able to build knowledge however we think about reasoning builds on previous knowledge builds on extra knowledge creates new knowledge generalizes outside of any training set ever built what does that look like maybe do you have inklings of thoughts of what that might look like well yeah I mean yes and no if I had precise ideas about this I think you know we'd be building it right now and there are people working on this whose main
research interest is actually exactly that right so what you need to have is a working memory so you need to have some device if you want some subsystem that can store a relatively large number of factual episodic pieces of information for you know a reasonable amount of time so in the brain for example there are kind of three main types of memory one is the sort of memory of the state of your cortex and that sort of disappears within 20 seconds you can't remember things for more than about 20 seconds or a minute if you don't have any other form of memory the second type of memory which is longer term but still short term is the hippocampus so you know you came into this building you remember where the exit is where the elevators are you have some map of that building that's stored in your hippocampus you might remember something about what I said you know a few minutes ago I forgot it all already of course it's been erased but you know that would be in your hippocampus and then the longer term memory is in the synapses the synapses right so what you need if you want a system that's capable of reasoning is that you want the hippocampus-like thing right and that's what people have tried to do with memory networks and you know neural Turing machines and stuff like that right and now with transformers which have sort of a memory in their kind of self attention system you can think of it this way so that's one element you need another thing you need is some sort of network that can access this memory get an information back and then kind of crunch on it and then do this iteratively multiple times because a chain of reasoning is a process by which you update your knowledge about the state of the world about you know what's gonna happen etc and there has to be this sort of recurrent operation basically and you think that kind of if we think about a transformer so that seems to be too small to contain the knowledge
that is to represent the knowledge that's contained in Wikipedia for example but a transformer doesn't have this idea of recurrence it's got a fixed number of layers and that's the number of steps that you know limits basically its representation but recurrence would build on the knowledge somehow I mean yeah it would evolve the knowledge and expand the amount of information perhaps or useful information within that knowledge yeah but is this something that just can emerge with size because it seems like everything we have now is just no it's not clear how you access and write into an associative memory in an efficient way I mean sort of the original memory network maybe had something like the right architecture but if you try to scale up a memory network so that the memory contains all of Wikipedia it doesn't quite work right so there's a need for new ideas there okay but it's not the only form of reasoning so there's another form of reasoning which is very classical also in some types of AI and it's based on let's call it energy minimization okay so you have some sort of objective some energy function that represents the quality or the negative quality okay energy goes up when things get bad and it gets low when things get good so let's say you want to figure out you know what gestures do I need to do to grab an object or walk out the door if you have a good model of your own body a good model of the environment using this kind of energy minimization you can do planning and in optimal control it's called model predictive control you have a model of what's gonna happen in the world as a consequence of your actions and that allows you to by energy minimization figure out the sequence of actions that optimizes a particular objective function which measures you know minimizes the number of times you're gonna hit something and the energy you're gonna spend doing the gesture and
etc so that's a form of reasoning planning is a form of reasoning and perhaps what led to the ability of humans to reason is the fact that you know species that appeared before us had to do some sort of planning to be able to hunt and survive and survive the winter in particular and so you know it's the same capacity that you need to have so in your intuition if you look at expert systems encoding knowledge as logic systems as graphs in this kind of way is not a useful way to think about knowledge graphs are a little brittle or logic representations so basically you know variables that have values and constraints between them that are represented by rules is a little too rigid and too brittle right so some of the early efforts in that respect were to put probabilities on them so a rule you know if you have this and that symptom you know you have this disease with that probability and you should prescribe that antibiotic with that probability right that's the MYCIN system from the 70s and that's what that branch of AI led to you know Bayesian networks and graphical models and causal inference and variational you know methods so there is I mean certainly a lot of interesting work going on in this area the main issue with this is knowledge acquisition how do you reduce a bunch of data to a graph of this type it relies on the expert on a human being to encode to add knowledge and that's essentially impractical yeah the question the second question is do you want to represent knowledge as symbols and do you want to manipulate them with logic and again that's incompatible with learning so one suggestion which Geoff Hinton has been advocating for many decades is replace symbols by vectors think of it as patterns of activities in a bunch of neurons or units or whatever you wanna call them and replace logic by continuous functions okay and that becomes now compatible there's a very good set of ideas written in a paper
about 10 years ago by Leon Bottou who is here at Facebook the title of the paper is from machine learning to machine reasoning and his idea is that a learning system should be able to manipulate objects that are in a space and then put the result back in the same space so it's this idea of working memory basically and it's very enlightening and in a sense that might learn something like the simple expert systems I mean you can learn basic logic operations there yeah quite possibly yeah there's a big debate on sort of how much prior structure you have to put in for this kind of stuff to emerge that's the debate I have with Gary Marcus and people like that yeah yeah so and the other person so I just talked to Judea Pearl mm-hmm you mentioned causal inference well his worry is that the current neural networks are not able to learn what causes what causal inference between things so I think he's right and wrong about this if he's talking about the sort of classic type of neural nets people didn't worry too much about this but there's a lot of people now working on causal inference and there's a paper that just came out last week by Leon Bottou among others David Lopez-Paz and other people exactly on that problem of how do you kind of you know get a neural net to sort of pay attention to real causal relationships which may also solve issues of bias in data and things like this so I'd like to read that paper because ultimately the challenge also seems to fall back on the human expert to ultimately decide causality between things people are not very good at establishing the direction of causality first of all so first of all you talk to physicists and physicists actually don't believe in causality because look at all the basic laws of microphysics they are time reversible so there is no causality the arrow of time is not there yes it's as soon as you start looking at macroscopic systems where there is
unpredictable randomness where there is clearly an arrow of time but it's a big mystery in physics actually how that emerges is that emergent or is it part of the fundamental fabric of reality yeah or is it a bias of intelligent systems that you know because of the second law of thermodynamics we perceive a particular arrow of time but in fact it's kind of arbitrary right so yeah physicists mathematicians they don't care about I mean the math doesn't care about the flow of time well certainly macro physics doesn't people themselves are not very good at establishing causal relationships if you ask I think it was in one of Seymour Papert's books on like children learning you know he studied with Jean Piaget he's the guy who co-authored the book perceptrons with Marvin Minsky that kind of killed the first wave but he was actually a learning person in the sense of studying learning in humans and machines that's why he got interested in perceptrons and he wrote that if you ask a little kid about what is the cause of the wind a lot of kids will say they will think for a while and they'll say oh it's the branches in the trees they move and that creates wind right so they get the causal relationship backwards and it's because their understanding of the world and intuitive physics is not that great right I mean these are like you know four or five year old kids you know it gets better and then you understand that this can't be right but there are many things which because of our common sense understanding of things what people call common sense yeah and our understanding of physics there's a lot of stuff that we can figure out causality even with diseases we can figure out what's not causing what often there's a lot of mystery of course but the idea is that you should be able to encode that into systems it seems unlikely they'd be able to figure that out themselves well whenever we can do intervention but you know all
of humanity has been completely deluded for millennia, probably since existence, about a very, very wrong causal relationship, where whatever you couldn't explain you attributed to, you know, some deity, some divinity. Right, and that's a cop-out, that's a way of saying, like, I don't know the cause, so, you know, God did it. So you mentioned Marvin Minsky, and the irony of, you know, maybe causing the first AI winter. You were there in the 90s, you were there in the 80s, of course. In the 90s, why do you think people lost faith in deep learning, and found it again over a decade later? Yeah, it wasn't called deep learning yet, it was just called neural nets. They lost interest. I mean, I think I would put that around 1995, at least in the machine learning community. There was always a neural net community, but it became disconnected from sort of mainstream machine learning, if you want. It was basically electrical engineering that kept at it, and computer science just gave up on neural nets. I don't know, I was too close to it to really analyze it with an unbiased eye, if you want, but I would make a few guesses. So the first one is, at the time, neural nets were very hard to make work, in the sense that you would, you know, implement backprop in your favorite language, and that favorite language was not Python, it was not MATLAB, it was not any of those things, because they didn't exist. You had to write it in Fortran or C or something like that. So you would experiment with it, you would probably make some very basic mistakes, like, you know, badly initialize your weights, make the network too small because you'd read in the textbook, you know, that you don't want too many parameters. And of course, you would train on XOR, because you didn't have any other data set to try it on, and of course, you know, it works half the time, so you'd give up. Also, you'd train with batch gradient, which, you
know, isn't efficient. So there was a whole bag of tricks that you had to know to make those things work, or you had to reinvent, and a lot of people just didn't, and they just couldn't make it work. So that's one thing. The investment in software platforms to be able to, you know, display things, figure out why things don't work, get a good intuition for how to get them to work, have enough flexibility so you can create network architectures like convolutional nets and stuff like that: it was hard, yeah, when you had to write everything from scratch, and again you didn't have any Python or MATLAB or anything, right. So, sorry to interrupt, but I read that you wrote in Lisp the first versions of LeNet, the convolutional neural networks, which, by the way, is one of my favorite languages; that's how I knew you were legit. The Turing Award, whatever, this was programmed in Lisp. It's still my favorite language. But it's not that we programmed in Lisp, it's that we had to write our own Lisp interpreter, because no Lisp interpreter that suited our needs existed. So we wrote a Lisp interpreter that we hooked up to, you know, a backprop library that we also wrote, for neural net computation. And then after a few years, around 1991, we invented this idea of basically having modules that know how to forward propagate and backpropagate gradients, and then interconnecting those modules in a graph. Léon Bottou had made proposals on this in the late 80s, and we were able to implement this using this system. Eventually we wanted to use that system to build production code for character recognition at Bell Labs, so we actually wrote a compiler for that Lisp interpreter. Patrice Simard, who is now at Microsoft, kind of did the bulk of it, with Léon and me. And so we could write our system in Lisp, and then compile to C, and then we would have a self-contained, complete system that could do the entire thing. Neither PyTorch nor TensorFlow can do this today. Yeah, okay,
it's coming, yeah. I mean, there's something like that in PyTorch called, you know, TorchScript. So, you know, we had to write our own Lisp interpreter, and then our own compiler; we had to invest a huge amount of effort to do this, and if you don't completely believe in the concept, you're not going to invest the time to do this, right. Now, at the time also, you know, if it were today, this would turn into Torch, or PyTorch, and so forth, and we would put it in open source, everybody would use it, and, you know, realize it's good. Back before 1995, working at AT&T, there's no way the lawyers would let you release anything of this nature in open source, and so we could not really distribute our code. And at that point, sorry to go on a million tangents, but on that point, I also read that there was a patent on convolutional networks. Yes, at Bell Labs. So, first of all, I mean, just to, actually, that ran out, thankfully, in 2007. In 2007? Can we just talk about that? I know you're at Facebook, but you're also at NYU. What does it mean to patent ideas like these, software ideas, essentially? Are they mathematical ideas, or what are they? Okay, so they're not mathematical ideas; they are, you know, algorithms. And there was a period where the US Patent Office would allow the patenting of software, as long as it was embodied. The Europeans are very different; they don't quite accept that, they have a different concept. But, you know, I never actually strongly believed in this, I don't believe in this kind of patent. Facebook basically doesn't believe in this kind of patent. Google files patents because they've been burned with Apple, and so now they do this for defensive purposes, but usually they say, we're not going to sue you if you infringe. Facebook has a similar policy; they say, you know, we file patents on certain things for defensive purposes, we're not going to sue you if you infringe, unless you sue us. So the
industry does not really believe in patents. They are there because of, you know, the legal landscape and various things, but I don't really believe in patents for this kind of stuff. Yes. So that's a great thing. So I'll tell you a war story. So what happened was, the first patent on convolutional nets was about kind of the early version of the convolutional net that didn't have separate pooling layers; it had convolutional layers with a stride of more than one, if you want. And then there was a second one, on convolutional nets with separate pooling layers, trained with backprop, filed in 1989 and 1992, something like this. At the time, the life of a patent was 17 years. So here's what happened: over the next few years, we started developing character recognition technology around convolutional nets, and in 1994 a check-reading system was deployed in ATM machines; in 1995 it was large check-reading machines in back offices, etc. And those systems were developed by an engineering group that we were collaborating with at AT&T, and they were commercialized by NCR, which at the time was a subsidiary of AT&T. Now, AT&T split up in 1996, early 1996, and the lawyers just looked at all the patents and distributed the patents among the various companies. They gave the convolutional net patent to NCR, because they were actually selling products that used it, but nobody at NCR had any idea where it came from. Yeah, okay. So between 1996 and 2007, there's a whole period: until 2002, I didn't actually work on machine learning or convolutional nets; I resumed working on this around 2002. And between 2002 and 2007 I was working on them, crossing my fingers that nobody at NCR would notice. Nobody noticed. Yeah, and I hope that this kind of, somewhat, as you said, lucky, relative openness of the community now will continue. It accelerates the entire progress of the industry. And, you know, the problem that Facebook and Google and others are facing
today is not whether Facebook or Google or Microsoft or IBM or whoever is ahead of the others; it's that we don't have the technology to build the things we want to build. We want to build intelligent virtual assistants that have common sense. We don't have a monopoly on good ideas for this. We don't believe we do; maybe others believe they do, but we don't. Okay. If a startup tells you they have the secret to, you know, human-level intelligence and common sense, don't believe them, they don't. And it's going to take the entire world research community for a while to get to the point where you can go off this, and then each of the companies is going to start to build things on it. We're not there yet. Absolutely. This speaks to the gap between the space of ideas and the rigorous testing of those ideas, of practical application, that you often speak to. You've written advice saying: don't get fooled by people who claim to have a solution to artificial general intelligence, who claim to have an AI system that works just like the human brain, or who claim to have figured out how the brain works; ask them what error rate they get on MNIST or ImageNet. This is a little dated, by the way. Yeah, I mean, five years, who's counting? Okay, but I think your opinion is, the MNIST and ImageNet, yes, may be dated, there may be new benchmarks, right, but I think that philosophy is one you still, somewhat, hold: that benchmarks and the practical testing, the practical application, is where you really get to test the ideas. Well, it may not be completely practical. Like, for example, you know, it could be a toy data set, but it has to be some sort of task that the community as a whole has accepted as some sort of standard, kind of benchmark, if you want. It doesn't need to be real. So, for example, many years ago here at FAIR, people like Jason Weston, Antoine Bordes and a few others proposed the bAbI tasks, which were kind of a toy problem to test the ability of machines to reason,
actually, to access working memory and things like this. And it was very useful, even though it wasn't a real task. MNIST is kind of halfway a real task. So, you know, toy problems can be very useful. It's just that I was really struck by the fact that a lot of people, particularly people with money to invest, would be fooled by people telling them, oh, we have, you know, the algorithm of the cortex, and you should give us 50 million. Yes, absolutely. So there's a lot of people who try to take advantage of the hype for business reasons and so on. But let me talk about this idea that new ideas, the ideas that push the field forward, may not yet have a benchmark, or it may be very difficult to establish a benchmark. I agree, that's part of the process. Establishing benchmarks is part of the process. So what are your thoughts about, so we have these benchmarks around stuff we can do with images, from classification to captioning to just every kind of information you can pull out of images, at the surface level. There's audio data sets, there's some video. Then there's natural language. What kind of benchmarks do you see that start creeping toward something more like intelligence, like reasoning, like, maybe you don't like the term, but AGI, echoes of that kind of formulation? Yeah, so a lot of people are working on interactive environments in which you can train and test intelligent systems. So, for example, you know, the classical paradigm of supervised learning is that you have a data set, you partition it into a training set, validation set, test set, and there's a clear protocol, right. But that assumes that the samples are statistically independent: you can exchange them, the order in which you see them shouldn't matter, you know, things like that. But what if the answer you give determines the next sample you see? Which is the case, for example, in robotics, right: your robot does something, and then it gets exposed to a new room, and
depending on where it goes, the room will be different. So that creates the exploration problem. And what if the samples, so that creates also a dependency between samples, right: if you can only move in space, the next sample you're going to see is going to be, probably, in the same building, most likely. So all the assumptions about the validity of this training set / test set hypothesis break whenever a machine can take an action that has an influence on the world and on what it's going to see. So people are setting up artificial environments where that takes place, right: the robot runs around a 3D model of a house and can interact with objects and things like this. Or you do robotics by simulation; you have those, you know, OpenAI Gym type things, or MuJoCo kind of simulated robots, and you have games, you know, things like that. So that's where the field is going, really, this kind of environment. Now, back to the question of AGI. Like, I don't like the term AGI, because it implies that human intelligence is general, and human intelligence is nothing like general; it's very, very specialized. We think it's general, we'd like to think of ourselves as having general intelligence; we don't, we're very specialized. We're only slightly more general than... Why does it feel general? So, you kind of, the term general, I think what's impressive about humans is the ability to learn, as we were talking about, learning to learn, in just so many different domains. It's perhaps not arbitrarily general, but just, you can learn in many domains, and integrate that knowledge somehow. Okay, the knowledge persists. So let me take a very specific example. It's not an example, it's more like a quasi-mathematical demonstration. So you have about one million fibers coming out of one of your eyes, okay, two million total, but let's talk about just one of them. It's one million nerve fibers, your optic nerve. Let's imagine that they are binary, so they can be active or inactive, right. So
the input to your visual cortex is one million bits. Now, they're connected to your brain in a particular way, and your brain has connections that are kind of a little bit like a convolutional net; they're kind of local, you know, in space and things like this. Now imagine I play a trick on you. It's a pretty nasty trick, I admit. I cut your optic nerve, and I put a device that makes a fixed but random permutation of all the nerve fibers. So now what comes to your brain is a fixed but random permutation of all the pixels. There's no way in hell that your visual cortex, even if I do this to you in infancy, will actually learn vision to the same level of quality that you can. Got it. And you're saying there's no way you'd ever learn that? No, because now two pixels that are nearby in the world will end up in very different places in your visual cortex, and your neurons there have no connections with each other, because they're only connected locally. So this whole, our entire hardware is built in many ways to support the locality of the real world. Yes, that's the specialization. Yeah, okay, but it's still pretty damn impressive. So it's not perfect generalization, it's not even close? No, no, it's not even close, it's not at all. It's specialized. So how many Boolean functions, so let's imagine you want to train your visual system to, you know, recognize particular patterns of those one million bits, okay. So that's a Boolean function, right? Either the pattern is here or it's not here; it's a two-way classification with one million binary inputs. How many such Boolean functions are there? Okay, you have 2 to the 1 million combinations of inputs, and for each of those you have an output bit, and so you have 2 to the 2 to the 1 million Boolean functions of this type, okay, which is an unimaginably large number. How many of those functions can actually be computed by your visual cortex? And the answer is a tiny, tiny, tiny, tiny sliver, an enormously tiny sliver. Yeah.
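The counting argument here can be checked at toy scale. This is just a sketch of the arithmetic, not anything from the conversation itself:

```python
from itertools import product

def count_boolean_functions(n):
    # A boolean function assigns one output bit to each of the 2**n
    # input patterns, so there are 2 ** (2 ** n) such functions.
    return 2 ** (2 ** n)

# Brute-force check for n = 2: enumerate every possible truth table.
n = 2
patterns = list(product([0, 1], repeat=n))             # 4 input patterns
truth_tables = list(product([0, 1], repeat=len(patterns)))
print(len(truth_tables), count_boolean_functions(2))   # 16 16

# With a million binary nerve fibers the count is 2 ** (2 ** 1_000_000),
# a number whose digit count is itself astronomically large; any physical
# cortex can compute only a vanishing fraction of these functions.
```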
So we are ridiculously specialized, you know. Okay, but, okay, that's an argument against the word general. I agree with your intuition, but I'm not sure. It seems the brain is impressively capable of adjusting to things, so... It's because we can't imagine tasks that are outside of our comprehension, right? We think we are general because we're general over all the things that we can apprehend. But there is a huge world out there of things that we have no idea about. We call that heat, by the way. Heat? Heat. So, at least physicists call that heat, or they call it entropy, which is, okay, you have a thing full of gas, right, a closed system full of gas. It has, you know, pressure, it has temperature, and you can write the equations, PV = nRT, you know, things like that, right. When you reduce the volume, the temperature goes up, the pressure goes up, things like that, right, for a perfect gas, at least. Those are the things you can know about that system, and it's a tiny, tiny number of bits compared to the complete information of the state of the entire system, because the state of the entire system would give you the position and momentum of every molecule of the gas. What you don't know about it is the entropy, and you interpret it as heat; the energy contained in that thing is what we call heat. Now, it's very possible that, in fact, there is some very strong structure in how those molecules are moving; it's just that they are structured in a way that we are just not wired to perceive. We're ignorant of it, and there's a near-infinite amount of things we're not wired to perceive. Yeah, that's a nice way to put it: we're general to all the things we can imagine, which is a very tiny subset of all things that are possible. It's like Kolmogorov complexity, or the Kolmogorov-Chaitin-Solomonoff complexity: you know, every bit string, or every integer, is random, except for all the ones that you can actually write down. Yeah.
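The scrambled optic nerve thought experiment from a moment ago can be simulated in a few lines. This is an illustrative sketch (the image and sizes are made up): a fixed random permutation keeps all the information in the image, but destroys the neighbor-to-neighbor correlation that locally wired systems, whether convolutional nets or visual cortex, rely on. The structure is still there; a locally wired observer is just not wired to exploit it.

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth 28x28 "image": a gentle gradient plus a little noise,
# standing in for the spatially coherent signals a retina receives.
side = 28
image = np.linspace(0.0, 1.0, side * side).reshape(side, side)
image = image + 0.01 * rng.standard_normal((side, side))

def adjacent_correlation(img):
    # Correlation between each pixel and its right-hand neighbor.
    left = img[:, :-1].ravel()
    right = img[:, 1:].ravel()
    return np.corrcoef(left, right)[0, 1]

# The "nasty trick": a fixed but random permutation of all pixels.
perm = rng.permutation(side * side)
scrambled = image.ravel()[perm].reshape(side, side)

corr_before = adjacent_correlation(image)      # close to 1: strong locality
corr_after = adjacent_correlation(scrambled)   # close to 0: locality destroyed
print(corr_before, corr_after)
```

No information is lost (the permutation is invertible), yet a fixed local filter, the building block of a convolutional net, no longer finds any usable neighborhood structure.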
Okay, so, beautifully put. But, you know, so we can just call it artificial intelligence; we don't need to call it general. Yeah, or human-level artificial intelligence, whatever. You know, anytime you touch human, it gets interesting, because, you know, it's just because we attach ourselves to human, and it's difficult to define what human intelligence is. Yeah. Nevertheless, my definition is maybe: damn impressive intelligence. Okay, damn impressive demonstration of intelligence, whatever. And so, on that topic: most successes in deep learning have been in supervised learning. What is your view on unsupervised learning? Is there a hope to reduce the involvement of human input and still have successful systems that have practical use? Yeah, I mean, there's definitely a hope. It's more than a hope, actually; there's, you know, mounting evidence for it. And that's basically all I do. Like, the only thing I'm interested in at the moment is what I call self-supervised learning, not unsupervised. Because unsupervised learning is a loaded term: people who know something about machine learning, you know, tell you, so you're doing clustering or PCA, which is not the case. And the wider public, when you say unsupervised learning, oh my god, machines are going to learn by themselves and without supervision, you know, without the parents, yeah. So I call it self-supervised learning, because in fact the underlying algorithms that are used are the same algorithms as the supervised learning algorithms, except that what we train them to do is not to predict a particular set of variables, like the category of an image, and not to predict a set of variables that have been provided by human labelers. What you train the machine to do is basically reconstruct a piece of its input that is being masked out, essentially. You can think of it this way, right: show a piece of a video to a machine and ask it to predict what's going to happen next. And of course, after a while, you can show it what happens, and the
machine will kind of train itself to do better at that task. All the latest, most successful models in natural language processing use self-supervised learning, you know, sort of BERT-style systems, for example, right. You show it a window of a thousand words on a text corpus, you take out 15 percent of the words, and then you train the machine to predict the words that are missing. That's self-supervised learning. It's not predicting the future, it's just, you know, predicting things in the middle, but you could have it predict the future; that's what language models do. So you construct, in an unsupervised way, a model of language, do you think, or video, or the physical world, or whatever, right? How far do you think that can take us? Do you think very far? Well, does it understand anything? To some level. It has, you know, a shallow understanding of text. But to have kind of true human-level intelligence, I think you need to ground language in reality. So some people are attempting to do this, right: having systems that have some visual representation of what is being talked about, which is one reason you need interactive environments, actually. But this is like a huge technical problem that is not solved, and that explains why self-supervised learning works in the context of natural language but does not work, at least not well, in the context of image recognition and video, although it's making progress quickly. And the reason is the fact that it's much easier to represent uncertainty in the prediction in the context of natural language than it is in the context of things like video and images. So, for example, if I ask you to predict which words are missing, you know, the 15 percent of the words that I've taken out, the number of possibilities is small, right? There are 100,000 words in the lexicon, and what the machine spits out is a big probability vector, right: a bunch of numbers between 0 and 1 that sum to one. And we know how to do this with computers.
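As a concrete sketch of that masked-word objective, loosely following the BERT recipe described here (the sentence, the exact mask rate, and the mask token are illustrative):

```python
import random

random.seed(0)

def make_masked_example(tokens, mask_rate=0.15, mask_token="[MASK]"):
    # Blank out ~mask_rate of the tokens; the training targets are the
    # original tokens at the blanked positions.
    n_mask = max(1, int(len(tokens) * mask_rate))
    positions = random.sample(range(len(tokens)), n_mask)
    corrupted = list(tokens)
    targets = {}
    for pos in positions:
        targets[pos] = corrupted[pos]
        corrupted[pos] = mask_token
    return corrupted, targets

sentence = "the cat sat on the mat because the mat was warm".split()
corrupted, targets = make_masked_example(sentence)
print(corrupted)
print(targets)
```

A model trained on millions of such examples outputs, at each masked position, a probability vector over the whole lexicon, which is exactly why uncertainty is easy to represent in this setting.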
So representing uncertainty in the prediction there is relatively easy, and that's, in my opinion, why those techniques work for NLP. For images, if you block out a piece of an image and you ask a system to reconstruct that piece of the image, there are many possible answers that are all perfectly legit, right? And how do you represent that set of possible answers? You can't train a system to make one prediction; you can't train a neural net to say, here it is, that's the image, because there's a whole set of things that are compatible with it. So how do you get the machine to represent not a single output, but a whole set of outputs? And, you know, similarly with video prediction: there are a lot of things that can happen in the future of a video. You're looking at me right now; I'm not moving my head very much, but, you know, I might turn my head to the left or to the right, right. If you don't have a system that can predict this, and you train it with least squares to kind of minimize the error with the prediction of what I'm doing, what you get is a blurry image of myself in all possible future positions that I might be in, which is not a good prediction. But there might be other ways to do the self-supervision, right, for visual scenes? Like, what if I... I mean, if I knew, I wouldn't tell you, I'd publish it first. I don't know, there might be. So, I mean, there might be artificial ways, like self-play in games, the way you can simulate part of the environment. Oh, that doesn't solve the problem; it's just a way of generating data. But because you have control over the environment, you can generate data. Yeah, it's a way to generate data, that's right, and because you can do huge amounts of data generation... Well, it creeps up on the problem from the side of data. And you don't think that's the right way to do it? It doesn't solve this problem of handling uncertainty in the world, right.
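The blurry-prediction point can be seen numerically in one dimension. This is a hypothetical toy, not a real video model: when two futures are equally likely, the single output that minimizes squared error is their average, which matches neither.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two equally likely futures for the same input: "head turns left" (-1)
# and "head turns right" (+1), with a little observation noise.
futures = np.where(rng.random(10_000) < 0.5, -1.0, 1.0)
futures = futures + 0.05 * rng.standard_normal(10_000)

# The single prediction minimizing mean squared error is the mean of the
# possible futures. In image space, this averaging of modes is exactly
# the blur seen in naive least-squares video prediction.
pred = futures.mean()

dist_to_left = abs(pred - (-1.0))
dist_to_right = abs(pred - 1.0)
print(pred, dist_to_left, dist_to_right)
```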
So if you have a machine learn a predictive model of the world in a game that is deterministic or quasi-deterministic, it's easy, right: just, you know, give a few frames of the game to a convnet, put a bunch of layers, and then have it generate the next few frames, and if the game is deterministic, it works fine. And that includes, you know, feeding the system with the action that your little character is going to take. The problem comes from the fact that the real world, and most games, are not entirely predictable. That's where you get those blurry predictions, and you can't do planning with blurry predictions, right. So if you have a perfect model of the world, you can, in your head, run this model with a hypothesis for a sequence of actions, and you're going to predict the outcome of that sequence of actions. But if your model is imperfect, how can you plan? Yeah, it quickly explodes. What are your thoughts on the extension of this, which, a topic I'm super excited about, it's connected to something you were talking about in terms of robotics: active learning. So, as opposed to sort of unsupervised and self-supervised learning, you ask the system for human help, right, for selecting parts you want annotated next. So if you talk about a robot exploring a space, or a baby exploring a space, or a system exploring a data set, every once in a while asking for human input, do you see value in that kind of work? I don't see transformative value. It's going to make things that we can already do more efficient, or they will learn slightly more efficiently, but it's not going to make machines significantly more intelligent, I think. And, by the way, there is no opposition, there is no conflict, between self-supervised learning, reinforcement learning, and supervised learning, or imitation learning, or active learning. I see self-supervised learning as a preliminary to all of the above. Yes. So the example I use very often is, how is it that, so if you use reinforcement learning, deep reinforcement learning, if
you want, the best methods today, so-called model-free reinforcement learning, take about 80 hours of training to reach the level that any human can reach in about 15 minutes on Atari games. They get better than humans, but it takes a long time. AlphaStar, okay, you know, Oriol Vinyals and his team's system to play StarCraft, plays, you know, a single map, a single type of player, at better than human level, with about the equivalent of 200 years of training playing against itself. It's 200 years, right? It's not something that any human could do. I'm not sure what lesson to take away from that. Okay, now take those algorithms, the best RL algorithms we have today, to train a car to drive itself. It would probably have to drive millions of hours, it would have to kill thousands of pedestrians, it would have to run into thousands of trees, it would have to run off cliffs, and it would have to run off the cliff multiple times before it figures out, first of all, that it's a bad idea, and second of all, how not to do it. And so, I mean, this type of learning obviously does not reflect the kind of learning that animals and humans do. There is something missing that's really, really important there. And my proposal, which I've been advocating for, like, five years now, is that we have predictive models of the world that include the ability to predict under uncertainty, and that is what allows us to not run off a cliff when we learn to drive. Most of us can learn to drive in about 20 or 30 hours of training without ever crashing, causing any accident. If we drive next to a cliff, we know that if we turn the wheel to the right, the car is going to run off the cliff, and nothing good is going to come out of this, because we have a pretty good model of intuitive physics that tells us, you know, the car is going to fall. We know about gravity; babies learn this around the age of eight or nine months, that objects don't float, they fall. And we have a pretty good idea of the effect of turning the wheel of the car, and we know we need to stay on the road.
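A minimal sketch of that idea, with a made-up one-dimensional world: an agent that carries even a crude predictive model can reject the run-off-the-cliff action by simulating it internally, instead of having to experience the crash thousands of times.

```python
# Toy 1-D driving world: positions 0..5, where position 5 is the cliff edge.
CLIFF = 5

def world_model(pos, action):
    # The agent's internal model of the dynamics; action is -1 or +1.
    # Here the model happens to be exact. A learned model would be
    # approximate and would also need to represent its uncertainty,
    # which is the hard part discussed in the conversation.
    return pos + action

def safe_actions(pos, actions=(-1, +1)):
    # Plan one step ahead IN THE MODEL and keep only the actions that
    # are not predicted to go off the cliff. No real-world crash needed.
    return [a for a in actions if world_model(pos, a) < CLIFF]

print(safe_actions(2))  # far from the edge: both actions survive
print(safe_actions(4))  # +1 is predicted to fall off, so it is rejected
```

The point of the sketch is only the structure: prediction happens inside the model, so the cost of imagining a bad outcome is trivial compared to the cost of living it.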
So there is a lot of things that we bring to the table, which is basically our predictive model of the world, and that model allows us to not do stupid things, and to basically stay within the context of things we need to do. We still face, you know, unpredictable situations, and that's how we learn, but that allows us to learn really, really, really quickly. So that's called model-based reinforcement learning. There's some imitation and supervised learning, because we have a driving instructor that tells us occasionally what to do, but most of the learning is model-based; it's learning the model, learning the physics that we've done since we were babies. That's where almost all our learning is. And the physics is transferable from scene to scene; stupid things are the same everywhere. Yeah, I mean, if you have experience of the world, you don't need to be from a particularly intelligent species to know that if you spill water from a container, the rest is going to get wet, and you might get wet. You know, cats know this, right? Yeah. So the main problem we need to solve is, how do we learn models of the world? That's what I'm interested in; that's what self-supervised learning is all about. If you were to try to construct a benchmark for, let's look at MNIST, I love that data set, do you think it's useful, interesting, slash possible to perform well on MNIST with just one example of each digit? And how would we solve that problem? Yeah, so the question is, what other type of learning are you allowed to do? So if what you're allowed to do is train on some gigantic data set of labeled digits, that's called transfer learning, and we know that works, okay. We do this at Facebook, like, in production, right: we train large convolutional nets to predict hashtags that people type on Instagram, and we train on billions of images,
literally billions, and then we chop off the last layer and fine-tune on whatever task we want. That works really well; it beat the ImageNet record. We actually open-sourced the whole thing, like, a few weeks ago. Yeah, that's still pretty cool. But yeah, so what would be impressive, what's useful and impressive, what kind of transfer learning would be useful and impressive, is it Wikipedia, that kind of thing? No, no, I don't think transfer learning is really where we should focus. We should try to have a kind of scenario for a benchmark where you have unlabeled data, and it's a very large amount of unlabeled data. It could be video clips, it could be, you know, frame prediction, it could be images where you could choose to, you know, mask a piece of it, it could be whatever, but they're unlabeled, and you're not allowed to label them. So you do some training on this, and then you train on a particular supervised task, ImageNet or MNIST, and you measure how your test error or validation error decreases as you increase the number of labeled training samples, okay. And what you would like to see is that, you know, your error decreases much faster than if you trained from scratch, from random weights, so that, to reach the same level of performance that a completely supervised, purely supervised system would reach, you would need way fewer samples. So that's the crucial question, because it will answer the question of, like, you know, people interested in medical image analysis: okay, if I want to get to a particular level of error rate for this task, I know I need a million samples; can I do, you know, self-supervised pre-training to reduce this to about 100, or something? And the answer there is self-supervised pre-training? Yeah, some form, some form of it. I'm telling you, active learning, but you disagree. No, it's not useless, it's just not going to lead to a quantum leap; it's just going to make things that we already do more efficient. Well, you're way smarter than me.
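Here is a toy version of that benchmark protocol, with entirely synthetic data: compare the test error with only a handful of labels, once classifying raw inputs from scratch, once on top of a frozen "pre-trained" feature. One big assumption: the encoder below is simply handed the informative direction, as a stand-in; real self-supervised pre-training would have to discover it from the unlabeled data.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, signal = 100, 2.0
v = np.zeros(dim)
v[0] = 1.0  # the one informative direction, unknown to the scratch learner

def make_data(labels):
    # Points of class +1/-1, shifted along v and buried in noise.
    y = np.asarray(labels, dtype=float)
    x = y[:, None] * signal * v + rng.standard_normal((len(y), dim))
    return x, y

def nearest_mean_error(tr_x, tr_y, te_x, te_y):
    # Classify by nearest class mean; return the test error rate.
    mu_pos = tr_x[tr_y > 0].mean(axis=0)
    mu_neg = tr_x[tr_y < 0].mean(axis=0)
    pred = np.where(np.linalg.norm(te_x - mu_pos, axis=1)
                    < np.linalg.norm(te_x - mu_neg, axis=1), 1.0, -1.0)
    return float((pred != te_y).mean())

train_x, train_y = make_data([+1, +1, +1, +1, -1, -1, -1, -1])  # 8 labels only
test_x, test_y = make_data(rng.choice([-1, 1], size=2000))

# "From scratch": nearest class mean in the raw 100-d space.
err_scratch = nearest_mean_error(train_x, train_y, test_x, test_y)

# "Pre-trained": the same classifier on a frozen 1-d feature.
encode = lambda x: x @ v[:, None]
err_pretrained = nearest_mean_error(encode(train_x), train_y,
                                    encode(test_x), test_y)
print(err_scratch, err_pretrained)
```

With the noise dimensions stripped away by the frozen feature, the same eight labels go much further, which is the shape of the curve the proposed benchmark is meant to measure.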
I just disagree with you, but I don't have anything to back that, it's just intuition. So, I've worked with a lot of large-scale data sets, and there's something, there might be magic in active learning, but okay. At least I said it publicly; at least I'm being an idiot publicly. Okay, it's not being an idiot; it's, you know, working with the data you have. I mean, certainly people are doing things like, okay, I have 3,000 hours of, you know, imitation learning for a car, but most of those are incredibly boring; what I'd like is to select, you know, the 10 percent of them that are kind of the most informative, and with just that I would probably reach the same performance. So it's a weak form of active learning, if you want. Yes, but there might be a much stronger version. Yeah, that's right. That's an open question; the question is how much. Elon Musk is confident, talked to him recently, he's confident that large-scale data and deep learning can solve the autonomous driving problem. What are your thoughts on the limitless possibilities of deep learning in this space? Well, it's obviously part of the solution. I mean, I don't think we'll ever have a self-driving system, at least not in the foreseeable future, that does not use deep learning, let me put it this way. Now, how much of it? So, in the history of sort of engineering, particularly of sort of AI-like systems, there's generally a first phase where everything is built by hand. Then there's a second phase, and that was the case for autonomous driving, you know, 20, 30 years ago, where there's a little bit of learning used, but there's a lot of engineering that's involved in kind of, you know, taking care of corner cases and putting limits, etc., because the learning system is not perfect. And then, as technology progresses, we end up relying more and more on learning. That's the history of character recognition, it's the history of speech recognition, now computer vision, natural language processing. And I think the same is going to happen with
autonomous driving that currently the methods that are closest to providing some level of autonomy some you know decent level of autonomy where you don't expect a driver to kind of do anything is where you constrain the world so you only run within you know 100 square kilometers or square miles in Phoenix where the weather is nice and the roads are wide which is what Waymo is doing you completely over-engineer the car with tons of lidars and sophisticated sensors that are too expensive for consumer cars but they're fine if you just run a fleet and you engineer the hell out of everything else you map the entire world so you have a complete 3d model of everything so the only thing that the perception system has to take care of is moving objects and construction and sort of you know things that weren't in your map and you can engineer a good you know SLAM system and all that stuff right so that's kind of the current approach that's closest to some level of autonomy but I think eventually the long-term solution is going to rely more and more on learning and possibly using a combination of supervised learning and model-based reinforcement learning or something like that but ultimately learning will be not just at the core but really the fundamental part of the system yeah it already is but it'll become more and more so what do you think it takes to build a system with human-level intelligence you talked about the AI system in the movie Her being way out of reach of our current reach this might be outdated as well but is it still way out of reach what would it take to build her do you think so I can tell you the first two obstacles that we have to clear but I don't know how many obstacles there are after that so the image I usually use is that there is a bunch of mountains that we have to climb and we can see the first one but we don't know if there are 50 mountains behind it or not and this might be a good sort of metaphor for why AI
researchers in the past have been overly optimistic about the results of AI you know for example Newell and Simon wrote the general problem solver and they called it the general problem solver okay and of course the first thing you realize is that all the problems you want to solve are exponential and so you can't actually use it for anything useful but you know yes so yeah all you see is the first peak so in general what are the first couple of peaks for her so the first peak which is precisely what I'm working on is self-supervised learning how do we get machines to learn models of the world by observation kind of like babies and like young animals so we've been working with you know cognitive scientists so Emmanuel Dupoux who is at FAIR in Paris half-time and is also a researcher at a French university he has this chart that shows at which month of life baby humans kind of learn different concepts and you can measure this in various ways so things like distinguishing animate objects from inanimate objects you can tell the difference at age two three months whether an object is going to stay stable or is going to fall you know about four months you can tell you know things like this and then things like gravity the fact that objects are not supposed to float in the air but are supposed to fall you learn this around the age of eight or nine months if you look at a lot of you know eight-month-old babies you give them a bunch of toys on their highchair the first thing they do is throw them on the ground and then look at them it's because you know they're actively learning about gravity gravity yeah okay so they're not trying to annoy you but they you know they need to do the experiment right yeah so you know how do we get machines to learn like babies mostly by observation with a little bit of interaction and learning those models of the world because I think that's really a crucial piece of an
intelligent autonomous system so if you think about the architecture of an intelligent autonomous system it needs to have a predictive model of the world so something that says here is the state of the world at time T here is the state of the world at time T plus one if I take this action and it's not a single answer it can be a distribution yeah well but we don't know how to represent distributions in high-dimensional continuous spaces so it's got to be something weaker than that but with some representation of uncertainty if you have that then you can do what in optimal control theory is called model predictive control which means that you can run your model with a hypothesis for a sequence of actions and then see the result now the other thing you need is some sort of objective that you want to optimize am I reaching the goal of grabbing this object am I minimizing energy am I whatever right so there is some sort of objective that you have to minimize and so in your head if you have this model you can figure out the sequence of actions that will optimize your objective that objective is something that ultimately is rooted in your basal ganglia at least in the human brain that's what the basal ganglia computes your level of contentment or discontentment is that a word unhappiness okay yeah discontentment and so your entire behavior is driven towards kind of minimizing that objective which is maximizing your contentment computed by your basal ganglia and what you have is an objective function which is basically a predictor of what your basal ganglia is going to tell you so you're not going to put your hand on fire because you know it's gonna burn and you're gonna get hurt and you're predicting this because of your model of the world and your predictor of this objective right so if you have those three components four components you have the hard-wired contentment
objective computer if you want a calculator and then you have the three components one is the objective predictor which basically predicts your level of contentment one is the model of the world and there's a third module I didn't mention which is a module that will figure out the best course of action to optimize an objective given your model okay yeah cool it's a policy network or something like that right now you need those three components to act autonomously and intelligently and you can be stupid in three different ways you can be stupid because your model of the world is wrong you can be stupid because your objective is not aligned with what you actually want to achieve okay and in humans that would be a psychopath right and then the third way you can be stupid is that you have the right model you have the right objective but you're unable to figure out a course of action to optimize your objective given your model some people who are in charge of big countries actually have all three that are wrong all right which countries I don't know okay so if we think about this agent if you think about the movie Her you've criticized the art project that is Sophia the robot and what that project essentially does is use our natural inclination to anthropomorphize things that look human and give them more credit than they deserve do you think that could be used by AI systems like in the movie Her so do you think a body is needed to create a feeling of intelligence well if Sophia was just an art piece I would have no problem with it but it's presented as something else let me linger on that comment real quick if the creators of Sophia could change something about their marketing or behavior in general what would it be just about everything I mean don't you think here's a tough question I mean so I agree with you the general public feels that Sophia can do way more than she actually can that's right and the people who created Sophia are
not honestly publicly communicating trying to correct the public's impression right but here's a tough question don't you think the same thing is true of scientists in industry and research taking advantage of the same misunderstanding in the public when they create AI companies or publish stuff some companies yes but I mean there is generally no desire to delude there's no desire to kind of overclaim what something has done right you know you publish a paper on AI that you know has this result on ImageNet you know it's pretty clear I mean it's not even interesting anymore but you know I don't think there is that I mean the reviewers are generally not very forgiving of you know unsupported claims of this type but there are certainly quite a few startups that have had a huge amount of hype around this that I find extremely damaging and I've been calling it out when I've seen it so yeah but to go back to your original question like the necessity of embodiment I think I don't think embodiment is necessary I think grounding is necessary so I don't think we're gonna get machines that really understand language without some level of grounding in the real world and it's not clear to me that language is a high enough bandwidth medium to communicate how the real world works can you talk about what grounding means to you so grounding means that there is this classic problem of common sense reasoning you know the Winograd schema right and so I tell you the trophy doesn't fit in the suitcase because it's too big or the trophy doesn't fit in the suitcase because it's too small and the it in the first case refers to the trophy in the second case to the suitcase and the reason you can figure this out is because you know what the trophy and the suitcase are you know one is supposed to fit in the other one and you know the notion of size and that a big object doesn't fit in a small object unless it's a TARDIS you know things like that
right so you have this knowledge of how the world works of geometry and things like that I don't believe you can learn everything about the world by just being told in language how the world works I think you need some low-level perception of the world you know be it visual touch you know whatever but some higher-bandwidth perception of the world but by reading all the world's text you still may not have enough information that's right there's a lot of things that just will never appear in text and that you can't really infer so I think common sense will emerge from you know certainly a lot of language interaction but also from watching videos or perhaps even interacting in virtual environments and possibly you know robots interacting in the real world but I don't actually believe necessarily that this last one is absolutely necessary but I think there's a need for some grounding but the final product doesn't necessarily need to be embodied you would say no it just needs to have an awareness a grounding right but it needs to know how the world works to have you know to not be frustrating to talk to and you talked about emotions being important that's a whole nother topic well so you know I talked about the basal ganglia as the you know this thing that calculates your level of contentment or discontentment and then there is this other module that sort of tries to do a prediction of whether you're going to be content or not that's the source of some emotions so fear for example is an anticipation of bad things that can happen to you right you have this inkling that there is some chance that something really bad is gonna happen to you and that creates fear when you know for sure that something bad is gonna happen to you you kind of give up right it's not fear anymore it's uncertainty that creates fear so the punchline is yes we're not gonna have autonomous intelligence without emotions whatever the heck
emotions are so you mentioned very practical things like fear but there's a lot of other stuff around that but they're kind of the results of you know drives yeah there's deeper biological stuff going on and I've talked to a few folks on this there's fascinating stuff that ultimately connects to our brain if we create an AGI system sorry a human-level intelligence system and you get to ask her one question what would that question be you know I think the first one we'll create would probably not be that smart it'd be like a four-year-old okay so you would have to ask her a question to know she's not that smart yeah well what's a good question to ask you know what causes wind and if she answers oh it's because the leaves of the trees are moving and that creates wind she's on to something and if she says yeah that's a stupid question she's really obtuse and then you tell her actually you know here is the real thing and she says oh yeah that makes sense so questions that reveal the ability to do common-sense reasoning about the physical world yeah and you know some level of causal inference well it was a huge honor congratulations on the Turing Award and thank you so much for talking today thank you
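The model-predictive-control loop described in the conversation above — run a learned world model forward under candidate action sequences, score each rollout with a predicted objective, and take the first action of the best sequence — can be sketched very roughly as follows. This is a toy illustration, not anything from the interview: the model, cost, and action space here are all hypothetical stand-ins.

```python
import itertools

def rollout_cost(model, cost, state, actions):
    # simulate a candidate action sequence through the learned world model,
    # accumulating the predicted objective (the "discontentment" to minimize)
    total = 0.0
    for action in actions:
        state = model(state, action)
        total += cost(state)
    return total

def plan(model, cost, state, action_space, horizon=5):
    # model-predictive control by exhaustive search over short action
    # sequences: imagine each sequence, keep the first action of the best one
    best_first, best_cost = None, float("inf")
    for seq in itertools.product(action_space, repeat=horizon):
        c = rollout_cost(model, cost, state, seq)
        if c < best_cost:
            best_first, best_cost = seq[0], c
    return best_first

# toy world: the state is a number, actions nudge it up or down, and the
# objective is the distance to a goal value of 10
model = lambda s, a: s + a
cost = lambda s: abs(s - 10)
first_action = plan(model, cost, state=0, action_space=(-1, 0, 1))
```

In a real agent the world model would be learned (for example by self-supervised prediction, as discussed above) and the exhaustive search would be replaced by gradient-based trajectory optimization or a learned policy module; brute-force enumeration is only viable for tiny action spaces and short horizons.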
Jeremy Howard: fast.ai Deep Learning Courses and Research | Lex Fridman Podcast #35
the following is a conversation with Jeremy Howard he's the founder of fast.ai a research institute dedicated to making deep learning more accessible he's also a distinguished research scientist at the University of San Francisco a former president of Kaggle as well as a top-ranking competitor there and in general he's a successful entrepreneur educator researcher and an inspiring personality in the AI community when someone asks me how do I get started with deep learning fast.ai is one of the top places I point them to it's free it's easy to get started it's insightful and accessible and if I may say so it has very little of the BS that can sometimes dilute the value of educational content on popular topics like deep learning fast.ai has a focus on practical application of deep learning and hands-on exploration of the cutting edge that is incredibly both accessible to beginners and useful to experts this is the artificial intelligence podcast if you enjoy it subscribe on YouTube give it five stars on iTunes support it on Patreon or simply connect with me on Twitter at lexfridman spelled F-R-I-D-M-A-N and now here's my conversation with Jeremy Howard what's the first program you've ever written the first program I wrote that I remember would be at high school I did an assignment where I decided to try to find out if there were like better musical scales than the normal twelve-tone twelve-interval scale so I wrote a program on my Commodore 64 in BASIC to search through other scale sizes to see if you could find one where there were more accurate harmonies like where you want an actual exact three-to-two ratio whereas with a twelve-interval scale it's not exactly three to two for example because it's equal-tempered as they say you know and BASIC on a Commodore 64 yeah where was the interest in music from or is it just I took music all my life so I played the saxophone and clarinet and piano and guitar and drums and whatever so how does that thread go
through your life where's music today yeah it's not where I wish it was for various reasons I couldn't really keep it going particularly because I had a lot of problems with RSI with my fingers and so I had to kind of cut back anything that used hands and fingers I hope one day I'll be able to get back to it health-wise so there's a love for music underlying it all yeah what's your favorite instrument saxophone baritone saxophone well probably bass saxophone but they're awkward well I always love it when music is coupled with programming there's something about a brain that utilizes both that emerges with creative ideas so you've used and studied quite a few programming languages can you give an overview of what you've used and the pros and cons of each well my favorite programming environment almost certainly was Microsoft Access back in like the earliest days so that was Visual Basic for Applications which is not a good programming language but the programming environment was fantastic it's like the ability to create you know user interfaces and tie data and actions to them and create reports and all that I've never seen anything as good there's things nowadays like Airtable which are like small subsets of that which people love for good reason but unfortunately nobody's ever achieved anything like that what is Access if you could pause on that for a second Access is a database program that Microsoft produced part of Office and it's kind of a wizard you know but basically it lets you in a totally graphical way create tables and relationships and queries and tie them to forms and set up you know event handlers and calculations and it was a very powerful system designed not for massively scalable things but for like useful little applications that I loved so what's the connection between Excel and Access so very close so Access kind of was the relational database equivalent if you like so people still do a lot of that
stuff that should be in Access in Excel because they don't know about Access Excel's great as well but it's just not as rich a programming model as VBA combined with a relational database and so I've always loved relational databases but today programming on top of a relational database is just a lot more of a headache you know you generally need something that runs some kind of database server unless you use SQLite which has its own issues then if you want to get a nice programming model you'll often need to like add an ORM on top and then I don't know there's all these pieces tied together and it's just a lot more awkward than it should be there are people that are trying to make it easier so in particular I think of F# you know Don Syme he and his team have done a great job of making something like a database appear in the type system so you actually get like tab completion for fields and tables and stuff like that anyway so that whole VBA Office thing I guess was a starting point which I still miss then I got into standard Visual Basic that's interesting just to pause on that for a second it's interesting that you're connecting programming languages to the ease of management of data yeah so in your use of programming languages you always had a love and a connection with data I've always been interested in doing useful things for myself and for others which generally means getting some data and doing something with it and putting it out there again so that's been my interest throughout so I also did a lot of stuff with AppleScript back in the early days so it's kind of nice being able to get computers to talk to each other and to do things for you and then I think the programming language I most loved then would have been Delphi which was Object Pascal created by Anders Hejlsberg who previously did Turbo Pascal
and then went on to create .NET and then went on to create TypeScript Delphi was amazing because it was like a compiled fast language that was as easy to use as Visual Basic Delphi what is it similar to in more modern languages Visual Basic Visual Basic yeah but a compiled fast version so I'm not sure there's anything quite like it anymore if you took like C# or Java and got rid of the virtual machine and replaced it with something you could compile to a small tight binary I feel like it's where Swift could get to with the new SwiftUI and the cross-platform development going on like that's one of my dreams is that we'll hopefully get back to where Delphi was there is actually a Free Pascal project nowadays called Lazarus which is also attempting to kind of recreate Delphi and they're making good progress so okay Delphi that's one of your favorite programming languages programming environments I mean Pascal's not a nice language if you wanted to know specifically about what languages I like I would definitely pick J as being an amazingly wonderful language what's J J are you aware of APL I am not okay so from doing a little research on the work you've done okay so it's not at all surprising you're not familiar with it because it's not well known but it's actually one of the main families of programming languages going back to the late 50s early 60s so there was a couple of major directions one was the kind of lambda calculus Alonzo Church direction which I guess Lisp came out of and whatever which has a history going back to the early days of computing the second was the kind of imperative slash OO you know Algol Simula going on to C C++ and so forth there was a third which I'd call array-oriented languages which started with a paper by a guy called Ken Iverson which was actually a math theory paper not a programming paper it was called Notation as a Tool of Thought and it was the development of a new type of math notation and the idea is that this
math notation would be much more flexible expressive and also well-defined than traditional math notation which is none of those things math notation is awful and so he actually turned that into a programming language and because this was the late 50s all the names were available so he called his language A Programming Language or APL APL is an implementation of notation as a tool of thought by which he means math notation and Ken and his son went on to do many things but eventually they actually produced you know a new language that was built on top of all the learnings of APL and that was called J and J is the most expressive composable beautifully designed language I've ever seen does it have object-oriented components does it have that kind of thing not really it's an array-oriented language what does it mean to be array-oriented so array-oriented means that you generally don't use any loops but the whole thing is done with kind of an extreme version of broadcasting if you're familiar with that NumPy slash Python concept so you do a lot with one line of code it looks a lot like math notation basically very compact mm-hm and the idea is that because you can do so much with one line of code a single screen of code is usually enough you very rarely need more than that to express your program and so you can kind of keep it all in your head and you can kind of clearly communicate it it's interesting that APL created two main branches K and J J is this kind of like open-source niche community of crazy enthusiasts like me and then the other path K was fascinating it's an astonishingly expensive programming language which many of the world's most ludicrously rich hedge funds use so the entire thing is so small it sits inside the level 3 cache on your CPU and it easily wins every benchmark I've
ever seen in terms of data processing speed but you don't come across it very much because it's like $100,000 per CPU to run it yeah but it's like this path of programming languages is just so much more powerful in every way than the ones that almost anybody uses every day so it's all about computation it's really focused on computation it's pretty heavily focused on computation I mean so much of programming is data processing by definition and so there's a lot of things you can do with it but yeah there's not much work being done on making like user interface toolkits or whatever I mean there's some but they're not great at the same time you've done a lot of stuff with Perl and Python yeah so where does that fit into the picture of J and K and APL well you know it's much more pragmatic like in the end you kind of have to end up where the libraries are you know like because to me my focus is on productivity I just want to get stuff done and solve problems so Perl was great for that I created an email company called FastMail and Perl was great because back in the late 90s early 2000s it just had a lot of stuff it could do I still had to write my own monitoring system and my own web framework and my own whatever because like none of that stuff existed but it was a super flexible language to do that in and you used Perl for FastMail you used it as a backend so everything was written in Perl yeah yeah everything everything was Perl why do you think Perl hasn't succeeded or hasn't dominated the market the way Python has yeah well I mean Perl did dominate it was for a time everything everywhere but then the guy behind Perl Larry Wall kind of just didn't put the time in anymore and no project can be successful particularly one that starts with a strong leader if it loses that strong leadership so then Python kind of replaced it you know Python is a lot less elegant a language in
nearly every way but it has the data science libraries and a lot of them are pretty great so I kind of use it because it's the best we have but it's definitely not good enough what do you think the future of programming looks like what do you hope the future of programming looks like if we zoom in on the computational fields on data science on machine learning I hope Swift is successful because the goal of Swift the way Chris Lattner describes it is to be infinitely hackable and that's what I want I want something where me and the people I do research with and my students can look at and change everything from top to bottom there's nothing mysterious and magical and inaccessible unfortunately with Python it's the opposite of that because Python's so slow it's extremely unhackable you get to a point where it's like okay from here on down it's C so your debugger doesn't work in the same way your profiler doesn't work in the same way your build system doesn't work in the same way it's really not very hackable at all what's the part you would like to be hackable is it for the objective of optimizing training of neural networks inference in neural networks is it performance of the system or is there something non-performance-related it's a bigger thing in the end I want to be productive as a practitioner so that means that like at the moment our understanding of deep learning is incredibly primitive there's very little we understand most things don't work very well even though it works better than anything else out there there's so many opportunities to make it better you look at any domain area like I don't know speech recognition with deep learning or natural language processing classification with deep learning or whatever every time I look at an area with deep learning I always see like oh it's terrible there's lots and lots of obviously stupid ways to do things that need to be fixed so then I want to be able to jump in there and quickly experiment and make them
better and using the programming language has a role in that a huge role yes so currently Python has a big gap in terms of our ability to innovate particularly around recurrent neural networks and natural language processing because it's so slow the actual loop where we actually loop through words we have to do that whole thing in CUDA C so we actually can't innovate with the kernel the heart of that most important algorithm and it's just a huge problem and this happens all over the place so we hit you know research limitations another example convolutional neural networks which are actually the most popular architecture for lots of things maybe most things in deep learning we almost certainly should be using sparse convolutional neural networks but only like two people are because to do it you have to rewrite all of that CUDA C level stuff and yeah researchers and practitioners don't so like there's just big gaps between what people actually research and what people actually implement because of the programming language problem so you think it's just too difficult to write in CUDA C and a higher-level programming language like Swift should enable easier fooling around creative stuff with RNNs or sparse convolutional neural networks who's at fault who's in charge of making it easy for a researcher to play around I mean no one's at fault just nobody's got around to it yet and it's just hard right and I mean part of the fault is that we ignored that whole APL kind of direction nearly everybody did for 50 60 years but recently people have been starting to reinvent pieces of that and kind of create some interesting new directions in the compiler technology so the place where that's particularly happening right now is something called MLIR which is something that again Chris Lattner the Swift guy is leading and because it's actually not gonna be Swift on its own that solves
this problem because the problem is that currently writing an acceptably fast you know GPU program is too complicated regardless of what language you use and that's just because you have to deal with the fact that I've got you know 10,000 threads and I have to synchronize between them all and I have to put my thing into grid blocks and think about warps and all this stuff it's just so much boilerplate to do that well you have to be a specialist at that and it's going to be a year's work to you know optimize that algorithm in that way but with things like Tensor Comprehensions and Tile and MLIR and TVM there's all these various projects which are all about saying let's let people create like domain-specific languages for tensor computations these are the kinds of things we generally do on the GPU for deep learning and then have a compiler which can optimize that tensor computation a lot of this work is actually sitting on top of a project called Halide which is a mind-blowing project where they came up with such a domain-specific language in fact two one domain-specific language for expressing this is what my tensor computation is and another domain-specific language for expressing this is the way I want you to structure the compilation of that like do it block by block and do these bits in parallel they were able to show how you can compress the amount of code by 10x compared to optimized GPU code and get the same performance so these other things are kind of sitting on top of that kind of research and MLIR is pulling a lot of those best practices together and now we're starting to see work done on making all of that directly accessible through Swift so that I could use Swift to kind of write those domain-specific languages and hopefully we'll get Swift CUDA kernels written in a very expressive and concise way that looks a bit like J and APL and then Swift layers on top of that and then a Swift UI on
top of that and you know it'll be so nice if we can get to that point does it all eventually boil down to CUDA and NVIDIA GPUs unfortunately at the moment it does but one of the nice things about MLIR if AMD ever gets their act together which they probably won't is that they or others could write MLIR backends for other GPUs or other tensor computation devices of which today there are an increasing number like Graphcore or Vertex AI or whatever so yeah being able to target lots of backends would be another benefit of this and the market really needs competition at the moment NVIDIA is massively overcharging for their kind of enterprise-class cards because there is no serious competition because nobody else is doing the software properly in the cloud there is some competition right but not really other than TPUs and TPUs are almost unprogrammable at the moment you can't the TPUs have the same problem the case is even worse so with TPUs Google actually made an explicit decision to make them almost entirely unprogrammable because they felt that there was too much IP in there and if they gave people direct access to program them people would learn their secrets yeah so you can't actually directly program the memory in a TPU you can't even directly like create code that runs on and that you can look at on the machine that has the TPU it all goes through a virtual machine so all you can really do is this kind of cookie-cutter thing of like plugging high-level stuff together which is just super tedious and annoying and totally unnecessary so tell me if you could the origin story of fast.ai what is the motivation its mission its dream so I guess the founding story is heavily tied to my previous startup which is a company called Enlitic which was the first company to focus on deep learning for medicine and I created that because I saw there was a huge opportunity there there's about a 10x shortage of the number of
doctors in the world, in the developing world, that we need. It was expected to take about three hundred years to train enough doctors to meet that gap, but I guessed that maybe, if we used deep learning for some of the analytics, we could make it so you don't need such highly trained doctors. For diagnosis? Diagnosis and treatment planning. Where's the biggest benefit, just before we get to fast.ai, where's the biggest benefit of AI in medicine that you see today? Not much happening today, in terms of stuff that's actually out there; it's very early. But in terms of the opportunity, it's to take markets like India and China and Indonesia, which have big populations, and Africa, with small numbers of doctors, and provide diagnostics, particularly treatment planning and triage, kind of on-device. So that if you do a test for malaria or tuberculosis or whatever, you immediately get something such that even a healthcare worker who's had a month of training can get a very high-quality assessment of whether the patient might be at risk, and tell you, okay, we'll send them off to a hospital. So, for example, in Africa, outside of South Africa, there are only five pediatric radiologists for the entire continent, so most countries don't have any. So if your kid is sick and needs something diagnosed through medical imaging, even if you're able to get medical imaging done, the person that looks at it will be a nurse, at best. And actually, in India, for example, and in China, almost no X-rays are read by anybody, by any trained professional, because they don't have enough. So if instead we had an algorithm that could take the most likely high-risk 5% and triage, basically say, okay, somebody needs to look at this, it would massively change what's possible with medicine in the developing world. And remember, increasingly, they have money in the developing world; they're not impoverished people. So they have the money, they're building the hospitals,
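The triage idea described here, letting a model rank studies by risk and reserving human experts for the top slice, can be sketched in a few lines. This is a toy illustration, not any real clinical system; the scores, the flag fraction, and the `triage` helper are all made up for the example.

```python
def triage(risk_scores, flag_fraction=0.05):
    """Rank studies by model-estimated risk and flag the top fraction
    for review by a human expert; the rest are marked routine.
    (Toy sketch: not any real clinical system.)"""
    ranked = sorted(range(len(risk_scores)),
                    key=lambda i: risk_scores[i], reverse=True)
    n_flag = max(1, int(len(ranked) * flag_fraction))
    flagged = set(ranked[:n_flag])
    return ["review" if i in flagged else "routine"
            for i in range(len(risk_scores))]

# Example: 20 studies, two with high model-estimated risk.
scores = [0.02] * 18 + [0.97, 0.88]
decisions = triage(scores, flag_fraction=0.1)
```

The point is exactly the one made above: the expert's time goes only to the small high-risk slice, while the model clears the rest.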
they're getting the diagnostic equipment, but for a very long time there's just no way they'll be able to fill that expertise shortage. Okay, and that's where the deep learning systems could step in and magnify the expertise they do have? Exactly, yeah. So you do see, just to linger on it a little bit longer, the interaction... you still see the human expert at the core of these systems? Yeah, absolutely. Is there something in medicine that could be automated almost completely? I don't see the point of even thinking about that, because we have such a shortage of people. Why would we want to find a way not to use them? We have people, so even from an economic point of view, if you can make them 10x more productive, getting rid of the person doesn't impact your unit economics at all, and it totally ignores the fact that there are things people do better than machines. So to me, that's not a useful way of framing the problem. I guess, just to clarify, I meant there may be some problems where you can avoid even going to the expert, ever; sort of maybe preventive care, or some basic stuff, allowing the expert to focus on the things that are really hard. Well, that's what the triage would do, right? So the triage would say, okay, it's 99% sure there's nothing here, and that can be done on-device, and it can just say, okay, go home. So the experts are being used to look at the stuff which has some chance it's worth looking at, which for most things, it's not, you know, it's fine. Why do you think we haven't quite made progress on that yet, in terms of the scale of how much AI is applied in the medical field? There's a lot of reasons. I mean, one is, it's pretty new. I only started Enlitic in, like, 2014, and before that, it's hard to express to what degree the medical world was not aware of the opportunities here. So I went to RSNA, which is the world's largest radiology conference,
and I told everybody I could, you know, I'm doing this thing with deep learning, please come and check it out. And no one had any idea what I was talking about, and no one had any interest in it. So, like, we've come from absolute zero, which is hard. And then the whole regulatory framework, education system, everything is just set up to think of doctoring in a very different way. So today there is a small number of people who are deep learning practitioners and doctors at the same time, and we're starting to see the first ones come out of their PhD programs. So Zak Kohane, over in Boston, Cambridge, has a number of students now who are data science experts, deep learning experts, and actual medical doctors. Quite a few doctors have completed the fast.ai course now and are publishing papers, and creating journal reading groups in the American College of Radiology. It's just starting out, but it's going to be a long process. The regulators have to learn how to regulate this, they have to build guidelines, and then the lawyers at hospitals have to develop a new way of understanding that sometimes it makes sense for data to be, you know, looked at in raw form, in large quantities, in order to create world-changing results. Yeah, so regulation around data, all that. It sounds like it's probably the hardest problem, but it sounds reminiscent of autonomous vehicles as well: many of the same regulatory challenges, many of the same data challenges. Yeah, I mean, funnily enough, that problem is less the regulation and more the interpretation of that regulation by lawyers in hospitals. So, HIPAA... the P in HIPAA does not stand for privacy, it stands for portability. It's actually meant to be a way that data can be used, and it was created with lots of gray areas, because the idea was that that would be more practical, and would help people use this legislation to actually share data in a more thoughtful way. Unfortunately, it's done the opposite, because when a lawyer sees a gray area, they see, oh, if we don't know we won't get sued, then we can't do it. So HIPAA is not exactly the problem. The problem is more that hospital lawyers are not incentivized to make bold decisions about data portability. Or even to embrace technology that saves lives? No, they more want to not get in trouble for embracing it. But also, it saves lives in a very abstract way, which is like, oh, we've been able to release these hundred thousand anonymized records. I can't point at the specific person whose life that saved; I can say, like, oh, we've ended up with this paper which found this result, which, you know, diagnosed a thousand more people than we would have otherwise, but it's like, which ones were helped? It's very abstract. And on the counter side of that, you may be able to point to a life that was taken because of something. Yeah, or a person whose privacy was violated. It's like, oh, this specific person, you know, was de-identified, and then got re-identified. Just a fascinating topic. We're jumping around; we'll get back to fast.ai. But on the question of privacy: data is the fuel for so much innovation in deep learning. What's your sense on privacy, whether we're talking about Twitter, Facebook, YouTube, or the technologies, like in the medical field, that rely on people's data in order to create impact? How do we get that right, respecting people's privacy and yet creating technology that learns from data? One of my areas of focus is on doing more with less data. Most vendors, unfortunately, are strongly incentivized to find ways to require more data and more computation. So Google and IBM being the most obvious... IBM? Yeah, so, Watson. So Google and IBM both strongly push the idea that they have more data and more computation and more intelligent people than anybody else, and so you have to trust them to do things, because nobody else can do it. And Google's very upfront about
this. Like, Jeff Dean has gone out there and given talks and said, our goal is to require a thousand times more computation, but less people. Our goal is to use the people that you have better, and the data you have better, and the computation you have better. So one of the things that we've discovered, or at least highlighted, is that you very, very, very often don't need much data at all. And so the data you already have in your organization will be enough to get state-of-the-art results. So, like, my starting point around privacy would be: a lot of people are looking for ways to share data and aggregate data, but I think often that's unnecessary. They assume that they need more data than they do, because they're not familiar with the basics of transfer learning, which is this critical technique for needing orders of magnitude less data. Is your sense... one reason you might want to collect data from everyone is, like, in the recommender-system context, where your individual, Jeremy Howard's individual, data is the most useful for providing a product that's impactful for you. So, for giving you advertisements, for recommending to you movies, for doing medical diagnosis. Is your sense that we can build, with a small amount of data, general models that will have a huge impact for most people, so that we don't need to have data from each person? On the whole, I'd say yes. I mean, there are things like recommender systems that have this cold-start problem, where, you know, Jeremy is a new customer, we haven't seen him before, so we can't recommend him things based on what else he's bought and liked with us. And there are various workarounds to that. Like, in a lot of music programs, they'll start out by saying, which of these artists do you like, which of these albums do you like, which of these songs do you like. Netflix used to do that; nowadays they tend not to. People kind of don't like that, because they think, oh, we don't want to bother the user. So you could work around that by having some
kind of data sharing, where you get my marketing record from Acxiom or whatever, and try to guess from that. To me, the benefit to me and to society of saving me five minutes on answering some questions, versus the negative externalities of the privacy issue: it doesn't add up. So I think, like, a lot of the time, the places where people are invading our privacy in order to provide convenience, it's really about just trying to make more money, and they move these negative externalities into places where they don't have to pay for them. So when you actually see regulations appear that cause the companies that create these negative externalities to have to pay for them themselves, they say, well, we can't do it anymore, so the cost was actually too high. But for something like medicine, yeah. I mean, the hospital has my medical imaging, my pathology studies, my medical records. And also, I own my medical data. So I helped a startup called doc.ai. One of the things doc.ai does is, it has an app; you can connect to, you know, Sutter Health and LabCorp and Walgreens, and download your medical data to your phone, and then upload it again, at your discretion, to share it as you wish. So with that kind of approach, we can share our medical information with the people we want to. Yes, so control. I mean, it's really being able to control who you share it with, and so on. Yeah. So that's a beautiful, interesting tangent, but to return back to the origin story of fast.ai. Right. So before I started fast.ai, I spent a year researching where the biggest opportunities for deep learning were, because I knew from my time at Kaggle in particular that deep learning had kind of hit this threshold point where it was rapidly becoming the state-of-the-art approach in every area that looked at it. And I'd been working with neural nets for over twenty years; I knew that, from a theoretical point of view, once it hit that point, it would do that in kind of just about every domain. And so I kind of spent a year researching what are the domains where it's going to have the biggest low-hanging fruit in the shortest time period. I picked medicine, but there were so many I could have picked. And so there was a kind of level of frustration for me of, like, okay, I'm really glad we've opened up the medical deep learning world, and today it's huge, as you know, but I can't do everything. I don't even know... like, in medicine, it took me a really long time to even get a sense of, what kind of problems do medical practitioners solve? What kind of data do they have? Who has that data? So I kind of felt like I needed to approach this differently, if I wanted to maximize the positive impact of deep learning. Rather than me picking an area, and trying to become good at it, and building something, I should let people who are already domain experts in those areas, and who already have the data, do it themselves. So that was the reason for fast.ai: to basically try and figure out how to get deep learning into the hands of people who could benefit from it, and help them to do so in as quick and easy and effective a way as possible. Got it. So it's sort of empowering the domain experts. Yeah. And partly it's because, unlike most people in this field, my background is very applied and industrial. My first job was at McKinsey and Company; I spent ten years in management consulting. I spent a lot of time with domain experts, so I kind of respect them and appreciate them, and I know that that's where the value generation in society is. And I also know that most of them can't code, and most of them don't have the time to invest, you know, three years in a graduate degree or whatever. So it's like, how do I upskill those domain experts? I think that would be a super-powerful thing, the biggest societal impact I could have. So yeah, that was the thinking. So much of what fast.ai students and researchers do, and the things you teach, are pragmatically minded,
practically minded, figuring out ways how to solve real problems, and fast. So from your experience, what's the difference between theory and practice of deep learning? Well, most of the research in the deep learning world is a total waste of time. Right, that's what I was getting at. Yeah. It's a problem in science in general. Scientists need to be published, which means they need to work on things that their peers are extremely familiar with, and can recognize an advance in that area. So that means that they all need to work on the same thing. And the thing they work on... there's nothing to encourage them to work on things that are practically useful. So you get just a whole lot of research which is minor advances in stuff that's been very highly studied, and has no significant practical impact. Whereas the things that really make a difference, like, I mentioned transfer learning: if we can do better at transfer learning, then it's this world-changing thing, where suddenly lots more people can do world-class work with fewer resources and less data. But almost nobody works on that. Or another example: active learning, which is the study of, like, how do we get more out of the human beings in the loop? That's my favorite topic. Yeah, so active learning is great, but there's almost nobody working on it, because it's just not a trendy thing right now. You know, sorry to interrupt: you're saying that nobody is publishing on active learning, but there are people inside companies, anybody who actually has to solve a problem, they're going to innovate on active learning. Yeah, everybody kind of reinvents active learning when they actually have to work in practice, because they start labeling things, and they think, gosh, this is taking a long time and it's very expensive. And then they start thinking, well, why am I labeling everything? The machine's only making mistakes on those two classes; they're the hard ones. Maybe I ought to start
labeling those two classes. And then you start thinking, well, why did I do that manually? Why don't I just get the system to tell me which things are going to be hardest? It's an obvious thing to do, but yeah, just like transfer learning, it's understudied, and the academic world just has no reason to care about practical results. The funny thing is, I've only really ever written one paper. I hate writing papers. And I didn't even write it; it was my colleague Sebastian Ruder who actually wrote it. I just did the research for it. But it was basically introducing transfer learning, successful transfer learning, to NLP for the first time. The algorithm is called ULMFiT, and I actually wrote it for the fast.ai course. I wanted to teach people NLP, and I thought, I only want to teach people practical stuff, and I think the only practical stuff is transfer learning, and I couldn't find any examples of transfer learning in NLP. So I just did it. And I was shocked to find that, as soon as I did it, the basic prototype took a couple of days, it smashed the state of the art on one of the most important datasets in a field that I knew nothing about. And I just thought, well, this is ridiculous. And so I spoke to Sebastian, and he kindly offered to write up the results, and so it ended up being published in ACL, which is the top computational linguistics conference. So, like, people do actually care once you do it. But I guess it's difficult for, maybe, junior researchers. Like, I don't care whether I get citations, or papers, or whatever; there's nothing in my life that makes that important, which is why I've never actually bothered to write a paper myself. But for people who do, I guess they have to pick the safe option, which is, yeah, make a slight improvement on something that everybody is already working on. Yeah, nobody does anything interesting, or succeeds in life, with the safe option.
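The transfer-learning principle behind ULMFiT, keep a pretrained body frozen and train only a small head on your few labels, can be sketched without any framework. Everything below is a stand-in: the "pretrained" featurizer is just a fixed random projection, and the dataset is synthetic; the point is only the mechanics of training the head while leaving the body untouched.

```python
import math, random

random.seed(0)

# Stand-in for a pretrained body: a fixed featurizer we do NOT update.
# (In ULMFiT this would be a pretrained language model; here it is a
# made-up projection, just to show the mechanics.)
W_BODY = [[random.gauss(0, 1) for _ in range(4)] for _ in range(2)]

def features(x):
    """Frozen body: never trained on our small labeled dataset."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W_BODY]

def head(f, w, b):
    """The only part we train: a tiny logistic head on top."""
    z = sum(wi * fi for wi, fi in zip(w, f)) + b
    return 1 / (1 + math.exp(-z))

# A "small labeled dataset": the class depends on the first input.
data = [([1, 0, 0, 0], 1), ([0, 1, 1, 1], 0)] * 10

w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(200):                       # train only the head
    for x, y in data:
        f = features(x)
        err = head(f, w, b) - y            # gradient of the log loss
        w = [wi - lr * err * fi for wi, fi in zip(w, f)]
        b -= lr * err

accuracy = sum((head(features(x), w, b) > 0.5) == (y == 1)
               for x, y in data) / len(data)
```

A tiny head plus frozen features separates this toy data perfectly, which is the "orders of magnitude less data" story in miniature: most of the capacity came for free from the (pretend) pretrained body.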
I mean, the nice thing is, nowadays everybody is working on transfer learning, because since that time we've had GPT, and GPT-2, and BERT, you know. So, yeah, once you show that something is possible, everybody jumps in, I guess. And I hope to be a part of, and I hope to see more, innovation in active learning in the same way. I think transfer learning and active learning are fascinating, wide-open areas. I actually helped start a startup called platform.ai, which is really all about active learning, and yeah, it's very interesting trying to kind of see what research is out there and make the most of it. There's basically none, so we've had to do all our own research once again. And just as you described. Can you tell the story of the Stanford competition, DAWNBench, and fast.ai's achievement on it? Sure. So something which I really enjoy is that I basically teach two courses a year: Practical Deep Learning for Coders, which is kind of the introductory course, and Cutting Edge Deep Learning for Coders, which is the kind of research-level course. And while I teach those courses, I basically have a big office at the University of San Francisco, big enough for, like, thirty people, and I invite anybody, any student who wants to, to come and hang out with me while I build the course. And so generally it's full, and so we have twenty or thirty people in a big office with nothing to do but study deep learning. So it was during one of these times that somebody in the group said, oh, there's a thing called DAWNBench, it looks interesting. And I was like, what the hell is that? It's some competition to see how quickly you can train a model. Seems kind of not exactly relevant to what we're doing, but it sounds like the kind of thing which you might be interested in. I checked it out, and I was like, oh crap, there's only ten days till it's over, it's pretty much too late. And we're kind of busy trying to teach this course. But then we were like, oh, it would make an interesting case
study for the course. Like, it's all the stuff we're already doing; why don't we just put together our current best practices and ideas? So me and, I guess, about four students just decided to give it a go, and we focused on this smaller one called CIFAR-10, which is those little 32-by-32-pixel images. Can you say what DAWNBench is? Yeah, so it's a competition to train a model as fast as possible. It was run by Stanford. And as cheap as possible, too. That's also another category: as cheap as possible. And there were a couple of categories: ImageNet and CIFAR-10. So ImageNet's the big one: 1.3 million images, a thing that took a couple of days to train. I remember a friend of mine, Pete Warden, who's now at Google, I remember he told me how he trained ImageNet a few years ago, and he basically had this little granny flat out the back that he turned into his ImageNet training center, and he figured, after, like, a year of work, he figured out how to train it in, like, ten days or something. Like, that was a big job. Whereas CIFAR-10, at that time, you could train in a few hours; it's much smaller and easier. So we thought we'd try CIFAR-10. And, yeah, I'd really never done that before. Like, things like using more than one GPU at a time were something I tried to avoid, because to me it's very against the whole idea of accessibility; it's better to do things with one GPU. I mean, have you asked yourself in the past, after having accomplished something, how do I do this much faster? Oh, always. But for me, it's always, how do I make it much faster on a single GPU that a normal person could afford in their day-to-day life? It's not, how could I do it faster by having a huge data center? Because to me, it's all about, like, as many people as possible should be able to use something, without fussing around with infrastructure. So anyway, in this case it's like, well, we can use eight GPUs just by renting an AWS machine, so we thought we'd try that. And yeah, basically, using
the stuff we were already doing, we were able to get the speed... within a few days, we had the speed down to, I don't know, a very small number of minutes. I can't remember exactly how many minutes it was, but it might have been, like, ten minutes or something. And so we found ourselves at the top of the leaderboard, easily, for both time and money, which really shocked me, because the other people competing in this were, like, Google and Intel and stuff, who, like, know a lot more about this stuff than I think we do. So then we were emboldened. We thought, let's try the ImageNet one too. Way out of our league, but our goal was to get under twelve hours. And we did, which was really exciting. We didn't put anything up on the leaderboard, but we were down to, like, ten hours. But then Google put in something like five hours, and we're like, oh, we're so screwed. But we kind of thought, we'll keep trying. You know, if Google can do it in five... I mean, Google did it in five hours on something like a TPU pod, like, a lot of hardware. But we kind of had a bunch of ideas to try. Like, a really simple thing was, why are we using these big images? They're, like, 224 or 256 by 256 pixels. Why don't we try smaller ones? And, just to elaborate, there's a constraint on the accuracy that your trained model is supposed to achieve? Yeah, you've got to achieve 93%, I think it was, for ImageNet. Exactly. Which is very tough. Yeah, 93%. Like, they picked a good threshold; it was a little bit higher than what the most commonly used ResNet-50 model could achieve at that time. So yeah, it's quite a difficult problem to solve. But yeah, we realized that if we actually just used 64-by-64 images, it trained a pretty good model, and then we could take that same model and just give it a couple of epochs to learn 224-by-224 images, and it was basically already trained. It makes a lot of sense: like, if you teach somebody, here's what a dog looks like, and you show them low-res versions first.
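The small-images-first trick can be captured as a two-stage schedule plus a downsampling step. A hedged sketch: the sizes and epoch counts here are illustrative, not the exact DAWNBench settings, and `downsample` is a crude mean-pool standing in for proper image resizing.

```python
def progressive_schedule(small=64, full=224, small_epochs=10, full_epochs=2):
    """Sketch of the trick described above: do most of the training at a
    small image size, then a brief fine-tune at full size. Numbers are
    illustrative, not the actual competition settings."""
    return [(small, small_epochs), (full, full_epochs)]

def downsample(img, factor):
    """Mean-pool a square image (list of lists) by an integer factor:
    a crude stand-in for resizing the training images."""
    n = len(img) // factor
    return [[sum(img[factor * r + i][factor * c + j]
                 for i in range(factor) for j in range(factor)) / factor ** 2
             for c in range(n)] for r in range(n)]

img = [[float(r == c) for c in range(8)] for r in range(8)]  # toy 8x8 "image"
small = downsample(img, 2)   # the 4x4 version still shows the diagonal
stages = progressive_schedule()
```

The low-res copy keeps the coarse structure the model needs, which is why most epochs can be spent there, with only a short fine-tune at full resolution.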
Then you say, here's a really clear picture of a dog; they already know what a dog looks like. So, like, we just jumped to the front, and we ended up winning parts of that competition. We actually ended up doing a distributed version over multiple machines a couple of months later, and ended up at the top of the leaderboard; we had it down to eighteen minutes. Yeah. And people have just kept on blasting through, again and again, since then. So what's your view on multi-GPU, or multiple-machine, training in general, as a way to speed code up? I think it's largely a waste of time. Both multi-GPU on a single machine and, particularly, multi-machine, because it's just clunky. Multi-GPU is less clunky than it used to be, but to me, anything that slows down your iteration speed is a waste of time. So you could maybe do your very last, you know, perfecting of the model on multi-GPUs, if you need to. So, for example, I think doing stuff on ImageNet is generally a waste of time. Why test things on 1.3 million images? Most of us don't use 1.3 million images. And we've also done research that shows that doing things on a smaller subset of images gives you the same relative answers anyway. So from a research point of view, why waste that time? So actually, I released a couple of new datasets recently. One is called Imagenette, the French ImageNet, which is a small subset of ImageNet which is designed to be easy to classify. How do you spell Imagenette? It's got an extra T and E at the end, because it's very French. Am I saying it okay? Yeah, you're okay. And then another one, called Imagewoof, which is a subset of ImageNet that only contains dog breeds. That's a hard one, right? That's a hard one, yeah. And I've discovered that if you just look at these two subsets, you can train things on a single GPU in ten minutes, and the results you get are directly transferable to ImageNet nearly all the time. And so now I'm starting to see some researchers start to use these datasets. I so deeply love the way you think,
because I think you might have written a blog post saying that sort of going to these big datasets is encouraging people to not think creatively. Absolutely. So it sort of constrains you; you're constrained to train on large resources, and because you have these resources, you think more resources will be better, and then you start... somehow, you kill the creativity. Yeah. And even worse than that, Lex, I keep hearing from people who say, I decided not to get into deep learning, because I don't believe it's accessible to people outside of Google to do useful work. So, like, I see a lot of people make an explicit decision to not learn this incredibly valuable tool, because they've drunk the Google Kool-Aid, which is that only Google's big enough and smart enough to do it. And I just find that so disappointing, and it's so wrong. And I think all the major breakthroughs in AI in the next twenty years will be doable on a single GPU. Like, I would say, my sense is... well, let's put it this way: none of the big breakthroughs of the last twenty years have required multiple GPUs. Like, batch norm, dropout? Yeah, every one of them. None of them required multiple GPUs. The original GANs didn't require multiple GPUs. Well, and we've actually recently shown that you don't even need GANs. So we've developed GAN-level outcomes without needing GANs, and we can now do it, with, again, transfer learning, in a couple of hours on a single GPU. Without the adversarial part? Yeah. So we've found loss functions that work super well without the adversarial part. And then one of our students, a guy called Jason Antic, has created DeOldify, which uses this technique to colorize old black-and-white movies. You can do it on a single GPU; colorize a whole movie in a couple of hours. And one of the things that Jason and I did together was, we figured out how to add a little bit of GAN at the very end, which, it turns out, for colorization
makes it just a bit brighter and nicer. And then Jason did masses of experiments to figure out exactly how much to do, but it's still all done on his home machine, on a single GPU, in his lounge room. And, like, if you think about colorizing Hollywood movies, that sounds like something a huge studio would have to do, but he has the world's best results on this. There's this problem of microphones; we're talking to microphones now. It's such a pain in the ass to have these microphones to get good-quality audio, and I tried to see if it's possible to plop down a bunch of cheap sensors and reconstruct higher-quality audio from multiple sources. Because right now, I haven't seen work on, okay, inexpensive mics, automatically combining audio from multiple sources to improve the combined audio. People haven't done that, and that feels like a learning problem. So hopefully somebody can. Well, I mean, it's eminently doable, and it should have been done by now. I felt the same way about computational photography four years ago. Why are we investing in big lenses, when three cheap lenses, plus actually a little bit of intentional movement, so, like, you hold it and take a few frames, gives you enough information to get excellent sub-pixel resolution? Which, particularly with deep learning, you would know exactly what you were meant to be looking at. We can totally do the same thing with audio; I think it's madness that it hasn't been done yet. Is there progress on the photography front? Yeah, computational photography is basically standard now. So the Google Pixel Night Sight, I don't know if you've ever tried it, but it's astonishing. You take a picture in almost pitch black, and you get back a very high-quality image. And it's not because of the lens. Same stuff, like adding the bokeh to the background blurring; it's done computationally. This is the Pixel, right here. Yeah. Basically, everybody now is doing most of the
fanciest stuff on their phones with computational photography, and also, increasingly, people are putting more than one lens on the back of the camera. So the same will happen for audio, for sure. And there are applications on the audio side. If you look at an Alexa-type device, most people have seen... especially, I worked at Google before, when you look at noise, background removal, you don't think of multiple sources of audio. You don't play with that as much as I would hope people would. I mean, you can still do it even with one. Like, again, there's not much work being done in this area, so we're actually going to be releasing an audio library soon, which hopefully will encourage development of this, because it's so underused. The basic approach we used for our super-resolution, which Jason uses in DeOldify, of generating high-quality images: the exact same approach would work for audio. No one's done it yet, but it would be a couple of months' work. Okay. Also, learning rate: in terms of DAWNBench, there's some magic on learning rate that you played around with? Yeah, interesting. So this is all work that came from a guy called Leslie Smith. Leslie's a researcher who, like us, cares a lot about just the practicalities of training neural networks quickly and accurately, which I think is what everybody should care about, but almost nobody does. And he discovered something very interesting, which he calls super-convergence, which is: there are certain networks that, with certain settings of hyperparameters, can suddenly be trained ten times faster, by using a ten-times-higher learning rate. Now, no one would publish that paper, because it's not an area of kind of active research in the academic world; no academics recognized this was important. And also, deep learning in academia is not considered an experimental science. So, unlike in physics, where you could say, I just saw a subatomic particle do something which the theory doesn't explain, and you could publish that without an explanation, and then in the next sixty years people can try to work out how to explain it, we don't allow this in the deep learning world. So it's literally impossible for Leslie to publish a paper that says, I've just seen something amazing happen, this thing trained ten times faster than it should have, and I don't know why. The reviewers would be like, well, we can't publish that, because you don't know why. So, anyway... That's important to pause on, because there are so many discoveries that would need to start like that. Every other scientific field I know of works that way. I don't know why ours is uniquely disinterested in publishing unexplained experimental results, but there it is. So it wasn't published. Having said that, I read a lot more unpublished papers than published papers, because that's where you find the interesting insights. So I absolutely read this paper, and I was just like, this is astonishingly mind-blowing, and weird, and awesome. And, like, why isn't everybody only talking about this? Because, like, if you can train these things ten times faster, they also generalize better, because you're doing fewer epochs, which means you look at the data less, and you get better accuracy. So I've been kind of studying that ever since, and eventually Leslie kind of figured out a lot of how to get it done, and we added some minor tweaks. And a big part of the trick is starting at a very low learning rate, and very gradually increasing it. So as you're training your model, you take very small steps at the start, and you gradually make them bigger and bigger, until eventually you're taking much bigger steps than anybody thought was possible. There are a few other little tricks to make it work, but basically, we can now reliably get super-convergence. And so for the DAWNBench thing, we were using just much higher learning rates than people expected to work. What do you think the future of, I mean, it makes so much sense for that to be a critical hyperparameter, learning rate, that you vary. What do you think the future of learning-rate magic looks like?
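The schedule just described, very small steps at the start, ramping to a surprisingly large peak, then annealing back down, is the one-cycle policy in the spirit of Leslie Smith's super-convergence work. A sketch with linear ramps; the divisors, warmup fraction, and exact shape are illustrative choices, not the precise fast.ai implementation.

```python
def one_cycle_lr(step, total_steps, lr_max,
                 lr_start_div=25, lr_final_div=1e4, warmup_frac=0.3):
    """Toy one-cycle schedule: ramp the learning rate from a small value
    up to lr_max, then anneal it down to below where it started.
    Linear ramps for simplicity; the shape and divisors are
    illustrative, not the exact published settings."""
    warmup_steps = int(total_steps * warmup_frac)
    lr_start = lr_max / lr_start_div
    lr_final = lr_max / lr_final_div
    if step < warmup_steps:                      # warmup phase
        t = step / max(1, warmup_steps)
        return lr_start + t * (lr_max - lr_start)
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return lr_max + t * (lr_final - lr_max)      # annealing phase

total = 100
lrs = [one_cycle_lr(s, total, lr_max=1.0) for s in range(total + 1)]
```

The shape is the whole story: the low start keeps the early, badly-initialized steps safe, the high middle is where the ten-times-faster training happens, and the low finish settles into a minimum.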
looks like? well, there's been a lot of great work in the last 12 months in this area, and people are increasingly realizing that, like, we just have no idea really how optimizers work. and the combination of weight decay, which is how we regularize optimizers, and the learning rate, and then other things like the epsilon we use in the Adam optimizer — they all work together in weird ways in different parts of the model. this is another thing we've done a lot of work on: research into how different parts of the model should be trained at different rates, in different ways. so we do something we call discriminative learning rates, which is really important, particularly for transfer learning. so really, I think in the last 12 months a lot of people have realized that all this stuff is important. there's been a lot of great work coming out, and we're starting to see algorithms appear which have very, very few dials, if any, that you have to touch. so like, I think what's going to happen is the idea of a learning rate — well, it almost already has disappeared in the latest research — and instead it's just like, you know, we know enough about how to interpret the gradients and the change of gradients we see to know how to set every parameter the way it ought to be. so do you see a future of deep learning where, really, where is the input of a human expert needed? well, hopefully the input of the human expert will be almost entirely unneeded, from the deep learning point of view. so again, like, Google's approach to this is to try and use thousands of times more compute to run lots and lots of models at the same time and hopefully one of them is good — that AutoML, neural architecture search kind of stuff — which I think is insane. when you better understand the mechanics of how models learn, you don't have to try a thousand different models to find which one happens to work the best — you can just jump straight to the best one. which means that it's more accessible in terms of compute, cheaper, and
also with fewer hyperparameters to set, it means you don't need deep learning experts to train your deep learning model for you, which means that domain experts can do more of the work, which means that now you can focus the human time on the interpretation, the data gathering, identifying the errors, and stuff like that. yeah, the data side — how often do you work with data these days, in terms of the cleaning, looking at it? like Darwin looked at different species while traveling about — do you look at data? have you, in your Kaggle roots? always, yeah. gotta look at the data. I mean, it's a key part of our course. it's like, before we train a model in the course, we see how to look at the data. and then after we train our first model — which is fine-tuning an ImageNet model, for five minutes — the thing we immediately do after that is we learn how to analyze the results of the model by looking at examples of misclassified images, and looking at a confusion matrix, and then doing, like, research on Google to learn about the kinds of things that it's misclassifying. so to me, one of the really cool things about machine learning models in general is that when you interpret them, they tell you about things like what are the most important features, which groups you're misclassifying, and they help you become a domain expert more quickly, because you can focus your time on the bits that the model is telling you are important. so it lets you deal with things like data leakage, for example. if it says, oh, the main feature I'm looking at is customer ID, you know, and you're like, oh, customer ID shouldn't be predictive — then you can talk to the people that manage customer IDs, and they'll tell you, like, oh yes, as soon as a customer's application is accepted we add a one on the end of their customer number, or something, you know. so yeah, looking at data, particularly from the lens of which parts of the data the model says are important, is super important, yeah
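the error analysis described here — tallying which classes the model confuses with which — can be sketched in a few lines of plain Python. this is just an illustrative sketch, not the fastai implementation, and the bear labels echo the course example rather than any real dataset:

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """counts[i][j] = how often true label labels[i] was predicted as labels[j]."""
    idx = {label: k for k, label in enumerate(labels)}
    counts = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        counts[idx[t]][idx[p]] += 1
    return counts

def most_confused(y_true, y_pred):
    """(true, predicted) pairs sorted by how often the model mixes them up."""
    errors = Counter((t, p) for t, p in zip(y_true, y_pred) if t != p)
    return errors.most_common()

# tiny made-up predictions from a bear classifier
y_true = ["teddy", "grizzly", "brown", "grizzly", "brown"]
y_pred = ["teddy", "brown", "brown", "brown", "grizzly"]
worst = most_confused(y_true, y_pred)  # grizzly mistaken for brown tops the list
```

the point of `most_confused` is exactly the workflow described above: it tells you which pair of classes to go research on Google, so your time goes to the part of the domain the model finds hardest.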
and using, kind of, using the model to almost debug the data? yeah, you learn more about the data, exactly. what are the different cloud options for training neural networks? it's the last question related to DAWNBench. well, it's part of a lot of the work we do, but from a perspective of performance — I think you've written this in a blog post — there's AWS, there's TPUs from Google. what's your sense of what the future holds? what would you recommend now? right, so from a hardware point of view, Google's TPUs and the best NVIDIA GPUs are similar. I mean, maybe the TPUs are like 30% faster, but they're also much harder to program with. there isn't a clear leader in terms of hardware right now, although, much more importantly, the NVIDIA GPUs are much more programmable — they've got much more written for them. so like, that's the clear leader for me, and where I would spend my time as a researcher and practitioner. and in terms of the platform? I mean, we're super lucky now with stuff like Google GCP, Google Cloud, and AWS, that you can access a GPU pretty quickly and easily. but I mean, for AWS it's still too hard — like, you have to find an AMI and get the instance running and then install the software you want, blah blah blah. GCP is currently the best way to get started in a full server environment, because they have a fantastic fastai and PyTorch ready-to-go instance, which has all the courses pre-installed, has Jupyter Notebook pre-running. Jupyter Notebook is this wonderful interactive computing system which everybody basically should be using for any kind of data-driven research. but then, even better than that, there are platforms like Salamander, which we own, and Paperspace, where literally you click a single button and it pops up a Jupyter Notebook straight away, without any kind of installation or anything, and all the course notebooks are pre-installed. so like, for me, this is one of the things we spent a lot of time kind of curating and working on, because when we first
started our courses, the biggest problem was people dropped out of lesson one because they couldn't get an AWS instance running. so things are so much better now, and like, we actually have — if you go to course.fast.ai, the first thing it says is, here's how to get started with your GPU, and there's like, you just click on a link and you click start and it's going. will it be on GCP? I have to confess I've never used Google GCP. yeah, GCP gives you $300 of compute for free, which is really nice. but as I say, Salamander and Paperspace are even easier still. okay. so from the perspective of deep learning frameworks — you work with fast.ai, if we go from the framework level, and PyTorch and TensorFlow — what are the strengths of each platform, from your perspective? so, in terms of what we've done our research on and taught in our course, we started with Theano and Keras, and then we switched to TensorFlow and Keras, and then we switched to PyTorch, and then we switched to PyTorch and fastai. and that kind of reflects the growth and development of the ecosystem of deep learning libraries. Theano and TensorFlow were great, but were much harder to teach and do research and development on, because they define what's called a computational graph up front — a static graph — where you basically have to say, here are all the things that I'm going to eventually do in my model, and then later on you say, okay, do those things with this data. and you can't, like, debug them, you can't do them step by step, you can't program them interactively in a Jupyter Notebook, and so forth. PyTorch was not the first, but PyTorch was certainly the strongest entrant to come along and say, let's not do it that way — let's just use normal Python, and everything you know about in Python is just going to work, and we'll figure out how to make that run on the GPU as and when necessary. that turned out to be a huge leap in terms of what we could do with our research and what we could do with our
teaching. because it was limiting before? yeah, I mean, it was critical for us, for something like DAWNBench, to be able to rapidly try things. it's just so much harder to be a researcher and practitioner when you have to do everything up front and you can't inspect it. the problem with PyTorch is that it's not at all accessible to newcomers, because you have to, like, write your own training loop and manage the gradients and all that stuff. and it's also, like, not great for researchers, because you're spending your time dealing with all this boilerplate and overhead rather than thinking about your algorithm. so we ended up writing this very multi-layered API, where at the top level you can train a state-of-the-art neural network in three lines of code, and which kind of talks to an API which talks to an API which talks to an API, which, like, you can deep-dive into at any level and get progressively closer to the machine at the lower levels of control. and this fastai library has been critical for us, and for our students, and for lots of people that have won deep learning competitions with it and written academic papers with it. it's made a big difference. we're still limited, though, by Python, and particularly this problem with things like recurrent neural nets, where you just can't change things unless you accept it going so slowly that it's impractical. so in the latest incarnation of the course, and with some of the research we're now starting to do, we're starting to do some stuff in Swift. I think we're three years away from that being super practical, but I'm in no hurry — I'm very happy to invest the time to get there. but, you know, with that, we actually already have a nascent version of the fastai library for vision running on Swift for TensorFlow. because Python for TensorFlow is not going to cut it — it's just a disaster. what they did was, they tried to replicate the bits that people were saying they like about PyTorch, this kind of interactive computation, but they
didn't actually change their foundational runtime components. so they kind of added this, like, syntax sugar they call TF eager — TensorFlow eager — which makes it look a lot like PyTorch, but it's ten times slower than PyTorch to actually do a step. so, because they didn't invest the time in, like, retooling the foundations, because their code base is so horrible. yeah, I think it's probably very difficult to do that kind of retooling. yeah, well, particularly the way TensorFlow was written — it was written by a lot of people, very quickly, in a very disorganized way. so like, when you actually look in the code, as I do often, I'm always just like, oh god, what were they thinking? it's just pretty awful. so I'm really extremely negative about the potential future of that. but Swift for TensorFlow can be a different beast altogether. it can basically be a layer on top of MLIR that takes advantage of, you know, all the great compiler stuff that Swift builds on with LLVM, and yeah, it could be — I think it will be — absolutely fantastic. well, you're inspiring me to try. I haven't truly felt the pain of TensorFlow 2.0 Python. it's fine by me. but — yeah, it does the job, if you're using, like, predefined things that somebody's already written. but if you actually compare — you know, like, I've had to, because I've been having to do a lot of stuff with TensorFlow recently — you actually compare, like, okay, I want to write something from scratch, and you just keep finding it's like, oh, it's running ten times slower than PyTorch. so if the biggest cost — let's throw running time out the window — is how long it takes you to program? that's not too different. now, thanks to TensorFlow eager, that's not too different. but because so many things take so long to run, you wouldn't run it at ten times slower — like, you just go, like, oh, this is taking so long. yeah. and also there's a lot of things which are just less programmable, like tf.data, which is the way they do processing
works in TensorFlow — it's just this big mess. it's incredibly inefficient, and they kind of had to write it that way because of the TPU problems I described earlier. so I just, you know, I just feel like they've got this huge technical debt which they're not going to solve without starting from scratch. so here's an interesting question, then: if there's a new student starting today, what would you recommend they use? well, I mean, we obviously recommend fastai and PyTorch, because we teach new students, and that's what we teach with. so we would very strongly recommend that, because it will let you get on top of the concepts much more quickly — so then you'll become an expert, and you'll also learn the actual state-of-the-art techniques, you know, so you actually get world-class results. honestly, it doesn't much matter what library you learn, because switching from Chainer to MXNet to TensorFlow to PyTorch is going to be a couple of days' work, as long as you understand the foundations well. but you think Swift will creep in there as a thing that people start using? not for a few years. particularly because, like, Swift has no data science community, libraries, tooling, and the Swift community has a total lack of appreciation and understanding of numeric computing. so like, they keep on making stupid decisions — you know, for years they've just done dumb things around performance and prioritization. that's clearly changing now, because the developer of Swift, Chris Lattner, is working at Google on Swift for TensorFlow. so like, that's a priority. it'll be interesting to see what happens with Apple, because, like, Apple hasn't shown any sign of caring about numeric programming in Swift. so I mean, hopefully they'll get off their ass and start appreciating this, because currently all of their low-level libraries are not written in Swift — they're not particularly Swifty at all, stuff like Core ML — they're really pretty rubbish. so yeah, so there's a long way to go, but at
least one nice thing is that Swift for TensorFlow can actually directly use Python code and Python libraries — you know, literally the entire lesson one notebook of fast.ai runs in Swift right now, in Python mode. so that's a nice intermediate thing. if you look at the two fast.ai courses, how long does it take to get from point zero to completing both courses? it varies a lot. somewhere between two months and two years, generally. for two months, how many hours a day? so, like, somebody who is a very competent coder can do 70 hours per course. seven-zero? seventy, yeah. that's it, okay. but a lot of people I know take a year off to study fast.ai full-time, and say at the end of the year they feel pretty competent — because generally there's a lot of other things you do. like, generally they'll be entering Kaggle competitions, they might be reading Ian Goodfellow's book, they might, you know, be doing a bunch of stuff. and often, particularly if they are a domain expert, their coding skills might be a little on the pedestrian side, so part of it's just doing a lot more coding. what do you find is the bottleneck for people, usually — except getting started and setting stuff up? I would say coding. yeah, I would say the people who are strong coders pick it up the best. although another bottleneck is that people who have a lot of experience of classic statistics can really struggle, because the intuition is so the opposite of what they're used to — they're very used to, like, trying to reduce the number of parameters in their model, and looking at individual coefficients, and stuff like that. so I find people who have a lot of coding background and know nothing about statistics are generally going to be the best off. so you've taught several courses on deep learning, and, as Feynman says, the best way to understand something is to teach it. what have you learned about deep learning from teaching it? a lot. it's a key reason for me
to teach the courses. I mean, obviously it was necessary to achieve our goal of getting domain experts to be familiar with deep learning, but it was also necessary for me to achieve my goal of being really familiar with deep learning. I mean, to see so many domain experts from so many different backgrounds — it's definitely, I wouldn't say taught me, but convinced me of something that I liked to believe was true, which was: anyone can do it. so there's a lot of kind of snobbishness out there, about only certain people can learn to code, only certain people are going to be smart enough to, like, do AI. that's definitely bullshit, you know — I've seen so many people from so many different backgrounds get state-of-the-art results in their domain areas now. it's definitely taught me that the key differentiator between people that succeed and people that fail is tenacity. that seems to be basically the only thing that matters. a lot of people give up. but of the ones who don't give up, pretty much everybody succeeds, you know — even if at first I'm just kind of thinking, like, wow, they're really not quite getting it yet, are they? but eventually people get it, and they succeed. so I think they're both things I'd like to believe were true, but I didn't feel like I really had strong evidence for them being true — but now I can say I've seen it again and again. so what advice do you have for someone who wants to get started in deep learning? train lots of models. that's how you learn it. so, like, I would, you know — I think it's not just me, I think our course is very good, but also lots of people independently have said it's very good; it recently won the CogX award for AI courses as being the best in the world — I'd say, come to our course, course.fast.ai. and the thing I keep on harping on in my lessons is: train models. print out the inputs to the models, print out the outputs of the models. like, study them, change the
inputs a bit, look at how the outputs vary. just run lots of experiments, to get an intuitive understanding of what's going on, to get hooked. you mentioned training — do you think just running the models, inference, if we talk about getting started? no, you've got to fine-tune the models. so that's the critical thing, because at that point you now have a model that's in your domain area. so there's no point running somebody else's model, because it's not your model. like, it only takes five minutes to fine-tune a model for the data you care about, and in lesson two of the course we teach you how to create your own dataset from scratch, by scripting Google Image Search. so, and we show you how to actually create a web application running online. so I create one in the course that differentiates between a teddy bear, a grizzly bear, and a brown bear, and it does it with basically a hundred percent accuracy. took me about four minutes to scrape the images from Google Search with the script. there are little graphical widgets we have in the notebook that help you clean up the dataset, there are other widgets that help you study the results to see where the errors are happening. and so now we've got over a thousand replies in our share-your-work-here thread, of students saying, here's the thing I built. and so those people, like — and a lot of them are state of the art. like, somebody said, oh, I tried looking at Devanagari characters, and I couldn't believe it — the thing that came out was more accurate than the best academic paper, after lesson one. and then there are others which are just more kind of fun, like somebody who's doing Trinidad and Tobago hummingbirds — she said that's kind of their national bird — and she's got something that can now classify Trinidad and Tobago hummingbirds. so yeah, train models, fine-tune models with your dataset, and then study their inputs and outputs. how much of the fast.ai course is free? everything we do is
free. we have no revenue sources of any kind. it's just a service to the community. you're a saint. okay, once a person understands the basics, trains a bunch of models — if we look at the scale of years, what advice do you have for someone wanting to eventually become an expert? train lots of models. train lots of models in your domain area. so, an expert at what? right — we don't need more experts in, like, creating slightly evolutionary research in an area that everybody's studying. we need experts at using deep learning to diagnose malaria. or we need experts at using deep learning to analyze language to study media bias. or we need experts in analyzing fisheries to identify problem areas in the ocean, you know. that's what we need. so, like, become the expert in your passion area, and this is a tool which you can use for just about anything, and you'll be able to do that thing better than other people, particularly by combining it with your passion and domain expertise. so that's really interesting. even if you do want to innovate on transfer learning or active learning, your thought is — and that's one I certainly share — you also need to find a domain or dataset that you actually really care for. right. if you're not working on a real problem that you understand, how do you know if you're doing it any good? how do you know if your results are any good? how do you know if you're getting bad results — why you're getting bad results? is it a problem with the data? like, how do you know you're doing anything useful? yeah, to me, the only really interesting research — not the only, but the vast majority of interesting research — is, like, try and solve an actual problem and solve it really well. so both understanding sufficient tools on the deep learning side and becoming a domain expert in a particular domain are really within reach for anybody. yeah. I mean, to me, I would compare it to, like, studying self-driving cars having never looked at a car, or been in a car, or turned the
car on, right? you know, which is like the way it is for a lot of people — they'll study some academic dataset where they literally have no idea about the domain. by the way, I'm not sure how familiar you are with autonomous vehicles, but that is literally — you describe a large percentage of robotics folks working on self-driving cars — they actually haven't considered driving. they haven't actually looked at what driving looks like. they haven't driven. and it's a problem, because, you know, when you've actually driven, you know, like, these are the things that happened to me when I was driving. so there's nothing that beats real-world examples, or just experiencing them. you've created many successful startups — what does it take to create a successful startup? the same thing as becoming a successful deep learning practitioner, which is not giving up. so you can run out of money, or time, or run out of something, you know — but if you keep costs super low, and try and save up some money beforehand so you can afford to have some time, then just sticking with it is one important thing. doing something you understand and care about is important. and by something, I don't mean — the biggest problem I see with deep learning people is they do a PhD in deep learning and then they try and commercialize their PhD. it is a waste of time, because that doesn't solve an actual problem. you picked your PhD topic because it was an interesting kind of engineering or math or research exercise. but yeah, if you've actually spent time as a recruiter, and you know that most of your time was spent sifting through resumes, and you know that most of the time you're just looking for certain kinds of things, and you can try doing that with a model for a few minutes and see whether that's something which your model would be able to do as well as you could — then you're on the right track to creating a startup. and then, I think, just be pragmatic, and try and stay away from venture capital money as long as possible, preferably forever. so, yeah, on
that point — do you... venture capital? so you were able to successfully run startups that were self-funded? yeah, my first two were self-funded, and that was the right way to do it. is that scary? no, VC-backed startups are much more scary, because you have these people on your back who do this all the time and who have done it for years, telling you grow, grow, grow, grow. and they don't care if you fail — they only care if you don't grow fast enough. so that's scary. whereas doing the ones myself, well, with partners who were friends, was nice, because, like, we just went along at a pace that made sense, and we were able to build it to something which was big enough that we never had to work again, but was not big enough that any VC would think it was impressive. and that was enough for us to be excited, you know. so I thought that's a much better way to do things than most people do. generally speaking — not for yourself, but — how do you make money during that process? do you cut into savings? I guess? so, yeah — so I started FastMail and Optimal Decisions at the same time, in 1999, with two different friends. and for FastMail, I guess I spent $70 a month on the server, and when the server ran out of space, I put a payments button on the front page and said, if you want more than 10 megs of space, you have to pay $10 a year. and so, run low — like, keep your costs down? yes, so I kept my costs down, and once I needed to spend more money, I asked people to spend the money for me, and that was that — basically, from then on, we were making money, and I was profitable from then. for Optimal Decisions, it was a bit harder, because we were trying to sell something that was more like a million-dollar sale. but what we did was, we would sell scoping projects — so kind of like prototypey projects — but rather than doing them for free, we would sell them for $50,000 to $100,000. so again, we were covering our costs and also making the client feel like we were doing something valuable. so in both cases we were
profitable from six months in. yeah, but nevertheless, it's scary. I mean, yeah, sure, it's scary before you jump in — and I guess I was comparing it to the scariness of VC. I felt like with the VC stuff it was more scary. you're kind of much more in somebody else's hands — will they fund you or not, and what do they think of what you're doing? I also found it very difficult, with VC-backed startups, to actually do the thing which I thought was important for the company, rather than doing the thing which I thought would make the VCs happy. now, VCs always tell you not to do the thing that makes them happy, but then if you don't do the thing that makes them happy, they get upset. so — and do you think optimizing for, whatever they call it, the exit, is a good thing to optimize for? it can be, but not at the VC level, because the VC exit needs to be, you know, a thousand X. whereas the lifestyle exit — if you can sell something for ten million dollars, then you've made it, right? so it depends. if you want to build something that you're kind of happy to do forever, then fine; if you want to build something you want to sell in three years' time, that's fine too. I mean, they're both perfectly good outcomes. so you're learning Swift now, in a way — am I right? and I read that you use, at least in some cases, spaced repetition as a mechanism for learning new things. yeah, I use Anki quite a lot. myself too, sure — I actually, I've never talked to anybody about it, I don't know how many people do it, but it works incredibly well for me. can you talk through your experience? like, how did you — what do you — first of all, okay, let's back it up: what is spaced repetition? so, spaced repetition is an idea created by a psychologist named Ebbinghaus — it must be a hundred and fifty years ago or so. he did something which sounds pretty damn tedious: he wrote down random sequences of letters on cards and tested how well he would remember those random sequences
a day later, or a week later, whatever. and he discovered that there was this kind of curve, where his probability of remembering one of them would be dramatically smaller the next day, and then a little bit smaller the next day, and a little bit smaller the next day. what he discovered is that if he revised those cards after a day, the probabilities would decrease at a smaller rate. and then if he revised them again a week later, they would decrease at a smaller rate again. and so he basically figured out a roughly optimal equation for when you should revise something you want to remember. so spaced repetition learning is using this simple algorithm — just something like, revise something after a day, and then three days, and then a week, and then three weeks, and so forth. and so if you use a program like Anki, as you know, it will just do that for you. and it will say, did you remember this? and if you say no, it will reschedule it back to appear again, like, ten times more quickly than it otherwise would have. it's kind of a way of being guaranteed to learn something, because, by definition, if you're not learning it, it will be rescheduled to be revised more quickly. unfortunately, though, it also doesn't let you fool yourself — if you're not learning something, you know, like, your revisions will just pile up more and more. so you have to find ways to learn things productively and effectively — like, treat your brain well. so using, like, mnemonics, and stories, and context, and stuff like that. so yeah, it's a super great technique. it's like, learning how to learn is something which everybody should learn before they actually learn anything, but almost nobody does. it certainly works well for learning new languages, for — I mean, for, like, small projects, almost. but, you know, I started using it for — I forget who wrote a blog post about this that inspired me, I'm not sure — when I read papers, all the concepts and ideas, I'll put them in my cards. was it Michael Nielsen? in my
it was Michael Nielsen, yes — Michael started doing this recently, and he's been writing about it. so the kind of today's Ebbinghaus is a guy called Piotr Wozniak, who developed a system called SuperMemo, and he's been basically trying to become, like, the world's greatest Renaissance man over the last few decades. he's basically lived his life with spaced repetition learning for everything. and, like, Michael's only very recently got into this, but he's started really getting excited about doing it for a lot of different things. for me, personally, I actually don't use it for anything except Chinese, and the reason for that is that Chinese is specifically a thing I made a conscious decision that I want to continue to remember, even if I don't get much of a chance to exercise it, because, like, I'm not often in China, so I don't. whereas for something like programming languages or papers, I have a very different approach, which is, I try not to learn anything from them, but instead I try to identify the important concepts and, like, actually ingest them — so, like, really understand that concept deeply and study it carefully. I will decide if it really is important; if it is, I'll, like, incorporate it into our library, you know, incorporate it into how I do things, or decide it's not worth it, say. so I find I do remember the things that I care about, because I'm using them all the time. so for the last 25 years I've committed to spending at least half of every day learning or practicing something new, which all my colleagues have always hated, because it always looks like I'm not working on what I'm meant to be working on — but it always means I do everything faster, because I've been practicing a lot of stuff. so I kind of give myself a lot of opportunity to practice new things, and so I find now I don't often find myself wishing I could remember something, because if it's something that's useful, then I've been using it a lot — or it's easy enough to look it up
on Google. but speaking Chinese, you can't look it up on Google. so do you have advice for people learning new things? what have you learned as a process? does it — I mean, it all starts with just making the hours in the day available? yeah, you've got to stick with it, which is, again, the number one thing that 99% of people don't do. so the people I started learning Chinese with — none of them were still doing it twelve months later. I'm still doing it ten years later. I tried to stay in touch with them, but no one did it. and for something like Chinese, like, study how human learning works. so every one of my Chinese flashcards is associated with a story, and that story is specifically designed to be memorable. and we find things memorable which are, like, funny, or disgusting, or sexy, or related to people that we know or care about. so I try to make sure all the stories that are in my head have those characteristics. yeah, so, you know, you won't remember things well if they don't have some context, and, yeah, you won't remember them well if you don't regularly practice them — whether it be just part of your day-to-day life, or, for me with the Chinese, flashcards. I mean, the other thing is, let yourself fail sometimes. so, like, I've had various medical problems over the last few years, and basically my flashcards just stopped for about three years. and there have been other times I've stopped for a few months, and it's so hard, because you get back to it and it's like, you have 18,000 cards due. and so you just have to go, all right, well, I can either stop and give up everything, or just decide to do this every day for the next two years until I get back to it. the amazing thing has been that, even after three years, you know, the Chinese was still in there — like, it was so much faster to relearn than it was to learn the first time. yeah, absolutely, it's in there. the same with guitar, with music, and so on. it's sad, because work sometimes takes you away, and then you won't
play for a year, but really, if you then just get back to it every day, you're right back there again. What do you think is the next big breakthrough in artificial intelligence? What are your hopes in deep learning or beyond that people should be working on, or where you hope there will be breakthroughs? I don't think it's possible to predict. I think what we already have is an incredibly powerful platform to solve lots of societally important problems that are currently unsolved, so I just hope that lots of people will learn this toolkit and try to use it. I don't think we need a lot of new technological breakthroughs to do a lot of great work right now. And when do you think we're going to create a human-level intelligence system? Do you know how hard that is, how far away we are? I don't know. I have no way to know. I don't know why people make predictions about this, because there's no data and nothing to go on. And there are so many societally important problems to solve right now, I just don't find it a really interesting question to even answer. So in terms of societally important problems, what's a problem that is within reach? Well, I mean, for example, there are problems that AI creates, right? So most specifically, labor force displacement is going to be huge, and people keep making this frivolous econometric argument of being like, oh, there have been other things that aren't AI that have come along before and haven't created massive labor force displacement, therefore AI won't. So there's a serious concern for you? Oh yeah. Andrew Yang is running on it. Yeah, I'm desperately concerned. And you see already that the changing workplace has led to a hollowing out of the middle class. You're seeing that students coming out of school today have a less rosy financial future ahead of them than their parents did, which has never happened in the last few hundred years, you know.
We've always had progress before, and you see this turning into anxiety and despair and even violence. So I very much worry about that. Quite a bit about ethics too. I do think that every data scientist working with deep learning needs to recognize they have an incredibly high-leverage tool that they're using that can influence society in lots of ways, and if they're doing research, that research is going to be used by people doing this kind of work, and they have a responsibility to consider the consequences and to think about things like, how will humans be in the loop here? How do we avoid runaway feedback loops? How do we ensure an appeals process for humans that are impacted by my algorithm? How do I ensure that the constraints of my algorithm are ethically explained to the people that end up using them? There are all kinds of human issues which only data scientists are actually in the right place to educate people about, but data scientists tend to think of themselves as just engineers and that they don't need to be part of that process. Which is wrong. Well, you're in the perfect position to educate them better, to read literature, to read history, to learn from history. Well, Jeremy, thank you so much for everything you do, for inspiring a huge amount of people, getting them into deep learning, and having the ripple effects, the flap of a butterfly's wings that will probably change the world. So thank you very much.
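Jeremy's flashcard routine, where a long break leaves 18,000 cards due at once and intervals grow as you keep succeeding, follows the spaced-repetition idea behind Wozniak's SuperMemo. Below is a minimal sketch of the classic SM-2 scheduling rule; this is an illustration of the general idea, not the exact code inside SuperMemo or Anki, and the function name and state layout here are my own:

```python
# A minimal sketch of SM-2, the spaced-repetition scheduling rule from
# SuperMemo (Anki uses a variant of the same idea). Illustrative only.

def sm2_review(interval_days, repetitions, easiness, quality):
    """One review step.
    quality: self-graded recall, 0 (total blackout) .. 5 (perfect)."""
    if quality < 3:
        # Failed recall: start the card's schedule over,
        # but keep its accumulated easiness.
        repetitions = 0
        interval_days = 1
    else:
        if repetitions == 0:
            interval_days = 1
        elif repetitions == 1:
            interval_days = 6
        else:
            # Each success stretches the gap until the next review.
            interval_days = round(interval_days * easiness)
        repetitions += 1
    # Adjust easiness: harder recalls shrink future intervals.
    easiness += 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)
    easiness = max(1.3, easiness)
    return interval_days, repetitions, easiness

# A card answered perfectly three times in a row gets pushed
# further out each time: 1 day, then 6 days, then ~16 days.
state = (0, 0, 2.5)
for q in (5, 5, 5):
    state = sm2_review(*state, q)
print(state[0])  # -> 16
```

The key property is the one Jeremy describes: successful reviews push a card exponentially further into the future, so mature knowledge costs almost nothing to maintain, while a lapse (quality below 3) resets the card to daily review without erasing its history.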
Pamela McCorduck: Machines Who Think and the Early Days of AI | Lex Fridman Podcast #34
the following is a conversation with Pamela McCorduck. She's an author who has written on the history and the philosophical significance of artificial intelligence. Her books include Machines Who Think in 1979, The Fifth Generation in 1983 with Ed Feigenbaum, who's considered to be the father of expert systems, The Edge of Chaos, The Futures of Women, and many more books. I came across her work in an unusual way, by stumbling on a quote from Machines Who Think that is something like: artificial intelligence began with the ancient wish to forge the gods. That was a beautiful way to draw a connecting line between our societal relationship with AI, from the grounded day-to-day science, math, and engineering to popular stories and science fiction and myths of automatons that go back for centuries. Through her literary work she has spent a lot of time with the seminal figures of artificial intelligence, including the founding fathers of AI from the 1956 Dartmouth summer workshop where the field was launched. I reached out to Pamela for a conversation in hopes of getting a sense of what those early days were like and how their dreams continue to reverberate through the work of our community today. I often don't know where the conversation may take us, but I jump in and see. Having no constraints, rules, or goals is a wonderful way to discover new ideas. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at lex friedman, spelled f-r-i-d-m-a-n. And now, here's my conversation with Pamela McCorduck. In 1979 your book Machines Who Think was published. In it you interviewed some of the early AI pioneers and explored the idea that AI was born not out of, maybe, math and computer science, but out of myth and legend. So tell me, if you could, the story of how you first arrived at the book, the journey of beginning to write it. I had been a novelist. I'd published two novels, and I was sitting under the
portal at Stanford one day, in the house we were renting for the summer, and I thought, I should write a novel about these weird people in AI I know. And then I thought, no, don't write a novel, write a history. Simple. Just go around, you know, interview them, splice it together, voila, instant book. Hahaha. It was much harder than that. But nobody else was doing it, and so I thought, no, this is a great opportunity. And there were people, John McCarthy for example, who thought it was a nutty idea. The field had not evolved yet, and so on. And he had some mathematical thing he thought I should write instead, and I said, no, John, I am not a woman in search of a project. This is what I want to do. I hope you'll cooperate. And he said, oh, mutter mutter, well, okay, it's your time. What was the pitch for, I mean, such a young field at that point? How do you write a personal history of a field that's so young? I said, this is wonderful. The founders of the field are alive and kicking and able to talk about what they're doing. Did they sound or feel like founders at the time? Did they know that they had founded something? Oh yeah, they knew what they were doing was very important. Very. What I now see in retrospect is that they were at the height of their research careers, and it's humbling to me that they took time out from all the things that they had to do as a consequence of being there, to talk to this woman who said, I think I'm going to write a book. You know, it was amazing, just amazing. So who stands out to you, maybe looking back, 63 years ago, the Dartmouth conference? So Marvin Minsky was there, McCarthy was there, Claude Shannon, Allen Newell, Herb Simon, some of the folks you've mentioned. Right. Then there's the other characters, right. Ed, one of your co-authors. Yeah, he wasn't at Dartmouth. He was not at Dartmouth, no, but I mean, he was, I think, an undergraduate then. And of course Joe Traub. I mean, all of these are players, not at Dartmouth,
but in that era, at CMU and so on. So who are the characters, if you could paint a picture, that stand out to you in memory, those people you've interviewed, and maybe not people that were just in the atmosphere? In the atmosphere, of course, the four founding fathers were extraordinary guys. They really were. Who are the founding fathers? Allen Newell, Herbert Simon, Marvin Minsky, John McCarthy. They were the four who were not only at the Dartmouth conference, but Newell and Simon arrived there with a working program called the Logic Theorist. Everybody else had great ideas about how they might do it, but they weren't going to do it yet. And you mentioned Joe Traub, my husband. I was immersed in AI before I met Joe, because I had been Ed Feigenbaum's assistant at Stanford, and before that I had worked on a book edited by Feigenbaum and Julian Feldman called Computers and Thought. It was the first textbook of readings of AI, and they only did it because they were trying to teach AI to people at Berkeley, and there was nothing, you know. You'd have to send them to this journal and that journal. This was not the internet, where you could go look at an article. So I was fascinated from the get-go by AI. I was an English major, yeah, what did I know, and yet I was fascinated. And that's why you had that historical but also literary background, which I think is very much a part of the continuum of AI, that AI grew out of that same impulse? Yes. What was AI to you? How did you even think of it back then? What were the possibilities, the dreams, what was interesting to you? The idea of intelligence outside the human cranium. This was a phenomenal idea. And even when I finished Machines Who Think, I didn't know if they were going to succeed. In fact, the final chapter is very wishy-washy, frankly, about whether AI will succeed. The field did succeed, yeah. Yes. Was there the idea that AI began with the wish to forge the gods, so the spiritual component, that we crave to create
this other thing, greater than ourselves? For those guys, I don't think so. Newell and Simon were cognitive psychologists. What they wanted was to simulate aspects of human intelligence, and they found they could do it on the computer. Minsky just thought it was a really cool thing to do. Likewise McCarthy. McCarthy had got the idea in 1949 when he was a Caltech student, and he listened to somebody's lecture, it's in my book, I forget who it was, and he thought, oh, that would be fun to do, how do we do that? And he took a very mathematical approach. Minsky was a hybrid, and Newell and Simon were very much cognitive psychology: how can we simulate various things about human cognition? What happened over the many years is, of course, our definition of intelligence expanded tremendously. I mean, these days biologists are comfortable talking about the intelligence of a cell, the intelligence of the brain, not just the human brain, but the intelligence of any kind of brain. Cephalopods, I mean, an octopus is really intelligent by any measure. We wouldn't have thought of that in the 60s, even the 70s. So all these things have worked in, and I did hear one behavioral primatologist, Frans de Waal, say AI taught us the questions to ask. Yeah, this is what happens, right? It's when you try to build it that you start to actually ask the questions. It puts a mirror to ourselves. Yeah, right. So you were there in the middle of it. It seems like not many people were asking the questions that you were, trying to look at this field the way you were. I was alone. When I went to get funding for this, because I needed somebody to transcribe the interviews and I needed travel expenses, I went to everything you could think of: the NSF, DARPA. There was an Air Force place that doled out money, and each of them said, well, that was very interesting, that's a very interesting idea, but we'll think about it. And the National Science Foundation actually said to me in plain English, hey, you're only a
writer, you're not a historian of science. And I said, yeah, that's true, but you know, the historians of science will be crawling all over this field. I'm writing for the general audience, I felt. And they still wouldn't budge. I finally got a private grant, without knowing who it was from, from Ed Fredkin at MIT. He was a wealthy man, and he liked what he called crackpot ideas, and he considered this a crackpot idea, and he was willing to support it. I am ever grateful, let me say that. You know, some would say that a history of science approach to AI, or even just a history, or anything like the book that you've written, hasn't been written since. By me? I don't know, maybe I'm not familiar, but it's certainly not many. If we think about bigger than just these couple of decades, a few decades, what are the roots of AI? Oh, they go back so far. Yes, of course there's all the legendary stuff, the Golem and the early robots of the 20th century, but they go back much further than that. If you read Homer, Homer has robots in the Iliad. And a classical scholar was pointing out to me just a few months ago, when I said I'd just read the Odyssey, the Odyssey is full of robots. It is? I said. Yeah, how do you think Odysseus's ship gets from one place to another? He doesn't have the crew people to do that. The crewmen? Yeah, it's magic, it's robots. Whoa, I thought, how interesting. So we've had this notion of AI for a long time. And then, toward the end of the 19th century, the beginning of the 20th century, there were scientists who actually tried to make this happen some way or another, not successfully. They didn't have the technology for it. And of course Babbage, in the 1850s and 60s, he saw that what he was building was capable of intelligent behavior, and when he ran out of funding, the British government finally said, that's enough, he and Lady Lovelace decided, oh well, why don't we, you know, why don't we play the ponies with this? He had other ideas for raising money too. But if we actually
reach back once again, I think people don't actually really know that robots do appear, ideas of robots appear. You talk about the Hellenic and the Hebraic points of view. Oh yes. Can you tell me about each? I defined it this way: the Hellenic point of view is, robots are great. You know, they're party help, they help this guy Hephaestus, the god Hephaestus, in his forge. I presume he made them to help him, and so on and so forth, and they welcomed the whole idea of robots. The Hebraic view has to do with, I think it's the second commandment, thou shalt not make any graven image. In other words, you better not start imitating humans, because that's just forbidden. It's the second commandment. And a lot of the reaction to artificial intelligence has been a sense that this is somehow wicked, this is somehow blasphemous. We shouldn't be going there. Now, you can say, yeah, but there are going to be some downsides, and I say, yes, there are, but blasphemy is not one of them. You know, there's a kind of fear that feels to be almost primal. There are religious roots to that, because so much of our society has religious roots, and so there is a feeling of, like you said, blasphemy, of creating the other, of creating something, you know, it doesn't have to be artificial intelligence, it's creating life in general. Mm-hmm. It's the Frankenstein idea. Yes. The Annotated Frankenstein is on my coffee table. It's a tremendous novel, it really is, just beautifully perceptive. Yes, we do fear this, and we have good reason to fear it, because it can get out of hand. Maybe you can speak to that fear, the psychology, if you've thought about it. You know, there are practical sorts of fears, concerns in the short term, if we actually think about artificial intelligence systems. You can think about bias, or discrimination, in algorithms. You can think about how social networks have algorithms that recommend the content you see, and thereby these algorithms control the behavior of the masses. There are these
concerns, but to me it feels like the fear that people have is deeper than that. So have you thought about the psychology of it? In a superficial way I have. There is this notion that if we produce a machine that can think, it will outthink us and therefore replace us. I guess that's a primal fear. Yes, almost a kind of fear of mortality. So around that time, you said you worked at Stanford with Ed Feigenbaum. So let's look at that one person throughout this history, clearly a key person, one of the many in the history of AI. How has he changed, how has the field around him changed, how has Stanford changed, in, how many years are we talking about here? Oh, since '65. '65. So it doesn't have to be just about him, it could be bigger, but he's a key person in expert systems, for example. How have these folks, whom you interviewed in the seventies, '79, changed through the decades? In Ed's case, I know him well, we are dear friends, we see each other every month or so. He told me that when Machines Who Think first came out, he really thought all the front matter was kind of baloney, and ten years later he said, no, I see what you're getting at. Yes, this has been a human impulse for thousands of years, to create something outside the human cranium that has intelligence. I think it's very hard, when you're down at the algorithmic level and you're just trying to make something work, which is hard enough, to step back and think of the big picture. It reminds me of when I was in Santa Fe. I knew a lot of archaeologists, it was a hobby of mine, and I would say, yeah, well, you can look at the shards and say, oh, this came from this tribe, and this came from this trade route, and so on. But what about the big picture? And a very distinguished archaeologist said to me, they don't think that way. You know, they're trying to match the shard to where it came from. That's, you know, where did this corn, the
remainder of this corn, come from? Was it grown here, was it grown elsewhere? And I think this is part of AI, of any scientific field: you're so busy doing the hard work, and it is hard work, that you don't step back and say, oh, well, now let's talk about the general meaning of it all. Yes. So none of them did? Even Minsky and McCarthy? Oh, those guys did, the founding fathers did, early on, or pretty early on, but in a different way from how I looked at it. The two cognitive psychologists, Newell and Simon, they wanted to reimagine and reform cognitive psychology so that we would really, really understand the brain. Minsky was more speculative, and John McCarthy saw it as, I think I'm doing him right by this, he really saw it as a great boon for human beings to have this technology, and that was reason enough to do it. And he had wonderful fables about how, if you do the mathematics, you will see that these things are really good for human beings, and if you had a technological objection, he had an answer, a technological answer: but here's how we could get over that, and then blah blah blah. And one of his favorite things was what he called the literary problem, which of course he presented to me several times. That is, in literature there are conventions, and one of the conventions is that you have a villain and a hero, and the hero in most literature is human, and the villain in most literature is a machine. And he said, that's just not the way it's going to be, but that's the way we're used to it, so when we tell stories about AI, it's always with this paradigm. I thought, yeah, he's right. You know, looking back at the classics, you're right, it certainly is the machines trying to overthrow the humans. Frankenstein is different. Frankenstein is a creature, he never has a name. Frankenstein, of course, is the guy who created him, the human Dr.
Frankenstein. This creature wants to be loved, wants to be accepted, and it is only when Frankenstein turns his head, in fact runs the other way, and the creature is without love, that he becomes the monster that he later becomes. So who's the villain in Frankenstein? It's unclear, right? Oh, it is unclear, yeah. It's really the people who drive him away, right? They bring out the worst. That's right. They gave him no human solace, and he is driven away, you're right. He becomes, at one point, a friend of a blind man. He serves this blind man and they become very friendly, but when the sighted people of the blind man's family come, it's, ah, you've got a monster here. So it's very didactic in its way. And what I didn't know is that Mary Shelley and Percy Shelley were great readers of the literature surrounding abolition in the United States, the abolition of slavery, and they picked that up wholesale: you are making monsters of these people because you won't give them the respect and love that they deserve. Do you have, if we get philosophical for a second, do you worry that once we create machines that are a little bit more intelligent, let's look at the Roomba, the vacuum cleaner, that this darker part of human nature, where we abuse the other, the somebody who's different, will come out? I don't worry about it. I could imagine it happening, but I think that what AI has to offer the human race will be so attractive that people will be won over. So you have looked deep into these people, had deep conversations, and it's interesting to get a sense of the stories, of the way they were thinking, and the way your own thinking about AI has changed. Simon, Minsky, McCarthy. What about the years at CMU, Carnegie Mellon, with Joe? Joe was not in AI, he was in algorithmic complexity. Was there always a line between AI and computer science, for example? Was AI its own place of outcasts? Was that the feeling? There was a kind of outcast
period for AI. For instance, in 1974, the new field was hardly ten years old, the new field of computer science was asked by the National Science Foundation, I believe, but it may have been the National Academies, I can't remember, to, you know, tell your fellow scientists where computer science is and what it means. And they wanted to leave out AI, and they only agreed to put it in because Don Knuth said, hey, this is important, you can't just leave that out. Really? Don Knuth? Yes. Out of all the people. But you see, an AI person couldn't have made that argument, he wouldn't have been believed, but Knuth was believed. Yes. So Joe worked on the real stuff, algorithmic complexity, but he would say, in plain English, again and again, the smartest people I know are in AI. Really? Oh yes, no question. Anyway, Joe loved these guys. What happened was that, I guess it was as I started to write Machines Who Think, Herb Simon and I became very close friends. He would walk past our house on Northumberland Street every day after work, and I would just be putting the cover on my typewriter, and I would lean out the door and say, Herb, would you like a sherry? And Herb almost always would like a sherry. So he'd stop in, and we'd talk for an hour, two hours. My journal says, we talked this afternoon for three hours. What was on his mind at the time, in terms of the AI side of things? Oh, we didn't talk too much about AI, we talked about other things in life. We both loved literature, and Herb had read Proust in the original French, twice, all the way through. I can't, I've read it in English, in translation. So we talked about literature, we talked about languages, we talked about music, because he loved music, we talked about art, because he was actually enough of a painter that he had to give it up, because he was afraid it was interfering with his research, and so on. So no, it was really just chat, but it was very warm. So one summer I said to Herb, you know, my
students have all the really interesting conversations. I was teaching at the University of Pittsburgh then, in the English department. You know, they get to talk about the meaning of life and that kind of thing, and what do I have? I have university meetings, where we talk about the photocopying budget and, you know, whether the course on romantic poetry should be one semester or two. So Herb laughed. He said, yes, I know what you mean, he said, but you know, you could do something about that. Dot, that was his wife, Dot and I used to have a salon at the University of Chicago every Sunday night, and we would have essentially an open house, and people knew it wasn't for small talk, it was really for some topic of depth. He said, but my advice would be that you choose the topic ahead of time. Fine, I said. So we exchanged mail over the summer, that was US Post in those days, because you didn't have personal email, right, and I decided I would organize it, and there would be eight of us: Allen Newell and his wife, Herb Simon and his wife Dorothea. There was a novelist in town, a man named Mark Harris, he had just arrived, and his wife Josephine. Mark was most famous then for a novel called Bang the Drum Slowly, which was about baseball. And Joe and me. So eight people, and we met monthly, and we just sank our teeth into really hard topics, and it was great fun. How have your own views around artificial intelligence changed through the process of writing Machines Who Think, and afterwards, the ripple effects? I was a little skeptical that this whole thing would work out. It didn't matter, to me it was so audacious, the whole thing. AI, generally. Yeah. And in some ways it hasn't worked out the way I expected. So far, that is to say, there's this wonderful lot of apps, thanks to deep learning and so on, but those are algorithmic, and in the part that is symbolic processing, there is very little yet. Yes. And that's the field that lies waiting for industrious graduate
students. Maybe you can tell me about some figures that popped up in your life in the 80s with expert systems, where there was the symbolic AI, the possibilities of what most people think of as AI, if you dream of the possibilities: AI as really expert systems. And those hit a few walls, and there were challenges there, and I think, yes, they will reemerge again with some new breakthroughs and so on. But what did that feel like, both the possibility and the winter that followed, the slowdown? Ah, you know, this whole thing about AI winters is, to me, a crock. No winters? Because I look at the basic research that was being done in the 80s, which was supposedly dormant, and my god, it was really important. It was laying down things that nobody had thought about before, but it was basic research, you couldn't monetize it. Hence the winter. Yeah. You know, scientific research goes in fits and starts. It isn't this nice smooth, oh, this follows this, follows this. No, it just doesn't work that way. There's an interesting thing in the way winters happen. It's never the fault of the researchers, it's some source of hype, overpromising. Well, no, let me take that back. Sometimes it is the fault of the researchers. Sometimes certain researchers might overpromise the possibilities. They themselves believe that we're just a few years away. I just recently talked to Elon Musk, and he believes he'll have autonomous vehicles in a year, and he believes it. A year? A year, yeah, with mass deployment. For the record, this is 2019 right now, so he's talking 2020. To do the impossible, you really have to believe it, and I think what's going to happen when you believe it, because there's a lot of really brilliant people around him, is some good stuff will come out of it. Some unexpected, brilliant breakthroughs will come out of it, when you really believe it, when you work that hard. I believe that, and I believe autonomous vehicles will come. I just don't
believe that it'll be in a year. Yeah, I wish. But nevertheless, autonomous vehicles is a good example. There's a feeling, many companies have promised, by 2021, by 2022, Ford, GM, basically every single automotive company has promised they'll have autonomous vehicles. So that kind of overpromising is what leads to the winter, because those dates will come, there won't be autonomous vehicles, and there'll be a feeling of, well, wait a minute, if we took your word at that time, that means we just spent billions of dollars and made no money, and there's a counter-response where everybody gives up on it, sort of intellectually, and at every level, the hope just dies, and all that's left is a few basic researchers. So you're uncomfortable with some aspects of this idea? Well, it's the difference between science and commerce. So you think science will prevail? Science goes on the way it does. Oh, it does. Science can really be killed by not getting proper funding, or timely funding. I think Great Britain was a perfect example of that. The Lighthill report, I don't remember the year, essentially said, there's no use in Great Britain putting any money into this, it's going nowhere. And this was all about social factions in Great Britain: Edinburgh hated Cambridge, and Cambridge hated Manchester, and somebody else can write that story. But it really did have a hard effect on research there. Now they've come roaring back with DeepMind. Yeah, but that's one guy and the visionaries around him. But just to push on that, it's kind of interesting, you have this dislike of the idea of an AI winter. Where's that coming from? Oh, because I just don't think it's true. There were particular periods of time. It's a romantic notion, certainly. Yeah. No, I admire science perhaps more than I admire commerce. Commerce is fine, hey, you know, we've all got to live, but science has a much longer view than commerce, and continues almost regardless. Not, it can't continue totally regardless, but it almost
regardless of what's saleable and what's not, what's monetizable and what's not. So the winter is just something that happens on the commerce side, and the science marches on. That's a beautifully optimistic, inspiring message. I agree with you. I think if we look at the key people that work in AI, the key scientists in most disciplines, they continue working out of the love for science. No matter what, you can always scrape up some funding to stay alive, and they continue working diligently. But there certainly is a huge amount of funding now, and there's a concern on the AI side, in deep learning, that we might, with overpromising, hit another slowdown in funding, which does affect the number of students, you know, that kind of thing. Yeah, it does. So the kind of ideas you had in Machines Who Think, did you continue that curiosity through the decades that followed? Yes, I did. And what was your view, your historical view, of how the AI community evolved, the conversations about it, the work? Has it persisted the same way from its birth? No, of course not. As we were just talking about, the symbolic AI really kind of dried up, and it all became algorithmic. I remember a young AI student telling me what he was doing, and I had been away from the field long enough, I'd gotten involved with complexity at the Santa Fe Institute. I thought, algorithms, yeah, they're in the service of it, but they're not the main event. No, they became the main event. That surprised me. And we all know the downside of this. We all know that if you're using an algorithm to make decisions based on a gazillion human decisions, baked into it are all the mistakes that humans make, the bigotries, the shortsightedness, and so on and so on. So you mentioned the Santa Fe Institute. You've written the novel The Edge of Chaos, inspired by the ideas of complexity, a lot of which have been extensively explored at the Santa Fe Institute. Right. It's, I mean, it's another fascinating topic,
just sort of emergent complexity from chaos. Nobody knows how it happens, really, but it seems to be where all the interesting stuff happens. So how did, not the novel, but just complexity in general and the work at Santa Fe, fit into the bigger puzzle of the history of AI, or maybe even your personal journey through that? One of the last projects I did concerning AI in particular was looking at the work of Harold Cohen, the painter. And Harold was deeply involved with AI. He was a painter first, and what his project Aaron, which was a lifelong project, did was reflect his own cognitive processes. Okay. Harold and I, even though I wrote a book about it, we had a lot of friction between us, and I thought, this is it, you know. The book died. It was published and fell into a ditch. This is it, I'm finished, it's time for me to do something different. By chance, this was a sabbatical year for my husband, and we spent two months at the Santa Fe Institute and two months at Caltech, and then the spring semester in Munich, Germany. Okay, those two months at the Santa Fe Institute were so restorative for me. The Institute was very small then. It was in some kind of office complex on Old Santa Fe Trail. Everybody kept their door open, so you could crack your head on a problem, and if you finally didn't get it, you could walk in to see Stuart Kauffman or any number of people and say, I don't get this, can you explain? And one of the people that I was talking to about complex adaptive systems was Murray Gell-Mann, and I told Murray what Harold Cohen had done, and I said, you know, this sounds to me like a complex adaptive system. And he said, yeah, it is. Well, what do you know? Harold's Aaron had all these kissing cousins all over the world, in science and in economics and so on and so forth. I was so relieved. I thought, okay, your instincts are okay, you're doing the right thing. I didn't have the vocabulary, and that was one of the things that the Santa Fe Institute gave me. If I could have
rewritten that book, no, it had just come out, I couldn't rewrite it, but I would have had a vocabulary to explain what AARON was doing. Okay, so I got really interested in what was going on at the Institute, and the people were, again, bright and funny and willing to explain anything to this amateur. George Cowan, who was then the head of the Institute, said he thought it might be a nice idea if I wrote a book about the Institute. And I thought about it, and I had my eye on some other project, God knows what, and I said, I'm sorry, George, I'd really love to do it, but, you know, it's just not gonna work for me at this moment. He said, ah, too bad, I think it would make an interesting book. Well, he was right and I was wrong. I wish I'd done it. But that's interesting, I hadn't thought about that. That was a road not taken that I wish I'd taken. Well, you know, just on that point, it's quite brave for you as a writer, coming from a world of literature, of literary thinking and historical thinking, I mean, just from that world, and bravely talking to, I assume, quite large egos in AI or in complexity and so on. How did you do it? I mean, I suppose they could be intimidated by you as well, it's two different worlds. I never picked up that anybody was intimidated by me. How were you brave enough? Where did you find the guts? It's not just dumb luck. I mean, this is an interesting rock to turn over, I'm gonna write a book about it. And, you know, people have enough patience with writers, if they think they're gonna end up in a book, that they let you flail around and so on. Yes, but they also look, if the writer, like, if there's a sparkle in their eye, if they get it. Yeah, sure. Right. When were you at the Santa Fe Institute? The time I'm talking about is 1990, '91, '92. But then, because Joe was an external faculty member, we were in Santa Fe every summer. We bought a house there, and I didn't have that much to do with the Institute
anymore. I was writing my novels, I was doing whatever I was doing. But I loved the Institute, and I loved, again, the audacity of the ideas. That really appeals to me. I think that there's this feeling, much like in great institutes of neuroscience, for example, that they're in it for the long game of understanding something fundamental about reality and nature, and that's really exciting. So if we look a little bit more recently, you know, AI is really popular today. How is this world, you mentioned the algorithmic, but in general, the spirit of the people, the kind of conversations you hear through the grapevine and so on, is that different than the roots that you remember? No, the same kind of excitement, the same kind of, this is really gonna make a difference in the world. And it will, it has. You know, a lot of folks, especially young, you know, 20 years old or something, they think we've just found something special here, we're going to change the world tomorrow. On a timescale, do you have a sense of the timescale at which breakthroughs in AI happen? I really don't, because look at deep learning. Geoffrey Hinton came up with the algorithm in '86, but it took all these years for the technology to be good enough to actually be applicable. So no, I can't predict that at all. I can't, I wouldn't even try. Well, let me ask you, not to try to predict, but to speak to the, you know, I'm sure in the '60s, as it continues now, there's people that think, let's call it this fun word, the singularity, when there's a phase shift, there's some profound feeling where we're all really surprised by what's able to be achieved. I'm sure those dreams were there, I remember reading quotes in the '60s. How have your own views, maybe if you look back, about the timeline of a singularity changed? Well, I'm not a big fan of the singularity as Ray Kurzweil has presented it. How would you define, sort of, the
well, how do you think of the singularity? If I understand Kurzweil's view, it's sort of, there's going to be this moment when machines are smarter than humans, and, you know, game over. However, the game over is, I mean, do they put us on a reservation, do they, etc., etc. And first of all, machines are smarter than humans in some ways all over the place, and they have been since adding machines were invented. So it's not gonna come like some great Oedipal crossroads, you know, where they meet each other and our offspring, Oedipus, slays his dad. It's just not gonna happen. Yes, it's already game over with calculators, right? They already do much better basic arithmetic than us. But, you know, there's human-like intelligence, and it's not the ones that destroy us, but, you know, somebody that you can have as a friend, that you can have deep connections with, that kind of passing the Turing test and beyond, those kinds of ideas. Have you dreamt of those? Oh, yes, yes. In a book I wrote with Ed Feigenbaum, there's a little story called the geriatric robot, and how I came up with the geriatric robot is a story in itself. But here's what the geriatric robot does. It doesn't just clean you up and feed you and wheel you out into the sun. Its great advantage is, it listens. It says, tell me again about the great coup of '73. Tell me again about how awful or how wonderful your grandchildren are, and so on and so forth. And it isn't hanging around to inherit your money. It isn't hanging around because it can't get any other job. This is its job, and so on and so forth. Well, I would love something like that. Oh, yeah. I mean, for me, that deeply excites me, so I think there's a lot of us. You know, it was a joke. I dreamed it up because I needed to talk to college students, and I needed to give them some idea of what AI might be, and they were rolling in the aisles as I elaborated and elaborated and elaborated. When it went into the book, they took my hide off in
the New York Review of Books. This is just what we've thought about these people in AI, they're inhuman. Come on, get over it. Don't you think that's a good thing for the world, that AI could potentially do that? I do, absolutely. And furthermore, I want, you know, I'm pushing 80 now, by the time I need help like that, I also want it to roll itself into a corner and shut up. Let me linger on that point. Do you really, though? Yeah, I do. Here's what, I wanted to push back a little bit, a little, but, I have watched my friends go through the whole issue around having help in the house, and some of them have been very lucky and had fabulous help, and some of them have had people in the house who want to keep the television going all day, who want to talk on their phones all day. No. So basically just park yourself in the corner. Unfortunately, as humans, when we're assistants, we care, we're still, even when we're assisting others, we care about ourselves more. Of course. And so you create more frustration, and a robot AI assistant can really optimize the experience for you. You actually bring up a very, very good point, but I was speaking to the fact that, as humans, we're a little complicated, that we don't necessarily want a perfect servant. I don't, maybe you disagree with that, but I think there's a push and pull with humans, a little tension, a little mystery, that's, of course, really difficult to get right. But I do sense, especially today with social media, that people are getting more and more lonely, even young folks, and sometimes especially young folks. That loneliness, there's a longing for connection, and AI can help alleviate some of that loneliness, just somebody who listens, like in person, so to speak. Yeah, so to speak, yeah. That to me is really exciting. But so, if we look at that, that level of intelligence is exceptionally difficult to achieve, actually, as the singularity, or whatever. That's the
human-level bar that people have dreamt of, that Turing dreamt of. He had a timeline. How has your own timeline evolved? I don't even think about it. You don't even? No. This field has been so full of surprises for me. You just take them in and see? Yeah, I just can't. Maybe that's because I've been around the field long enough to think, you know, don't go that way. Herb Simon was terrible about making these predictions of when this and that would happen. And he was a sensible guy. Yeah, his quotes are often used, right, as a legend. Yeah, yeah. Do you have concerns about AI, the existential threats that many people like Elon Musk and Sam Harris and others bring up? Oh, yeah. That takes up half a chapter in my book. I call it the male gaze. Well, hear me out. The male gaze is actually a term from film criticism, and I'm blocking on the woman who dreamed this up, but she pointed out how most movies were made from the male point of view, that women were objects, not subjects, they didn't have any agency, and so on and so forth. So when Elon and his pals, Hawking and so on, came in with, AI is gonna eat our lunch and our dinner and our midnight snack too, I thought, what? And I said to Ed Feigenbaum, oh, this is the first time, these guys have always been the smartest guys on the block, and here comes something that might be smarter. Ooh, let's stamp it out before it takes over. And Ed laughed. He said, I didn't think about it that way. But I did, I did. And it is the male gaze. You know, okay, suppose these things do have agency. Well, let's wait and see what happens. Can we imbue them with ethics? Can we imbue them with a sense of empathy? Or are they just gonna be, uh, you know, we've had centuries of guys like that. That's interesting, that the ego, the male gaze, is immediately threatened, and so you can't think in a patient, calm way of how the tech could evolve. Speaking of which, your '96 book, The Futures of Women, I think at the time, and certainly now, I mean, I'm
sorry, maybe at the time, but certainly now, is extremely relevant. You and Nancy Ramsey talk about four possible futures of women in science and tech. So if we look at the decades before and after the book was released, can you tell a history, sorry, of women in science and tech, and how it has evolved? How have things changed? Where do we stand? Not enough. They have not changed enough. The way that women are ground down in computing is simply unbelievable. But what are the four possible futures for women in tech from the book? What you're really looking at are various aspects of the present. So for each of those, you could say, oh, yeah, we do have backlash, look at what's happening with abortion and so on and so forth. We have one step forward, one step back. The golden age of equality was the hardest chapter to write, and I used something from the Santa Fe Institute, which is the sand pile effect, that you drop sand very slowly onto a pile, and it grows and it grows and it grows, until suddenly it just breaks apart. And in a way, Me Too has done that. That was the last drop of sand that broke everything apart. That was a perfect example of the sand pile effect. And that made me feel good. It didn't change all of society, but it really woke a lot of people up. But are you, in general, optimistic about, maybe after Me Too, I mean, Me Too is about a very specific kind of thing. Boy, solve that and you solve everything. But are you, in general, optimistic about the future? Yes, I'm a congenital optimist, I can't help it. What about AI? What are your thoughts? Well, of course I get asked, what do you worry about? And the one thing I worry about is the things that we can't anticipate. You know, there's going to be something out of left field that we will just say, we weren't prepared for that. I am generally optimistic. When I first took up being interested in AI, like most people in the field, more intelligence was like more virtue, you know, what could be bad? And in a way I still believe that, but I
realized that my notion of intelligence has broadened. There are many kinds of intelligence, and we need to imbue our machines with those many kinds. So you've now just finished, or are in the process of finishing, the book you've been working on, a memoir. How have you changed? I know it's just writing, but how have you changed through the process? If you look back, what kind of stuff did it bring up that surprised you, looking at the entirety of it all? The biggest thing, and it really wasn't a surprise, is how lucky I was, oh my, to have access to the beginning of a scientific field that is going to change the world. How did I luck out? And yes, of course, my view of things has widened a lot. To get back to one feminist part of our conversation, without knowing it, it really was subconscious, I wanted AI to succeed, because I was so tired of hearing that intelligence was inside the male cranium. And I thought, if there was something out there that wasn't male, thinking and doing well, then that would put a lie to this whole notion that intelligence resides in the male cranium. I did not know that until one night, Harold Cohen and I were having a glass of wine, maybe two, and he said, what drew you to AI? And I said, well, you know, smartest people I knew, great project, blah, blah, blah. And I said, and I wanted something besides male smarts. It just bubbled up out of me, Lex. Brilliant, actually. So AI really humbles all of us, and humbles the people that need to be humbled the most. Wow, that is so beautiful. Pamela, thank you so much for talking. It was really a pleasure. Thank you.
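The sand pile effect McCorduck describes comes from a real model in the complexity literature, the Bak-Tang-Wiesenfeld sandpile. Below is a minimal toy sketch of it; the grid size and grain count are arbitrary illustrative choices, not anything from the conversation.

```python
# Toy Bak-Tang-Wiesenfeld sandpile: drop grains one at a time onto a small
# grid. Any cell holding 4 or more grains topples, sending one grain to
# each of its four neighbors (grains falling off the edge are lost).
# A single dropped grain can trigger avalanches of wildly different sizes.

def topple(grid):
    """Relax the grid until every cell holds fewer than 4 grains.
    Returns the number of topple events (the avalanche size)."""
    n = len(grid)
    avalanche = 0
    unstable = True
    while unstable:
        unstable = False
        for i in range(n):
            for j in range(n):
                if grid[i][j] >= 4:
                    grid[i][j] -= 4
                    avalanche += 1
                    unstable = True
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= i + di < n and 0 <= j + dj < n:
                            grid[i + di][j + dj] += 1
    return avalanche

def drop_grains(n=11, grains=2000):
    """Drop grains on the center cell, recording each avalanche size."""
    grid = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(grains):
        grid[n // 2][n // 2] += 1
        sizes.append(topple(grid))
    return sizes

sizes = drop_grains()
print(max(sizes), sum(1 for s in sizes if s == 0))
```

Most drops cause little or no toppling, while a rare drop sets off a system-wide avalanche, which is exactly the "last drop of sand that broke everything apart" image from the conversation.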
Keoki Jackson: Lockheed Martin | Lex Fridman Podcast #33
the following is a conversation with Keoki Jackson. He's the CTO of Lockheed Martin, a company that, through its long history, has created some of the most incredible engineering marvels human beings have ever built, including planes that fly fast and undetected, defense systems that intercept nuclear threats that can take the lives of millions, and systems that venture out into space, the moon, Mars, and beyond. In these days, more and more, artificial intelligence has an assistive role to play in these systems. I've read several books in preparation for this conversation. It is a difficult one, because in part, Lockheed Martin builds military systems that operate in a complicated world that often does not have easy solutions, in the gray area between good and evil. I hope one day this world will rid itself of war in all its forms, but the path to achieving that, in a world that does have evil, is not obvious. What is obvious is that good engineering and artificial intelligence research has a role to play on the side of good. Lockheed Martin and the rest of our community are hard at work at exactly this task. We talk about these and other important topics in this conversation. Also, most certainly, both Keoki and I have a passion for space, us humans venturing out toward the stars. We talk about this exciting future as well. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N. And now, here's my conversation with Keoki Jackson. I read several books on Lockheed Martin recently. My favorite in particular is by Ben Rich, called Skunk Works, a personal memoir. It gets a little edgy at times, but from that I was reminded that the engineers of Lockheed Martin have created some of the most incredible engineering marvels human beings have ever built, throughout the centuries, throughout the 20th century and the 21st. Do you remember a particular project or system at Lockheed or
before that, like the Space Shuttle Columbia, that you were just in awe at the fact that us humans could create something like this? You know, that's a great question. There's a lot of things that I could draw on there. When you look at the Skunk Works and Ben Rich's book in particular, of course, it starts off with basically the start of the jet age and the P-80. I had the opportunity to sit next to one of the Apollo astronauts, Charlie Duke, recently at dinner, and I said, hey, what's your favorite aircraft? And he said, well, it was by far the F-104 Starfighter, which was another aircraft that came out of Lockheed. It was the first Mach 2 jet fighter aircraft. They called it the missile with a man in it. And so those are the kinds of things I grew up hearing stories about. You know, of course, the SR-71 is incomparable, as you know, kind of the epitome of speed, altitude, and just the coolest looking aircraft ever. So there's a connoisseur in you. That's right, yeah, an intelligence, surveillance, and reconnaissance aircraft that was designed to be able to outrun, basically go faster than, any air defense system. But, you know, I'll tell you, I'm a space junkie. That's why I came to MIT, and that's really what took me ultimately to Lockheed Martin. And Lockheed Martin, for example, has been essentially at the heart of every planetary mission, like all the Mars missions, we've had a part in. And we've talked a lot about the 50th anniversary of Apollo here in the last couple of weeks, right? But remember 1976, July 20th again, National Space Day, the landing of the Viking Lander on the surface of Mars, just a huge accomplishment. And when I was a young engineer at Lockheed Martin, I got to meet engineers who had designed, you know, various pieces of that mission as well. So that's what I grew up on, these planetary missions, the start of the Space Shuttle era. And ultimately I had the opportunity to see Lockheed Martin's part, and we can maybe talk about some of these here, but
Lockheed Martin's part in all of these space journeys over the years. Do you dream, and I apologize for getting philosophical at times, or sentimental, I do romanticize the notion of space exploration, so do you dream of the day when us humans colonize another planet like Mars, or a man, a woman, a human being steps on Mars? Absolutely, and that's a personal dream of mine. I haven't given up yet on my own opportunity to fly into space. But, as you know, from the Lockheed Martin perspective, this is something that we're working towards every day. And of course, you know, we're building the Orion spacecraft, which is the most sophisticated human-rated spacecraft ever built, and it's really designed for these deep-space journeys, you know, starting with the moon, but ultimately going to Mars, and being the platform, you know, from a design perspective, we call it the Mars Base Camp, to be able to take humans to the surface, and then after a mission of a couple of weeks, bring them back up safely. And so that is something I want to see happen during my time at Lockheed Martin, so I'm pretty excited about that. And I think, you know, once we prove that's possible, you know, colonization might be a little bit further out, but it's something that I'd hope to see. So maybe you can give a little bit of an overview. Lockheed Martin partnered a few years ago with Boeing to work with the DoD and NASA, to build launch systems and rockets with the ULA. What's beyond that? What's Lockheed's mission timeline and long-term dream in terms of space? You mentioned the moon, I've heard you talk about asteroids, Mars. What's the timeline, what are the engineering challenges, and what's the dream long term? Yeah, I think the dream long term is to have a permanent presence in space beyond low Earth orbit, ultimately with a long-term presence on the moon, and then to the planets, to Mars. Sorry to interrupt, and that long-term presence means sustained, and a sustainable presence, and an economy, a space economy, that really goes
alongside with human beings being there, and being able to launch perhaps from those, so like a hop, there's a lot of energy that goes into those hops, right? So I think the first step is being able to get there, and to be able to establish sustained bases, right, and build from there. And a lot of that means getting, as you know, things like the cost of launch down. And you mentioned United Launch Alliance, and so I don't want to speak for ULA, but obviously they're working really hard on their next generation of space launch vehicles, to, you know, maintain that incredible mission success record that ULA has, but ultimately continue to drive down the cost, and make the flexibility, the speed, and the access ever greater. So what are the missions that are on the horizon that you could talk to? So I hope to get to the moon. Absolutely, absolutely. I mean, I think you know this, or you may know this, you know, there's a lot of ways to accomplish some of these goals, and so that's a lot of what's in discussion today, but ultimately the goal is to be able to establish a base, essentially in cislunar space, that would allow for ready transfer from orbit to the lunar surface and back again. And so that's sort of that near-term, and I say near-term in the next decade or so, vision, starting off with, you know, the stated objective by this administration to get back to the moon in the 2024, 2025 timeframe, which is right around the corner here. So how big of an engineering challenge is that? I think the big challenge is not so much to go, but to stay, right? And so we demonstrated in the '60s that you could send somebody up for a couple of days of mission and bring them home again successfully. Now we're talking about doing that, I'd say, more at an industrial scale, a sustained scale, right? So permanent habitation, you know, regular reuse of vehicles, the infrastructure to get things like fuel, air, consumables, replacement parts, all the things that you need to sustain
that kind of infrastructure. So those are certainly engineering challenges. There are budgetary challenges, and those are all things that we're gonna have to work through. You know, the other thing, and I don't want to minimize this, I mean, I'm excited about human exploration, but the reality is, our technology, and where we've come over the last, you know, forty years essentially, has changed what we can do with robotic exploration as well. And, you know, to me it's incredibly thrilling. This seems like old news now, but the fact that we have rovers driving around the surface of Mars and sending back data is just incredible. The fact that we have satellites in orbit around Mars that are collecting weather, you know, they're looking at the terrain, they're mapping, all these kinds of things on a continuous basis, that's incredible. And the fact that, you know, you've got the time lag, of course, going to the planets, but you can effectively have virtual human presence there in a way that we have never been able to do before. And now, with the advent of even greater processing power, better AI systems, better cognitive systems and decision systems, you know, you put that together with the human piece, and we've really opened up the solar system in a whole different way. And I'll give you an example. We've got OSIRIS-REx, which is a mission to the asteroid Bennu. So the spacecraft is out there right now, on basically a year-long mapping activity, to map the entire surface of that asteroid in great detail, you know, all autonomously piloted, right? With the idea then, and this is not too far away, it's gonna go in, it's got a sort of fancy vacuum cleaner with a bucket, it's gonna collect a sample off the asteroid, and then send it back here to earth. And so, you know, we have gone from sort of those tentative steps in the '70s, you know, early landings, video of the solar system, so now we've sent spacecraft to Pluto, we have gone to comets and intercepted comets, we've brought Stardust
you know, material back. So we've gone far, and there's incredible opportunity to go even farther. It seems quite crazy that this is even possible. Can you talk a little bit about what it means to orbit an asteroid, and with a bucket, to try to pick up some soil samples? Yeah, so part of it is just kind of, you know, these are the kinds of techniques we use here on earth for high speed, high accuracy imagery, stitching these scenes together, and creating essentially high accuracy world maps, right? And so that's what we're doing, obviously, on a much smaller scale, with an asteroid. But the other thing that's really interesting, you put together sort of that nav and control and, you know, data and imagery problem. But the stories around how we designed the collection, I mean, this is essentially, you know, the sort of human ingenuity element, right? Essentially, you know, we had an engineer who, one day, starts messing around with parts, a vacuum cleaner, a bucket, you know, maybe we could do something like this, and that was what led to what we call the pogo stick collection, right, where basically, I think, it comes down, it's only there for seconds, does that collection, grabs, essentially blows the regolith material into the collection hopper, and off it goes. It doesn't really land, almost, it's a very short landing. Wow, that's incredible. So talk a little bit more about space. What's the role of the human in all of this? What are the challenges, what are the opportunities for humans, as they pilot these vehicles in space, and for humans that may step foot on either the moon or Mars? Yeah, it's a great question, because, you know, I just have been extolling the virtues of robotics, you know, rovers, autonomous systems, and those absolutely have a role. I think the thing that we don't know how to replace today is the ability to adapt on the fly to new information, and I believe that will come, but we're not there yet,
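The scene-stitching idea mentioned above, registering overlapping scans against each other to build one consistent map, can be sketched in miniature. This is a toy 1-D illustration with made-up data, not Lockheed's or the OSIRIS-REx mission's actual pipeline: it aligns two overlapping "terrain strips" by searching for the offset where their overlap agrees best, then merges them.

```python
# Toy map stitching: two overlapping 1-D elevation scans of the same
# terrain are registered by brute-force search for the shift that
# minimizes mean squared mismatch in the overlap, then merged.

def best_offset(a, b, min_overlap=3):
    """Find the shift of scan b relative to scan a that minimizes the
    mean squared mismatch over their overlapping samples."""
    best, best_err = None, float("inf")
    for shift in range(1, len(a) - min_overlap + 1):
        overlap = min(len(a) - shift, len(b))
        err = sum((a[shift + k] - b[k]) ** 2 for k in range(overlap)) / overlap
        if err < best_err:
            best, best_err = shift, err
    return best

def stitch(a, b):
    """Merge scan b onto scan a at the best-matching offset."""
    shift = best_offset(a, b)
    return a[:shift] + b

# Hypothetical terrain, scanned in two overlapping passes.
terrain = [0, 1, 3, 7, 4, 2, 5, 9, 6, 3, 1, 0, 2]
scan1 = terrain[:8]   # first pass covers samples 0..7
scan2 = terrain[5:]   # second pass starts at sample 5, overlapping 5..7
recovered = stitch(scan1, scan2)
print(recovered == terrain)
```

Because the overlap region matches exactly at one shift, the stitched strip reproduces the original terrain; real 2-D imagery stitching replaces the brute-force 1-D search with feature matching and transform estimation, but the registration-then-merge structure is the same.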
there's a ways to go. And so, you know, you think back to Apollo 13, and the ingenuity of the folks on the ground and on the spacecraft, who essentially cobbled together a way to get the carbon dioxide scrubbers to work. Those are the kinds of things that ultimately, and I'd say not just dealing with anomalies, but, you know, dealing with new information. You see something, and rather than waiting twenty minutes or half an hour or an hour to try to get information back and forth, you're able to essentially re-vector there on the fly, collect, you know, different samples, take a different approach, choose different areas to explore. Those are the kinds of things that human presence enables, that are still a ways ahead of us on the AI side. Yeah, there's some interesting stuff we'll talk about on the teaming side, here on earth, that's pretty cool to explore. But okay, let's not leave the space piece out. So what is teaming? What does AI and humans working together in space look like? Yeah, one of the things we're working on is a system called Maya, which, you can think of it, it's an AI assistant. In space? Exactly, and you can think of it as the Alexa in space, right? But this goes hand-in-hand with a lot of other developments. And so in today's world, everything is essentially model-based, model-based systems engineering, through the actual digital tapestry that goes through the design, the build, the manufacture, the testing, and ultimately the sustainment of the system. And so our vision is really that, you know, when our astronauts are there around Mars, you're going to have that entire digital library of the spacecraft, of its operations, all the test data and flight data from previous missions, to be able to look and see if there are anomalous conditions, to tell the humans and potentially deal with that before it becomes a bad situation, and help the astronauts work through those kinds of things. And it's not just, you know, dealing with problems as they come up, but also
offering up opportunities for additional exploration capability, for example. So that's the vision, is that, you know, these are going to take the best of the human, to respond to changing circumstances, and rely on the best of AI capabilities, to monitor, you know, this almost infinite number of data points, and correlations in data points, that humans frankly aren't that good at. So how do you develop systems in space like this, whether it's an Alexa in space, or in general any kind of control systems, any kind of intelligent systems, when you can't really test stuff too much out in space, and it's very expensive to test stuff here? So how do you develop such systems? Yeah, that's the beauty of this digital twin, if you will. And of course, at Lockheed Martin, we've, over the past, you know, five plus decades, been refining our knowledge of the space environment, of how materials behave, dynamics and controls, the, you know, radiation environments, all of these kinds of things. So we're able to create very sophisticated models. They're not perfect, but they're very good. And so you can actually do a lot. I spent part of my career, you know, simulating communications spacecraft, you know, missile warning spacecraft, GPS spacecraft, in all kinds of scenarios and all kinds of environments. So this is really just taking that to the next level. The interesting thing is that now you're bringing into that loop a system, depending on how it's developed, that may be non-deterministic. It may learn as it goes, in fact, we anticipate that it will be learning as it goes. And so that brings a whole new level of interest, I guess, into how do you do verification and validation of these non-deterministic learning systems, in scenarios that may go out of the bounds or the envelope that you have initially designed to. So this system, its intelligence, has some of the same complexity as a human does, it learns over time, it's unpredictable in certain kinds of ways, and
so you also have to model that. When you're thinking about this, are you saying it's possible to model the majority of situations, the important aspects of situations, here on earth and in space, enough to test stuff? Yeah, this is really an active area of research, and we're actually funding university research in a variety of places, including MIT. This is in the realm of trust, and verification and validation of, I'd say, autonomous systems in general, and then, as a subset of that, autonomous systems that incorporate artificial intelligence capabilities. And this is not an easy problem. We're working with startup companies, we've got internal R&D, but our conviction is that autonomy, and more and more AI-enabled autonomy, is going to be in everything that Lockheed Martin develops and fields. Autonomy and AI are gonna be retrofit into existing systems, and they're going to be part of the design for all of our future systems. And so maybe I should take a step back and say the way we define autonomy. So we talk about autonomy as essentially a system that composes, selects, and then executes decisions with varying levels of human intervention. And so you could think of no autonomy, so this is essentially the human doing the task. You can think of, effectively, partial autonomy, where the human is in the loop, so making decisions in every case about what the autonomous system can do. Either in the cockpit or remotely? Or remotely, exactly, but still in that control loop. And then there's what you'd call supervisory autonomy, where the autonomous system is doing most of the work, and the human can intervene to stop it or to change the direction. And then ultimately, full autonomy, where the human is off the loop altogether. And for different types of missions, you want to have different levels of autonomy. So now take that spectrum, and this conviction that autonomy and more and more AI are in everything that we develop. In the kinds of things that Lockheed Martin
does, you know, a lot of times these are safety-of-life-critical kinds of missions, you think about aircraft, for example. And so we require, and our customers require, an extremely high level of confidence: one, that, you know, we're going to protect life; two, that these systems will behave in ways that their operators can understand. And so this gets into that whole field again, you know, of being able to verify and validate that the systems will operate the way they're designed and the way they're expected, and furthermore, that they will do that in ways that can be explained and understood. And that is an extremely difficult challenge. Yes, so here's a difficult question, I don't mean to bring this up, but I think it's a good case study that people are familiar with. The Boeing 737 MAX commercial airplane has had two recent crashes where its flight control software system failed. And it's software, so I don't mean to speak of all of Boeing, but broadly speaking, we have this in the autonomous vehicle space too, semi-autonomous, we have millions of lines of code, software making decisions. There is a little bit of a clash of cultures, because software engineers often don't have the same culture of safety that people who build systems, like at Lockheed Martin, do, where it has to be exceptionally safe, you have to test and so on. So how do we get this right, when software is making so many decisions? Yeah, there's a lot of things that have to happen. And by and large, I think it starts with the culture, right? Which is not necessarily something that, A, is taught in school, or, B, depending on what kind of software you're developing, it may not seem relevant, right, if you're targeting ads or something like that. And by and large, I'd say not just Lockheed Martin, but certainly the aerospace industry as a whole, has developed a culture that does focus on safety, safety of life, operational safety, mission success. But as you know, these
systems have gotten incredibly complex, and so they're at the point where, you know, the state space has become so huge that it's impossible, or very difficult, to do a systematic verification across the entire set of potential ways that an aircraft could be flown, all the conditions that could happen, all the potential failure scenarios. now maybe that's solvable one day, maybe when we have our quantum computers at our fingertips we'll be able to actually simulate across an entire, you know, almost infinite state space, but today there's a lot of work to really try to bound the system, to understand it in predictable ways, and then have this culture of continuous inquiry and skepticism and questioning, to say, did we really consider the right realm of possibilities, have we done the right range of testing, do we really understand, you know, in this case, the human and machine interactions, the human decision process alongside the machine processes. and so it's that culture, we call it the culture of mission success at Lockheed Martin, that really needs to be established, and it's something that people learn by living in it, and it's something that has to be promulgated, you know, from the highest levels at a company like Lockheed Martin. yeah, and the same is being faced at certain autonomous vehicle companies, where that culture is not there because it started mostly with software engineers, so that's what they're struggling with. are there lessons that you think we should learn as an industry and a society from the Boeing 737 MAX? these crashes obviously are tremendous tragedies, tragedies for all of the people, the crew, the families, the passengers, the people on the ground involved, and, you know, it's also a huge business and economic setback as well, I mean, we've seen that it's impacting essentially the trade balance of the US, so these are
important questions, and these are the kinds of questions, you know, we've seen similar kinds of questioning at times, you go back to the Challenger accident, and it is, I think, always important to remind ourselves that humans are fallible, the systems we create, as perfect as we strive to make them, we can always make them better, and so another element of that culture of mission success is really that commitment to continuous improvement: if there's something that goes wrong, a real commitment to root cause, and true root cause understanding, to taking the corrective actions, and to making future systems better. and certainly we strive for, you know, no accidents, and if you look at the record of the commercial airline industry as a whole, and the commercial aircraft industry as a whole, there's a very nice decaying exponential, to years now where we have no commercial aircraft accidents, or fatal accidents, at all, and that didn't happen by accident, it was through the regulatory agencies, the FAA, the airframe manufacturers, really working as a system to identify root causes and drive them out. so maybe we can take a step back, many people are familiar, but Lockheed Martin broadly, what kind of categories of systems are you involved in building? you know, at Lockheed Martin we think of ourselves as a company that solves hard mission problems, and the output of that might be an airplane or a spacecraft or a helicopter or a radar or something like that, but ultimately we're driven by, you know, what is our customer, what is that mission that they need to achieve, and so that's what drove the SR-71, right, how do you get pictures of a place where you've got sophisticated air defense systems that are capable of handling any aircraft that was out there at the time, right, so what they did with the SR-71 was build a nice flying camera. exactly, and make sure it gets out and it gets back, right, and that led ultimately to really the start of
the space program in the US as well. so now take a step back to the Lockheed Martin of today, and we are, you know, on the order of 105 years old now, between Lockheed and Martin, the two big heritage companies, of course we're made up of a whole bunch of other companies that came in as well, General Dynamics, yeah, kind of go down the list. today you can think of us in this space of solving mission problems, so obviously on the aircraft side, tactical aircraft, building the most advanced fighter aircraft that the world has ever seen, you know, we're up to now several hundred of those delivered, building almost a hundred a year, and of course working on the things that come after that. on the space side we are engaged in pretty much every venue of space utilization and exploration you can imagine, so I mentioned things like navigation and timing, GPS, communication satellites, missile warning satellites, we've built commercial surveillance satellites, we've built commercial communication satellites, we do civil space, so everything from human exploration to the robotic exploration of the outer planets, and I could keep going on the space front, but a couple other areas I'd like to point out: we're heavily engaged in building critical defensive systems, and so a couple that I'll mention, the Aegis combat system, this is basically the integrated air and missile defense system for the US
and allied fleets, and so it protects, you know, carrier strike groups, for example, from incoming ballistic missile threats, aircraft threats, cruise missile threats, and kind of go down the list. the carriers, the fleet, that itself is the thing that is being protected, the carriers aren't serving as a protection for something else? well, that's a little bit of a different application, we've actually built a version called Aegis Ashore, which is now deployed in a couple of places around the world, so that same technology, I mean, basically it can be used to protect either an ocean-going fleet or a land-based activity. another one, the THAAD program, so THAAD, this is the Theater High Altitude Area Defense, this is to protect, you know, relatively broad areas against sophisticated ballistic missile threats, and so now it's deployed with a lot of US capabilities, and now we have international customers that are looking to buy that capability as well, and so these are systems that not just defend militaries and military capabilities but defend population areas, we saw, you know, maybe the first public use of these back in the first Gulf War with the Patriot systems, and these are the kinds of things that Lockheed Martin delivers, and there's a lot of stuff that goes with it, so think about the radar systems and the sensing systems that cue these, the command and control systems that decide how you pair a weapon against an incoming threat, and then all the human and machine interfaces to make sure that they can be operated successfully in very strenuous environments. yeah, there's some incredible engineering there, but like you said, so maybe if we just take a look at Lockheed history broadly, maybe even looking at Skunk Works, what are the biggest, most impressive milestones of innovation? if you look at stealth, I would have called you crazy if you said that was possible at the time, and supersonic and hypersonic, so traveling, first of
all, traveling at the speed of sound is pretty damn fast, and then supersonic and hypersonic, three, four, five times the speed of sound, I would also call you crazy if you said you could do that, so can you tell me how it's possible to do these kinds of things, and are there other milestones in innovation going on that you can talk about? yeah, well, let me start, you know, on the Skunk Works story, and you kind of alluded to it in the beginning, the Skunk Works is as much idea as place, and so it's driven really by Kelly Johnson's 14 principles, and I'm not gonna list all 14 of them off, but the idea, and I'm sure this will resonate with any engineer who's worked on a highly motivated small team before, the idea that if you can essentially have a small team of very capable people who want to work on really hard problems, you can do almost anything, especially if you kind of shield them from bureaucratic influences, if you create very tight relationships with your customers, so that you have that team and shared vision with the customer, those are the kinds of things that enable the Skunk Works to do these incredible things. and we listed off a number, you brought up stealth, and I mean, I wish I could have seen Ben Rich with a ball bearing, you know, rolling it across the desk to a general officer and saying, would you like to have an aircraft that has the radar cross-section of this ball bearing? probably one of the least expensive and most effective marketing campaigns in the history of the industry. so just for people not familiar, I mean, the way you detect aircraft, I'm sure there's a lot of ways, but radar for the longest time, there's a big blob that appears on the radar, how do you make a plane disappear so it looks as big as a ball bearing, what's involved technology-wise there, broadly, the stuff you can speak about? I'll stick to what's in Ben Rich's book, but obviously the
geometry of how radar gets reflected and the kinds of materials that either reflect or absorb are kind of a couple of the critical elements there, and I mean, it's a cat-and-mouse game, right, radars get better, stealth capabilities get better, and so it's really a game of continuous improvement and innovation there, I'll leave it at that. yeah, so the idea that something is essentially invisible is quite fascinating, but the other one is flying fast, so the speed of sound is 760 miles an hour or so, and supersonic is, you know, Mach 3, something like that? yeah, we talked about supersonic, obviously, and we kind of talk about that as that realm from Mach 1 up through about Mach 5, and then hypersonic, so, you know, high supersonic speeds would be past Mach 5, and you've got to remember, I know Lockheed Martin and actually other companies have been involved in hypersonic development since the late 60s, you think of everything from the X-15 to the Space Shuttle as examples of that. I think the difference now is, if you look around the world, particularly the threat environment that we're in today, you're starting to see, you know, publicly, folks like the Russians and the Chinese saying they have hypersonic weapons capability that could threaten US and allied capabilities, and also, basically, the claims are these could get around defensive systems that are out there today, and so there's a real sense of urgency, you hear it from folks like the Undersecretary of Defense for Research and Engineering, Dr.
Mike Griffin, and others in the Department of Defense, that hypersonics is something that's really important to the nation, in terms of both parity but also defensive capabilities, and so that's something, you know, we're pleased, it's something Lockheed Martin's had a heritage in, we've invested our own dollars in it for many years, and we have a number of things going on with various US government customers in that field today that we're very excited about, so I would anticipate we'll be hearing more about that in the future from our customers. and I actually haven't read much about this, probably you can't talk about much of it at all, but on the defensive side, it's the fascinating problem of perception, of trying to detect things that are really hard to see, can you comment on how hard that problem is, and how hard is it to stay ahead, even if we go back a few decades, stay ahead of the competition? well, again, you've got to think of these as ongoing capability developments, and so think back to the early days of missile defense, so this would be in the 80s, the SDI program, and in that timeframe we proved, Lockheed Martin proved, that you could hit a bullet with a bullet, essentially, which is something that had never been done before, to take out an incoming ballistic missile, and so that's led to these incredible hit-to-kill kinds of capabilities, PAC-3, that's the Patriot Advanced Capability model 3 that Lockheed Martin builds, the THAAD system that I talked about, so now, hypersonics, you know, they're different from ballistic systems, and so we've got to take the next step in defensive capability, I can leave it there. but I can only imagine. you know, let me just comment, sort of as a resident engineer, it's sad to know that so much that Lockheed has done in the past is classified, or today, you know, it's shrouded in secrecy, it has to be by the nature of the application, so unlike what we do here at MIT, where we'd like to inspire
young scientists, and yet in Lockheed's case some of that engineering has to stay quiet. how do you think about that, how does that make you feel, is there a future where more can be shown, or is it just the nature of this world that it has to remain secret? it's a good question. I think the public can see enough, including students who may be in grade school, high school, college today, to understand the kinds of really hard problems that we work on, and I mean, look at the F-35, right, obviously a lot of the detailed performance levels are sensitive and controlled, but we can talk about what an incredible aircraft this is, you know, a supersonic, supercruise kind of fighter, you know, stealth capabilities, it's a flying information system in the sky, with data fusion, sensor fusion capabilities that have never been seen before, so these are the kinds of things that got me excited when I was a student, and I think these still inspire students today. and the other thing I'd say, I mean, you know, people are inspired by space, people are inspired by aircraft, our employees are also inspired by that sense of mission, and I'll just give you an example: I had the privilege to work on and lead our GPS programs for some time, and that was a case where I actually worked on a program that touches billions of people every day, and so when I said I worked on GPS, everybody knew what I was talking about, even though they maybe didn't appreciate the technical challenges that went into it. but I'll tell you, I got a briefing one time from a major in the Air Force, and he said, I go by the callsign GIMP, GPS is my passion, I love GPS, and he was involved in the operational test of the system. he said, when I was out in Iraq I was on a Black Hawk helicopter, and it was bringing back, you know, a sergeant and a handful of troops from a deployed location, and, you know, my job is GPS, so I asked that sergeant, he's, you
know, beaten down and kind of half asleep, and I said, what do you think about GPS, and he brightened up, his eyes lit up, and he said, GPS, that brings me and my troops home every day, I love GPS. and that's the kind of story where it's like, okay, I'm really making a difference here in the kind of work, so that mission piece is really important. the last thing I'll say, and this gets to some of these questions around advanced technologies, it's not just airplanes and spacecraft anymore, for people who are excited about advanced software capabilities, about AI, about bringing machine learning, the things that we're doing to, you know, exponentially increase the mission capabilities that go on those platforms, those are the kinds of things I think are more and more visible to the public. yeah, I think autonomy, especially in flight, is super exciting, do you see a day, here we go back into philosophizing about the future, when most fighter jets will be highly autonomous, to a degree where a human doesn't need to be in the cockpit in almost all cases? well, I mean, that's a world that to a certain extent we're in today, now these are remotely piloted aircraft to be sure, but we have hundreds of thousands of flight hours a year now in remotely piloted aircraft, and then if you take the F-35, I mean, there are huge layers, I guess, and levels of autonomy built into that aircraft, so that the pilot is essentially more of a mission manager rather than doing the, you know, second-to-second elements of flying the aircraft, so in some ways it's the easiest aircraft in the world to fly. and a kind of funny story on that, so I don't know if you know how aircraft carrier landings work, but basically there's what's called a tail hook, and it catches wires on the deck of the carrier, and that's what brings the aircraft to a screeching halt, right, and there's typically three of these wires, so if you miss the first or the second one you catch the next one, right, and, you know,
we got a little criticism, I don't know how true this story is, but we got a little criticism that the F-35 is so perfect it always catches the second wire, we're wearing out that wire, I guess it always hits that one, but that's the kind of autonomy that essentially up-levels what the human is doing to more of that mission manager role. so much of that landing by the F-35 is autonomous? well, it's just, you know, the control systems are such that you really have dialed out the variability that comes with all the environmental conditions. wearing out the wire. so my point is, to a certain extent that world is here today. do I think that we're gonna see a day anytime soon when there are no humans in the cockpit? I don't believe that, but I do think we're gonna see much more human-machine teaming, and we're gonna see that much more at the tactical edge, and we did a demo, you asked about what the Skunk Works is doing these days, and this is something I can talk about, we did a demo with the Air Force Research Laboratory, we called it Have Raider, using an F-16 as an autonomous wingman, and we demonstrated all kinds of maneuvers and various mission scenarios with the autonomous F-16 being that so-called loyal or trusted wingman, and so those are the kinds of things where, you know, we've shown what is possible. now, given that you've up-leveled that pilot to be a mission manager, now they can control multiple other aircraft, almost as extensions of your own aircraft, flying alongside you, so that's another example of how this is really coming to fruition. and then, yeah, I mentioned the landings, but think about just the implications for humans and flight safety, and this goes a little bit back to the discussion we were having about how do you continuously improve the level of safety through automation, while working through the complexities that automation introduces. so one of the challenges that you have in high-performance
fighter aircraft is what's called G-LOC, so this is G-induced loss of consciousness, so you pull 9 g's, you're wearing a pressure suit, that's not enough to keep the blood going to your brain, you have a blackout, right, and of course that's bad if you happen to be flying low, you know, near the deck, in an obstacle or terrain environment, and so we developed a system in our Aeronautics division called Auto GCAS, the automatic ground collision avoidance system, and we built that into the F-16, it's actually saved seven aircraft, eight pilots already, in the relatively short time it's been deployed. it was so successful that the Air Force said, hey, we need to have this in the F-35 right away, so we've actually done testing on that now on the F-35, and we've also integrated an automatic air collision avoidance system for the air-to-air problem, so now it's the integrated collision avoidance system. but these are the kinds of capabilities, you know, I wouldn't call them AI, I mean, they're very sophisticated models of the aircraft dynamics, coupled with terrain models, to be able to predict when essentially the pilot is doing something, or in this case not doing something, that is going to take the aircraft into the terrain, but it just gives you an example of how autonomy can be really a lifesaver in today's world. it's like automated emergency braking in cars. but is there any exploration of perception, for example detecting G-LOC, that the pilot is out, so as opposed to perceiving the external environment to infer that the pilot is out, actually perceiving the pilot directly? yeah, this is one of those cases where you'd like to, you know, not take action if you think the pilot's there, and it's almost like systems that try to detect if a driver is falling asleep on the road, right, with limited success, so, I mean, this is what I call the system of last resort, right, where if the aircraft has determined that it's going
into the terrain, get it out of there. and this is not something that we're just doing in the aircraft world, I wanted to highlight, we have a technology we call Matrix, this was developed at Sikorsky Innovations, and the whole idea there is what we call optimal piloting, so not optional piloting or unpiloted, but optimal piloting, an FAA-certified system, so you have a high degree of confidence, it's generally pretty deterministic, so we know what it'll do in different situations, but effectively being able to fly a mission with two pilots, one pilot, or no pilots, and you can think of it almost as like a dial of the level of autonomy that you want, so it's running in the background at all times and able to pick up tasks, whether it's, you know, sort of autopilot kinds of tasks or more sophisticated path planning kinds of activities, to be able to do things like, for example, land on an oil rig, you know, in the North Sea, in bad weather, zero-zero conditions. and you can imagine, of course, there's a lot of military utility to a capability like that, you know, you could have an aircraft that you want to send out for a crewed mission, but then at night, if you want to use it to deliver supplies in an unmanned mode, that could be done as well, and so there's clear advantages there. but think about on the commercial side, you know, if you're in an aircraft and you're gonna fly out to this oil rig, if you get out there and you can't land, then you've got to bring all those people back, reschedule another flight, pay the overtime for the crew that you just brought back because they didn't get where they're going, and pay for the folks that are out there on the oil rig, this is real economics, you know, these are dollars-and-cents kinds of advantages we're bringing in the commercial world as well. so this is a difficult question from the AI space that I would love it if you're able to comment on. so a lot of this autonomy and AI you've mentioned just now has this
empowering effect, one is the last resort, it keeps you safe, the other is, with the teaming and in general assistive AI, and I think there's always a race, so the world is complex, it's full of bad actors, so there's often a race to make sure that we keep this country safe, right, but with AI there's a concern of a slightly different race, there's a lot of people in the AI space that are concerned about the AI arms race, that as opposed to the United States, you know, having the best technology and therefore keeping us safe, we lose the ability to keep control of it, so this AI arms race getting away from all of us humans. do you share this worry, do you share this concern, when we're talking about military applications, of giving too much control and decision-making capability to the AI? well, I don't see it happening today, and in fact this is something, from a policy perspective, you know, it's obviously a very dynamic space, but the Department of Defense has put quite a bit of thought into that, and maybe before talking about the policy I'll just talk about some of the why, and you alluded to it being sort of a complicated and a little bit scary world out there, but there are some big things happening today. you hear a lot of talk now about a return to great powers competition, particularly around China and Russia with the US, but there are some other big players out there as well, and what we've seen is the deployment of some very, I'd say, concerning new weapon systems, you know, particularly with Russia, in breaching some of the IRBM, intermediate-range ballistic missile, treaties, that's been in the news a lot, the building of artificial islands in the South China Sea by the Chinese, and then arming those islands, the annexation of Crimea by Russia and the invasion of Ukraine, so there are some pretty scary things, and then you add on top of that, the North Korean threat has
certainly not gone away, there's a lot going on in the Middle East, with Iran in particular, and we see this global terrorism threat has not abated, right, so there are a lot of reasons to look for technology to assist with those problems, whether it's AI or other technologies like hypersonics, which we discussed. so now let me give just a couple of hypotheticals. people react sort of in the second timeframe, right, you know, from a photon hitting your eye to movement is, you know, on the order of a few tenths of a second kinds of processing times, roughly speaking, while computers are operating on the nanosecond timescale, right, so just to bring home what that means, a nanosecond to a second is like a second to 32 years, so seconds on the battlefield, in that sense, are literally lifetimes, and so if you can bring in autonomous or AI-enabled capability that will enable the human to shrink, maybe you've heard the term the OODA loop, so this whole idea that a typical battlefield decision is characterized by observe, so information comes in, orient, what does that mean in the context, decide, what do I do about it, and then act, take that action, if you can use these capabilities to compress that OODA loop, to stay inside what your adversary is doing, that's an incredibly powerful force on the battlefield. that's a really nice way to put it, the role of AI and computing in general has a lot to benefit from just decreasing from 32 years to one second, as opposed to, on the scale of seconds and minutes and hours, making decisions that humans are better at thinking through. and it actually goes the other way too, so that's on the short timescale, so humans kind of work in the, you know, one second, two seconds, to eight hours, after eight hours you get tired, you know, you've got to go to the bathroom, whatever the case might be, so there's this whole range of other things, think about, you know, surveillance and guarding facilities, think about moving materiel, logistics, sustainment,
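as an aside, the "32 years" figure in the timescale analogy above is just unit arithmetic: a second contains a billion nanoseconds, and a billion seconds works out to roughly 32 years. a quick sketch of that check (illustrative only, not from the conversation):

```python
# one second = 1e9 nanoseconds, so the analogy asks: how long is 1e9 seconds?
NANOSECONDS_PER_SECOND = 1_000_000_000
seconds_per_year = 365.25 * 24 * 3600  # Julian year in seconds

years = NANOSECONDS_PER_SECOND / seconds_per_year
print(f"1e9 seconds is about {years:.1f} years")  # about 31.7, i.e. roughly 32 years
```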
a lot of these what they call dull, dirty, and dangerous things, where you need to have sustained activity, but it's sort of beyond the length of time that a human can practically do well, so there's this range of things that are critical in military and defense applications that AI and autonomy are particularly well suited to. now, the interesting question that you brought up is, okay, how do you make sure that stays within human control, and so that was the context, now for the policy. there is a DoD directive called 3000.09, because that's the way we name stuff in this world, and I'd say it's well worth reading, it's only a couple pages long, but it makes some key points, and it's really around, you know, making sure there's human agency and control over the use of semi-autonomous and autonomous weapon systems, making sure that these systems are tested, verified, and evaluated in realistic real-world type scenarios, making sure that the people are actually trained on how to use them, making sure the systems have human-machine interfaces that can show what state they're in and what kinds of decisions they're making, making sure that you establish doctrine and tactics and techniques and procedures for the use of these kinds of systems. and by the way, none of this is easy, but I'm just trying to lay out kind of the picture of how the US has said this is the way we're going to treat AI and autonomous systems, that it's not a free-for-all, and like there are rules of war and rules of engagement with other kinds of systems, think chemical weapons, biological weapons, we need to think about the same sorts of implications. and this is something that's really important for Lockheed Martin, obviously we are 100% complying with our customers and the policies and regulations, but, I mean, AI is an incredible enabler, say, within the walls of Lockheed Martin, in terms of improving production efficiency, helping engineers do generative design,
improving logistics, driving down energy costs, I mean, there are so many applications, but we're also very interested in the ethical application of AI within Lockheed Martin, so we need to make sure that things like privacy are taken care of, that we do everything we can to drive out bias in AI-enabled kinds of systems, that we make sure that humans are involved in decisions, that we're not just delegating accountability to algorithms, and so for us it all comes back, I talked about culture before, and it comes back to sort of the Lockheed Martin culture and our core values, and it's pretty simple for us: do what's right, respect others, perform with excellence, and now how do we tie that back to the ethical principles that will govern how AI is used within Lockheed Martin. and we actually have, so you might not know this, but there are actually awards for ethics programs, Lockheed Martin's had a recognized ethics program for many years, and this is one of the things that our ethics team is working with our engineering team on. one of the miracles to me, perhaps as a layman, again, I was born in the Soviet Union, so I have echoes, at least in my family history, of World War II and the Cold War, can you give a sense of why human civilization has not destroyed itself through nuclear war, so nuclear deterrence, and in thinking about the future, does this technology have a role to play here, and what does the long-term future of nuclear deterrence look like? yeah, you know, this is one of those hard, hard questions, and I should note that Lockheed Martin is, you know, both proud and privileged to play a part in multiple legs of our nuclear and strategic deterrent systems, like the Trident submarine-launched ballistic missiles. you talk about, is there still a possibility the human race could destroy itself, I'd say that possibility is real, but interestingly, in some sense, I think the strategic deterrents have prevented the kinds of, you
know, incredibly destructive world wars that we saw in the first half of the 20th century. now, things have gotten more complicated since that time, since the Cold War, it's more of a multipolar, great-powers world today. just to give you an example, back then, in the Cold War timeframe, just a handful of nations had ballistic missile capability, by last count, and this is a few years old, there are over 70 nations today that have it, and similar kinds of numbers in terms of space-based capabilities, so the world has gotten more complex and more challenging, and the threats, I think, have proliferated in ways that we didn't expect. the nation today is in the middle of a recapitalization of our strategic deterrent, and I look at that as one of the most important things that our nation can do. what is involved in deterrence, is it being ready to attack, or is it the defensive systems that catch attacks? a little bit of both, and so it's a complicated, game-theoretical kind of problem, but ultimately we are trying to prevent the use of any of these weapons, and the theory behind prevention is that even if an adversary uses a weapon against you, you have the capability to essentially strike back and do harm to them that's unacceptable, and so that will deter them from making use of these weapon systems. the deterrence calculus has changed, of course, with more nations now having these kinds of weapons, but I think, from my perspective, it's very important to maintain a strategic deterrent, you have to have systems that you know will work when they're required to work, they have to be adaptable to a variety of different scenarios in today's world, and so that's what this recapitalization is about, taking systems that were built in previous decades and making sure that they are appropriate not just for today but for the decades to come. the other thing I'd really like to note is, strategic deterrence
has a very different character today you know we used to think of weapons of mass destruction in terms of nuclear chemical biological and today we have a cyber threat we've seen examples of the use of cyber weaponry and if you think about the possibilities of using cyber capabilities of an adversary attacking the US to take out things like critical infrastructure electrical grids water systems those are scenarios that are strategic in nature to the survival of a nation as well so that is the kind of world that we live in today and you know part of my hope on this is one that we can also develop technical or technological systems perhaps enabled by AI and autonomy that will allow us to contain and to fight back against these kinds of new threats that were not conceived when we first developed our strategic deterrence yeah I know that Lockheed is involved in cyber I saw that you mentioned it nuclear almost seems easier than cyber because there are so many attack vectors so many ways that cyber can evolve it's such an uncertain future but talking about engineering with a mission I mean in this case you're engineering systems that basically save the world well like I said we are privileged to work on some very challenging problems for very critical customers here in the US and with our allies abroad as well Lockheed builds both military and non-military systems and perhaps the future of Lockheed may be more in non-military applications if you talk about space and beyond I see that as a preface to a difficult question so President Eisenhower in 1961 in his farewell address talked about the military-industrial complex and that it shouldn't grow beyond what is needed so what are your thoughts on those words on the military-industrial complex on the concern of growth of their developments beyond what may be needed well what may be needed is the critical phrase of course and I think it is worth pointing
out as you noted that Lockheed Martin we are in a number of commercial businesses from energy to space to commercial aircraft and so I wouldn't neglect the importance of those parts of our business as well I think the world is dynamic and you know there was a time it doesn't seem that long ago to me when I was a graduate student here at MIT that we were talking about the peace dividend at the end of the Cold War if you look at expenditure on military systems as a fraction of GDP we are far below peak levels of the past and to me at least it looks like a time where you're seeing global threats changing in a way that would warrant you know relevant investments in defensive capabilities the other thing I'd note for military and defensive systems it's not quite a free market right we don't sell to you know people on the street and that warrants a very close partnership between I'd say the customers and the people that design build and maintain these systems because of the very unique nature the very difficult requirements the very great importance on you know safety and on operating the way they're intended every time and so that does create and frankly it's one of Lockheed Martin's great strengths that we have this expertise built up over many years in partnership with our customers to be able to design and build these systems that meet these very unique mission needs yeah because building those systems is very costly there's very little room for mistake I mean Ben Rich's book and so on tells the story just reading it if you're an engineer it reads like a thriller okay let's go back to space I guess oh I'm always happy to go back to space so a few quick maybe out-there fun questions maybe a little provocative what are your thoughts on the efforts of the new folks SpaceX and you know Elon Musk what are your thoughts about what Elon is doing do you see this as competition
do you enjoy the competition what are your thoughts yeah first of all certainly you know I'd say SpaceX and some of those other ventures are definitely a competitive force in the space industry and do we like competition yeah we do and we think we're very strong competitors I think competition is what the US is founded on in a lot of ways always coming up with a better way and I think it's really important to continue you know to have fresh eyes coming in new innovation I do think it's important to have level playing fields and so you want to make sure that you're not giving different requirements to different players but you know I tell people you know I spend a lot of time at places like MIT I'm going to be at the MIT Beaver Works Summer Institute over the weekend here and I tell people this is the most exciting time to be in the space business in my entire life and it is this explosion of new capabilities that have been driven by things like the you know the massive increase in computing power the massive increase in comms capabilities advanced and additive manufacturing that are really bringing down the barriers to entry in this field and it's driving just incredible innovation and it's happening at startups but it's also happening at Lockheed Martin you may not realize this but Lockheed Martin working with Stanford actually built the first CubeSat that was launched here out of the US
that was called QuakeSat and we did that with Stellar Solutions right around just after 2000 I guess and so we've been in that you know from the very beginning and you know I talked about some of these like you know Maya and Orion but you know we're in the middle of what we call smart sats and software-defined satellites that can essentially restructure and remap their purpose their mission on orbit to give you almost unlimited flexibility for these satellites over their lifetimes so those are just a couple of examples but yeah this is a great time to be in space absolutely so the Wright brothers flew for the first time 116 years ago and now we have supersonic stealth planes and all the technology we've talked about what innovations obviously you can't predict the future but where do you see Lockheed in the next hundred years if you take that same leap how will the world of technology and engineering change I know it's an impossible question but nobody could have predicted that we could even fly a hundred and twenty years ago so what do you think is the edge of possibility that we're going to be exploring next I don't know that there is an edge you know we've been around for almost that entire time right the Lockheed brothers and Glenn L Martin starting their companies in the you know basement of a church and an old you know service station we're very different companies today than we were back then right and that's because we've continuously reinvented ourselves over all of those decades I think it's fair to say and I know this for sure the world of the future is going to move faster it's going to be more connected it's going to be more autonomous and it's going to be more complex than it is today and so this is the world you know as the CTO of Lockheed Martin that I think about what are the technologies that we have to invest in whether it's things like AI and autonomy you know you can think about quantum computing which is an area that we've invested in to try to
stay ahead of these technological changes and frankly some of the threats that are out there I mean I believe that we're going to be out there in the solar system that we're going to be defending and defending well against probably you know military threats that nobody has even thought about today we're going to use these capabilities to have far greater knowledge of our own planet the depths of the oceans you know all the way to the upper reaches of the atmosphere and everything out to the Sun and to the edge of the solar system so that's what I look forward to and I'm excited I mean just looking ahead in the next decade or so to the steps that I see ahead of us in that time I don't think there's a better place to end Keoki thank you so much Lex it's been a real pleasure and sorry it took so long to get up here glad we're able to make it happen
Paola Arlotta: Brain Development from Stem Cell to Organoid | Lex Fridman Podcast #32
the following is a conversation with Paola Arlotta she's a professor of stem cell and regenerative biology at Harvard University and is interested in understanding the molecular laws that govern the birth differentiation and assembly of the human brain's cerebral cortex she explores the complexity of the brain by studying and engineering elements of how the brain develops this was a fascinating conversation to me it's part of the artificial intelligence podcast if you enjoy it subscribe on YouTube give it five stars on iTunes support it on patreon or simply connect with me on Twitter at lexfridman spelled F-R-I-D-M-A-N and I'd like to give a special thank you to Amy Jeffress for her support of the podcast on patreon she's an artist you should definitely check out her Instagram and love truth good three beautiful words your support means a lot and inspires me to keep the series going and now here's my conversation with Paola Arlotta you studied the development of the human brain for many years so let me ask you an out-of-the-box question first how likely is it that there's intelligent life out there in the universe outside of Earth with something like the human brain or to put it another way how unlikely is the human brain how difficult is it to build a thing through the evolutionary process well it has happened here on this planet once yes once so that simply tells you that it could of course happen again in other places it's only a matter of probability what is the probability that you would get a brain like the ones that we have like the human brain so how difficult is it to make the human brain it's pretty difficult but most importantly I guess we know very little about how this process really happens and there are reasons for that actually multiple reasons most of what we know about how the mammalian brain so the brain of mammals develops comes from studying in labs other brains not our own brain the brain of mice for example but if I showed you a
picture of a mouse brain and then you put it next to a picture of a human brain they don't look at all like each other so they're very different and therefore there is a limit to what you can learn about how the human brain is made by studying the mouse brain now there is huge value in studying the mouse brain there are many things that we have learned but it's not the same thing so having studied the human brain through the mouse and through other methodologies that we'll talk about do you have a sense I mean you're one of the experts in the world how much do you feel you know about the brain and how often do you find yourself in awe of this mysterious thing yeah you pretty much find yourself in awe you know all the time it's an amazing process it's a process by which by means that we don't fully understand at the very beginning of embryogenesis a structure called the neural tube literally self-assembles and it happens in an embryo and it can happen also from stem cells in a dish okay and then from there these stem cells that are present within the neural tube give rise to all of the thousands and thousands of different cell types present in the brain through time right and the very intriguing observation is that the time that it takes for the human brain to be made is human time meaning that for me and you it took almost nine months of gestation to build the brain and then another twenty years of learning postnatally to get the brain we have today that allows us to have this conversation a mouse takes twenty days or so for an embryo to be born and so the brain is built in a much shorter period of time and the beauty of it is that if you take mouse stem cells and you put them in culture the brain organoid that you get from a mouse is formed faster than if you took human stem cells and put them in the dish and let them make a human brain organoid so the very developmental
process is controlled by the speed of the species which means it has its own purpose it's not accidental or there is something in that temporal process exactly that is very important for us to get the brain we have and we can speculate for why that is you know it takes us a long time as human beings after we're born to learn all the things that we have to learn to have the adult brain it's actually 20 years think about it from when a baby is born to when a teenager goes through puberty to adulthood it's a long time do you think you can maybe talk through the first few months and then on to the first 20 years and then the rest of our lives what does the development of the human brain look like what are the different stages at the beginning you have to build a brain right and the brain is made of cells what's the very beginning which beginning we're talking the embryo as the embryo is developing in the womb in addition to making all of the other tissues of the embryo the muscle the heart the blood the embryo is also building the brain and it builds it from a very simple structure called the neural tube which is basically nothing but a tube of cells that spans the length of the embryo from the head all the way to the tail let's say of the embryo and then in humans over many months of gestation from that neural tube which contains stem cell-like cells of the brain you will make many many other building blocks of the brain so all of the other cell types there are many many different types of cells in the brain that will form specific structures of the brain so you can think about embryonic development of the brain as just the time in which you are making the building blocks the cells are the stem cells relatively homogeneous like uniform or are they all different good question it's exactly how it works you start with a more homogeneous perhaps more multipotent type of stem cell which most importantly means that it has the potential to
make many many different types of other cells and then with time these progenitors become more heterogeneous which means more diverse there are going to be many different types of these stem cells and also they will give rise to progeny to other cells that are not themselves that are specific cells of the brain very different from the mother cells themselves and now you think about this process of making cells from the stem cells over many many months of development for humans and what you're doing is building the cells that physically make the brain and then you arrange them in specific structures that are present in the final brain so you can think about the embryonic development of the brain as the time where you're building the bricks and putting the bricks together to form buildings the structures and regions of the brain and where you make the connections between these many different types of cells especially nerve cells neurons right that transmit action potentials and electricity I've heard you also say somewhere I think correct me if I'm wrong that the order of the way this builds matters oh yes if you are an engineer and you think about development you can think of it as well I could also take all the cells and bring them all together into a brain in the end but development is much more than that so the cells are made in a very specific order that subserves the final product that you need to get and so for example all of the nerve cells the neurons are made first and all of the supportive cells of the neurons like the glia are made later and there is a reason for that because they have to assemble together in specific ways but you also may say well why don't we just put them all together in the end it's because as they develop next to each other they influence each other's development so it's a different thing for a glial cell to be made alone in a dish than for a glial cell to be made in a developing embryo with all these other cells surrounding it they
produce all these other signals first of all that's mind-blowing from my perspective in artificial intelligence you often think of how incredible the final product is the final product being the brain but you're making me realize that the beautiful thing is the actual development the development process do we know the code that drives that development do we have any sense first of all thank you for saying that it's really the formation of the brain it's really its development this incredibly choreographed dance that happens the same way every time each one of us builds the brain right and that builds an organ that allows us to do what we're doing today right yeah that is mind-blowing and this is why developmental neurobiologists never get tired of it now you're asking about the code what drives this how is this done well it's you know millions of years of evolution really fine-tuning gene expression programs that allow certain cells to be made at a certain time and to become a certain you know cell type but there are also mechanical forces of pressure and bending this embryo will not stay a tube for very long at some point the tube in the front of the embryo will expand to make the primordium of the brain right now the forces that these cells feel and this is another beautiful thing the very force that they feel which is different from what it was a week before will tell the cell oh you're being squished in a certain way begin to express these new genes because now you are at the corner or you are you know in a stretch of cells or whatever it is and so that mechanical physical force shapes the fate of the cell as well so not only chemical it's also mechanical so from my perspective biology is this incredibly complex gooey mess and you're saying mechanical forces matter how different is it from a computer or any kind of mechanical machine that
we humans build and the biological systems because you've worked a lot with biological systems are they as much of a mess as they seem from the perspective of a mechanical engineer yeah they are much more prone to taking alternative routes right so if we go back to printing a brain versus developing a brain of course if you print a brain given that you start with the same building blocks the same cells you could potentially print it the same way every time but that final brain may not work the same way as a brain built during development does because the very same building blocks that you're using developed in a completely different environment right it was not the environment of the brain therefore they're going to be different just by definition so if you instead use development to build say a brain organoid which maybe we'll be talking about in the future those things are fascinating yes so if you use processes of development then when you watch it you can see that sometimes things can go wrong in some organoids and by wrong I mean different from one organoid to the next well if you think about the embryo it always goes right development for as complex as it is every time a baby is born it has you know with very few exceptions a brain that is like the next baby's but it's not the same if you develop it in a dish first of all we don't even develop a brain we develop something much simpler in the dish but there are more options for building things differently which really tells you that evolution has played a really tight game here for how in the end the brain is built in vivo so just a quick maybe dumb question but it seems like the building process is not a dictatorship it seems like there's not a centralized high-level mechanism that says ok this cell built itself the wrong way I'm going to kill it it seems like there's a really strong distributed
mechanism is that your sense well there are a lot of possibilities right and if you think about for example different species building their brains each brain is a little bit different so the brain of a lizard is very different from that of a chicken from that of you know one of us and so on and so forth and still each is a brain but it was built differently starting from stem cells that pretty much had the same potential but in the end evolution builds different brains in different species because that serves in a way the purpose of the species and the well-being of that organism and so there are many possibilities but then there is a way and you were talking about a code nobody knows what the entire code of development is of course we don't we know bits and pieces of very specific aspects of development of the brain what genes are involved to make certain cell types how those cells interact to make the next level of structure that we might know but the entirety of it and how it's so well controlled it's really mind-blowing so in the first two months in the embryo or whatever the first few months the building blocks are constructed the different regions of the brain and I guess the nervous system well this continues way longer than just the first few months over the very first few months you build a lot of the cells but then there is a continuous building of new cell types all the way through birth and then even postnatally you know I don't know if you ever heard of myelin myelin is this sort of insulation that is built around the cables of some of the neurons so that the electricity can go really fast along the axons I guess the cables are called axons exactly and so as human beings we myelinate ourselves postnatally a kid you know a six-year-old kid has barely started making the mature oligodendrocytes which are the cells that eventually will wrap the axons in myelin and this will
continue believe it or not until we are about you know 25 to 30 years old so there is a continuous process of maturation and tweaking and additions and also in response to what we do I remember taking ap biology in high school and in the textbook it said I'm going by memory here that scientists disagree on the purpose of myelin in the brain is that totally wrong I guess it speeds up the book said I might be wrong here but I guess it speeds up the electricity traveling down the axon or something that's the most sort of canonical view and definitely that's the case so you have to imagine an axon and you can think about it as a cable of some type with electricity going through and what myelin does by insulating the outside I should say there are tracks of myelin and pieces of axons that are naked without myelin and so by having the insulation the electricity instead of going straight through the cable it would jump over a piece of myelin right to the next naked little piece and jump again and therefore you know that's the idea that you go faster and it was always thought that in order to build a big brain a big nervous system in order to have a nervous system that can do very complex types of things you need a lot of myelin because you want to go fast with this information from point A to point B well a few years ago maybe five years ago or so we discovered that some of the most evolved which means the newest types of neurons that we have as non-human primates and as human beings at the top of our cerebral cortex which should be the neurons that do some of the most complex things that we do well those have axons that have very little myelin and they have a very interesting way in which they put the myelin on their axons you know a little piece here then a long track with no myelin another chunk there and some don't have myelin at all so now you have to explain where we're going with evolution and if you think about it perhaps as an electrical
engineer when I looked at it I initially thought and I'm a developmental neurobiologist I thought maybe this is what we see now but if we give evolution another few million years we'd see a lot of myelin on these neurons too but I actually think now that that's instead the future brain less myelin may allow for more flexibility in what you do with your axons and therefore more complicated and unpredictable types of functions which is also a bit mind-blowing so it seems like it's controlling the timing of the signal so in the timing you can encode a lot of information yes and so the brain I mean the chemistry of that little piece of axon perhaps is a dynamic process where the myelin can move now you see how many layers of variability you can add and that's actually really good if you're trying to come up with a new function or a new capability or something unpredictable in a way so we're gonna jump around a little bit but the old question of how much is nature and how much is nurture in terms of this incredible thing after the development is over we seem to be kind of somewhat smart intelligent cognition consciousness all these incredible abilities reasoning and so on emerge in your sense how much is in the hardware in the nature and how much is in the nurture learned through our parents through interacting with the environment and so on it's really both right if you think about it so we are born with a brain as babies that has most of its cells and most of its structures and it will take a few years to you know grow to add more to be better but really then we have this 20 years of interacting with the environment around us and so that brain that was so you know perfectly built or imperfectly built due to our genetic cues will then be used to incorporate the environment in its further maturation and development and so your experiences do shape your brain I mean we know that you and I may have had a different childhood or a
different upbringing we have been going to different schools we have been learning different things and our brains are a little bit different because of that we behave differently because of that and so especially postnatally experience is extremely important we are born with a plastic brain what that means is it's a brain that is able to change in response to stimuli which can be sensory so perhaps some of the most illuminating studies that were done were studies in which the sensory organs were not working right if you are born with eyes that don't work then the piece of the brain that normally would process vision the visual cortex develops postnatally differently and it might be used to do something different right so the plasticity of the brain I guess is the magic the hardware and its flexibility in all forms is what enables the learning yes postnatally can you talk about organoids what are they yes and how can you use them to help us understand the brain and the development of the brain this is very very important so the first thing I'd like to say please keep this in the video the first thing I'd like to say is that an organoid a brain organoid is not the same as a brain okay it's a fundamental distinction it's a system a cellular system that one can develop in the culture dish starting from stem cells that will mimic some aspects of the development of the brain but not all of it they are very small maximum they become about you know four to five millimeters in diameter they are much simpler than our brain of course and yet they are the only system where we can literally watch a process of human brain development unfold and by watch I mean study it remember when I told you that we can't understand everything about the development of our own brain by studying a mouse well we can't study the actual process of development of the human brain because it all happens in utero so we will never have access to that process ever and
therefore this is our next best thing like a bunch of stem cells that can be coaxed into starting a process of neural tube formation remember that tube that is made by the embryo early on and from there a lot of the cell types that are present within the brain and you can simply watch it and study it but you can also think about diseases where development of the brain does not proceed properly think about neurodevelopmental diseases there are many many different types think about autism spectrum disorders there are also many different types of autism so there you could take a stem cell which really means either a sample of blood or a sample of skin from the patient make a stem cell and then with that stem cell watch a process of formation of a brain organoid of that person the person with that genetics with that genetic code in it and you can ask what is this genetic code doing to some aspects of development of the brain for the first time you may come to conclusions like what cells are involved in autism right so many questions around this so if you take this human stem cell from that particular person with that genetic code and you try to build an organoid how often will it look similar what's the reproducibility or how much variability is there something like that yeah so there is much more variability in building organoids than there is in building brains it's really true that the majority of us when we are born as babies our brains look a lot like each other this is the magic that the embryo does where it builds a brain in the context of a body and there is very little variability there there is disease of course but in general little variability when you build an organoid you know we don't have the full code for how this is done and so in part the organoid somewhat builds itself because there are some structures of the brain that the cells know how to make and another part comes from the investigator the
scientist adding to the media factors that we know in the mouse for example would foster a certain step of development but it's very limited and so as a result the kind of product you get in the end is much more reductionist much more simple than what you get in vivo it mimics early events of development as of today and it doesn't build the very complex types of anatomy and structure not as of today that happen in vivo and also the variability that you see from one organoid to the next tends to be higher than when you compare one embryo to the next so okay then the next question is how hard and maybe the flip side of that how expensive is it to go from one stem cell to an organoid how many can you build this sounds very complicated it's work definitely and it's money definitely but you can really grow a very high number of these organoids you know I told you the maximum they become is about five millimeters in diameter so this is about the size of a tiny tiny you know raisin or perhaps the seed of an apple and so you can grow 50 to 100 of those inside one big bioreactor which is one of these flasks where the media provides nutrients for the organoids so the problem is not to grow more or less of them it's really to figure out how to grow the organoids in a way that they are more and more reproducible so they can be used to study a biological process because if you have too much variability then you never know if what you see is just an exception or really the rule so what does an organoid look like are there different neurons already emerging well first can you tell me what kinds of neurons are there are they sort of all the same or not all the same how much do we understand and how much of that variance if any can exist in organoids yes so you could grow I told you that the brain has different parts so the cerebral cortex is at the top part of the brain but there is another
region called the striatum that is below the cortex and so on and so forth all of these regions have different types of cells in the actual brain ok and so scientists have been able to grow organoids that may mimic some aspects of development of these different regions of the brain and so we are very interested in the cerebral cortex that's the coolest part we couldn't be talking if we didn't have a cerebral cortex it's also I like to think the part of the brain that really truly makes us human the most evolved in recent evolution and so in the attempt to make the cerebral cortex and by figuring out a way to have these organoids continue to grow and develop for extended periods of time much like it happens in the real embryo months and months in culture there you can see that many different types of neurons of the cortex appear and at some point also the astrocytes the glial cells of the cerebral cortex also appear what are these the astrocytes are not neurons so they're not nerve cells but they play very important roles one important role is to support the neurons but of course they have much more active types of roles they are very important for example to make the synapses which are the points of contact and communication between two neurons all that chemistry fun that happens at the synapses happens because of these cells are they the medium in which it happens because of the interactions it happens because you are making the cells and they have certain properties including the ability to make you know neurotransmitters which are the chemicals that are secreted at the synapses including the ability of making these axons grow with their growth cones and so on and so forth and then you have other cells around there that release chemicals or touch the neurons or interact with them in different ways to really foster this perfect process in this case of synaptogenesis and this does happen within organoids so the mechanical and the chemical stuff
happens yes the connectivity between neurons this in a way is not surprising because scientists have been culturing neurons forever and when you take a neuron even a very young one and you culture it it eventually finds another cell or another neuron to talk to it will form a synapse are we talking about a mouse neuron or a human neuron it doesn't matter both so you can culture you know like a single neuron and give it a little friend and it starts interacting yes so neurons are able to it's more simple than what it may sound to you neurons have molecular properties and structural properties that allow them to really communicate with other cells and so if you put not one neuron but if you put several neurons together chances are that they will form synapses with each other okay great so an organoid is not a brain but uh it's able to especially what you're talking about mimic some properties of the cerebral cortex for example so what can you understand about the brain by studying an organoid of the cerebral cortex I can literally study how all this incredible diversity of cell types all these many many different classes of cells how are they made what do they look like what do they need to be made properly and what goes wrong if now the genetics of that stem cell that I used to make the organoid came from a patient with a neurodevelopmental disease can I actually watch for the very first time what may have gone wrong years before in this kid when his own brain was being made think about that loop in a way it's a little tiny rudimentary window into the past into the time when the brain of a kid that had this you know developmental disease was being made and I think that's unbelievably powerful because today we have no idea of what cell types are affected we barely know what brain regions are affected in these diseases now we have an experimental system that we can study in the lab and we can ask what are the cells affected when during development things went
wrong what are the molecules among the many many different molecules that control brain development which ones are the ones that are really messed up here and we want perhaps to fix and what is really the final product is it a less strong kind of circuit in the brain is it a brain that lacks a cell type is it what is it because then we can think about treatment and care for these patients that is informed rather than just based on current diagnostics so how hard is it to detect through the developmental process it's a super exciting tool just to see how different conditions develop how hard is it to detect wait a minute this is abnormal development yeah how hard is it how much signal is there how much of it is a mess because things can go wrong at multiple levels right you could have a cell that is born and built but then doesn't work properly or a cell is not even born or a cell interacts with other cells differently and so on and so forth so today we have technology that we did not have even five years ago that allows us to look for example at the molecular picture of a cell of a single cell in a sea of cells with high precision and so that molecular information where you compare many many single cells for the genes they produce between a control individual and an individual with a neurodevelopmental disease that may tell you what is different molecularly or you could see that some cells are not even made for example or that the process of maturation of the cells may be wrong there are many different levels here and we can study these cells at the molecular level but also we can use the organoids to ask questions about the properties of the neurons the functional properties how they communicate with each other how they respond to a stimulus and so on and so forth and we may get at abnormalities there so how early is this work maybe in the history of science so I mean like if you and I time travel a
thousand years into the future organoids seem to be maybe I'm romanticizing the notion but you're building not a brain but something that has properties of a brain so it feels like you might be getting close in the building process sort of to build is to understand so how far are we in this understanding process of development a thousand years from now it's a long time from now so if this planet is still gonna be here a thousand years from now you know like they write a book obviously there'll be a chapter about you in that science fiction book today today but I mean I guess we really understood very little about the brain a century ago I was a big fan in high school of reading Freud and so on and still am of psychiatry I would say we still understand very little about the functional aspect of it yeah but how in the history of understanding the biology of the brain the development how far along are we it's a very good question and so this is just of course my opinion I think that we did not have technology even ten years ago or 20 certainly not 20 years ago to even think about experimentally investigating the development of the human brain so we've done a lot of work in science to study the brain of many other organisms now we have some technologies which I'll spell out that allow us to actually look at the real thing and look at the brain at the human brain so what are these technologies there has been huge progress in stem cell biology the moment someone figured out how to turn a skin cell into an embryonic stem cell basically and that embryonic stem cell could begin a process of development again to for example make a brain that was a huge you know advance and in fact there was a Nobel Prize for that that started the field really of using stem cells to build organs now we can build on all the knowledge of development that we built over the many many many years to say how do we make the stem cells now make more and more complex
aspects of development of the human brain so this field is younger the field of brain organoids but it is moving faster and it's moving fast in a very serious way that is rooted in labs with the right ethical framework and really building on you know solid science for what is reality and what is not and but it will go faster and it will be more and more powerful we also have technology that allows us to basically study the properties of single cells across many many millions of single cells which we didn't have perhaps five years ago so now with that even an organoid that has millions of cells can be profiled in a way looked at at very very high resolution at the single cell level to really understand what is going on and you could do it in multiple stages of development and you can build your hypothesis and so on and so forth so it's not going to be a thousand years it's gonna be a shorter amount of time and I see this as sort of an exponential growth of this field enabled by these technologies that we didn't have before and so we're gonna see something transformative that we didn't see at all in the prior thousand years so I apologize for the crazy sci-fi questions but the developmental process is fascinating to watch and study but how far are we away from and maybe how difficult is it to build not just an organoid but a human brain okay from a stem cell yeah first of all that's not the goal for the majority of the serious scientists that work on this because you don't have to build the whole human brain to make this model useful for understanding how the brain develops or understanding disease you don't have to build the whole thing so let me just comment on that it's fascinating it shows to me the difference between you and I is you're actually trying to understand the beauty of the human brain and to use it to really help thousands or millions of people with disease and so on right from an artificial intelligence perspective we're trying to build systems that we can put in
robots and try to create systems that have echoes of the intelligence about reasoning about the world navigating the world it's different objectives I think we operate in sci-fi a little bit but so on that point of building a brain even though that is not the focus or interest perhaps of the community how difficult is it is it truly science fiction at this point I think the field will progress like I said and the systems will be more and more complex in a way right but there are properties that emerge from the human brain they have to do with the mind they may have to do with consciousness may have to do with intelligence or whatever that we really don't understand even now that can emerge from an actual real brain and that therefore we cannot measure or study in an organoid so I think that this field many many years from now may lead to the building of better neural circuits you know that really are built out of understanding of how this process really works and it's hard to predict how complex this really will be it makes me laugh really I don't think we're really that far from building the human brain but you're gonna be building something that is you know always a bad version of it but that may have really powerful properties and might be able to you know respond to stimuli or be used in certain contexts and this is why I really think that there is no other way to do this science but within the right ethical framework because where you're going with this is also you know we can talk about science fiction and write that book and we could today but this work happens in a specific ethical framework that we don't decide just as scientists but also as a society so the ethical framework here is a fascinating one a complicated one do you have a sense a grasp of how we think ethically about building organoids from human stem cells to understand the brain it seems like a tool for helping potentially
millions of people cure diseases or at least start the cure by understanding it but is there more are there gray areas that we have to think about ethically absolutely we must think about that every discussion about the ethics of this needs to be based on actual data from the models that we have today and from the ones that we will have tomorrow so it's a continuous conversation it's not something that you decide now today there is no issue really these are very simple models that clearly can help you in many ways without much to think about but tomorrow we need to have another conversation and so on and so forth and so the way we do this is to actually really bring together constantly a group of people that are not only scientists but also bioethicists lawyers philosophers psychiatrists psychologists and so on and so forth to decide as a society really what we should and what we should not do so that's the way to think about the ethics now I also think though that as a scientist I have a moral responsibility so if you think about how transformative it could be for understanding and curing a neuropsychiatric disease to be able to actually watch and study and treat with drugs the very brain of the patient that you are trying to study how transformative at this moment in time this could be we couldn't do it five years ago we could do it now right taking a particular patient and making an organoid for him simple and you know different from the human brain but still his process of brain development with his or her genetics and we could understand perhaps what is going wrong perhaps we could use it as a platform as a cellular platform to screen for drugs to fix a process and so on and so forth right so we could do it now we couldn't do it five years ago should we not do it what is the downside of doing it I don't see a downside but if we invited a lot of people yes I'm sure there would be somebody
who would argue against it what would be the devil's advocate argument yeah so it's exactly perhaps what you alluded to with your question that you are enabling you know some process of formation of the brain that could be misused at some point or there could be showing properties that ethically we don't want to see in a tissue so today this is not an issue and so you just gain dramatically from the science because the system is so simple and so different in a way from the actual brain but because it is the brain we have an obligation to really consider all of this right and again it's a balanced conversation where we should put disease and betterment of humanity also on that plate what do you think at least historically there was some politicization of embryonic stem cell research do you still see that out there is that still a force that we have to think about especially in this larger discourse that we're having about the role of science in at least American society yeah this is a very good question it's very very important I see a very central role for scientists to inform decisions about what we should or should not do in society and this is because the scientists have the first-hand look and understanding of really the work that they are doing and again this varies depending on what we're talking about here so now we're talking about brain organoids I think the scientists need to be part of that conversation about what will be allowed or not allowed in the future to do with the system and I think that is very very important because they bring the reality of data to the conversation and so they should have a voice so data should have a voice it needs to have a voice because not only data we should also be good at communicating the data with non-scientists so oftentimes there is a lot of discussion and you know excitement and fights about
certain topics just because of the way they are described I'll give you an example if I call the same cellular system we just talked about a brain organoid or if I call it a human mini brain your reaction is going to be very different yes to this and so the way the systems are described I mean we and journalists alike need to be a bit careful so that this debate is a real debate and informed by real data that's all I'm asking and yeah the language matters here so I work on autonomous vehicles and there the use of language could drastically change the interpretation and the way people feel about what is the right way to proceed forward you are as I've seen from a presentation you're a parent as you showed a couple pictures of your son is it just the one a son and a daughter so what have you learned about the human brain from raising two of them what have I learned I've learned that children really have these amazing plastic minds right that we have a responsibility to you know foster their growth in good healthy ways that keep them curious that keep them adventurous that doesn't raise them in fear of things but also respecting who they are which is in part you know coming from their genetics we talked about my children are very different from each other despite the fact that they're the product of the same two parents I also learned that what you do for them comes back to you like you know if you're a good parent you're gonna most of the time have you know perhaps decent kids at the end what do you think just a quick comment what do you think is the source of that difference it's often the surprising thing for parents they can't believe that their kids are so different yet they came from the same parents well they are genetically different even though they came from the same two parents because of the mixing of gametes I mean you know we know genetics creates every time a genetically different individual which will have a specific mix of genes that is a
different mix every time from the two parents and so they're not twins so they are genetically different but with that little bit of variation as you said really from a biological perspective the brains look pretty similar well so let me clarify that so the genetics the genes that you have play that beautiful orchestrated symphony of development different genes will play it slightly differently it's like playing the same piece of music but with a different orchestra and a different director right the music will not come out the same it will be still a piece by the same you know author but it will come out differently if it's played by the high school orchestra instead so you are born superficially with the same brain it has the same cell types similar patterns of connectivity but the properties of the cells and how the cells will then react to the environment as you experience your world will be also shaped epigenetically you are speaking just as a parent this is not something that comes from my work I think you can tell at birth already that they have a different personality in a way right so both is needed the genetics as well as the nurturing afterwards so you are one human with a brain sort of living through the whole mess of it the human condition full of love maybe fear ultimately mortal how has studying the brain changed the way you see yourself when you look in the mirror when you think about your life the fears the love when you see your own life your own mortality yeah that's a very good question it's almost impossible to dissociate sometimes for me some of the things we do or some of the things that other people do from oh that's because that part of the brain is working in a certain way or thinking about a teenager you know going through teenage years and being a bit funny in the way they think and it's impossible for me not to think it's because they're going through this period of time called critical periods of plasticity
where their synapses are being eliminated here and there and they're just confused and so from that comes perhaps a different take on the behavior or maybe I can justify it scientifically in some sort of way I also look at humanity in general and I am amazed by what we can do and the kind of ideas that we can come up with and then I cannot stop thinking about how the brain is continuing to evolve I don't know if you do this but I think about the next brain sometimes where are we going with it it's like what are the features of this brain that you know evolution is really playing with to get us you know in the future the new brain it's not over right it's a work in progress so let me just make a quick comment do you see do you think there's a lot of fascination and hope for artificial intelligence of creating artificial brains you said the next brain when you imagine over a period of a thousand years the evolution of the human brain do you sometimes envisioning that future see an artificial one artificial intelligence as it is hoped by many not hoped thought by many people would be actually the next evolutionary step in the development yeah yeah I think in a way that will happen right it's almost like a part of the way we evolved we evolved in the world that we created that we interact with that shapes us as we grow up and so on and so forth sometimes I think about something that may sound silly but think about the use of cell phones part of me thinks that somehow in the brain there will be a region of the cortex that is very much attuned to that tool and this comes from a lot of studies in model organisms where really the cortex especially adapts to the kind of things you have to do so if we need to move our fingers in a very specific way we have a part of our cortex that allows us to do this kind of very precise movement in an owl that has to see very very far away with big eyes the visual cortex is very big the brain is attuned to your environment
so the brain will attune to the technologies that we will have and will be shaped by them so the cortex very well may be shaped by artificial intelligence it may merge with it it may envelop it and adjust even if it's not you know a merge of the kind of oh let's have a synthetic element together with a biological one the very space around us the fact for example think about we put on some goggles of virtual reality and we physically are surfing the ocean right like I've done it and you have all these emotions that come to you your brain placed you in that reality and it was able to do it like that just by putting the goggles on it didn't take thousands of years of adapting to this the brain is plastic so it adapts to new technology so you could do it from the outside by simply hijacking some sensory capacities that we have so clearly over you know recent evolution the cerebral cortex has been the part of the brain that has known the most evolution so we have put a lot of chips on evolving this specific part of the brain and the evolution of the cortex is plasticity it's this ability to change in response to things so yes they will integrate whether we want it or not well there is no better way to end it Paola thank you so much for talking today
George Hotz: Comma.ai, OpenPilot, and Autonomous Vehicles | Lex Fridman Podcast #31
the following is a conversation with George Hotz he's the founder of comma AI a machine learning based vehicle automation company he is most certainly an outspoken personality in the field of AI and technology in general he first gained recognition for being the first person to carrier unlock an iPhone and since then he's done quite a few interesting things at the intersection of hardware and software this is the artificial intelligence podcast if you enjoy it subscribe on YouTube give it five stars on iTunes support it on patreon or simply connect with me on Twitter at lex friedman spelled f-r-i-d-m-a-n and i'd like to give a special thank you to Jennifer from Canada for her support of the podcast on patreon merci beaucoup Jennifer she's been a friend and an engineering colleague for many years since I was in grad school your support means a lot and inspires me to keep this series going and now here's my conversation with George Hotz do you think we're living in a simulation yes but it may be unfalsifiable what do you mean by unfalsifiable so if the simulation is designed in such a way that they did like a formal proof to show that no information can get in and out and if their hardware is designed for anything in the simulation to always keep the hardware in spec it may be impossible to prove whether we're in a simulation or not so they've designed it such that it's a closed system you can't get outside of the system well maybe it's one of three worlds we're either in a simulation which can be exploited we're in a simulation which can't be exploited and you can say the same thing about VMs in a really well-designed VM you can't even detect if you're in a VM or not that's brilliant so yeah so the simulation is running in a virtual machine but in reality all VMs have ways out that's the point I mean is it yeah you've done quite a bit of hacking yourself and so you should know that really any complicated system will have ways in
and out so this isn't necessarily true going forward I spent my time away from comma I learned Coq it's a dependently typed like it's a language for writing math proofs and if you write code that compiles in a language like that it is correct by definition the types check it's correct and so it's possible that the simulation is written in a language like this in which case yeah yeah but that can't be sufficiently expressive a language like that oh it can it can be yeah okay well so all right so the simulation doesn't have to be Turing complete if it has a scheduled end date looks like it does actually with entropy and you know I don't think that a simulation that results in something as complicated as the universe would have a formal proof of correctness right it's possible of course we have no idea how good their tooling is and we have no idea how complicated the universe computer really is it may be quite simple it's just very large right it's definitely very large but the fundamental rules might be super simple yeah Conway's Game of Life kind of stuff right so if you could hack it so imagine the simulation that is hackable if you could hack it what would you change about you know like how would you approach hacking a simulation the reason I gave that talk by the way I'm not familiar with the talk you gave I just read that you talked about escaping the simulation yeah like that so maybe you can tell me a little bit about the theme and the message there too it wasn't a very practical talk about how to actually escape a simulation it was more about a way of restructuring an us-versus-them narrative if we continue on the path we're going with technology I think we're in big trouble like as a species and not just as a species but even as me as an individual member of the species so if we could change rhetoric to be more like to think upwards like to think about that we're in a simulation and how we could get out already we'd be on the right path what you
actually do once you do that well I assume I would have acquired way more intelligence in the process of doing that so I'll just ask that so the thinking upwards what kind of ideas what kind of breakthrough ideas do you think thinking in that way could inspire and why did you say upwards upwards into space are you thinking sort of exploration in all forms the space narrative that held for the modernist generation doesn't hold as well for the postmodern generation what's the space narrative are we talking about the same space the dimensional space like literal space like building like Elon Musk like we're gonna build rockets we're gonna go to Mars we're gonna colonize the universe and the narrative yeah I mean I was born in the Soviet Union you're referring to the race to space the race to space explore okay that was a great modernist narrative it doesn't seem to hold the same weight in today's culture I'm hoping for good postmodern narratives that replace it so let's think so you work a lot with AI so AI is one formulation of that narrative there could be also I don't know how much you do in VR and AR yeah that's another one I know less about it but every time I play with it in our research it's fascinating that virtual world are you interested in the virtual world I would like to move to virtual reality in terms of your work no I would like to physically move there the apartment I can rent in the cloud is way better than the apartment I can rent in the real world well it's all relative isn't it because others will have very nice apartments too so you'll be inferior in the virtual world that's not how I view the world right I don't view the world I mean it's a very like almost zero-sum way to view the world say like my great apartment isn't great because my neighbor has one too no my great apartment is great because like look at this dishwasher man yeah you just touch the dish and it's washed right and that is great in and of
itself if I have the only apartment or if everybody had the apartment I don't care so you have fundamental gratitude the world first learned of geohot George Hotz in August 2007 maybe before then but certainly in August 2007 when you were the first person to carrier unlock an iPhone how did you get into hacking what was the first system you discovered vulnerabilities for and broke into so that was really kind of the first thing I had a book in 2006 called Gray Hat Hacking and I guess I realized that if you acquired these sort of powers you could control the world but I didn't really know that much about computers back then I started with electronics the first iPhone hack was physical hardware um you had to open it up and pull an address line high and it was because I didn't really know about software exploitation I learned that all in the next few years and I got very good at it but back then I knew about like how memory chips are connected to processors and I knew about hardware but I didn't really know about software and programming the view of the world and computers was physical was hardware actually if you read the code that I released with that in August 2007 it's atrocious what language was it C yes and in a broken sort of state machine C I didn't know how to program man so how did you learn to program what was your journey cuz I mean we'll talk about it you've live streamed some of your programming and it's a chaotic beautiful mess how did you arrive at that years and years of practice I interned at Google the summer after the iPhone unlock and I did a contract for them where I built hardware for Street View and I wrote a software library to interact with it and it was terrible code and for the first time I got feedback from people who I respected saying you know like don't write code like this now of course just getting that feedback is not enough the way that I really got good was I wanted to write this thing
like that could emulate and then visualize like ARM binaries because I wanted to hack the iPhone better and I didn't like that I couldn't see what was going on that I couldn't single step through the processor because I had no debugger on there especially for the low level things like the boot ROM and the bootloader so I tried to build this tool to do it and I built the tool once and it was terrible I built the tool a second time and it was terrible I built the tool a third time by the time I was at Facebook it was kind of okay and then I built the tool a fourth time when I was a Google intern again in 2014 and that was the first time I was like this is finally usable how do you pronounce this QIRA QIRA yeah so it's essentially the most efficient way to visualize the change of state of the computer as the program is running that's what I mean by debugger yeah it's a timeless debugger so you can rewind just as easily as going forward think about if you're using gdb you have to put a watch on a variable if you want to see if that variable changes in QIRA you can just click on that variable and then it shows every single time when that variable was changed or accessed think about it like git for your computer's uh the run log so there's like a log of the state of the computer as the program runs and you can rewind why isn't that maybe it is maybe you can educate me why isn't that kind of debugging used more often ah because the tooling is bad well two things one if you're trying to debug chrome chrome is a 200 megabyte binary that runs slowly on desktops so that's going to be really hard to use for that but it's really good to use for like CTFs and for boot ROMs and for small parts of code so it's hard if you're trying to debug like massive systems what's a CTF and what's the boot ROM the boot ROM is the first code that executes the minute you give power to your iPhone okay and CTFs were these competitions that I played capture the flag capture the flag I
was going to ask you about that what are those I watched a couple videos on YouTube those look fascinating what have you learned about maybe at the high level the vulnerability of systems from these competitions I feel like in the heyday of CTFs you had all of the best security people in the world challenging each other and coming up with new toy exploitable things over here and then everybody okay who can break it and when you break it you get like there's a file on the server called flag and then there's a program running listening on a socket that's vulnerable so you write an exploit you get a shell and then you cat flag and then you type the flag into like a web-based scoreboard and you get points so the goal is essentially to find an exploit in the system that allows you to run shell to run arbitrary code on that system that's one of the categories that's like the pwnable category pwnable yeah pwnable it's like you know you pwned the program you own the program yeah yeah you know personally I apologize I'm gonna say it's because I'm Russian but maybe you can help educate me some video game misspelled to own way back in the day and there's just... I wonder if there's a definition I'll have to go to urban dictionary for it okay so what was the heyday of CTFs by the way what decade are we talking about I think like I mean maybe I'm biased because it's the era that I played but like 2011 to 2015 because the modern CTF scene is similar to the modern competitive programming scene you have people who do drills you have people who practice and then once you've done that you've turned it less into a game of generic computer skill and more into a game of okay you drill on these five categories and before that it didn't have like as much attention as it has I don't know they were like I won $30,000 once in Korea for one of these competitions oh crap they were they were
that good so that means I mean money's money but that means there was probably good people there exactly yeah are the challenges human-constructed or are they grounded in some real flaws in real systems usually they're human-constructed but they're usually inspired by real flaws what kind of systems are imagined is it really focused on mobile like what has vulnerabilities these days is it primarily mobile systems like Android everything does yeah of course the price has kind of gone up because less and less people can find them and what's happened in security is now if you want to like jailbreak an iPhone you don't need one exploit anymore you need nine nine chained together what nine yeah wow okay so what's the benefit speaking higher level philosophically about hacking I mean it sounds from everything I've seen about you you just love the challenge and you don't want to bring that exploit out into the world and let it run wild you just want to solve it and then you go on to the next thing oh yeah I mean doing criminal stuff's not really worth it and I'll actually use the same argument for why I don't do defense as for why I don't do crime if you want to defend a system say the system has ten holes right if you find nine of those holes as a defender you still lose because the attacker gets in through the last one if you're an attacker you only have to find one out of the ten but if you're a criminal if you log on with a VPN nine out of the ten times but one time you forget you're done because you're caught okay because you only have to mess up once to be caught as a criminal yeah that's why I'm not a criminal but okay I was having a discussion with somebody just at a high level about nuclear weapons about why we're not blowing ourselves up yet and my feeling is look at the distribution of smart people smart people are generally good and
then this other person I was talking to Sean Carroll the physicist he was saying no good and bad people are evenly distributed amongst everybody my sense was good hackers are in general good people and they don't want to mess with the world what's your sense I'm not even sure about that like I have a nice life crime wouldn't get me anything but if you're good and you have these skills you probably have a nice life too right like you can use them for other things but is there an ethical... is there a little voice in your head that says well yeah if you could hack something to where you could hurt people and you could earn a lot of money doing it not hurt physically perhaps but disrupt their life in some kind of way is there a little voice that says... um two things one I don't really care about money so like the money wouldn't be an incentive the thrill might be an incentive but when I was 19 I read crime and punishment right that was another great one that talked me out of ever really doing crime cuz it's like that's gonna be me I'd get away with it... whatever just went in my head even if I got away with it you know and if you do crime for long enough you'll never get away with it that's right in the end that's a good reason to be good I wouldn't say good I'd just say I'm not bad you're a talented programmer and a hacker in a good positive sense of the word you've played around found vulnerabilities in various systems what have you learned broadly about the design of systems and so on from that whole process you learn to not take things for what people say they are but to look at things for what they actually are yeah I understand that's what you tell me it is but what does it do and you have nice visualization tools to really know what it's really doing oh I wish I'm a better programmer now than I was in 2014 I said QIRA was the first tool that I wrote that was usable I wouldn't say the code was
great I still wouldn't say my code is great so how was your evolution as a programmer just practice you started with C at which point did you pick up Python because you're pretty big in Python now yeah in college I went to Carnegie Mellon when I was 22 I went back I'm like I'm gonna take all your hardest CS courses we'll see how I do right like did I miss anything by not having a real undergraduate education took operating systems compilers AI and like a freshman weeder math course and some of those classes you mentioned they're great at least in 2012 operating systems and compilers were two of the best classes I've ever taken in my life because you write an operating system and you write a compiler I wrote my operating system in C and I wrote my compiler in Haskell and somehow I picked up Python that semester as well I started using it for the CTFs actually that's when I really started to get into CTFs and in CTFs you're racing against the clock so I can't write things in C there's a clock component so you really want to use the programming language you can be fastest in in 48 hours pwn as many of these challenges as you can pwn yeah you get like a hundred points a challenge whatever team gets the most you were at both Facebook and Google for a brief stint yeah with Project Zero actually at Google for five months where you developed QIRA what was Project Zero about in general just curious about the security efforts in these companies well Project Zero started about the same time I went there what years were you there 2015 2015 so that was right at the beginning of Project Zero it's small it's Google's offensive security team I'll try to give the best public-facing explanation that I can so the idea is basically these vulnerabilities exist in the world nation states have them some high-powered bad actors have them sometimes people will
find these vulnerabilities and submit them in bug bounties to the companies but a lot of the companies don't really care they don't fix the bug it doesn't hurt for there to be a vulnerability so Project Zero is like we're gonna do it different we're going to announce a vulnerability and we're going to give them 90 days to fix it and then whether they fix it or not we're gonna drop the zero-day oh wow we're gonna drop the weapon that is so cool I love that deadlines though that's so cool give them real deadlines yeah and I think it's done a lot for moving the industry forward I watched your coding sessions that you streamed online you code things up basic projects usually from scratch I would say sort of as a programmer myself just watching you that you type really fast and your brain works in both brilliant and chaotic ways I don't know if that's always true but certainly for the live streams it's interesting to me because I'm much slower and systematic and careful and you just move I mean probably an order of magnitude faster so I'm curious is there a method to your madness or is this just who you are there's pros and cons to my programming style and I'm aware of them like if you ask me to get something up and working quickly with like an API that's kind of undocumented I will do this super fast because I will throw things at it until it works if you ask me to take a vector and rotate it 90 degrees and then flip it over the XY plane I'll flail at it for two hours and won't get it at all because it's something that you could do with a sheet of paper think through design and then... so you really just throw stuff at the wall and you get so good at it that it usually works I should become better at the other kind as well sometimes I'll do things methodically it's nowhere near as entertaining on the twitch streams I do exaggerate it a bit on the twitch streams as well the twitch streams I mean what do
you want to see a gamer or you want to see actions per minute right I'll show you high APM programming yes I recommend people go check it out I think I watched probably several hours I've actually left you programming in the background while I was programming because it was like watching a really good gamer it energizes you because you're moving so fast it's awesome it's inspiring and it made me jealous because my own programming is inadequate in terms of speed oh I'm twice as frantic on the live streams as I am when I code without them oh it's super entertaining so I wasn't even paying attention to what you were coding which is great it's just watching you switch windows in vim I guess and a split screen I've developed a workflow at Facebook... so talk about how do you learn new programming tools ideas techniques these days what's your methodology for learning new things so I wrote one for comma the distributed file systems out in the world are extremely complex like if you want to install something like Ceph Ceph is I think the like open infrastructure distributed file system or there's newer ones like SeaweedFS but these are all like 10,000-plus line projects I think some of them are even 100,000 lines and just configuring them is a nightmare so I wrote one it's 200 lines and it uses like nginx for the volume servers and has a master server that I wrote in Go and the way... if I would say that I'm proud per line of any code I wrote maybe there's some exploits that I think are beautiful but this is 200 lines and just the way that I thought about it I think was very good and the reason it's very good is because that was the fourth version of it that I wrote and I had three versions that I threw away you mentioned C and Go you write Go yeah and Go so is that a functional language I forget what Go is Go is Google's language right is it
functional no it's like in a way it's C++ but easier it's strongly typed it has a nice ecosystem when I first looked at it I was like this is like Python but it takes twice as long to do anything yeah now openpilot is migrating to C but it still has large Python components I now understand why Python doesn't work for large code bases and why you want something like Go oh interesting so why doesn't Python work... so speaking for myself at least like we do a lot of stuff basically demo-level work with autonomous vehicles and most of the work is Python yeah why doesn't Python work for large code bases because well lack of type checking is a big one errors creep in yeah and like you don't know... the compiler can tell you like nothing right so syntax errors fine but if you misspell a variable in Python the compiler won't catch that there's like linters that can catch it some of the time there's no types this is really the biggest downside and then Python's slow but that's not related to it well maybe it's kind of related to the types lacking so what's in your toolbox these days is it Python what else Go I need to move on to something else I had my adventures in dependently typed languages I love these languages they just have like syntax from the 80s what do you think about JavaScript ES6 TypeScript JavaScript the whole ecosystem is unbelievably confusing NPM updates a package from 0.2.2 to 0.2.5 and that breaks your Babel linter which translates your ES5 into ES6 which doesn't run on... so why do I have to compile my JavaScript again huh it may be the future though if you think about it I mean I've embraced JavaScript recently because just like I've continually embraced PHP it seems that these worst possible languages live on forever like cockroaches never die yeah well it's in the browser and it's fast it's fast yeah it's in the browser and compute
might be... you know it's unclear what the role of the browser is in terms of distributed computation in the future so JavaScript is definitely here to stay yeah interesting if autonomous vehicles will run on JavaScript one day I mean you have to consider these possibilities well all our debug tools are JavaScript we actually just open-sourced them we have a tool Explorer which lets you annotate your disengagements and we have a tool Cabana which lets you analyze the CAN traffic from the car so basically any time you're visualizing something about the log you're using JavaScript yeah well the web is the best UI toolkit by far yeah and then you know you're coding in JavaScript we have a React guy he's good React's nice let's get into it so let's talk autonomous vehicles you founded comma ai let's at a high level how did you get into the world of vehicle automation can you also just for people who don't know tell the story of comma yeah sure so I was working at this AI startup and a friend approached me and he's like dude I don't know where this is going but the coolest applied AI problem today is self-driving cars I'm like well absolutely do you want to meet with Elon Musk and he's looking for somebody to build a vision system for autopilot this is when they were still on AP1 they were still using Mobileye Elon back then was looking for a replacement and he brought me in and we talked about a contract where I would deliver something that meets Mobileye-level performance I would get paid twelve million dollars if I could deliver it tomorrow and I would lose one million dollars for every month I didn't deliver so I was like ok this is a great deal this is a super exciting challenge you know what even if it takes me 10 months I get two million dollars it's good maybe I can finish it up in five maybe I don't finish it at all and I get paid nothing and I'll work for twelve months for free so maybe let me take a pause on that I'm also curious
about this because I've been working on robotics for a long time and I'm curious to see a person like you just step in sort of somewhat naive but brilliant right that's the best place to be because you basically full-steam take on a problem how confident from that time because you know a lot more now at that time how hard do you think it is to solve all of autonomous driving I remember I suggested to Elon in the meeting putting a GPU behind each camera to keep the compute local this is an incredibly stupid idea I leave the meeting 10 minutes later and I'm like I could have spent a little bit of time thinking about this problem you would just send all your cameras to one big GPU you're much better off doing that oh sorry you said behind every camera have a small GPU I was like oh I'll put the first few layers of my conv there oh why did I say that it's possible it's possible but it's a bad idea it's not obviously a bad idea pretty obvious but whether it's actually a bad idea or not I left that meeting with Elon like beating myself up I'm like why did I say something stupid yeah you hadn't at least like thought through every aspect of it yes and he's very sharp too like usually in life I get away with saying stupid things and then right away he called me out about it and like usually in life I get away with saying stupid things and then a lot of times people don't even notice and I'll correct it and bring the conversation back but with Elon it was like nope okay well that's not at all why the contract fell through I was much more prepared the second time I met him yeah but in general how hard did you think it is like 12 months is a tough timeline oh I just thought I'd clone Mobileye EyeQ3 I didn't think I'd solve level five self-driving or anything so the goal there was to do lane keeping good lane keeping I saw my friend showed me the
outputs from a Mobileye in the office and the Mobileye output was just basically two lanes and a position of a lead car mm-hm I'm like I can gather a dataset and train this net in weeks and I did well the first time I tried out the Mobileye implementation in a Tesla I was really surprised how good it is it's quite incredibly good because I thought it's just... because I've done a lot of this computation I thought it'd be a lot harder to create a system that's that stable so I was personally surprised I have to admit it because I was kind of skeptical before trying it because I thought it would go in and out a lot more it would get disengaged a lot more and it's pretty robust so how hard is the problem when you tackled it I think AP1 was great like Elon talked about disengagements on the 405 down in LA where the lane marks were kind of faded and the Mobileye system would drop out like I had something up and working that I would say was like the same quality in three months same quality but how do you know you say stuff like that confidently but you can't... and I love it but the question is you can't... you're kind of going by feel because... absolutely like I borrowed my friend's Tesla I would take AP1 out for a drive and then I would take my system out for a drive and it seems reasonably like the same so how hard is it to create something that could actually be a product that's deployed I mean I've read an article where Elon responded to something by you saying that to build autopilot is more complicated than a single George Hotz-level job how hard is that job to create something that would work globally globally is the challenge but Elon followed that up by saying it's gonna take two years and a company of ten people and here I am four years later with a company of twelve people and I think we still have another two to go two years
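The Mobileye-style output he describes, two lane lines plus the position of a lead car, can be sketched as a tiny perception frame and the lateral error a lane-centering controller would drive to zero. This is a purely illustrative toy, not comma's or Mobileye's actual format; the cubic-polynomial lane representation, the sign convention, and the 30 m lookahead are all assumptions:

```python
# Toy sketch of a Mobileye-style perception frame: two lane lines and a
# lead car. Lane lines are modeled here as cubic polynomials y(x), where
# x is meters ahead of the car and y is lateral meters (positive = left).

def lane_poly(c0, c1, c2, c3):
    """Return a lane line as a function y(x) = c0 + c1*x + c2*x^2 + c3*x^3."""
    return lambda x: c0 + c1 * x + c2 * x ** 2 + c3 * x ** 3

def lateral_error(left, right, lookahead_m):
    """Signed offset of the lane center at the lookahead distance.
    A lane-centering controller steers to drive this toward zero."""
    return (left(lookahead_m) + right(lookahead_m)) / 2.0

# One frame: straight lanes 3.6 m apart, car sitting 0.3 m left of center,
# plus a lead-car distance that the longitudinal (ACC) side would consume.
left = lane_poly(1.5, 0.0, 0.0, 0.0)     # left line 1.5 m to the left
right = lane_poly(-2.1, 0.0, 0.0, 0.0)   # right line 2.1 m to the right
lead_car_m = 42.0                        # distance to lead car, meters

err = lateral_error(left, right, 30.0)   # about -0.3: nudge right to center
```

A downstream controller would feed this signed error into something like a PID loop over steering torque; that split between a compact perception output and a simple controller is what made "clone the two-lanes-plus-lead-car output" a plausible few-week training target.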
so yeah so what do you think about how Tesla is progressing with autopilot v2 v3 I think we've kept pace with them pretty well I think Navigate on Autopilot is terrible we had some demo features internally of the same stuff and we would test it and I'm like I'm not shipping this even as like open-source software to people what do you think is... Consumer Reports does a great job of describing it like when it makes a lane change it does it worse than a human you shouldn't ship things like that autopilot open pilot they lane keep better than a human if you turn it on for a stretch of highway like an hour long it's never gonna touch a lane line a human will probably touch a lane line twice you just inspired me I don't know if you're grounded in data on that just eyeballed okay but that's interesting I wonder actually how often we touch lane lines in general like a little bit cuz... okay I could answer that question pretty easily with the comma dataset yeah I'm curious I've never answered it I don't know two is just like my guess it feels right that's interesting because every time you touch the lane that's the source of a little bit of stress and kind of lane keeping is removing that stress that's to me the biggest value-add honestly is just removing the stress of having to stay in lane and I think honestly I don't think people fully realize first of all that that's a big value add but also that that's all it is and not only do I find it a huge value add when we moved to San Diego I drove down in an Enterprise rent-a-car and I missed it I missed having the system so much it's so much more tiring to drive without it it is that lane centering that's the key feature yeah and in a way it's the only feature that actually adds value to people's lives in autonomous vehicles today Waymo does not add value to people's lives it's a more expensive slower uber maybe someday it'll be this big cliff where it
adds value but I don't usually say this... I haven't thought this through... this is good because I have intuitively but I think we're making it explicit now I actually believe that really good lane keeping is a reason to buy a car will be a reason to buy a car it's a huge value add I've never until we just started talking about it really quite realized it that I felt like Elon's chase of level four is not the correct chase because you should just say as in from a testing perspective Tesla has the best lane keeping comma ai should say comma ai has the best lane keeping and that is it yeah do you think... well you have to do the longitudinal as well you can't just lane keep you have to do ACC but ACC is much more forgiving than lane keeping especially on the highway oh by the way is comma's system camera only correct oh no we use the radar from the car you're able to get the radar um we can do camera only now it's gotten to the point but we leave the radar there it's fusion now okay so let's maybe talk through some of the system specs on the hardware what's the hardware side of what you're providing what's the capabilities on the software side with open pilot and so on so open pilot the box that we sell that it runs on it's a phone in a plastic case it's nothing special we sell it without the software so you buy the phone it's just easy it'll be easy setup but it's sold with no software open pilot right now is about to be 0.6 when it gets to 1.0 I think we'll be ready for a consumer product we're not gonna add any new features we're just gonna make the lane keeping really really good so what do we have right now it's a Snapdragon 821 a Sony IMX 298 forward-facing camera a driver monitoring camera which is just a selfie cam on the phone and a CAN transceiver these little things called pandas and they talk over USB to the phone and then
they have three CAN buses that they talk to the car on one of those buses is the radar CAN bus one of them is the main car CAN bus and the other one is the proxy camera CAN bus we leave the existing camera in place so we don't turn AEB off right now we still turn AEB off if you're using our longitudinal but we're gonna fix that before 1.0 you got it wow that's cool so it's CAN both ways so how are you able to control vehicles so we proxy the vehicles that we work with already have a lane keeping assist system so lane keeping assist can mean a huge variety of things it can mean it will apply a small torque to the wheel after you've already crossed a lane line by a foot which is the system in the older Toyotas versus like I think Tesla still calls it lane keeping assist where it'll keep you perfectly in the center of the lane on the highway you can control like you would with a joystick the cars... so these cars already have the capability of drive-by-wire so is it trivial to convert a car so that open pilot is able to control the steering a new car or a car that we support so we have support now for 45 different makes of cars what are the cars in general mostly Hondas and Toyotas we support almost every Honda and Toyota made this year and then a bunch of GMs a bunch of Subarus it doesn't have to be like a Prius it could be a Corolla as well okay the 2020 Corolla is the best car with open pilot it just came out the actuator has less lag than the older Corolla I started watching a video with your... by the way the way you make videos is awesome literally at the dealership streaming for an hour yeah and basically like if stuff goes a little wrong you just go with it yeah I love it it's real yeah that's real it's so beautiful and it's so in contrast to the way other companies would put together a video like that how else would I do it I mean if you become super rich one day and successful I hope
you keep it that way because I think that's actually what people love that kind of genuine oh it's all that has value to me yeah money has no... if I sell out to like make money and I sold out it doesn't matter what do I get a yacht I don't want a yacht and I think Tesla actually has a small inkling of that as well with autonomy day they did reveal more than... I mean of course there's marketing communications you can tell but it's more than most companies will reveal which is... I hope they go more towards that direction other companies GM Ford oh Tesla's gonna win level 5 they really are so let's talk about it you think... you're focused on level 2 currently currently we're gonna be one to two years behind Tesla getting to level five okay we're interested right we're into it you're in I'm just saying once Tesla gets it we're one to two years behind I'm not making any timeline on when Tesla gets it that's right that's brilliant I'm sorry Tesla investors if you think you're gonna have an autonomous robotaxi fleet by the end of the year I'll bet against that so what do you think about this most level four companies are kind of just doing their usual safety-driver-during-full-autonomy kind of testing and then Tesla is basically trying to go from lane keeping to full autonomy what do you think about that approach how successful would it be it's a ton better approach because Tesla is gathering data on a scale that none of them are they're putting real users behind the wheel of the car it's I think the only strategy that works the incremental well so there's a few components to the Tesla approach that's more than just the incremental what you spoke with is one is the software so over-the-air software updates a necessity I mean Waymos have those too but those aren't differentiating from the automakers right no cars with lane keeping assist systems have that except Tesla yeah and the other one is the data the
other direction which is the ability to query the data I don't think they're actually collecting as much data as people think but the ability to turn on collection and turn it off so I'm both in the robotics world and in the psychology human factors world many people believe that level 2 autonomy is problematic because of the human factor like the more the task is automated the more there's a vigilance decrement you start to fall asleep you start to become complacent start texting more and so on do you worry about that because if we're talking about transition from lane keeping to full autonomy if you're spending eighty percent of the time not supervising the machine do you worry about what that means to the safety of the drivers one we don't consider open pilot to be 1.0 until we have 100% driver monitoring you can cheat right now our driver monitoring system there's a few ways to cheat it they're pretty obvious we're working on making that better before we ship a consumer product that can drive cars I want to make sure that I have driver monitoring that you can't cheat what does a successful driver monitoring system look like is it all about just keeping your eyes on the road well a few things so that's what we went with first for driver monitoring I'm checking... I'm actually looking at where your head is looking the camera's not that high resolution eyes are a little bit hard to get but the head is this big I mean that is good and actually a lot of it is just psychology-wise to have that monitor constantly there it reminds you that you have to be paying attention but we want to go further we just hired someone full-time to come on for the driver monitoring I want to detect phone in frame and I want to make sure you're not sleeping how much does the camera see of the body this one not enough not enough the next one everything it's interesting it's a fisheye cam we're doing just data collection not real-time but fisheye is a beautiful thing being
able to capture the body and the smartphone is really like the biggest problem I'll show you I can show you one of the pictures from our fisheye system awesome so you're basically saying the driver monitoring will be the answer to that um I think the other point that your original paper raised is good as well you're not asking a human to supervise a machine without giving them the means to take over at any time right our safety model you can take over we disengage on both the gas or the brake we don't disengage on steering I don't feel you have to but we disengage on gas or brake so it's very easy for you to take over and it's very easy for you to re-engage that switching should be super cheap yeah the cars that require... even autopilot requires a double press I don't like that yeah and then the cancel to cancel in autopilot you either have to press cancel which no one knows where that is so they press the brake but a lot of times you don't want to press the brake you want to press the gas so you should cancel on gas or wiggle the steering wheel which is bad as well wow that's brilliant I haven't heard anyone articulate that point I like it this is all I think about it's because I think actually Tesla has done a better job than most automakers at making that frictionless but you just described that it could be even better I love super cruise as an experience once it's engaged yeah I don't know if you've used it but getting the thing to try to engage... yeah I've used it I've driven super cruise a lot so what are your thoughts on the super cruise system when you disengage super cruise it falls back to ACC so my car's like still accelerating it feels weird otherwise when you actually have super cruise engaged on the highway it is phenomenal we bought that Cadillac we just sold it but we bought it just to like experience this and I wanted everyone in the office to be like this is what we're striving to build GM pioneering with
the driver monitoring you know do you like their driver monitoring system it has some bugs if there's a sun shining back here it'll be blind to you but overall mostly yeah that's so cool you know... I don't often talk to people about it because it's such a rare car unfortunately they bought one yes it was costly for us we lost like 25k on the depreciation but I feel it's worth it I was very pleasantly surprised that the GM system was so innovative and really that wasn't advertised much wasn't talked about much yeah and I was nervous that it would die that it would disappear why is that did they put it on the wrong car they should've put it on the Bolt and not some weird Cadillac that nobody bought I think it's going to be... they're saying at least it's going to be in their entire fleet so as long as we're on the driver monitoring what do you think about Elon Musk's claim that driver monitoring is not needed normally I love his claims that one is stupid that one is stupid and you know he's not gonna have his level five fleet by the end of the year hopefully he's like okay I was wrong I'm gonna add driver monitoring because when these systems get to the point that they're only messing up once every thousand miles you absolutely need driver monitoring so let me... because I agree with you but let me play devil's advocate so one possibility is that without driver monitoring people are able to self-regulate monitor themselves you know... so your idea is... have you seen all the people sleeping in Teslas yeah well I'm a little skeptical of all the people sleeping in Teslas because I've stopped paying attention to that kind of stuff because I want to see real data it's too much glorified it doesn't feel scientific to me so I want to know how many people are really sleeping in Teslas vs.
sleeping... I was driving here sleep-deprived in a car with no automation, and I was falling asleep. I agree that it's hypey. It's just like, you know... I am wondering. My last autopilot experience was I rented a Model 3 in March and drove it around. The wheel thing is annoying. And the reason the wheel thing is annoying, we use the wheel thing as well, but we don't disengage on wheel. For Tesla, you have to touch the wheel just enough to trigger the torque sensor, to tell it that you're there, but not enough as to disengage it. Don't use it for two things; don't disengage on wheel. You don't have to. That whole experience... Wow, beautifully put. All those elements, even if you don't have driver monitoring, that whole experience needs to be better. Driver monitoring, I think, would make... I mean, I think Super Cruise is a better experience once it's engaged over autopilot, but I think Super Cruise's transitions to engagement and disengagement are significantly worse. Yeah. So there's a tricky thing, because if I were to criticize Super Cruise, it's a little too crude. I think it's like six seconds or something if you look off road before it'll start warning you. It's some ridiculously long period of time. And just the way... I think it's basically binary. It should adapt. Yeah, it just needs to learn more about you, and it needs to communicate what it sees about you more. You know, Tesla shows what it sees about the external world; it would be nice if Super Cruise would tell us what it sees about the internal world. It's even worse than that. You press the button to engage and it just says Super Cruise unavailable. Yeah. Why? Why? Yeah, that transparency is good. We've renamed the driver monitoring packet to driver state. Driver state. We have a car state packet, which has the state of the car, and a driver state packet, which has the state of the driver. So what does that have? Can it estimate, what, their BAC? Do you think that's possible with computer vision?
Absolutely. So to me it's an open question. I haven't looked into it too much, like actually quite seriously looked at the literature. It's not obvious to me that from the eyes and so on you can tell. You might need stuff from the car as well. Yeah, you might need how they're controlling the car, right? And that's fundamentally, at the end of the day, what you care about. But I think, especially when people are really drunk, they're not controlling the car nearly as smoothly as they would. Look at them walking, right? The car is like an extension of the body. So I think you could totally detect it. And if you could fix people being drunk, distracted, asleep... if you fix those three... Yeah, that's huge. So what are the current limitations of openpilot? What are the main problems that still need to be solved? We're hopefully fixing a few of them in 0.6. We're not as good as autopilot at stopped cars. So if you're coming up to a red light at like 55... it's the radar stopped car problem, which is responsible for many autopilot accidents. It's hard to differentiate a stopped car from a signpost. Yeah, a static object. So you have to fuse, you have to do this visually. There's no way from the radar data to tell the difference. Maybe you could make a map, but I don't really believe in mapping at all anymore. Really, you don't believe in mapping? No. So the openpilot solution is basically saying, react to the environment just like human beings do. And then eventually, when you want to do navigate on openpilot, I'll train the net to look at Waze, I'll run Waze in the background. I won't train the car using GPS at all. We use it to ground truth, we use it to very carefully ground truth the paths. We have a stack which can recover relative position to 10 centimeters over one minute, and then we use that to ground truth exactly where the car went in that local part of the environment, but it's all local. How are you testing this, in general, just for yourself, like experiments and stuff? Where are you located?
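As an aside, the stopped-car radar problem described above, where radar alone can't distinguish a stationary car from a signpost, so the stationary return has to be confirmed visually, can be sketched as a toy fusion rule. This is purely illustrative and not openpilot's actual code; the data layout and the 0.5 m/s threshold are assumptions.

```python
# Toy sketch: radar gives range and relative speed, but a stopped car and
# a signpost produce identical stationary returns. One hedge: keep a
# stationary radar track only if a vision detector sees a vehicle nearby.

def fuse_stationary_tracks(radar_tracks, vision_boxes, max_dist=2.0):
    """radar_tracks: list of (x, y, rel_speed) in meters and m/s.
    vision_boxes: list of (x, y) centers of visually detected vehicles."""
    confirmed = []
    for (rx, ry, v) in radar_tracks:
        if abs(v) > 0.5:  # moving target: radar alone is trustworthy
            confirmed.append((rx, ry, v))
            continue
        # stationary target: require visual confirmation it's a vehicle
        for (vx, vy) in vision_boxes:
            if ((rx - vx) ** 2 + (ry - vy) ** 2) ** 0.5 < max_dist:
                confirmed.append((rx, ry, v))
                break
    return confirmed

# A stopped car at (50, 0) is confirmed by vision; a stationary return
# at (50, 3) (a signpost) is dropped; a moving car at (30, 0) is kept.
tracks = fuse_stationary_tracks(
    radar_tracks=[(50.0, 0.0, 0.0), (50.0, 3.0, 0.0), (30.0, 0.0, -5.0)],
    vision_boxes=[(49.5, 0.2)],
)
```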
San Diego. San Diego? Yeah. Okay. So you basically drive around there, collect some data, and then analyze it? We have a simulator now, and our simulator is really cool. Our simulator is not like a Unity-based simulator. Our simulator lets us load in real state. What I mean is, we can load in a drive and simulate what the system would have done on the historical data. Ooh, nice. Interesting. Right now we're only using it for testing, but as soon as we start using it for training... What's your feeling about the real world versus simulation? Do you like simulation for training, if this moves to training? So we have to distinguish two types of simulators, right? There's a simulator that is completely fake. I could get my car to drive around in GTA. I feel that this kind of simulator is useless. My analogy here is like, okay, fine, you're not solving the computer vision problem, but you're solving the computer graphics problem. Right. And you don't think you can get very far by creating ultra-realistic graphics? No, because you can create ultra-realistic graphics of the road; now create ultra-realistic behavioral models of the other cars. Oh, well, I'll just use my self-driving... no, you won't. You need real, you need actual human behavior, because that's what you're trying to learn. Driving does not have a spec. The definition of driving is what humans do when they drive. Whatever Waymo does, I don't think it's driving. Right. Well, I think if there's anywhere useful for reinforcement learning, I've seen it used quite well. I study pedestrians a lot too. You try to train models from real data of how pedestrians move, and try to use reinforcement learning models to make pedestrians move in human-like ways. By that point you've already gone so many layers. You detected a pedestrian? Did you hand-code the feature vector of their state? Did you guys learn anything from computer vision before deep
learning? Well, okay... So perception, to you, is the sticking point? I mean, what's the hardest part of the stack here? There is no human-understandable feature vector separating perception and planning. That's the best way I can put that. There is no... So it's all together, and it's a joint problem. So you can take localization. Between localization and planning there is a human-understandable feature vector. I mean, okay, I have like three degrees of position, three degrees of orientation, and those derivatives, maybe the second derivatives, right? That's human-understandable, that's physical. But between perception and planning... So like, Waymo has a perception stack and then a planner. And one of the things Waymo does right is they have a simulator that can separate those two. They can like replay their perception data and test their system, which is what I'm talking about with the two different kinds of simulators: there's the kind that can work on real data and the kind that can't work on real data. Now, the problem is, I don't think you can hand-code a feature vector. Like, you have some list of, well, here's my list of cars in the scene, here's my list of pedestrians in the scene. This isn't what humans are doing. What are humans doing? Global... Something you're saying is too difficult to hand-code? I'm saying that there is no state vector. Given a perfect... I could give you the best team of engineers in the world to build a perception system and the best team to build a planner. All you have to do is define the state vector that separates those two. I'm missing the state vector that separates those two? What do you mean? So, what is the output of your perception system? Output of the perception system... there's several ways to do it. One is the localization component, the other is drivable area, drivable space. Drivable space, and then there's the different
objects in the scene. And different objects in the scene over time, maybe, to give you input to then try to start modeling the trajectories of those objects? Sure. That's it? I can give you a concrete example of something you missed. What's that? So say there's a bush in the scene. Humans understand that when they see this bush, there may or may not be a car behind that bush. Drivable area and a list of objects does not include that. Humans are doing this constantly at the simplest intersections. So now you have to talk about occluded area. Right. But even that, what do you mean by occluded? Okay, so I can't see it. Well, if it's on the other side of a house, I don't care. What's the likelihood that there's a car in that occluded area, right? And if you say, okay, we'll add that, I can come up with ten more examples that you can't add. Certainly occluded area would be something that a simulator would have, because it's simulating the entire... you know, occlusion is part of a vision stack. What I'm saying is, if you have a hand-engineered... if your perception system's output can be written in a spec document, it is incomplete. Yeah, I mean, certainly it's hard to argue with that, because in the end that's going to be true. Yes. I'll tell you what the output of our perception system is. What is it? It's a 1024-dimensional vector. Trained how? Oh no, it's 1024 dimensions of who knows what, because it's operating on real data. Yeah. And that's the perception... that's the perception stack, right? Think about an autoencoder for faces. If you have an autoencoder for faces, and you say it has 256 dimensions in the middle, and I'm taking a face over here and projecting it to a face over here, can you hand-label all 256 of those dimensions? Well, no, but those are generated automatically. But even if you tried to do it by hand, could you come up with a spec between your encoder and
your decoder? No. No, because it wasn't designed. No, no, but if you could design it... if you could design a face reconstruction system, could you come up with a spec? No. But I think we're missing here a little bit. I think you're just being very poetic about expressing a fundamental problem of simulators: that they're going to be missing so much that the feature vector would just look fundamentally different in the simulated world versus the real world. I'm not making a claim about simulators. I'm making a claim about the spec division between perception and planning, even in your system. Just in general, right? Just in general. If you're trying to build a car that drives, and you're trying to hand-code the output of your perception system, like saying here's a list of all the cars in the scene, here's a list of all the people, here's a list of the occluded areas, here's a vector of drivable areas, it's insufficient. And if you start to believe that, you realize that what Waymo and Cruise are doing is impossible. Currently what we're doing is the perception problem: converting the scene into a chessboard. Yeah. And then you reason, some basic reasoning, around that chessboard. Yeah. And you're saying that really there's a lot missing there. First of all, why are we talking about this? Because isn't this full autonomy? Is this something you think about? Oh, I want to win self-driving cars. So really, your definition of win includes level five? Level five. I don't think level four is a real thing. I want to build the AlphaGo of driving. So AlphaGo is really end to end? Yeah, it's end to end. And do you think this whole problem... is that also kind of what you're getting at with the perception and the planning, that this whole problem, the right way to do it, is really to learn the entire thing? I'll argue that not only is it the right way, it's the only way that's going to exceed human performance. Well, certainly that's true for
Go. Everyone who tried to hand-code Go built human-inferior things, and then someone came along and wrote some 10,000-line thing that doesn't know anything about Go that beat everybody. It's 10,000 lines. True, in that sense. The open question then, that maybe I can ask you, is: driving is much harder than Go. The open question is how much harder. Because I think the Elon Musk approach here, with planning and perception, is similar to what you're describing, which is really turning it into not some kind of modular thing, but really formulating it as a learning problem and solving the learning problem with scale. So one question is, how many years would it take to solve this problem, or just, how hard is this freaking problem? Well, the cool thing is, I think there's a lot of value that we can deliver along the way. I think that you can build lane keeping assist, actually, plus adaptive cruise control, plus, okay, looking at Waze... it extends to like all of driving. Yeah, most of driving. Right. Oh, your adaptive cruise control treats red lights like cars. Okay, so let's jump around. You mentioned that you didn't like navigate on autopilot. Yeah. What advice... how would you make it better? Do you think it's a feature that, if it's done really well, it's a good feature? I think that it's too reliant on hand-coded hacks for, like, how does navigate on autopilot do a lane change? It actually does the same lane change every time, and it feels mechanical. Humans do different lane changes. Humans sometimes will do a slow one, sometimes do a fast one. Navigate on autopilot, at least every time I used it, did the identical lane change. How do you learn... I mean, this is a fundamental thing, actually. The braking and accelerating, something that Tesla probably does better than most cars, but it still doesn't do a great job of creating a comfortable, natural experience, and navigate on autopilot is just lane changes, an extension of that. So how do you learn to do a natural lane change?
So, we have it, and I can talk about how it works. I feel that we have the solution for lateral, but we don't yet have the solution for longitudinal. There's a few reasons longitudinal is harder than lateral. The lane change component, the way that we train on it, very simply, is: our model has an input for whether it's doing a lane change or not. And then when we train the end-to-end model, we hand-label all the lane changes. Because you have to... I struggled a long time about not wanting to do that, but I think you have to. Because you need the training data, right? Well, we actually have an automatic ground-truther which automatically labels all the lane changes. Oh, it's possible to automatically label lane changes? Yeah. Detect the lane, I see when it crosses it, right? I don't have to get that high percent accuracy, but it's like 95, good enough. Now I set the bit when it's doing the lane change in the end-to-end learning, and then I set it to zero when it's not doing a lane change. So now, if I wanted to do a lane change at test time, I just put the bit to a one and it'll do a lane change. Yeah, but if you look at the space of lane changes, some percentage, not a hundred percent, that we make as humans is not a pleasant experience, because we mess some part of it up. It's nerve-racking to change lanes. How do we label the ones that are natural and feel good? You know, because that's your ultimate criticism: the current navigate on autopilot doesn't feel good. Well, the current navigate on autopilot is a hand-coded policy written by an engineer in a room who probably went out and tested it a few times on the 280. Probably a better version of that, but yes, that's how we would have written it at comma. Yeah. And Tesla tested it, and it might have been two engineers. Yeah. No, but if you learn the lane change, if you learn how to do a lane change from data, just like you have a label that says lane
change, and then you put it in when you want to do the lane change, it'll automatically do the lane change that's appropriate for the situation. Now, to get at the problem of some humans doing bad lane changes: we haven't worked too much on this problem yet. It's not that much of a problem in practice. My theory is that all good drivers are good in the same way, and all bad drivers are bad in different ways. And we've seen some data to back this up. Well, beautifully put. So basically, if that hypothesis is true, then your task is to discover the good drivers. The good drivers stand out because they're in one cluster, and the bad drivers are scattered all over the place, and your net learns the cluster. Yeah. So you just learn from the good drivers, and they're easy to cluster? We learn from all of them, and that automatically learns the policy that's like the majority, but eventually we'll probably have to filter them out. So, if that theory is true... I hope it's true, because the counter-theory is that there are many clusters, maybe... but really, many clusters of good drivers? Because if there's one cluster of good drivers, you can at least discover a set of policies, you can learn a set of policies which would be good universally. Yeah. That would be nice if it's true. And you're saying that there is some evidence that... let's say lane changes can be clustered into four clusters? Right. There's a finite level of... I would argue that all four of those are good clusters. All the things that are random are noise and probably bad. And which one of the four you pick, or maybe it's ten, maybe it's twenty, you can learn that. It's context-dependent, it depends on the scene. And the hope is it's not too dependent on the driver? Yeah, the hope is that it all washes out. The hope is that the distribution is not bimodal. The hope is that it's a nice Gaussian. So, what advice would you give to Tesla, how to fix, how to improve navigate on autopilot? The lessons
you've learned from comma? The only real advice I would give to Tesla is: please put driver monitoring in your cars. With respect to improving it... You can't do that anymore. Sorry to interrupt, but there's a practical nature of many hundreds of thousands of cars being produced that don't have a good driver-facing camera. The Model 3 has a selfie cam. Is it not good enough? Did they not put IR LEDs for night? That's a good question, but I do know that the camera is fisheye and it's relatively low resolution, so it's really not... it wasn't designed for driver monitoring. You can hope that you can kind of scrape up and have something from it. Yeah, but put it in today. Put it in today. Today. Every time I've heard Karpathy talk about the problem, and talking about like software 2.0 and how the machine learning is gobbling up everything, I think this is absolutely the right strategy. I think that he didn't write navigate on autopilot. I think somebody else did, and kind of hacked it on top of that stuff. I think when Karpathy says, wait a second, why did we hand-code this lane change policy with all these magic numbers? We're going to learn it from data. They'll fix it. They already know what to do there. Well, that's Andrej's job, to turn everything into a learning problem and collect a huge amount of data. The reality is, though, not every problem can be turned into a learning problem in the short term. In the end, everything will be a learning problem. The reality is, if you want to build L5 vehicles today, it will likely involve no learning, and that's the reality. So at which point does learning start? It's like the crutch statement, that lidar is a crutch. At which point will learning get up to par with human performance? It's already over human performance in ImageNet classification. When will it be for driving? That is the question, still. It is a question. I'll say this: I'm here to play for ten years. I'm not here to try to... I'm here to play for ten years and make money along the
way. I'm not here to try to promise people that I'm going to have my L5 taxi network up and working in two years. Do you think that was a mistake? Yes. What do you think was the motivation behind saying that? Other companies are also promising L5 vehicles with their different approaches in 2020, 2021, 2022. If anybody would like to bet me that those things pan out, I will bet you even money. Even money. I'll bet you as much as you want. So are you worried about what's going to happen? Because you're not in full agreement on that. What's going to happen when 2021, 2022 come around and nobody has fleets of autonomous vehicles? No. You can look at the history. If you go back five years ago, they were all promised by 2017 and 2018. But they weren't that strong of promises. I mean, Ford really declared pretty... I think not many have declared these dates as definitively as they have now. Well, okay, so let's separate L4 and L5. Do I think that it's possible for Waymo to continue to kind of hack on their system until it gets to level four in Chandler, Arizona? Yes. No safety driver? Chandler, Arizona, yeah. By which year are we talking about? Oh, I even think that's possible by like 2020, 2021. But level four, Chandler, Arizona, not level five, New York City. Level four meaning some very defined streets it works out really well? Very defined streets, and then these streets are pretty empty. If most of the streets are covered in Waymos, Waymo can kind of change the definition of what driving is. Right? If your self-driving network is the majority of cars in an area, they only need to be safe with respect to each other, and all the humans will need to learn to adapt to them. Now go drive in downtown New York. Oh yeah. That's... you can talk about autonomy on, like, farms, it already works great, because you can really just follow the GPS line. So, what does success look like for comma AI? What are the milestones where you can sit back with some champagne and say, we did
it, boys and girls? Well, it's never over. Yeah, but you have to drink champagne sometime. So what are some wins? A big milestone that we're hoping for by mid next year is profitability of the company. And we're going to have to revisit the idea of selling a consumer product, but it's not going to be like the comma one. When we do it, it's going to be perfect. Openpilot has gotten so much better in the last two years. We're going to have a few features. We're going to have a hundred percent driver monitoring. We're going to disable no safety features in the car. Actually, I think it'd be really cool... what we're doing right now, our project this week, is we're analyzing the data set and looking for all the AEB triggers from the manufacturer systems. We have a better data set on that than the manufacturers. Does Toyota have ten million miles of real-world driving to know how many times their AEB triggered? So let me give you some financial advice, because I work with a lot of automakers, and one possible source of money for you, which I'd be excited to see you take on, is basically selling the data. Which is something that most people are not doing. We've done this actually at MIT, not for money purposes, but you could do it for significant money purposes and make the world a better place: by creating a consortium where automakers would pay in, and then they get to have free access to the data. And I think a lot of people are really hungry for that and would pay a significant amount of money for it. Here's the problem with that. I like this idea, all in theory. It'd be very easy for me to give them access to my servers, and we already have all open-source tools to access this data. It's in a great format, we have a great pipeline. But they're going to put me in a room with some business development guy, and I'm going to have to talk to this guy, and he's not going to know most of the words I'm saying.
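The AEB-trigger mining project mentioned above, scanning a driving data set for all the manufacturer AEB activations, amounts to counting rising edges of a flag in logged data. A minimal sketch, with an invented per-frame log layout (this is not comma's actual log schema):

```python
# Count distinct AEB trigger events in a log: a new event is a rising
# edge of the flag, not every frame during which AEB stays active.

def count_aeb_events(frames):
    """frames: iterable of per-frame dicts with a boolean 'aeb_active'."""
    events = 0
    prev = False
    for frame in frames:
        active = frame["aeb_active"]
        if active and not prev:
            events += 1
        prev = active
    return events

log = [
    {"aeb_active": False},
    {"aeb_active": True},   # event 1 begins
    {"aeb_active": True},   # still the same event
    {"aeb_active": False},
    {"aeb_active": True},   # event 2 begins
]
n = count_aeb_events(log)
```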
I'm not willing to tolerate that. Okay, but I think... I agree with you, I'm the same way, but you just tell them the terms, and there's no discussion needed. If I could just tell them the terms... Yeah, and then, like, all right, who wants access to my data? I will sell it to you for... let's say you want to call it a subscription, a hundred k a month? A hundred k a month, I'll give you access to the data subscription. Yeah, I think that's kind of fair. Came up with that number off the top of my head. If somebody sends me like a three-line email, where it's like, we would like to pay a hundred k a month to get access to your data, we would agree to reasonable privacy terms of the people who are in the data set, I would be happy to do it. But that's not going to be the email. The email is going to be, hey, do you have some time in the next month where we can sit down and... I don't have time for that. We're moving too fast. Yeah, but you could politely respond to that email, not saying I don't have any time for you, but saying, oh, well, unfortunately, these are the terms. We've brought the cost down for you in order to minimize the friction of negotiation. Here's, whatever it is, one, two million dollars a year, and you have access. And it's not like I'm going to ignore that email. But okay, am I going to reach out? Am I going to hire a business development person who's going to reach out to the automakers? No way. Yeah. Okay, if they reach out to me, I'm not going to ignore the email. I'll come back with something like, yeah, if you're willing to just pay a hundred k a month for the access, I'm happy to set that up. That's worth my engineering time. That's actually quite an insightful view. You're right. Probably because many of the automakers are quite a bit old-school. Yeah, there will need to be some reaching out. They want it, but there will need to be some communication. You're right. Mobileye, circa 2015, had the lowest R&D spend of any chipmaker, and you look at all the
people who work for them, and it's all business development people, because the car companies are impossible to work with. Yeah. So you have no patience for that, and you're legit the Android of this, huh? I have something to do, right? It's not like... I don't mean to be a dick and say I don't have patience for that, but it's like, that stuff doesn't help us with our goal of winning self-driving cars. If I wanted money in the short term... If I showed off, like, the actual learning tech that we have, it's somewhat sad. Like, it's years and years ahead of everybody else's. Maybe not Tesla's. I think Tesla has similar stuff to us, actually. I think Tesla has similar stuff, but when you compare it to like what the Toyota Research Institute has, they're not even close to what we have. No comment. But I also... I intuitively believe you, but I have to take your comments with a grain of salt. I mean, you are an inspiration, because you basically don't care about a lot of things that other companies care about. You don't try to, in a sense, make up stuff to drive a valuation. You're really very real, you're trying to solve the problem, and I admire that a lot. What I can't necessarily fully trust you on, with all due respect, is how good it is, right? I can only... I also know how bad others are. So, I'll say two things about that. Don't trust, but verify, right? I'll say two things about that. One is, try to get in a 2020 Corolla and try openpilot 0.6 when it comes out next month. I think already you'll look at this and you'll be like, this is already really good. And then, I could be doing that all with hand labelers and all with the same approach that, like, Mobileye uses. When we release a model that no longer has the lanes in it, that only outputs a path, then think about how we did that machine learning. And then, right away, when you see... and that's going to be in open
pilot, that's going to be in openpilot before 1.0. When you see that model, you'll know that everything I'm saying is true, because how else did I get that model? Good. One of the things I'm excited about, too, is the simulator. Oh yeah, that's super exciting. And, like, you know, I listened to your talk with Kyle, and Kyle was originally building the aftermarket system, and he gave up on it because of technical challenges. Yeah, because of the fact that he's going to have to support 20 to 50 cars. We support 45. Because, what's he going to do when the manufacturer's AEB system triggers? We have alerts and warnings to deal with all of that in all the cars. And how is he going to formally verify it? Well, I've got ten million miles of data. It's probably better, it's probably better verified than the spec. Yeah. I'm glad you're here talking to me. I'll remember this day. It's interesting, if you look at Kyle, from Cruise... I'm sure they have a large number of business development folks. He's working with GM. You could look at Argo AI, working with Ford. It's interesting, because the chances that you fail business-wise, like go bankrupt, are pretty high. Yeah. And yet it's the Android model: you're actually taking on the problem, so that's really inspiring. Well, I have a long-term way for comma to make money too. And one of the nice things when you really take on the problem, which is my hope for autopilot, for example: things you don't expect, ways to make money or create value that you don't expect, will pop up. Oh, I've known how to do it since kind of 2017. This is the first time I've said it. Which part, to know how to do which part? Our long-term plan is to be a car insurance company. Insurance! Yeah, I love it. Yeah. When I make driving twice as safe... Not only that, I have the best data, so I know who statistically are the safest drivers. And, oh, we see you driving unsafely, we're not going to insure you. And that
causes, like, a bifurcation in the market, because the only people who can't get comma insurance are the bad drivers. Geico can insure them; their premiums are crazy high, our premiums are crazy low. We'll win the contracts, take over that whole market. Okay, so if we win, if we win... That's what I'm saying, like, how do you turn comma into a ten billion dollar company? That's it, right. So, you, Elon Musk... who else is thinking like this and working like this, in your view? Who are the competitors? Are there people seriously... I don't think anyone that I'm aware of is seriously taking on lane keeping, you know, as, like, a huge business that turns eventually into full autonomy, that then creates other businesses on top of it and so on, thinks insurance, thinks all kinds of ideas like that. Do you know of anyone else thinking like this? Not really. That's interesting. I mean, my sense is everybody turns to that in like four or five years, like Ford, once the full autonomy bet doesn't pan out. But at this time... Elon, though, is the iOS, by the way. He paved the way for all of us. I would not be doing comma AI today if it was not for those conversations with Elon, and if it were not for him saying, like... yeah, I think he said, like, well, obviously we're not going to use lidar, we use cameras, humans use cameras. So what do you think about that? How important is lidar? Everybody else working on L5 is using lidar. What are your thoughts on his provocative statement that lidar is a crutch? See, sometimes he'll say dumb things, like the driver monitoring thing, but sometimes he'll say absolutely, completely, 100% obviously true things. Yeah, of course lidar is a crutch. It's not even a good crutch. They're not even using it... they're using it for localization, which isn't good in the first place. If you have to localize your car to centimeters in order to drive... and, yeah, they're currently not doing much machine learning with lidar data, meaning, like, to help you in the tasks
of general perception. The main goal of those lidars on those cars, I think, is actually localization more than perception, or at least that's what they use them for. Yeah, that's true. If you want to localize to centimeters, you can't use GPS. The fanciest GPS in the world can't do it, especially if you're under tree cover and stuff. Lidar can do it pretty easily. Really? So they're not... I mean, in some research they're doing, they're using it for perception, but they're certainly not... which is sad... they're not fusing it well with vision. They do use it for perception. I'm not saying they don't use it for perception, but the thing is, they have vision-based and radar-based perception systems as well. You could remove the lidar and keep around a lot of the dynamic object perception. If you want to get centimeter-accurate localization, good luck doing that with anything else. So what should Cruise and Waymo do? What would be your advice to them? Well, Waymo... I mean, they're serious. Waymo, out of all of them, is quite serious about the long game. If L5 requires fifty years, I think Waymo will be the only one left standing at the end, given the financial backing that they have, Google bucks. I'll say nice things about both Waymo and Cruise. Let's do it, nice is good. Waymo is by far the furthest along with technology. Waymo has a three-to-five-year lead on all the competitors. If the Waymo-looking stack works... maybe a three-year lead. If the Waymo-looking stack works, they have a three-year lead. Now, I argue that Waymo has spent too much money to recapitalize, to gain back their losses in those three years. Also, self-driving cars have no network effect like that. Yeah. Uber has a network effect. You have a market, you have drivers and you have riders. Self-driving cars, you have capital and you have riders. There's no network effect. If I want to blanket a new city in self-driving cars,
i buy the off-the-shelf Chinese knockoff self-driving cars and I buy enough up from the city I can't do that with drivers and that's why Ober has a first mover advantage that no self-driving car company will can you uh disentangle that a little bit uber you're not talking about uber the autonomous vehicle number you talked about the uber cars okay yeah I'm over I open for business in Austin Texas listen I need to attract both sides of the market I need to both get drivers my platform and riders on my platinum and I need to keep them both sufficiently happy right riders aren't going to use it if it takes more than five minutes for an uber to show up drivers aren't gonna use it if they have to sit around all day and there's no riders so you have to carefully balance a market and whenever you have to carefully balance a market there's a great first mover advantage because there's a switching cost for everybody right the drivers and the riders would have to switch at the same time let's even say that you know um let's say Luber shows up in Luber somehow you know agrees to do things that add a bigger you know you know we're just gonna we've done it more efficiently right Luber is only takes five percent of a cot instead of the ten percent that Hooper takes no one is gonna switch because the switching cost is higher than that five percent so you actually can in markets like that you have a first mover advantage yeah autonomous vehicles of the level five variety have no first mover advantage if the technology becomes commoditized say I want to go to a new city look at the scooters it's gonna look a lot more like scooters every person with a checkbook can blanket a city in scooters and that's why you have 10 different scooter companies yeah which one's gonna win it's a race to the bottom it's terrible market to begin because there's no market for scooters and scooters don't get a say and whether they want to be bought and deployed to a city or not right so the yeah we're 
gonna entice the scooters with subsidies and deals so whenever you have to invest that capital it doesn't come back yeah can that be your main criticism of the Waymo approach oh I'm saying even if it does technically work even if it does technically work that's a problem yeah I don't know if I were to say I would say you're already there I haven't even thought about that but I would say the bigger challenge is the technical approach so Waymo's Cruise's and not just the technical approach but of creating value I still don't understand how you beat Uber the human driven cars in terms of financially it doesn't make sense to me that people want to get an autonomous vehicle I don't understand how you make money in the long term like real long-term but it just feels like there's too much capital investment needed oh and they're gonna be worse than Ubers because they're gonna stop for every little you know thing everywhere um actually that was my nice thing about Waymo that they're three years technically ahead of everybody their tech stack is great my nice thing about Cruise is GM buying them was a great move for GM for 1 billion dollars GM bought an insurance policy against Waymo they put Cruise three years behind Waymo hmm that means Google will get a monopoly on the technology for at most three years if the technology works you might not even be right about the three years it might be less might be less Cruise actually might not be that far behind I don't know how much Waymo has waffled around or how much of it actually is just that long tail yeah okay if that's the best you could say in terms of nice things that's more of a nice thing for GM that's a smart insurance policy I mean I think that's how I can't see Cruise working out any other way for Cruise to leapfrog Waymo would really surprise
me yeah so let's talk about like the underlying assumptions of everything we're not going to leapfrog Tesla Tesla would have to seriously mess up for us to because okay so the way you leapfrog right is you come up with an idea or you take a direction perhaps secretly that the other people aren't taking and so Cruise Waymo even Aurora Aurora Zoox it's the same stack as well they're all the same codebase even and they're all the same DARPA urban challenge codebase so the question is do you think there's room for brilliance and innovation there that will change everything like say okay so I'll give you examples it could be a revolution in mapping for example that allows you to map things do HD maps of the whole world all weather conditions really well or a revolution in simulation to where what you said before becomes incorrect that kind of thing is there room for breakthrough innovation um what I said before about oh they actually get the whole thing well I'll say this about it we divide driving into three problems and I actually haven't solved the third yet but I have an idea how to do it so there's the static the static driving problem is assuming you are the only car on the road right right and this problem can be solved 100% with mapping and localization this is why farms work the way they do if all you have to deal with is the static problem and you can statically schedule your machines right it's the same as like statically scheduling processes you can statically schedule your tractors to never hit each other on their paths all right because then you know the speed they go at so that's the static driving problem maps only help you with the static driving problem yeah the question about static driving yeah you just made it sound like it's really easy it was really easy how easy how well because the whole drifting out of lane when Tesla drifts out of lane it's failing on the fundamental static driving problem Tesla is drifting out of lane
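The "static driving problem" described above, driving as if you are the only car on the road, reduces to following a pre-mapped lane centerline given a good localizer. A minimal sketch of that idea, using a textbook pure-pursuit steering law (this is an illustration, not how any of the companies discussed actually implement it; the function name and parameters are hypothetical):

```python
import math

def static_drive_step(pose, path, lookahead=4.0, wheelbase=2.7):
    """One control step of the static driving problem: given a localized
    pose (x, y, heading in radians) and a pre-mapped lane centerline
    (list of (x, y) waypoints), pick a steering angle via pure pursuit.
    In this formulation no other agents exist, only the map."""
    x, y, yaw = pose
    # Find the first mapped waypoint at least `lookahead` meters away.
    target = path[-1]
    for wx, wy in path:
        if math.hypot(wx - x, wy - y) >= lookahead:
            target = (wx, wy)
            break
    # Bearing to the target expressed in the vehicle frame.
    alpha = math.atan2(target[1] - y, target[0] - x) - yaw
    # Pure-pursuit steering law.
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

# Straight mapped lane along the x-axis.
path = [(float(i), 0.0) for i in range(20)]
# A car already centered on the lane needs zero steering.
print(static_drive_step((0.0, 0.0, 0.0), path))       # 0.0
# The same car displaced 1 m left of the lane steers right (negative).
print(static_drive_step((0.0, 1.0, 0.0), path) < 0)   # True
```

Drifting out of lane, as criticized above, corresponds to this loop failing even though the map and pose are in principle sufficient to stay centered.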
the static driving problem is not easy for the world the static driving problem is easy for one route and one route in one weather condition with one state of lane markings and like no deterioration no cracks in the road I'm assuming you have a perfect localizer so that solves the weather condition and the lane marking condition that's the problem how do you have a perfect you can build perfect localizers they're not that hard to build okay come on now with what lidar why yeah with lidar okay yeah but you use lidar right like use lidar to build a perfect localizer building a perfect localizer without lidar it's gonna be hard you can get ten centimeters without lidar you can get one centimeter with lidar I'm not so concerned about the one or ten centimeters I'm concerned if every once in a while you're just way off yeah so this is why you have to carefully make sure you're always tracking your position you want to use lidar camera fusion but you can get the reliability of that system up to a hundred thousand miles and then you write some fallback condition where it's not that bad if you're way off right I think that you can get it to the point it's like ASIL D that you're never in a case where you're way off and you don't know it yeah okay so this is brilliant so that's the static static we can especially with lidar and good HD maps you can solve that problem easy no you just the static static very typical for you to say something's easy I got it it's not as challenging as the other ones okay well okay maybe it's obvious how to solve it the third one's the hardest well there we go and a lot of people don't even think about the third one and even I see it as different from the second one so the second one is dynamic the second one is like say there's an obvious example like a car stopped at a red light right you can't have that car in your map yeah because you don't know whether that car is gonna be there or not
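The localization numbers above (roughly ten centimeters without lidar, one centimeter with it) and the "always track your position, with a fallback when you're way off" idea can be sketched with a scalar Kalman-style measurement update. This is a toy illustration under those assumed sigmas, not any company's actual localizer; the names and the 5 cm fallback threshold are made up:

```python
def fuse(est, var, meas, meas_var):
    """Standard scalar Kalman measurement update: blend the current
    estimate with a new measurement, weighted by their variances."""
    k = var / (var + meas_var)
    return est + k * (meas - est), (1.0 - k) * var

def localizer_ok(var, threshold=0.05 ** 2):
    """Hypothetical fallback trigger: if the tracked position variance
    grows past ~5 cm sigma (e.g. sensor dropout), hand off to a
    degraded-but-safe behavior instead of trusting the map."""
    return var < threshold

# Start from a coarse camera/GPS-grade fix: ~10 cm standard deviation.
pos, var = 0.10, 0.10 ** 2
# Fusing one lidar map match (~1 cm sigma) collapses the uncertainty.
pos, var = fuse(pos, var, 0.001, 0.01 ** 2)
print(var ** 0.5 < 0.011)        # True: now ~1 cm accurate
print(localizer_ok(var))         # True: safe to drive on the map
print(localizer_ok(0.2 ** 2))    # False: way off, trigger the fallback
```

The point of the fallback check is exactly the concern voiced above: the occasional large error matters more than the steady-state centimeter figure, so the system has to know when its own estimate is no longer trustworthy.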
so you have to detect that car in real time and then you have to you know do the appropriate action right also that car is not a fixed object that car may move and you have to predict what that car will do alright so this is the dynamic problem yeah you have to deal with this um this involves again like you're gonna need models of other people's behavior are you including in that and I don't want to step on the third one but are you including in that your influence on people I guess that's the third one okay we call it the counterfactual yeah I believe that I just talked to Judea Pearl who's obsessed with counterfactuals oh yeah yeah so the static and the dynamic yeah our approach right now for lateral will scale completely to the static and dynamic the counterfactual I don't have a way to do it yet but the thing that I want to do once we have all these cars is I want to do reinforcement learning on the world I'm always gonna turn the exploiter up to max I'm not gonna have them explore but the only real way to get at the counterfactual is to do reinforcement learning because the other agents are humans so that's fascinating that you break it down like that I agree completely I've spent my life thinking about this it's beautiful and part of it because you're slightly insane because not my life just the last four years no no you have like some nonzero percent of your brain has a madman in it which that's a really good feature but there's a safety component to it that I think when this sort of counterfactuals and so on that would just freak people out how do you even start to think about just in general I mean you've had some friction with NHTSA and so on I am frankly exhausted by safety engineers the prioritization on safety over innovation to a degree where it kills in my view kills safety in the long term so the counterfactual thing is just actually exploring this world of how do you interact
with dynamic objects and so on how do you think about safety you can do reinforcement learning without ever exploring and I said that like so you can think about in reinforcement learning there's usually called like a temperature parameter and your temperature parameter is how often you deviate from the argmax I could always set that to zero and still learn and I feel that you'd always want that set to zero on your actual system got you but the problem is you first don't know very much and so you're going to make mistakes so the learning the exploration happens already yeah but okay so the consequences of a mistake yeah openpilot and autopilot are making mistakes left and right yeah we have 700 daily active users a thousand weekly active users openpilot makes tens of thousands of mistakes a week these mistakes have zero consequences these mistakes are oh I wanted to take this exit and it went straight so I'm just gonna carefully touch the wheel the humans catch them and the human disengagement is labeling that reinforcement learning in a completely consequence-free way so driver monitoring is the way you ensure they keep yes they keep paying attention how is your messaging say I gave you a billion dollars you would be scaling now oh I couldn't scale with any amount of money I'd raise money if I could if I had a way to scale yeah you're not focusing I don't know how to do it oh like I guess I could sell it to more people but I want to make the system better better I don't know I mean but what's the messaging here I got a chance to talk to Elon and he basically said that the human factor doesn't matter you know the human doesn't matter because the system will perform there would be sort of a sorry to use the term but like a singularity like a point where it gets just much better and so the human won't really matter but it seems like that human catching the system when it gets
into trouble is like the thing which will make something like reinforcement learning work so how do you think messaging for Tesla for you for the industry in general should change I think my messaging is pretty clear at least like our messaging wasn't that clear in the beginning and I do kind of fault myself for that we are proud right now to be a level 2 system we are proud to be level 2 if we talk about level 4 it's not gonna be on the current hardware it's not gonna be just a magical OTA upgrade it's gonna be new hardware it's gonna be very carefully thought-out right now we are proud to be level 2 and we have a rigorous safety model I mean not like okay rigorous who knows what that means but we at least have a safety model and we make it explicit it's in SAFETY.md in openpilot and it says seriously though SAFETY.md so well this is the safety model and I like to have conversations like sometimes people will come to you and they're like your system's not safe okay have you read my safety docs would you like to have an intelligent conversation about this and the answer is always no they just like scream about it runs Python okay what so you're saying that because Python's not real-time Python not being real-time never causes disengagements disengagements are caused by you know the model is QM but SAFETY.md says the following first and foremost the driver must be paying attention at all times I do still consider the software to be alpha software until we can actually enforce that statement but I feel it's very well communicated to our users two more things one is the user must be able to easily take control of the vehicle at all times mm-hmm so if you step on the gas or brake with openpilot it gives full manual control back to the user or press the cancel button step 2 the car will never react so quickly we define so quickly to be about one second that you can't react in time and we do
this by enforcing torque limits braking limits and acceleration limits so we have um like our torque limits way lower than Tesla's this is another potential if I could tweak autopilot I would lower their torque limit or add driver monitoring um because autopilot can jerk the wheel hard yeah openpilot can't we limit it um and all this code is open source readable and I believe now it's all MISRA C compliant MISRA is like the automotive coding standard um at first I you know I've come to respect I've been reading like the standards lately and I've come to respect them they're actually written by very smart people yeah they're brilliant people actually they have a lot of experience they're sometimes a little too cautious but in this case it pays off MISRA was written by like computer scientists you can tell by the language they use they talk about like whether certain conditions in MISRA are decidable or undecidable you mean like the halting problem and yes all right you've earned my respect I will listen carefully to what you have to say and we want to make our code compliant with that all right so you're proud level two so you were the founder and I think CEO of comma.ai then you were the head of research what the heck are you now what's your connection to comma.ai the president but I'm one of those like unelected presidents of like a small dictatorship country not one of those like elected presidents oh so you're like Putin when he was yeah I gotcha sure so there's uh what's the governance structure what's the future of comma.ai financially I mean as a business are you just focused on getting things right now making some small amount of money in the meantime and then when it works it works and you scale our burn rate is about 200k a month and our revenue is about 100k a month so we need to 4x our revenue but uh we haven't like tried very hard at that yet and the revenue is
basically selling stuff online yeah we sell stuff at shop.comma.ai is there other well okay so you'll have to figure out that's our only source but to me that's like respectable revenue yeah we make it by selling products to consumers we're honest and transparent about what they are unlike most actually level four companies right because you could easily start blowing up like smoke like overselling the hype and feeding into getting some fundraisers oh you're the guy you're a genius because you hacked the iPhone oh I hate that I hate that yeah I can trade my social capital for more money yeah I did it once I almost regret doing it the first time well on a small tangent what's your you seem to not like fame and yet you're also drawn to fame where are you on that currently have you had some introspection some soul-searching yeah I actually I've come to a pretty stable position on that like after the first time I realized that I don't want attention from the masses I want attention from people who I respect who do you respect I can give a list of people so are these like Elon Musk type characters yeah well actually you know what I'll make it more broad than that I won't make it about a person I respect skill I respect people who have skills right and I would like to be I'm not gonna say famous but be like known among more people who have like real skills who in cars do you think has skill not do you respect oh Kyle Vogt has skill a lot of people at Waymo have skill and I respect them I respect them as engineers like I mean I think about all the times in my life where I've been like dead set on approaches and they turn out to be wrong so I mean I might be wrong I accept that I accept that there's a decent chance that I'm wrong and actually I mean having talked to Chris Urmson Sterling Anderson those guys I mean I deeply respect Chris I just admire the guy he's legit can you drive a
car through the desert when everybody thinks it's impossible that's legit and then I also really respect the people who are like writing the infrastructure of the world like the Linus Torvalds and the Chris Lattners they're doing the real work I know they're doing the real work having talked to Chris Lattner you realize especially when they're humble it's like you realize oh you guys are just using all the hard work they did that's incredible what do you think of Mr. Anthony Levandowski what do you he's another mad genius sharp guy oh yeah do you think he might long-term become a competitor to comma well so I think that he has the other right approach I think that right now there's two right approaches one is what we're doing and one is what he's doing can you describe I think it's called Pronto AI and they're starting a trucking thing do you know what the approach is I actually don't know Embark is also doing the same sort of thing the idea is almost that you want to so if you're I can't partner with Honda and Toyota Honda and Toyota are uh like four hundred thousand person companies it's not even a company at that point like I don't think of it like I don't personify it I think of it like an object but a trucker drives for a fleet maybe that has like some truckers are independent some truckers drive for fleets with a hundred trucks there are tons of independent trucking companies out there start a trucking company and drive your costs down or figure out how to drive down the cost of trucking another company that I really respect is Nauto I respect their business model Nauto sells a driver monitoring camera and they sell it to fleet owners if I owned a fleet of cars and I could pay you know 40 bucks a month to monitor my employees this is gonna like reduce accidents 18 percent yeah so like in that space that is like the business model that I most respect they're creating value today yeah which is
uh that's a huge one how do we create value today with some of this and the lane keeping thing is huge and it sounds like you're leaning in or full steam ahead on the driver monitoring too yeah which I think actually is where the short-term value is if you can get it right I'm a huge fan of the statement that everything needs to have driver monitoring and I agree with that completely but that statement usually misses the point that to get the experience of it right is not trivial oh no not at all in fact like so right now we have I think the timeout depends on speed of the car but we want it to depend on like the scene state if you're on like an empty highway it's very different if you don't pay attention than if you're like coming up to a traffic light and long-term it should probably learn from the driver because that's what we want to do I've watched a lot of video we've built a smartphone detector just to analyze how people are using smartphones and people are using it very differently and there's like texting styles there's videos yeah like we've got billions of miles of people driving cars in this moment I spend a large fraction of my time just watching videos because it never fails I've never failed from a video watching session to learn something I didn't know before in fact I usually like when I eat lunch I'll sit especially when the weather is good and just watch pedestrians with an eye to understand like from a computer vision eye just to see can this model can you predict what are the decisions made and there's so many things that we don't understand this is what I mean about the state vector yeah I'm trying to always think like I'm understanding in my human brain how do we convert that into how hard is the learning problem here I guess is the fundamental question so something that from a hacking perspective this always comes up especially with folks well first the most popular question is the trolley problem right so
that's not sort of a serious problem there are some ethical questions I think that arise maybe you want to mention them or do you think there's any serious ethical questions we have a solution to the trolley problem at comma.ai well so there is actually an alert in our code ethical dilemma detected it's not triggered yeah we don't know how you would detect the ethical dilemmas but we're a level two system so we're going to disengage and leave that decision to the human you're such a troll hey no but the trolley problem deserves to be trolled yeah that's a beautiful answer actually I know I gave it to someone who was like sometimes people ask like you asked about the trolley problem like you can have a kind of discussion about it like you get someone who's like really earnest about it because it's the kind of thing where if you ask a bunch of people in an office whether we should use a SQL stack or NoSQL stack if they're not that technical they have no opinion but if you ask them what color they want to paint the office everyone has an opinion on that and that's why the trolley problem is I mean it's a beautiful answer yeah we're able to detect the problem and we're able to pass it on to the human yeah I've never heard anyone say it nice escape route okay but proud level two I'm proud level two I love it so the other thing that people you know have some concern about with AI in general is hacking so how hard is it do you think to hack an autonomous vehicle either through physical access or through the more sort of popular now these adversarial examples on the sensors the adversarial examples one you want to see some adversarial examples that affect humans hmm right oh well there used to be a stop sign here but I put a black bag over the stop sign and then people ran it all right adversarial yeah right like there's tons of human adversarial examples too um the question in general about like security if you saw
something just came out today I'm like there are always such hypey headlines about like how navigate on autopilot was fooled by a GPS spoof to take an exit right at least that's all they could do was take an exit if your car is relying on GPS in order to have a safe driving policy you're doing something wrong if you're relying and this is why V2V is such a terrible idea V2V now relies on both parties getting communication right this is not even so I think of security as like a special case of safety right safety is like we put a little you know piece of caution tape around the hole so that people won't walk into it by accident security is I put a 10 foot fence around the hole so you actually physically cannot climb into it with barbed wire on the top and stuff right so like if you're designing systems that are unreliable they're definitely not secure your car should always do something safe using its local sensors and then the local sensors should be hardwired and then could somebody hack into your CAN bus and turn your steering wheel and your brakes yes but they could do it before comma.ai too so let's think out of the box on some things so do you think teleoperation has a role in any of this so remotely stepping in and controlling the cars no I think that if the safety operation by design requires a constant link to the cars I think it doesn't work so that's the same argument you're using for V2I and V2V well there's a lot of non safety critical stuff you can do with V2I I like V2I I like V2I way more than V2V because V2I is already like I already have internet in the car right there's a lot of great stuff you can do with V2I um like for example we already have V2I Waze is V2I right Waze can route me around traffic jams that's a great example of V2I mm-hmm and then okay the car automatically talks to that same service like improving the experience but it's not a fundamental fallback for safety no if
any of your things that require wireless communication are more than QM like have an ASIL rating you shouldn't you previously said that life is work and that you don't do anything to relax so how do you think about hard work well what is it what do you think it takes to accomplish great things you know there's a lot of people saying that there needs to be some balance you know in order to accomplish great things you need to take some time off to reflect and so on and then some people are just insanely working burning the candle at both ends how do you think about that I think I was trolling in the Siraj interview when I said that off camera right before we spoke a little bit and this is a joke right like I do nothing to relax look where I am I'm at a party right yeah that's true so no of course when I say that life is work though I mean that like I think that what gives my life meaning is work I don't mean that every minute of the day you should be working I actually think this is not the best way to maximize results I think that if you're working 12 hours a day you should be working smarter and not harder well so work gives you meaning for some people another source of meaning is personal relationships yeah like family and so on you've also in that interview with Siraj through the trolling mentioned that one of the things you look forward to in the future is AI girlfriends yes so that's a topic that I'm very much fascinated by not necessarily girlfriends but just forming a deep connection with AI what kind of system do you imagine when you say AI girlfriend whether you were trolling or not no that one I'm very serious about and I'm serious about that on both a shallow level and a deep level I think that VR brothels are coming soon and are gonna be really cool it's not cheating if it's a robot I see the slogan already but I don't know if you've watched the just released Black Mirror
episode I watched the one yeah yeah oh the Ashley Too one no the one where there's two friends who are having sex with each other in the VR game in the real world it's just two guys but yeah in the game one of them was a female and yeah there's another mind-blowing concept that in VR you don't have to be the form you can be two animals having sex weird I mean as long as the software maps the nerve endings right yeah yeah they sweep a lot of the fascinating really difficult technical challenges under the rug like assuming it's possible to do the mapping of the nerve endings then I wish yeah I saw that the way they did it with a little like stim unit on the head that'd be amazing so you know on a shallow level like you could set up like almost a brothel with like RealDolls and Oculus Quests right some good software I think it'd be a cool novelty experience you know on a deeper like emotional level I mean yeah I would really like to fall in love with a machine do you see yourself having a long-term relationship the kind of monogamous relationship that we have now with a robot with an AI system even not even just the robot so I think about maybe my ideal future when I was fifteen I read Eliezer Yudkowsky's early writings mmm-hmm on the singularity and like that AI is going to surpass human intelligence massively he made some Moore's law based predictions that I mostly agree with and then I really struggled for the next couple years of my life like why should I even bother to learn anything it's all gonna be meaningless when the machines show up right maybe when I was that young I was still a little bit more pure and really like clung to that and I'm like wow the machines ain't here yet you know and I seem to be pretty good at this stuff let's try my best you know like what's the worst that happens but the best possible future I see is me sort of merging with the machine and the way that I personify this is in a long-term monogamous
relationship with a machine oh you don't think there's room for another human in your life if you really truly merge with another machine I mean I see merging I see like the best interface to my brain is like the same relationship interface to merge with an AI right what does that merging feel like I see yeah I've seen couples who've been together for a long time and like I almost think of them as one person like couples who spend all their time together and that's how you're actually picturing it what does that merging actually look like it's not just a neural link like a lot of people imagine it's just an efficient link a search link to Wikipedia or something I don't believe in that but it's more you're saying that there's the same kind of relationship you have with another human that's a deep relationship that's what merging looks like that's pretty uh I don't believe that link is possible um I think that that link so you're like oh let me download Wikipedia right to my brain yeah my reading speed is not limited by my eyes my reading speed is limited by my inner processing loop and to like bootstrap that sounds kind of unclear how to do it and horrifying but if I am with somebody and I'll use the word somebody who is making a super sophisticated model of me and then running simulations on that model I'm not gonna get into the question whether the simulations are conscious or not I don't really want to know what it's doing um but using those simulations to play out hypothetical futures for me deciding what things to say to me to guide me along a path and that's how I envision it so on that path to AI of superhuman level intelligence you've mentioned that you believe in the singularity that the singularity is coming yeah again could be trolling could be not in all trolling there's truth in it I don't know what that means anymore what is the singularity yeah so that's really the question how many years do you think before the singularity what
form do you think it will take does that mean fundamental shifts in capabilities of AI does it mean some other kind of ideas um maybe this is just my roots but so I can buy a human being's worth of compute for like a million bucks today it's about one TPU pod v3 I think they claim a hundred petaflops that's being generous I think humans are actually more like twenty so that's like five humans that's pretty good Google needs to sell their TPUs um but I could buy GPUs I could buy a stack of like 1080 Tis build a data center full of them for a million bucks I can get a human worth of compute but when you look at the total number of flops in the world when you look at human flops which goes up very very slowly with the population and machine flops which goes up exponentially but it's still nowhere near I think that's the key thing to talk about the singularity happened when most flops in the world are silicon and not biological that's kind of the crossing point like they are now the dominant species on the planet and just looking at how technology is progressing when do you think that could possibly happen do you think it'll happen in your lifetime oh yeah definitely in my lifetime I've done the math I like 2038 because it's the UNIX timestamp rollover yeah beautifully put so you've said that the meaning of life is to win if you look five years into the future what does winning look like so there's a lot of I can go into like technical depth to what I mean by that to win um it may not mean I was criticized for that in the comments like doesn't this guy want to like save the penguins in Antarctica or like you know listen to what I'm saying I'm not talking about like I have a yacht or something I am an agent I am put into this world and I don't really know what my purpose is but if you're a reinforcement if you're an intelligent agent and you're put into a world what is the ideal thing to do well the
ideal thing mathematically you go back to like Schmitt Hoover theories about this is to build a compressive model of the world to build a maximally compressive to explore the world such that your exploration function maximizes the derivative of compression of the past mid Hoover has a paper about this and like I took that kind of as like a personal goal function so what I mean to win I mean like maybe maybe this is religious but like I think that in the future I might be given a real purpose or I may decide this purpose myself and then at that point now I know what the game is and I know how to win I think right now I'm still just trying to figure out what the game is but once I know so you have you have imperfect information you have a lot of uncertainty about the reward function and you're discovering it exactly the purpose is that's that's the better way to put it the purpose is to maximize it while you have it a lot of uncertainty around it and you're both reducing the uncertainty and maximizing at the same time yeah and so that's at the technical level what is the if you believe in the universal prior yeah what is the universal reward function that's the better way to put it so that when it's interesting I think I speak for everyone in saying that I wonder what that reward function is for you and I look forward to seeing that in five years in ten years I think a lot of people who do myself right and cheering you on man so I'm I'm a happy you exist and I wish you the best of luck thanks for talking today man thank you this is a lot of fun you
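The flops arithmetic George Hotz walks through above (a human brain at roughly 20 petaflops, machine compute compounding exponentially) can be sketched as a toy calculation. Every constant below is either his off-hand figure or a made-up assumption for illustration, not a forecast:

```python
# Back-of-the-envelope sketch of the "flops crossing point" argument.
# HUMAN_FLOPS is Hotz's guess from the conversation; the machine-side
# numbers are made-up assumptions, not measurements.

HUMAN_FLOPS = 20e15          # ~20 petaflops per human brain (Hotz's figure)
POPULATION = 7.7e9           # world population circa 2019, nearly flat
MACHINE_FLOPS_2019 = 1e21    # assumed total silicon flops in 2019 (made up)
MACHINE_GROWTH = 2.0         # assumed yearly doubling of machine flops (made up)

def crossing_year(start_year=2019):
    """Return the first year total machine flops exceed total biological flops."""
    bio = HUMAN_FLOPS * POPULATION           # ~1.5e26 flops, roughly constant
    machine = MACHINE_FLOPS_2019
    year = start_year
    while machine < bio:
        machine *= MACHINE_GROWTH            # exponential machine growth
        year += 1
    return year

print(crossing_year())  # 2037 with these assumptions
```

With these made-up numbers the crossover lands in the late 2030s, in the ballpark of his 2038 quip; changing the assumed growth factor or starting total moves the answer by decades, which is why this is a sketch of the argument's shape rather than a prediction.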
Kevin Scott: Microsoft CTO | Lex Fridman Podcast #30
The following is a conversation with Kevin Scott, the CTO of Microsoft. Before that, he was the senior vice president of engineering and operations at LinkedIn, and before that he oversaw mobile ads engineering at Google. He also has a podcast called Behind the Tech with Kevin Scott, which I'm a fan of. This was a fun and wide-ranging conversation that covered many aspects of computing. It happened over a month ago, before the announcement of Microsoft's investment in OpenAI that a few people have asked me about; I'm sure one or two people in the future will talk with me about the impact of that investment. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at lexfridman, spelled F-R-I-D-M-A-N. And I'd like to give a special thank you to Tom and Alon for their support of the podcast on Patreon. Thanks, Tom and Alon; hope I didn't mess up your last names too bad. Your support means a lot and inspires me to keep the series going. And now, here's my conversation with Kevin Scott. You described yourself as a kid in a candy store at Microsoft because of all the interesting projects that are going on. Can you try to do the impossible task and give a brief whirlwind view of all the spaces that Microsoft is working in, both research and product? If you include research, it becomes even more difficult. So, broadly speaking, Microsoft's product portfolio includes everything from a big cloud business, a big set of SaaS services. We have some of what are among the original productivity software products that everybody uses. We have an operating system business. We have a hardware business where we make everything from computer mice and headphones to high-end personal computers and laptops. We have a fairly broad-ranging research group, where we have people doing everything from economics research. There's a really smart young economist, Glen Weyl, who my group works with a lot, who's doing research on these things called radical markets; he's written an entire technical book about this whole notion of radical markets. So the research group sort of spans from that, to human-computer interaction, to artificial intelligence. And we have GitHub, we have LinkedIn, we have a search, advertising, and news business, and probably a bunch of stuff that I'm embarrassingly not recounting. And gaming too, Xbox and so on? Yeah, gaming for sure. I was having a super fun conversation this morning with Phil Spencer. When I was in college, there was this game that LucasArts made called Day of the Tentacle that my friends and I played forever, and we're doing some interesting collaboration now with the folks who made Day of the Tentacle. I was completely nerding out this morning with Tim Schafer, the guy who wrote Day of the Tentacle, just a complete fanboy. Which, you know, happens a lot. Microsoft has been doing so much stuff, at such breadth, for such a long period of time, that being CTO, most of the time my job is very, very serious, and sometimes I get caught up in how amazing it is to be able to have the conversations that I have with the people I get to have them with. Since you reached back into the sentimental: what's the idea of radical markets, and the economics there? So the idea with radical markets is: can you come up with new market-based mechanisms? You know, I think we're having this debate right now: does capitalism work, do free markets work, can the incentive structures that are built into these systems produce outcomes that are creating sort of equitably distributed benefits for every member of society? And I think it's a
reasonable set of questions to be asking. And so one mode of thought there is, if you have doubts that the markets are actually working, you can sort of tip towards, okay, let's become more socialist, and have central planning, and governments or some other central organization making a bunch of decisions about how work gets done and where the investments and the outputs of those investments get distributed. Glen's notion is: lean more into the market-based mechanisms. So, for instance, and this is one of the more radical ideas, suppose that you had a radical pricing mechanism for assets like real estate, where you could be bid out of your position in your home. If somebody came along and said, I can find higher economic utility for this piece of real estate that you're running your business in, then you either have to bid to stay, or the thing that's got the higher economic utility takes over the asset. That would make it very difficult to have the same sort of rent-seeking behaviors that you've got right now, because if you did speculative bidding, you would very quickly lose a whole lot of money. So the prices of the assets would be very closely indexed to the value that they can produce, and because you'd have this sort of real-time mechanism that would force you to mark the value of the asset to the market, it could be taxed appropriately. You couldn't sit on this thing and say, oh, this house is only worth ten thousand bucks when everything around it is worth ten million. Fascinating. So it's an incentive structure where the prices match the value much better. Yeah. And Glen does a much, much better job than I do at selling it, and I probably picked the world's worst example. But it's intentionally provocative. This whole notion, you know, I'm not sure whether I like the idea that we could have a set of market mechanisms where I could get bid out of my property. But if you're thinking about something like Elizabeth Warren's wealth tax, for instance, it would be really interesting how you would actually set the price on the assets, and you might have to have a mechanism like that if you put a tax like that in place. It's really interesting that that kind of research, at least tangentially, is touching Microsoft Research, that you're really thinking that broadly. Maybe you can speak to how this connects to AI. We have a candidate, Andrew Yang, who talks about artificial intelligence and the concern that people have about automation's impact on society. And arguably, Microsoft is at the cutting edge of innovation in all these kinds of ways, so it's pushing AI forward. How do you think about combining all our conversations together here, radical markets and socialism and the innovation in AI that Microsoft is doing, and then Andrew Yang's worry that it will result in job loss for the lower end, and so on? How do you think about that? I think it's one of the most important questions in technology, maybe even in society, right now: how is AI going to develop over the course of the next several decades, what's it going to be used for, what benefits will it produce, what negative impacts will it produce, and who gets to steer this whole thing? I'll say at the highest level, one of the real joys of getting to do what I do at Microsoft is that Microsoft has this heritage as a platform company. And Bill
has this thing that he said a bunch of years ago: the measure of a successful platform is that it produces far more economic value for the people who build on top of the platform than is created for the platform owner or builder. And I think we have to think about AI that way. As a platform? Yeah. It has to be a platform that other people can use to build businesses, to fulfill their creative objectives, to be entrepreneurs, to solve problems that they have in their work and in their lives. It can't be a thing where there are a handful of companies, sitting in a very small handful of cities geographically, who are making all the decisions about what goes into the AI, and then, on top of all this infrastructure, build all of the commercially valuable uses for it. I think that's bad from an economics and equitable-distribution-of-value perspective, sort of back to this whole notion of, do the markets work. But I think it's also bad from an innovation perspective, because I have infinite amounts of faith in human beings: if you give folks powerful tools, they will go do interesting things. And it's more than just a few tens of thousands of people with the interesting tools; it should be millions of people with the tools. So think about the steam engine in the late 18th century. It was maybe the first large-scale substitute for human labor that we've built, as a machine. In the beginning, when these things were getting deployed, the folks who got most of the value from the steam engines were the folks who had capital, so they could afford to build them, and they built factories and businesses around them, and the experts who knew how to build and maintain them. But access to that technology democratized over time. Now an engine is not a differentiated thing. There isn't one engine company that builds all the engines, where all of the things that use engines are made by that company and they get all the economics from all of that. Engines fully democratized. We're sitting here in this room, and, like the MEMS gyroscopes that are in both of our phones, there are little engines sort of everywhere. They're just a component in how we build the modern world. AI needs to get there. Yeah, that's a really powerful way to think about it. If we think of AI as a platform, versus a tool that Microsoft owns: as a platform that enables creation on top of it, that's a way to democratize it. That's really interesting, actually, and Microsoft, in its history, has been positioned well to do that. And the tie-back to this radical markets thing: my team has been working with Glen on this, and Jaron Lanier actually. Jaron is the sort of father of virtual reality. He's one of the most interesting human beings on the planet, a sweet, sweet guy. And so Jaron and Glen and folks on my team have been working on this notion of data as labor, or, as they call it, data dignity as well. The idea is that, again going back to this industrial analogy, if you think about data as the raw material that is consumed by the machine of AI in order to do useful things, then we're not doing a really great job right now of having transparent marketplaces for valuing those data contributions. And we all make them, explicitly. You go to LinkedIn, you set up your profile on LinkedIn: that's an explicit contribution. You know exactly the information that you're putting into the system, and you put it there because you have some nominal notion of what value you're gonna get
in return, but it's only nominal. You don't know exactly what value you're getting in return. The service is free, you know; there's a low amount of friction. And then you've got all this indirect contribution that you're making just by virtue of interacting with all of the technology that's in your daily life. So what Glen and Jaron and this data dignity team are trying to do is: can we figure out a set of mechanisms that lets us value those data contributions, so that you could create an economy, and a set of controls and incentives, that would allow people to, maybe even in the limit, earn part of their living through the data that they're creating? You can see it in explicit ways. There are these companies, like Scale AI, and there are a whole bunch of them in China right now, that are basically data-labeling companies. If you're doing supervised machine learning, you need lots and lots of labeled training data, and the people who work for those companies are getting compensated for their data contributions into the system. And so it's easier to put a number on their contribution, because they're explicitly labeling the data. Correct. But you're saying that we're all contributing data in all kinds of ways, and it's fascinating to start to explicitly try to put a number on it. Do you think that's possible? I don't know. It's hard. It really is, because we don't have as much transparency as I think we need into how the data is getting used, and it's super complicated. I think, as technologists, we sort of appreciate some of the subtlety there. The data gets created, and then, you know, the data exhaust that you give off, or the explicit data that I am putting into the system, isn't valuable atomically. It's only valuable when you aggregate it together in sort of large numbers. That's true even for these folks who are getting compensated for labeling things for supervised machine learning; you need lots of labels to train a model that performs well. And so I think that's one of the challenges: because this data is getting combined in so many ways, how do you figure out, through these combinations, how the value is flowing? Yeah, that's tough. Yeah, and it's fascinating that you're thinking about this. I wasn't even going into this conversation expecting the breadth of research, really, that Microsoft, broadly, is thinking about, that you are thinking about at Microsoft. So if we go back to 1989, when Microsoft released Office, or 1990, when they released Windows 3.0: in your view, and I know you weren't there for the entire history, how has the company changed in the 30 years since, as you look at it now? The good thing is, it started off as a platform company, and it's still a platform company. The parts of the business that are thriving and most successful are those that are building platforms. The mission of the company now, the mission has changed; it's changing in a very interesting way. Back in '89, '90, they were still on the original mission, which was: put a PC on every desk and in every home. It was basically about democratizing access to this new personal computing technology. When Bill started the company, integrated circuit microprocessors were a brand-new thing, and people were building homebrew computers from kits, the way people build ham radios right now. And I think this is sort of the interesting thing for folks who build platforms in general: Bill saw the opportunity there, and what personal computers could do. And it was sort of a reach. You just
sort of imagine where things were when they started the company versus where things are now. And in success, when you've democratized a platform, it just sort of vanishes into the platform. You don't pay attention to it anymore. Operating systems aren't a thing anymore. They're super important, completely critical, and when you see one fail, you sort of understand; but it's not a thing where you're waiting for the next operating system release in the same way that you were in 1995. In 1995, we had the Rolling Stones on the stage with the Windows 95 rollout. It was the biggest thing in the world. Everybody was lined up for it the way that people used to line up for the iPhone. But eventually, and this isn't necessarily a bad thing, the success is that it becomes ubiquitous. It's everywhere. And human beings, when their technology becomes ubiquitous, just sort of start taking it for granted. So the mission now, which Satya articulated five-plus years ago when he took over as CEO of the company: the mission is to empower every individual and every organization in the world to be more successful. Again, that's a platform mission, and the way that we do it now is different. We have a hyperscale cloud that people are building their applications on top of. We have a bunch of AI infrastructure that people are building their AI applications on top of. We have a productivity suite of software, like Microsoft Dynamics, which some people might not think is the sexiest thing in the world, but it's helping people figure out how to automate all of their business processes and workflows, and helping the businesses using it to grow and be more successful. So it's a much broader vision, in a way, now than it was back then; that was a very particular thing. Now we live in this world where technology is so powerful, and it's such a basic fact of life, that it both exists and is going to get better and better over time, or at least more and more powerful over time. So what you have to do as a platform player is just much bigger. Right, there are so many directions in which you can transform. You didn't mention mixed reality. That's probably early days, or it depends how you think of it, but if we think on a scale of centuries, it's just the early days of mixed reality. Oh, for sure. And so with HoloLens, Microsoft is doing some really interesting work there. Do you touch that part of the effort? What's the thinking? Do you think of mixed reality as a platform too? Oh, sure. When we look at what the platforms of the future could be, it's fairly obvious that AI is one. I mean, you sort of say it to someone and they get it. But we also think of mixed reality and quantum as these two other interesting, potential computing platforms. Okay, so let's get crazy then. So you're talking about some futuristic things here. Well, the mixed reality Microsoft is doing really isn't even futuristic; it's here. It is incredible stuff, and look, it's having impact right now. One of the more interesting things that's happened with mixed reality over the past couple of years, that I didn't clearly see coming, is that it's become the computing device for folks who haven't used any computing device at all to do their work before: technicians, and service folks, and people who are doing machine maintenance on factory floors. Because they're mobile, and they're out in the world, and they're working with their hands, servicing these very
complicated things, they don't use their mobile phone, and they don't carry a laptop with them, and they're not tethered to a desk. And so mixed reality, where it's getting traction right now, where HoloLens is selling a lot of units, is for these sorts of applications for these workers. And it's become, I mean, the people love it. They're like, oh my god. For them it's the same sort of productivity boost that an office worker got with their first personal computer. Yeah. But you did mention it's really obvious, AI as a platform. Can we dig into it a little bit? How does AI begin to infuse some of the products in Microsoft? So currently you're providing training of, for example, neural networks in the cloud, providing pre-trained models, or just even providing computing resources for whatever inference you want to do using neural networks. How do you think of AI infusing things, as a platform that Microsoft can provide? Yeah, I mean, I think it's super interesting. It's everywhere. We run these review meetings now where it's me and Satya and members of Satya's leadership team, and a cross-functional group of folks across the entire company who are working on either AI infrastructure or who have some substantial part of their product work using AI in some significant way. Now, the important thing to understand is, when you think about how the AI is going to manifest in an experience, in something that's going to make it better: I think you don't want the AI-ness to be the first-order thing. It's whatever the product is, and the thing that it's trying to help you do; the AI just sort of makes it better. And, you know, this is a gross exaggeration, but people get super excited about where the AI is showing up in products, and I'm like, do you get that excited about where you're using a hash table in your code? It's just another tool. It's a very interesting programming tool, but it's sort of an engineering tool. And so it shows up everywhere. We've got dozens and dozens of features now in Office that are powered by fairly sophisticated machine learning. Our search engine wouldn't work at all if you took the machine learning out of it. And, increasingly, things like content moderation on our Xbox and xCloud platform. When you say moderation, do you mean like the recommender, showing what you want to look at next? No, no, it's like anti-bullying stuff. So the usual social network stuff that you have to deal with. Yeah, correct. But it's targeted, it's targeted towards a gaming audience. So it's a very particular type of thing, where the line between playful banter and legitimate bullying is a subtle one, and you have to... it's sort of tough. I'd love it if we could dig into that, because you also led the engineering efforts at LinkedIn, and if we look at LinkedIn as a social network, and we look at Xbox gaming as the social component: there are very different kinds of communication, I imagine, going on on the two platforms, and the line in terms of bullying and so on is different on the two platforms. So how do you... I mean, it's such a fascinating philosophical discussion of where that line is. I don't think anyone knows the right answer. Twitter folks are under fire now, Jack at Twitter, for trying to find that line. Nobody knows what that line is. But how do you try to find the line between preventing abusive behavior and, at the same time, letting people be playful and joke around and that kind of thing? I think in a certain way, if you have what I would call vertical social networks, it gets to be a little bit easier. If you have a clear notion of what your social network should be
used for, or what you are designing a community around, then you don't have as many dimensions to your content safety problem as you do in a general-purpose platform. On LinkedIn, the whole social network is about connecting people with opportunity, whether it's helping them find a job, or find mentors, or find their next sales lead, or just allowing them to broadcast their professional identity to their network of peers and collaborators and professional community. That is, in some ways, very, very broad, but in other ways it's narrow. And so you can build AIs, machine learning systems, that are capable, within those boundaries, of making better automated decisions about what are inappropriate and offensive comments, or dangerous comments, or illegal content, because you have some constraints. Same thing with the gaming social network: it's about playing games and having fun, and bullying is the thing that you don't want to have happen on the platform. That's why bullying is such an important thing: bullying is not fun, so you want to do everything in your power to encourage it not to happen. But I think it's a tough problem in general. It's one where, I think, eventually we're going to have to have some sort of clarification from our policymakers about what it is that we should be doing, where the lines are, because it's tough. In a democracy, you want some sort of democratic involvement. People should have a say in where the lines are drawn. You don't want a bunch of people making unilateral decisions. And we are in a state right now, for some of these platforms, where you actually do have to make unilateral decisions, where the policy-making isn't going to happen fast enough in order to prevent very bad things from happening. But we need the policy-making side of that to catch up, I think, as quickly as possible, because you want that whole process to be a democratic thing, not some sort of weird thing where you've got a non-representative group of people making decisions that have national and global impact. It's fascinating, because the digital space is different than the physical space in which nations and governments were established. So what policy looks like globally, what bullying looks like globally, what healthy communication looks like globally, is an open question, and we're all figuring it out. Yeah, I mean, with fake news, for instance. And deepfakes, and fake news generated by humans? Yeah, so we can talk about deepfakes; I think that is another very interesting level of complexity. But think about just the written word. We invented papyrus, what, three thousand years ago, where you could put word on paper. Then five hundred years ago we got the printing press, where the word got a little bit more ubiquitous. And then you really didn't get ubiquitous printed word until the end of the nineteenth century, when the offset press was invented, and then it just explodes. The cross-product of that and the industrial revolution's need for educated citizens resulted in this rapid expansion of literacy and the rapid expansion of the word. But we had three thousand years up to that point to figure out: what's journalism, what's editorial integrity, what's scientific peer review. And so we built all of this mechanism to try to filter through all of the noise that
the technology made possible, to get to something that society could cope with. And think about the PC: the PC didn't exist fifty years ago. So in this span of half a century, we've gone from no ubiquitous digital technology to having a device that sits in your pocket where you can say whatever is on your mind. Mary Meeker just released her new slide deck last week: we've got fifty percent penetration of the internet to the global population. There are something like three and a half billion people who are connected now. It's crazy, sort of inconceivable, how fast all of this happened. So it's not surprising that we haven't figured out what to do yet, but we've got to really lean into this set of problems, because we basically have three millennia's worth of work to do about how to deal with all of this, in probably what amounts to the next decade's worth of time. So, since we're on the topic of tough, challenging problems, let's look at something more on the tooling side in AI that Microsoft is looking at: face recognition software. There are a lot of powerful, positive use cases for face recognition, but there are some negative ones, and we're seeing those in different governments in the world. So how does Microsoft think about the use of face recognition software as a platform, in governments and companies? How do we strike an ethical balance here? Yeah, I think we've articulated a clear point of view. Brad Smith wrote a blog post last fall, I believe, that outlined very specifically what our point of view is there. And I think we believe that there are certain uses to which face recognition should not be put, and we believe, again, that there's a need for regulation there. The government should really come in and say, this is where the lines are. And we very much want figuring out where the lines are to be a democratic process. But in the short term, we've drawn some lines ourselves, where we push back against uses of face recognition technology. The city of San Francisco, for instance, I think has completely outlawed any government agency from using face recognition tech, and that may prove to be a little bit overly broad. But for certain law enforcement things, I would personally rather be overly cautious in terms of restricting use of it, until we have defined a reasonable, democratically determined regulatory framework for where we could and should use it. And the other thing there is, we've got a bunch of research that we're doing, and a bunch of progress that we've made, on bias. There are all sorts of weird biases that these models can have, all the way from the most noteworthy one, where you may have underrepresented minorities who are underrepresented in the training data, and then you start learning strange things. But there are even other weird things. I think we've seen in the public research that models can learn strange things, like "all doctors are men," for instance. Yeah. I mean, so it really is a thing where it's very important for everybody who is working on these things, before they push publish, before they launch the experiment, before they push the code online, or even publish the paper, that they are at least starting to think about what some of the potential negative consequences are of some of this stuff. This is where the deepfake stuff I find very worrisome, just because there are going to be some very good, beneficial uses of GAN-generated imagery. And, funny enough, one of the places where it's
actually useful is we're using the technology right now to generate synthetic visual data for training some of the face recognition models to get rid of the bias right so like that's one like super good use of the tech but like you know it's getting good enough now where you know it's gonna sort of challenge normal human beings' ability to tell like right now you can just sort of say like it's very expensive for someone to fabricate a photorealistic fake video and like GANs are gonna make it fantastically cheap to fabricate a photorealistic fake video and so like what you assume you can sort of trust as true versus like be skeptical about is about to change yeah and like we're not ready for it I don't think the nature of truth right that's uh it's also exciting because I think both you and I probably would agree that the way to take on that challenge is with technology yeah right there's probably going to be ideas of ways to verify which kind of video is legitimate and which kind is not so to me that's an exciting possibility most likely for just the comedic genius that the internet usually creates with these kinds of videos yeah and hopefully it will not result in any serious harm yeah and it could be you know like I think we will have technology too that may be able to detect whether or not something's fake or real although yeah the fakes are pretty convincing even like when you subject them to machine scrutiny but you know we also have these increasingly interesting social networks you know that are under fire right now for some of the bad things that they do like one of the things you could choose to do with a social network is like you could use cryptography in the networks to like have content signed where you could have a like full chain of custody that accompanied every piece of content so like when you're viewing something and like you want to ask yourself like how you know how much can I trust this like you can click
something and like have a verified chain of custody that shows like oh this is coming from you know from this source and it's like signed by someone whose identity I trust yeah yeah I think having that chain of custody like being able to say oh here's this video like it may or may not have been produced using some of this deep fake technology but if you've got a verified chain of custody where you can sort of trace it all the way back to an identity and you can decide whether or not like I trust this identity like oh this is really from the White House or like this is really from you know the office of this particular presidential candidate or it's really from you know Jeff Weiner CEO of LinkedIn or Satya Nadella CEO of Microsoft like that might be like one way that you can solve some of the problems and so like that's not super high tech like we've had all of this technology forever but I think you're right like it has to be some sort of technological thing because the underlying tech that is used to create this isn't going to do anything but get better over time and the genie is sort of out of the bottle there's no stuffing it back in and there's a social component which I think is really healthy for a democracy where people are skeptical about the things they watch yeah in general so you know which is good skepticism in general is good so deep fakes in that sense creating global skepticism about whether people can trust what they read encourages further research I come from the Soviet Union where basically nobody trusted the media because you knew it was propaganda and that encouraged that kind of skepticism encouraged further research about ideas yeah versus just trusting any one source look I think it's one of the reasons why you know the scientific method and our apparatus of modern science is so good like because you don't have to trust anything like you
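the chain-of-custody idea discussed above can be sketched in a few lines as a toy hash chain plus signatures everything here is my own illustrative assumption not anything Microsoft or the speakers actually describe in particular HMAC with a shared secret stands in for a real public-key digital signature like Ed25519 and the signer names and key registry are made up

```python
import hashlib
import hmac
import json

def fingerprint(content: bytes) -> str:
    # hash of the media file itself, binding the record to the exact bytes
    return hashlib.sha256(content).hexdigest()

def sign_record(record: dict, key: bytes) -> str:
    # canonical JSON so the same record always signs the same way
    # (HMAC used here as a stand-in for a real asymmetric signature)
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def add_link(chain: list, content: bytes, signer: str, key: bytes) -> list:
    """Append one custody step, chained to the previous step's signature."""
    record = {
        "content": fingerprint(content),
        "signer": signer,
        "prev": chain[-1]["sig"] if chain else "",
    }
    record["sig"] = sign_record(
        {k: record[k] for k in ("content", "signer", "prev")}, key
    )
    chain.append(record)
    return chain

def verify_chain(chain: list, keys: dict) -> bool:
    """Re-check every prev pointer and every signature in order."""
    prev = ""
    for rec in chain:
        body = {"content": rec["content"], "signer": rec["signer"], "prev": rec["prev"]}
        if rec["prev"] != prev:
            return False
        if not hmac.compare_digest(sign_record(body, keys[rec["signer"]]), rec["sig"]):
            return False
        prev = rec["sig"]
    return True
```

with a hypothetical key registry a viewer could then trace a clip back through every step of editing and publishing and any tampering with an earlier link invalidates its signature
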
like the whole notion of you know like modern science beyond the fact that you know this is a hypothesis and this is an experiment to test the hypothesis and you know like this is a peer review process for scrutinizing published results but like stuff is also supposed to be reproducible so like you know it's been vetted by this process but like you also are expected to publish enough detail where you know if you are sufficiently skeptical of the thing you can go try to like reproduce it yourself and like I don't know what it is like I think a lot of engineers are like this where like you know sort of your brain is sort of wired for skepticism like you don't just first order trust everything that you see and encounter and like you're sort of curious to understand you know the next thing but like I think it's an entirely healthy thing and like we need a little bit more of that right now so I'm not a large business owner I'm just a huge fan of many of Microsoft's products I mean I generate a lot of graphics and images and I still use PowerPoint to do that it beats Illustrator for me even for sort of professional work which is fascinating so I wonder what does the future of let's say Windows and Office look like do you see it I mean I remember looking forward to XP it was exciting when XP was released just like you said I don't remember when 95 was released but XP for me it was a big celebration and when 10 came out I was like okay it's a nice improvement but yeah so what do you see as the future of these products and you know I think there's a bunch of exciting stuff I mean on the Office front there are going to be these like increasing productivity wins that are coming out of some of these AI powered features that are coming like the products sort of get smarter and smarter in like a very subtle way like there's not gonna be this Big Bang moment where you know like Clippy is gonna be reimagined it's
gonna wait a minute okay wait wait wait is Clippy coming back but quite seriously so in terms of injection of AI there's not much or at least I'm not familiar with sort of assistive type of stuff going on inside the Office products in like a Clippy style assistant personal assistant do you think there's a possibility of that in the future alright so I think there are a bunch of like very small ways in which like machine learning powered assistive things are in the product right now so there are a bunch of interesting things like the auto response stuff's getting better and better and it's like getting to the point where you know it can auto respond with like okay you know this person is clearly trying to schedule a meeting so it looks at your calendar and it automatically like tries to find a time and a space that's mutually interesting like we have this notion of Microsoft Search where it's like not just web search but it's like search across like all of your information that's sitting inside of like your Office 365 tenant and like you know potentially in other products and like we have this thing called the Microsoft Graph that is basically an API you know sort of federated that gets you hooked up across the entire breadth of like all of the you know what were information silos before they got woven together with the graph like that is like getting with increasing effectiveness sort of plumbed into some of these auto-response things where you're gonna be able to see the system like automatically retrieve information for you like if you know like I frequently send out you know emails to folks where like I can't find a paper or a document or whatnot there's no reason why the system won't be able to do that for you and like I think it's building towards like having things that look more like a fully integrated you know assistant but like you'll have a bunch of steps that you will see before then like it will not be
this like Big Bang thing where like Clippy comes back and you've got this like you know manifestation of like a fully powered assistant so I think that's definitely coming and like all of the you know collaboration co-authoring stuff's getting better you know it's like really interesting like if you look at how we use like the Office product portfolio at Microsoft like more and more of it is happening inside of like Teams as a canvas and like it's this thing where you know you've got collaboration like at the center of the product and like we built some like really cool stuff some of which is about to be open sourced that are sort of framework level things for doing co-authoring so is there a cloud component to that so on the web or is it forgive me if I don't already know this but with Office 365 for the collaboration if you're doing Word do you still send the file around no we're already a little bit better than that and like you know so the fact that you're unaware of it means we've got a better job to do of like helping you discover this stuff but yeah I mean it's already like got a huge cloud component and like part of this framework stuff I think we've been working on it for a couple years so like I know the internal code name for it but I think when we launch it publicly it's called the Fluid Framework but like what Fluid lets you do is like you can go into a conversation that you're having in Teams and like reference like part of a spreadsheet that you're working on where somebody's like sitting in the Excel canvas like working on the spreadsheet with you know a chart or whatnot and like you can sort of embed like part of the spreadsheet in the Teams conversation where like you can dynamically update it and like all of the changes that you're making to this object are like you know coordinated and everything
is sort of updating in real time so you can be in whatever canvas is most convenient for you to get your work done so out of my own sort of curiosity as an engineer I know what it's like to sort of lead a team of 10 to 50 engineers Microsoft has I don't know what the numbers are maybe fifty maybe sixty thousand engineers maybe a lot more I don't know exactly what the number is it's a lot it's tens of thousands right this is more than ten or fifteen I mean you've led groups of different sizes mostly large groups of engineers what does it take to lead such a large group into continued innovation continuing to be highly productive and yet developing all kinds of new ideas what does it take to lead such a large group of brilliant people I think the thing that you learn as you manage larger and larger scale is that there are three things that are like very very important for big engineering teams like one is like having some sort of forethought about what it is that you're gonna be building over large periods of time like not exactly like you don't need to know that like you know I'm putting all my chips on this one product and like this is gonna be the thing but like it's useful to know like what sort of capabilities you think you're going to need to have to build the products of the future and then like invest in that infrastructure and like I'm not just talking about storage systems or cloud APIs it's also like what does your development process look like what tools do you want like what culture do you want to build around like how you're you know sort of collaborating together to like make complicated technical things and so like having an opinion and investing in that just gets more and more important and like the sooner you can get a concrete set of opinions like the better you're going to be like you can wing it for a while at small scales like you know when you start a company like you don't have to be like
super specific about it but like the biggest miseries that I've ever seen as an engineering leader are in places where you didn't have a clear enough opinion about those things soon enough and then you just sort of go create a bunch of technical debt and like culture debt that is excruciatingly painful to clean up so like that's one bundle of things like the other you know bundle of things is like it's just really really important to like have a clear mission that's not just some cute crap you say because like you think you should have a mission but like something that clarifies for people like where it is that you're headed together like I know it's like probably like a little bit too popular right now but Yuval Harari's book Sapiens one of the central ideas in his book is that like storytelling is like the quintessential thing for coordinating the activities of large groups of people like once you get past Dunbar's number and like I've really really seen that just managing engineering teams like you can just brute force things when you're less than 120 150 folks where you can sort of know and trust and understand what the dynamics are between all the people but like past that like things just sort of start to catastrophically fail if you don't have some sort of set of shared goals that you're marching towards and so like even though it sounds touchy-feely and you know like a bunch of technical people will sort of balk at the idea that like you need to have a clear mission it's like very very very important Yuval Harari right we write stories that's how our society works the fabric that connects all of us is these powerful stories and that works for companies too and it works for everything like I mean even down to like you know if you really think about it a currency for instance is a story a constitution is a story our laws are a story I mean like we believe very very very strongly in them and
thank God we do but like they are just abstract things like they're just words if we don't believe in them they're nothing and in some sense those stories are platforms the kinds of platforms some of which Microsoft is creating right you have platforms on which we define the future so last question maybe philosophical maybe bigger than you know Microsoft what do you think the next 20 30 plus years look like for computing for technology for devices do you have crazy ideas about the future of the world yeah look I think we're entering this time where we have technology that is progressing at the fastest rate that it ever has and you've got some really big social problems like society scale problems that we have to tackle and so you know I think we're gonna rise to the challenge and like figure out how to intersect like all of the power of this technology with all of the big challenges that are facing us whether it's you know global warming whether it's like the biggest remainder of the population boom is in Africa for the next 50 years or so and like global warming is gonna make it increasingly difficult to feed the global population in particular like in this place where you're gonna have like the biggest population boom I think you know like AI if we push it in the right direction can do like incredible things to empower all of us to achieve our full potential and to you know like live better lives but like that also means focusing on like some super important things like how can you apply it to health care to make sure that you know like care quality and cost and sort of ubiquity of health coverage is better and better over time like that's more and more important every day like in the United States and like the rest of the industrialized world also in Europe China Japan Korea like you've got this population bubble of like aging you know working aged
folks who are you know at some point over the next 20 30 years they're gonna be largely retired and like you're gonna have more retired people than working age people and then like you've got you know sort of natural questions about who's gonna take care of all the old folks and who's gonna do all the work and the answers to like all of these sorts of questions where you're sort of running into you know constraints of the world and of society have always been like what tech is gonna like help us get around this you know like when I was a kid in the seventies and eighties like we talked all the time about like oh population boom population boom like we're not gonna be able to feed the planet and like we were right in the middle of the Green Revolution this massive technology driven increase in crop productivity like worldwide and like some of that was like taking some of the things that we knew in the West and like getting them distributed to the you know to the developing world and like part of it were things like you know just smarter biology like helping us increase yields and like we don't talk about overpopulation anymore because like we more or less sort of figured out how to feed the world like that's a technology story and so like I'm super super hopeful about the future and in the ways where we will be able to apply technology to solve some of these super challenging problems like one of the things that I'm trying to spend my time doing right now is trying to get everybody else to be hopeful as well because you know back to Harari like we are the stories that we tell like if we get overly pessimistic right now about like the potential future of technology like we may fail to get all the things in place that we need to like have our best possible future and that kind of
hopeful optimism I'm glad that you have it because you're leading large groups of engineers that are actually defining that are writing that story that are helping build that future which is super exciting and I agree with everything you said except I do hope Clippy comes back we miss him I speak for the people Kevin thank you so much for talking today thank you so much for having me it was a pleasure
Gustav Soderstrom: Spotify | Lex Fridman Podcast #29
the following is a conversation with Gustav Söderström he's the chief research and development officer of Spotify leading their product design data technology and engineering teams as I've said before in my research and in life in general I love music listening to it and creating it and using technology especially personalization through machine learning to enrich the music discovery and listening experience that is what Spotify has been doing for years continually innovating defining how we experience music as a society in the digital age that's what Gustav and I talk about among many other topics including our shared appreciation of the movie true romance in my view one of the great movies of all time this is the artificial intelligence podcast if you enjoy it subscribe on youtube give it five stars on itunes support on patreon or simply connect with me on twitter at lex fridman spelled f-r-i-d-m-a-n and now here's my conversation with Gustav Söderström Spotify has over 50 million songs in its catalogue so let me ask the all-important question I feel like you're the right person to ask what is the definitive greatest song of all time it varies for me personally I can't speak definitively for everyone I wouldn't believe very much in machine learning if I did right because then everyone would have the same taste so for you what is it you have to pick what is the song alright so it's pretty easy for me there is this song called you're so cool from the Hans Zimmer soundtrack to true romance it was a movie that made a big impression on me and it's kind of been following me through my life I actually had it played at my wedding I sat with the organist and helped them play it on an organ which was a pretty interesting experience that is probably my I would say top 3 movie of all time yeah it's just incredible yeah and then it came out during my formative years and as I've discovered in music you shape your music taste during those years so it definitely affected me quite a bit did it affect
you in any other kind of way well the movie itself affected me back then it was a big part of culture I didn't really adopt any characters from the movie it was a great story of love fantastic actors and you know really I didn't even know who Hans Zimmer was at the time but fantastic music and so that song has followed me and the movie actually followed me throughout my life that was Quentin Tarantino actually I think who directed or produced that so it's not stairway to heaven or Bohemian Rhapsody those are great they're not my personal favorites but I realize that people have different tastes and that's a big part of what we do well for me I have to stick with stairway to heaven so 35,000 years ago I looked this up on Wikipedia flute like instruments started being used in caves as part of hunting rituals then primitive cultural gatherings things like that this is the birth of music since then we had a few folks Beethoven Elvis Beatles Justin Bieber of course Drake so in your view let's start high level philosophical what is the purpose of music on this planet of ours I think music has many different purposes I think there's certainly a big purpose which is the same as a lot of entertainment which is escapism and to be able to live in some sort of other mental state for a while but I also think you have the opposite of escaping which is to help you focus on something you are actually doing and so I think people use music as a tool to tune the brain to the activities that they are actually doing and it's kind of like in one sense maybe it's the rawest signal if you think about the brain as a neural network it's maybe the most efficient hack we can do to actually actively tune it into some state that you want to be in you can do it in other ways you can tell stories to put people in a certain mood but music is probably very efficient to get to a certain mood very fast and you know
there's uh there's a social component historically to music where people listen to music together I was just thinking about this that to me you mentioned machine learning but to me personally music is a really private thing I'm speaking for myself I listen to music like almost nobody knows the kind of things I have in my library except people who are really close to me and they really only know a certain percentage there's like some weird stuff that I'm almost probably embarrassed by right it's called guilty pleasures everyone has the guilty pleasures yes hopefully they're not too bad but for me it's personal do you think of music as something that's social or as something that's personal so I think it's the same answer that you use it for both we've thought a lot about this during these 10 years at Spotify obviously in one sense as you said music is incredibly social you go to concerts and so forth on the other hand it is your escape and everyone has these things that are very personal to them so what we've found is that most people claim that they have a friend or two that they are heavily inspired by and that they listen to so I actually think music is very social but in a smaller group setting it's an intimate form of it's an intimate relationship it's not something that you necessarily share broadly now at concerts you can argue you do but then you've gathered a lot of people that you have something in common with I think this broadcast sharing of music is something we tried on social networks and so forth but it turns out that people aren't super interested in what their friends listen to they're interested in understanding if they have something in common with a friend but not you know not just as information right that's really interesting I was just thinking about this morning listening to Spotify I really have a pretty intimate
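the point above that people care less about a feed of what a friend plays and more about whether they share taste with that friend can be illustrated with a tiny set-overlap measure Jaccard similarity here is my own illustrative choice not anything Spotify is described as using and the artist sets are made up

```python
def taste_overlap(a: set, b: set) -> float:
    """Jaccard similarity between two listeners' artist sets (0.0 to 1.0)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# hypothetical libraries: 2 shared artists out of 4 distinct ones -> 0.5
mine = {"led zeppelin", "queen", "hans zimmer"}
friend = {"queen", "hans zimmer", "avicii"}
print(taste_overlap(mine, friend))  # 0.5
```

a single number like this answers do we have something in common without exposing either person's full and possibly embarrassing library which matches the intimacy point being made
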
relationship with Spotify with my playlists right I've had them for many years now and they've grown with me together there's an intimate relationship you have with a library of music you've developed and we'll talk about different ways to play with that can you do the impossible task and try to give a history of music listening from your perspective from before the Internet and after the Internet and just kind of everything leading up to streaming and Spotify I'll try it it could be a 100 year podcast yeah I'll try to do a brief version there are some things that I think are very interesting during the history of music which is that before recorded music to be able to enjoy music you actually had to be where the music was produced because you couldn't record it and time-shift it right creation and consumption had to happen at the same time basically concerts and so you had to get to the nearest village to listen to music and while that was cumbersome and it severely limited the distribution of music it also had some different qualities which was that the creator could always interact with the audience it was always live and also there was no time cap on the music so I think it's not a coincidence that these early classical works are much longer than the three minutes the three minutes came in as a restriction of the first wax disc that could only contain a three minute song on one side right so actually recorded music severely constrained I won't say limited I mean constraints are often good but it put very hard constraints on the music format so you kind of said instead of doing these opuses of many tens of minutes or something now you get three and a half minutes because then you're out of wax on this disc but in return you get amazing distribution your reach would widen right just on that point real quick without the mass scale distribution there's a scarcity component where you kind of look
forward to what we had it's like the Netflix versus HBO Game of Thrones you like wait for the event because you can't really listen to it yet you're like looking forward to it and then you derive perhaps more pleasure because it's more rare for you to listen to a particular piece do you think there's value to that scarcity yeah I think that is definitely a thing and there's always this component of if you have something in infinite amount will you value it as much probably not humanity is always seeking something everything is relative so you're always seeking something you didn't have and when you have it you don't appreciate it as much I think that's probably true but I think that's why concerts exist so you can actually have both but I think net if you couldn't listen to music in your car driving that'd be worse that cost would be bigger than the benefit of the anticipation I think that you would have so yep it started with live concerts then the phonograph was invented right you started to be able to record music exactly so then you got this massive distribution that made it possible to create two things I think first of all cultural phenomenons they probably need distribution to be able to happen but it also opened access for a new kind of artist so you started to have these phenomenons like the Beatles and Elvis and so forth that were really a function of distribution I think obviously of talent and innovation but there was also a technology component and of course the next big innovation that came along was radio broadcast radio and I think radio is interesting because it started not as a music medium it started as an information medium for news and then radio needed to find something to fill the time with so that they could honestly play more ads and make more money and music was free so then you had this massive distribution we could program to people I think those things that ecosystem is what created
the ability for hits but it was also very much a broadcast medium so you would tend to get these massive massive hits but maybe not such a long tail in terms of choice with everybody listening to the same stuff yeah and as you said I think there are some social benefits to that yeah I think for example there's a high statistical chance that if I talk about the latest episode of Game of Thrones we have something to talk about yeah just statistically in the age of individual choice maybe some of that goes away so I do see the value of like you know shared cultural components but I also obviously love personalization and so let's catch us up to the Internet so maybe Napster well first of all there's like mp3s exactly tapes CDs there was a digitization of music with the CD really it was physical distribution but the music became digital yeah and so they were files but basically boxed software to use the software analogy and then you could start downloading these files and I think there are two interesting things that happened back to music used to be longer before it was constrained by the distribution medium I don't think that was a coincidence and then really the only music genre to have developed mostly after music was a file again on the Internet is EDM and EDM is often much longer than the traditional music I think it's interesting to think about the fact that music is no longer constrained in minutes per song or something it's a legacy of our own distribution technology and you see some of this new music that breaks the format not so much as I would have expected actually by now but it still happens so first of all I don't really know what EDM is electronic dance music yeah right you could say Avicii was one of the biggest in this genre so the main constraint was of time something like three four or five minute songs there were songs that were eight minutes ten minutes and so forth because you know it started as a digital product
that you downloaded so you didn't have this constraint anymore so I think it's something really interesting that I don't think has fully happened yet we're kind of jumping ahead a little bit to where we are but I think there's tons of format innovation in music that should happen now that couldn't happen when you needed to really adhere to the distribution constraints if you didn't adhere to that you would get no distribution so Björk for example the Icelandic artist she made a full iPad app as an album that's very expensive you know and even though the App Store has great distribution she gets nowhere near the distribution versus staying within the 3-minute format so I think now that music is fully digital inside these streaming services there is the opportunity to change the format again and allow creators to be much more creative without limiting their distribution ability that's interesting you're right it's surprising we don't see that taken advantage of more often it's almost like the constraints of the distribution from the 50s and 60s have molded the culture to where we want the three to five minute song and nothing else not just as consumers but as artists because I write a lot of music and I never even thought about writing something longer than 10 minutes it's really interesting those constraints because all your training data has been three minutes right it's right okay so yes digitization of music then mp3s yeah so I think you had this file then that was distributed physically but then you had the components of digital distribution and then the internet happened and there was this vacuum where you had a format that could be digitally shipped but there was no business model and then all these pirate networks happened Napster and in Sweden Pirate Bay which was one of the biggest and you know I think from a consumer point of view which kind of leads up to the inception
of Spotify from a consumer point of view consumers for the first time had this access model to music where without any marginal cost they could try different tracks you could use music in new ways there was no marginal cost and that was a fantastic consumer experience just all the music ever made I think it was fantastic but it was also horrible for artists because there was no business model around it so they didn't make any money so the user need almost drove the user interface before there was a business model and then there were these download stores that allowed you to download files which was a solution but it didn't solve the access problem there was still a marginal cost of 99 cents to try one more track and I think that heavily limits how you listen to music the example I always give is you know in Spotify a huge amount of people listen to music while they go to sleep and while they sleep and if that cost you 99 cents per three minutes you probably wouldn't do that and you would be much less adventurous if there was a real dollar cost to exploring music so the access model is interesting in that it changes your music behavior you can take much more risk because there's no marginal cost to it maybe let me linger on piracy for a second because I find especially coming from Russia piracy is something that's very interesting to me not me of course ever but my friends who partook in piracy of music software TV shows sporting events and usually to me what that shows is not that they're trying to save money they can actually pay the money they're choosing the best experience so to me piracy shows a business opportunity in all these domains and that's where I think you're right Spotify stepped in basically piracy was an experience where you could explore and find music you like and actually the interface of piracy is horrible because I mean it's bad metadata
yeah bad metadata long download times all kinds of stuff and what Spotify does is basically first rewards artists and second makes the experience of exploring music much better I mean the same is true I think for movies and so on this is what piracy reveals in the software space for example I'm a huge user and fan of Adobe products and there was much more incentive to pirate Adobe products before they went to a monthly subscription plan and now all of a sudden the people I know that used to pirate Adobe products actually gladly pay for the monthly subscription I think you're right I think it's a sign of an opportunity for product development and that sometimes there's a product market fit before there's a business model fit in product development I think that's a sign of it in Sweden I think it was a bit of both there was a culture where we even had a political party called the pirate party and this was during the time when people said that you know information should be free that it was somehow wrong to charge for ones and zeros so I think people felt that artists should probably make some money somehow else you know concerts or something so at least in Sweden there was real social acceptance even at the political level but that also forced Spotify to compete with free which I don't think could have happened anywhere else in the world the music industry needed to be doing badly enough to take that risk and Sweden was like a perfect testing ground it had government-funded high-bandwidth low-latency broadband which meant that the product would work and there was also no music revenue anyway so they were kind of like I don't think this is going to work but why not so this product is one that I don't think could have happened in America where there was a large music market for example so how do you compete with free because that's an interesting world of the internet where most people
don't like to pay for things so Spotify steps in and tries to yes compete with free how do you do it so I think two things one is people are starting to pay for things on the internet I think one way to think about it was that advertising was the first business model because no one would put a credit card on the internet transactional with Amazon was the second and maybe subscription is the third and if you look offline subscription is the biggest so that may still happen I think people are starting to pay but definitely back then we needed to compete with free and the first thing you need to do is obviously to lower the price to free and then you need to be better somehow and the way that Spotify was better was on the user experience on the actual performance the latency you know even if you had high-bandwidth broadband it would still take you thirty seconds to a minute to download one of these tracks so the Spotify experience of starting within the perceptual limit of immediacy about 250 milliseconds meant that it felt as if you had downloaded it as if it was on your hard drive it was that fast even though it wasn't and it was still free but somehow you were actually still being a legal citizen that was the trick that Spotify managed to pull off so yeah I've actually heard you say this right and I was surprised I wasn't aware of it because I just took it for granted you know whenever an awesome thing comes along you're just like of course it has to be this way that's exactly right it felt like the entire world's library was at my fingertips because of that latency being reduced what was the technical challenge in reducing latency so there was a group of really really talented engineers one of them called Ludvig Strigeus actually from Gothenburg he wrote the initial client and the uTorrent client which is kind of an interesting backstory to Spotify you know that we have
one of the top developers from BitTorrent clients as well so he wrote uTorrent the world's smallest BitTorrent client and then he was acquired very early by Daniel and Martin who founded Spotify and they actually sold the uTorrent client to BitTorrent but kept Ludvig so Spotify had a lot of experience with peer-to-peer networking so the original innovation was a distribution innovation where Spotify built an end-to-end media distribution system up until only a few years ago we actually hosted all the music ourselves so we had both the client side and the server side in the cloud and that meant that we could do things such as having a peer-to-peer solution to use local caching on the client side because back then the world was mostly desktop but we could also do things like hack the TCP protocol skip things like Nagle's algorithm or the kind of exponential ramp-up and just go full throttle and optimize for latency at the cost of bandwidth and all of this end-to-end control meant that we could deliver an experience that felt like a step change these days we actually are on GCP we don't host our own stuff and everyone is really fast these days so that was the initial competitive advantage but then obviously you have to move on over time and that was over 10 years ago right that was in 2008 the product was launched in Sweden it was in beta I think 2007 and it was on the desktop right it was desktop only there was no phone there was no phone the iPhone came out in 2008 but the App Store came out one year later I think so the writing was on the wall but there was no phone yet you've mentioned that people would use Spotify to discover the songs they liked and then they would torrent those songs just so they could copy them to their phone which is just hilarious seriously piracy does seem to be like a good guide for business models video content as far as I know Spotify doesn't have video content well we do have music videos and we do have
videos on the service but the way we think about ourselves is that we're an audio service and we think that if you look at the amount of time that people spend on audio it's actually very similar to the amount of time that people spend on video so the opportunity should be equally big but today it's not at all valued that way video is valued much higher so we think audio is basically completely undervalued we think of ourselves as an audio service but within that audio service I think video can make a lot of sense I think when you're discovering an artist you probably do want to see them and understand who they are to understand their identity but you won't watch the video every time now 90% of the time the phone is going to be in your pocket for podcasters who use video I think that can make a ton of sense so we do have video but we're an audio service where we think of it as we call it internally backgroundable video video that is helpful but isn't the driver of the narrative I think also if we look at YouTube there's quite a few folks who listen to music on YouTube so in some sense YouTube is a bit of a competitor to Spotify which is very strange to me that people use YouTube to listen to music they play essentially the music videos right but don't watch the videos and put it in their pocket well I think it's similar to what we were for the piracy networks where YouTube for historical reasons has a lot of music videos so people use YouTube for a lot of the discovery part of the process I think but then it's not a really good sort of quote unquote mp3 player because it doesn't even background you have to keep the app in the foreground so it's not a good consumption tool but it's a decently good discovery tool I mean I think YouTube is a fantastic product and I use it for all kinds of purposes so if I were to admit something I do use YouTube a little bit for the
discovery to assist in the discovery process of songs and then if I like it I'll add it that's fine that's okay okay so sorry we're jumping around a little bit it's kind of incredible you look at Napster you look at the early days of Spotify one fascinating point is how do you grow a user base so there you are in Sweden you have an idea I saw the initial sketches they looked terrible how do you grow a user base from a few folks to millions I think there are a bunch of tactical answers so first of all I think you need a great product I don't think you can take a bad product and market it to be successful so you need a great product but sorry to interrupt it's a totally new way to listen to music too so it's not just did people realize immediately that Spotify is a great product I think they did so back to the point of piracy it was a totally new way to listen to music illegally but people had been used to the access model in Sweden and the rest of the world for a long time through piracy so one way to think about Spotify is it was just legal and fast piracy yeah and so people had been using it for a long time so they weren't alien to it they didn't really understand how it could be legal because it seemed too fast and too good to be true yeah which I think is a great product proposition if you can be too good to be true but what I saw again and again was people showing each other clicking the song showing how fast it started and saying I can't believe this yeah so I really think it was about speed then we also had an invite program that was really meant for scaling because we hosted our own service we needed to control scaling but that built a lot of expectation and I don't want to say hype because hype implies that it wasn't true it was real excitement around the product and we replicated that when we launched in the US we also built up an invite-only program first so there are lots of tactics
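The latency engineering described a little earlier — going around things like Nagle's algorithm and optimizing for startup latency at the cost of bandwidth — can be sketched at a tiny scale. This is a hypothetical illustration in Python, not Spotify's actual client code; `TCP_NODELAY` is the standard socket option that disables Nagle's algorithm so small packets are sent immediately instead of being coalesced:

```python
import socket

# Create a TCP socket and disable Nagle's algorithm (TCP_NODELAY),
# so small writes go out immediately instead of waiting to be
# batched into larger packets. This trades bandwidth efficiency
# for lower latency, the trade-off described above.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Verify the option took effect on this socket.
assert sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
sock.close()
```

A streaming client that cares about time-to-first-byte would set this (and similar options) before ever sending a request, which is the spirit of the trick, even though the real system involved far more than one socket flag.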
but I think A you need a great product that solves some problem and B basically the key innovation there was technology but on a meta level the innovation was really the access model versus the ownership model and that was tricky a lot of people said that they wanted to own their music they would never kind of rent it or borrow it but I think the fact that we had a free tier which meant that you get to keep this music for life as well helped quite a lot so this is an interesting psychological point maybe you can speak to it it was a big shift for me it's almost like I go to therapy for this I think I would describe my early listening experience and I think a lot of my friends would too as basically hoarding music you're like slowly song by song or maybe album by album gathering a collection of music that you love and you own it it's kind of awful especially with CDs or tapes you physically had them and with Spotify what I had to come to grips with which was kind of liberating actually is to throw away all the music I've had this therapy session with people yes and I think the mental trick is so actually we've seen in the user data once Spotify started a lot of people did the exact same thing they started hoarding as if the music would disappear right almost the equivalent of downloading and so you know we had these playlists that had limits of like a few hundred thousand tracks which we thought no one would ever hit but they do need hundreds and hundreds of thousands of tracks and to this day you know some people actually want to go and play the entire catalog but I think the therapy session goes something like instead of throwing away your music if you took your files and you stored them in a locker at Google at a streaming service it's just that in that locker you have all the world's music now for free so instead of giving away your music you got all the music it's yours you could think of it as having
a copy of the world's catalog forever so you actually got more music instead of less it's just that you took that hard disk and you sent it to someone who stored it for you and once you go through that mental journey I'm like they're still my files they're just over there and I just have 40 or 50 million others now then people are like okay that's good the problem is I think because you pay a subscription if we hadn't had the free tier where you feel like even if I don't want to pay anymore I still get to keep them you keep your playlists forever they don't disappear even though you stop paying I think that was really important if we had started as you know you can put in all this time but if you stop paying you lose all your work I think that would have been a big challenge and it's a big challenge for a lot of our competitors that's another reason why I think the free tier is really important people need to feel the security that the work they put in will never disappear even if they decide not to pay I like how you put it the work you put in I didn't even think of it that way I just actually Spotify taught me to just enjoy music as opposed to what I was doing before which was in an unhealthy way hoarding music and I found that because I was doing that I was listening to a small selection of songs way too much to where I was getting sick of them whereas with Spotify the more liberating kind of approach is I'm just enjoying music of course I listen to Stairway to Heaven over and over but because of the extra variety I don't get as sick of it there's an interesting statistic I saw maybe you can correct me but Spotify has over 50 million tracks and over three billion playlists so yes 50 million songs and three billion playlists 60 times more playlists what do you make of that yeah so the way I think about it is that from a statistics or machine learning
point of view if you think about reinforcement learning you have this state space of all the tracks and you can take different journeys through this world and I think of these as people helping themselves and each other creating interesting vectors through this space of tracks and then it's not so surprising that across you know many tens of millions of kind of atomic units there will be billions of paths that make sense and we're probably quite far away from having found all of them so that's kind of our job now when Spotify started it was really a search box that was for that time pretty powerful and then I like to refer to this programming language called playlisting where if you were pretty good at music you knew your new releases you knew your back catalog you knew your Stairway to Heaven you could create a soundtrack for yourself using this playlisting tool oh that's like a meta programming language for music to soundtrack your life and people who were good at music back to how do you scale the product for people who were good at music that was actually enough if you had the catalog and a good search tool and you could create your own sessions you could create a really good soundtrack for your entire life probably perfectly personalized because you did it yourself but the problem was many people aren't that good at music they just can't spend the time and even if you're very good at music it's going to be hard to keep up so what we did to try to scale this was to essentially try to build you can think of it as this friend that some people had that helped them navigate this music catalog that's what we're trying to do for you but also there's something like 200 million active users on Spotify so okay so from the machine learning perspective you have these 200 million people plus they're creating it's really interesting to think of
playlists as I mean I don't know if you meant it that way but it's almost like a programming language it's really a trace of exploration of those individual agents the listeners and you have all these new tracks coming in so it's a fascinating space that is ripe for machine learning so is it possible how can playlists be used as data in terms of machine learning and just to help Spotify organize the music so we found in our data not surprisingly that people who playlisted a lot retained much better they had a great experience and so our first attempt was to playlist for users and so we acquired this company called Tunigo of editors and professional playlisters and kind of leveraged the maximum of human intelligence to help build kind of these vectors through the track space for people and that broadened the product then the obvious next step and we you know used statistical means where they could see when they created a playlist how that playlist performed you know they could see skips of the songs they could see how the songs performed and they manually iterated the playlists to maximize performance for a large group of people but there were never enough editors to playlist for you personally so the promise of machine learning was to go from kind of group personalization using editors and tools and statistics to individualization and then what's so interesting about the three billion playlists we have is the truth is we lucked out this was not an a priori strategy as is often the case it looks really smart in hindsight but it was dumb luck we looked at these playlists and we had some people in the company a person named Erik Bernhardsson who was really good at machine learning already back then in like 2007-2008 back then it was mostly collaborative filtering and so forth but we realized that what this is is people grouping tracks for themselves that have some semantic meaning to them and then they
actually label it with a playlist name as well so in a sense people were grouping tracks along semantic dimensions and labeling them and so could you use that information to find that latent embedding and so we started playing around with collaborative filtering and we saw tremendous success with it basically trying to extract some of these dimensions and if you think about it it's not surprising at all it'd be quite surprising if playlists were actually random they have semantic meaning for most people they group these tracks for some reason so we just happened to come across this incredible data set where people had taken these tens of millions of tracks and grouped them along different semantic vectors and the semantics being outside the individual users it's some kind of universal there's a universal embedding that holds across people on this earth yes I do think that the embeddings you find are going to be reflective of the people who playlisted so if you have a lot of indie lovers who playlist your embedding is going to perform better there but what we found was that yes there were these latent similarities and they were very powerful and it was interesting because I think the people who playlisted the most initially were these so-called music aficionados who were really into music and they often had a certain taste often geared towards a certain type of music and so what surprised us if you look at the problem from the outside you might expect that the algorithms would start performing best for mainstreamers first because it somehow feels like an easier problem to solve mainstream tastes than really particular tastes it was the complete opposite for us the recommendations performed fantastically for people who saw themselves as having very unique taste that's probably because all of them playlisted and they didn't perform so well for mainstreamers they actually thought they were a
bit too particular and unorthodox so we had the complete opposite of what we expected success with the hardest problem first and then had to try to scale to more mainstream recommendations so you've also acquired the Echo Nest which analyzes song data so in your view maybe you can talk about what kind of data is there from a machine learning perspective there's a huge amount we're talking about playlisting and just user data of what people are listening to the playlists they're constructing and so on and then there's the actual data within a song what makes a song I don't know the actual waveforms right how do you mix the two how much value is there in each to me it seems like user data is well it's a romantic notion that the song itself would contain useful information but if I were to guess user data would be much more powerful like playlists would be much more powerful yeah so we use both our biggest success initially was with playlist data without understanding anything about the structure of the song but when we acquired the Echo Nest they had the inverse problem they actually didn't have any play data they were a provider of recommendations but they didn't actually have any play data so they looked at the structure of songs sonically and they looked at Wikipedia for cultural references and so forth and did a lot of NLU and so forth so we got that skill into the company and combined our user data with their kind of content-based approach so you can think of it as we were usage-based and they were content-based in their recommendations and we combined those two and for some cases where you have a new track there's no play data obviously so you have to try to go by either you know who the artist is or the sonic information in the song or what it's similar to so there's definitely value in both and we do a lot in both but I would say yes the user data captures things that have to do with
culture in the greater society that you would never see in the content itself but that said we have a research lab in Paris we can talk more about that on kind of machine learning on the creator side what it can do for creators not just for the consumers where we looked at how the structure of a song actually affects the listening behavior and it turns out that there is a lot we can predict things like skips based on the song itself we could say that maybe you should move that chorus a bit because your skips are going to go up here there is a lot of latent structure in the music which is not surprising because it is some sort of mind hack so there should be structure that's probably what we respond to you just blew my mind actually from the creator perspective so that's a really interesting topic that probably most creators aren't taking advantage of right so I've recently gotten to interact with a few folks youtubers who are obsessed with this idea of what do I do to make sure people keep watching the video and then they look at the analytics of at which point people turn off and so on first of all I don't think that's healthy because you can do it a little too much but it is a really powerful tool for helping the creative process you just made me realize you could do the same thing for creation of music and so is that something you've looked into can you speak to how much opportunity there is for that yeah I listened to the podcast with Siraj and I thought it was fantastic and he does the same thing where he said he posted something in the morning yeah immediately watched the feedback where the drop-off was and then responded to that in the afternoon yeah which is quite different from how people make podcasts for example yes exactly I mean the feedback loop is almost non-existent so if we back out a
level I think actually both for music and podcasts which we also do at Spotify I think there's a tremendous opportunity just in the creation workflow and I think it's really interesting speaking to you because you're a musician a developer and a podcaster if you think about those three different roles if you make the leap as a musician if you think about it as a software toolchain really you're working with the stems that's the IDE right that's the source code format you work in with what you're creating then you sit around and you play with that and when you're happy you compile that thing into some sort of you know AAC or mp3 or something you do that because you get distribution there are so many runtimes for that mp3 across the world in car stereos and stuff so you kind of compile this executable and you ship it out in kind of an old-fashioned boxed software analogy and then you hope for the best right but as a software developer you'd never do that first you'd have git helping you collaborate with other creators yeah and then you know you'd think it'd be crazy to just ship one version of your software without doing an a/b test without any feedback loop without any tracking exactly and then you would look at the feedback loops and try to optimize that thing right so I think if you think of it as a software toolchain it looks quite arcane you know the tools that a music creator has versus what a software developer has so that's kind of how we think about it and why wouldn't a music creator have something like github where you could collaborate much more easily so we bought this company called Soundtrap which has a kind of Google Docs for music approach where you can collaborate with other people on the kind of source code format with stems and I think introducing things like AI tools there to help you as you're creating music both in helping you you know put
accompaniment to your music like drums or something help you master and mix automatically help you understand how the track will perform exactly what you would expect as a software developer I think makes a lot of sense and I think the same goes for a podcaster I think podcasters will expect to have the same kind of feedback loop that Siraj has like why wouldn't you maybe it's not healthy sorry I don't want to criticize but you can overdo it and we're in a new era of that so you can become addicted to it and therefore as people say become a slave to the YouTube algorithm it's always a danger of a new technology as opposed to say if you're creating a song becoming too obsessed about the intro riff to the song that keeps people listening versus the entirety of the creation process it's a balance absolutely but the fact that there's zero I mean you're blowing my mind right now because you're completely right that there is no signal whatsoever there's no feedback whatsoever in the creation process in music or podcasting almost at all and are you saying that Spotify is hoping to help create tools not tools actually tools for creators absolutely so we've made acquisitions the last few years around music creation this company called Soundtrap which is a digital audio workstation but that is browser-based and their focus was really the Google Docs approach where you can collaborate with people much more easily than you could in previous tools so we have some of these tools that we're working with that we want to make accessible and then we can connect it with our consumption data we can create this feedback loop where we could help you create and help you understand how you will perform we also acquired this other company within podcasting called Anchor which is one of the biggest podcasting tools mobile focused
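The feedback loop being described — seeing where listeners drop off so a creator can respond to it — can be illustrated with a toy computation. This is a hypothetical sketch with made-up listen logs, not Spotify's actual analytics, but it shows the kind of retention curve a creator tool could surface:

```python
# Hypothetical listen logs: for each listener, the position (in
# seconds) where they stopped a 180-second track.
stop_positions = [180, 180, 45, 180, 90, 45, 180, 30, 180, 180]
track_length = 180

# Retention curve: fraction of listeners still listening at each
# 30-second checkpoint.
checkpoints = range(0, track_length + 1, 30)
retention = {
    t: sum(1 for p in stop_positions if p >= t) / len(stop_positions)
    for t in checkpoints
}

for t in sorted(retention):
    print(f"{t:3d}s: {retention[t]:.0%}")

# The biggest drop between checkpoints hints where a creator might
# restructure the track (e.g. move the chorus earlier).
drops = {t: retention[t - 30] - retention[t] for t in checkpoints if t > 0}
worst = max(drops, key=drops.get)
print("largest drop-off in the interval ending at", worst, "seconds")
# → largest drop-off in the interval ending at 60 seconds
```

The same computation works unchanged for a podcast episode; only the track length and checkpoint spacing differ.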
so really focused on simple creation or easy access to creation but that also gives us this feedback loop and even before that we invested in something called Spotify for Artists and Spotify for Podcasters which is an app that you can download you can verify that you are that creator and then you get things that you know software developers have had for years you can see if you look at your podcast for example on Spotify or a song that you released you can see how it's performing which cities it's performing in and who is listening to it what the demographic breakup is so similar in the sense that you can understand how you're actually doing on the platform so we definitely want to build tools I think you also interviewed the head of research at Adobe and that analogy back to Photoshop I think is an interesting analogy as well Photoshop I think has been very innovative in helping photographers and artists and I think there should be the same kind of tools for music creators where you could get you know AI assistance for example as you're creating music as you can do with Adobe where you want a sky over here and you can get help creating that sky the really fascinating thing is what Adobe doesn't have is the distribution for the content you create so you don't have the data of if I create whatever creation I'm making in Photoshop or Premiere I can't get immediate feedback like I can on YouTube for example about the way people are responding and if Spotify is creating those tools that's a really exciting world actually but let's talk a little about podcasts so I have trouble talking to one person so it's a bit terrifying and kind of hard to fathom but an average sixty to a hundred thousand people will listen to this episode okay so it's intimidating it's intimidating so I host it on Blubrry I don't know if I'm pronouncing that correctly actually it looks like most people
listen on Apple Podcasts Castbox and Pocket Casts and only about a thousand listen on Spotify for my podcast right so where do you see a time when Spotify will dominate this so Spotify is relatively new to this podcasting side yeah in podcasting what's the deal with podcasting and Spotify how serious is Spotify about podcasting do you see a time where everybody you know probably a huge amount of people a majority perhaps listen to music on Spotify do you see a time when the same is true for podcasting well I certainly hope so that is our mission our mission as a company is actually to enable a million creators to live off of their art and a billion people to be inspired by it and what I think is interesting about that mission is it actually puts the creators first even though it's known as a consumer focused company and it says to be able to live off of their art not just make some money off their art as well so it's quite an ambitious mission and so we think about creators of all kinds and we kind of expanded our mission from being music to being audio a while back and that's not so much because we made that decision we think that decision was made for us the world made that decision whether we like it or not when you put in your headphones you're going to make a choice between music and a new episode of your podcast or something else right we're in that world whether we like it or not and you know that's how radio works so we decided that we think it's about audio you can see the rise of audiobooks and so forth we think audio is this great opportunity so we decided to enter it and obviously Apple and Apple Podcasts is absolutely dominating in podcasting and we didn't have a single podcast only like two years ago what we did though was we looked at this and said can we bring something to this you know we want to do this but back to the philosophy we have to do something that consumers actually value
to be able to do this. and the reason we've gone from not existing at all to being, by quite a wide margin, the second-largest in podcast consumption — still a wide gap to iTunes, but we're growing quite fast — I think it's because when we looked at the consumer problem, people said, surprisingly, that they wanted their podcasts and music in the same application. so what we did was we took a little bit of a different approach. instead of building a separate podcast app, we thought, is there a consumer problem to solve here, because the others are very successful already — and we thought there was, in making a more seamless experience where you can have your podcasts and your music in the same application, because we think it's all audio to you. and that has been successful, and that meant that we actually had 200 million people to offer this to instead of starting from zero. so I think we have a good chance, because we're taking a different approach than the competition. and back to the other thing I mentioned about creators — because we're looking at the end-to-end flow, I think there's a tremendous amount of innovation to do around podcast as a format. when we have creation tools and consumption, I think we could start improving what podcasting is. I mean, a podcast is this opaque, big, like half-hour file that you're streaming, which really doesn't make that much sense in 2019 — it's not interactive, there's no feedback loops, nothing like that. so I think if we're gonna win, it's gonna have to be because we build a better product for creators and for consumers. so we'll see, but it's certainly our goal — we have a long way to go. well, the creators part is really exciting — you got me hooked there. the only stats I have — Blubrry just recently added the stat of whether it's listened to the end or not, and that's like a huge improvement, but that's still nowhere near where you could possibly go with statistics. you just download this
Spotify for Podcasters app and verify, and then you know where people dropped out in the episode. oh wow, okay — the moment I started talking, okay, I might be depressed by this, but okay. so one other question — the original Spotify for music, and I have a question about podcasting in this line — is the idea of albums. I have music aficionado friends who are really big fans of music, who often really enjoy albums, listening to entire albums of an artist, and correct me if I'm wrong, but I feel like Spotify has helped replace the idea of an album with playlists — so you create your own albums. it's kind of the way, at least, I've experienced music, and I have really enjoyed it that way. one of the things that was missing in podcasting for me — I don't know if it's missing, it's an open question for me — but the way I listen to podcasts is the way I would listen to albums. so I take the Joe Rogan Experience and that's an album, and I put that on and I listen to one episode after the next, and there's a sequence and so on. is there room for doing what you did for music — this kind of playlisting idea — breaking podcasting apart from individual podcasts and creating kind of this interplay? or have you thought about that space? it's a great question. so I think in music you're right — basically you bought an album, so it was like you bought a small catalog of like 10 tracks, right? and again, a lot of the consumption — you think it's about what you like, but it's based on the business model, right? so you paid for this 10-track thing and then you listened to that for a while, and then when everything was flat-priced you tended to listen differently. so I think the album is still tremendously important — that's why we have it, and you can save albums and so forth, and you have a huge amount of people who really listen according to albums, and I
like that, because it is a creator format — you can tell a longer story over several tracks. and so some people listen to just one track, some people actually want to hear that whole story. now in podcasts I think it's different. you can argue that podcasts might be more like shows on Netflix — you have like a full season of Narcos, and you're probably not going to do like one episode of Narcos and then one of House of Cards, like, you know, there's a narrative there, and you love the cast and you love these characters. so I think people love shows and I think they will listen to those shows. I do think you follow a bunch of shows at the same time, so there's certainly an opportunity to bring you the latest episode of, you know, whatever the five, six, ten things are that you're into. but I think people are gonna listen to specific hosts and love those hosts for a long time, because I think there's something different with podcasts, where in this format the experience is that the audience is actually sitting here, right between us — whereas if you look at something on TV, you would sit over there and the audio would come to you from both of us as if you were watching, not as if you were part of the conversation. so my experience, listening to podcasts like yours and Joe Rogan, is I feel like I know all of these people. they have no idea who I am, but I feel like I know them, and it's very different from me watching like a TV show or an interview. so I think you kind of fall in love with people and experience them in a different way. so I think shows and hosts are gonna be very, very important. I don't think that's gonna go away into just some sort of thing where, well, you don't even know who you're listening to — I don't think that's gonna happen. what I do think is there's a tremendous discovery opportunity in podcasts, because the catalog is growing quite quickly, and I
think podcasting is only a few, like five, six hundred thousand shows right now. if you look at YouTube as another analogue of creators — no one really knows, if you would lift the lid on YouTube, but it's probably billions of videos — so I think the podcast catalog will probably grow tremendously, because the creation tools are getting easier, and then you're gonna have this discovery opportunity that I think is really big. so a lot of people tell me that they love their shows, but discovering podcasts kind of sucks — it's really hard to get into a new show, they're usually quite long, it's a big time investment. I think there's plenty of opportunity in the discovery part. yeah, for sure, a hundred percent — there's so much low-hanging fruit too, even the dumbest things, for example just knowing what episode to listen to first to try out a podcast. exactly. because most podcasts don't have an order to them, they can be listened to out of order, and, sorry to say, some episodes are better than others — some episodes of Joe Rogan are better than others — and it's nice to know which you should listen to to try it out, and there's, as far as I know, almost no information in terms of like upvotes on how good an episode is. exactly. so I think part of the problem is, it's kind of like music — there isn't one answer, people use music for different things, and there's actually many different types of music: there's workout music, and there's classical piano music, and focus music, and so forth. I think the same with podcasts. some podcasts are sequential — they're supposed to be listened to in order, it's actually telling a narrative. some podcasts are one topic, kind of like yours, but different guests, so you could jump in anywhere. some podcasts have completely different topics, and for those podcasts it might be that, you know, we should recommend one episode because it's about AI, but then they talk about something that you're not interested in the rest
of the episodes. so what we're spending a lot of time on now is just first understanding the domain and creating kind of the knowledge graph of how these objects relate and how people consume, and I think we'll find that it's gonna be different. I'm excited — Spotify is the first people I'm aware of that are trying to do this for podcasting. podcasting has been like a Wild West up until now. we want to be very careful, though, because it's been a very good Wild West — I think it's this fragile ecosystem, and we want to make sure that you don't barge in and say, like, oh, we're gonna take over this thing. you have to think about the creators, you have to understand how they get distribution today, who listens, how they make money today, and make sure that their business model works, that they understand it. I think it's back to improving the product for them — feedback loops and distribution. so jumping back into this fascinating world of recommender systems and listening to music and using machine learning to analyze things — correct me if I'm wrong, but currently Spotify lets people pick what they listen to for the most part; there's a discovery process, but you kind of organize playlists. is it better to let people pick what they listen to, or recommend what they should listen to, something like Stations by Spotify that I saw you're playing around with? maybe you can tell me the status of that — this is a Pandora-style app where, as opposed to you selecting the music you listen to, it kind of feeds you music. what's the status of Stations by Spotify, what's its future? the story of Spotify as we have grown has been that we made it more accessible to different audiences, and Stations is another one of those, where the question is — some people want to be very specific, they actually want to hear Stairway to
Heaven right now, and that needs to be very easy to do. and some people, or even the same person at some point, might say I want to feel upbeat, or I want to feel happy, or I want songs to sing in the car, right? so they put in the information at a very different level, and then we need to translate that into what it means musically. so Stations is a test to create like a consumption input vector that is much simpler, where you can just tune it a little bit, and see if that increases the overall reach. but we're trying to kind of serve the entire gamut, from super-advanced, so-called music aficionados all the way to people who love listening to music but it's not their number-one priority in life, right? they're not going to sit and follow every new release from every new artist — they need to be able to influence the music at a different level. so you can think of it as different products. and one of the interesting things, to answer your question on whether it's better to let the user sort of pick — I think the answer is, the challenge when machine learning kind of came along, there was a lot of thinking about what product development means in a machine learning context. people like Andrew Ng, for example — when he went to Baidu and started doing a lot of practical machine learning, coming from academia — you know, he thought a lot about this, and he had this notion that product managers, designers and engineers used to work around this wireframe that kind of described what the product should look like. but when you're doing like a chatbot or a playlist, what are you gonna say — it should be good? that's not a good product description. so how do you do that? he came up with this notion that the test set is the new wireframe — the job of the product manager is to source a good test set that is representative of what you want. like if you say, I want to play the best
songs to sing in the car, the job of the product manager is to go and source a good test set of what that means, and then you can work with engineering to have algorithms try to produce that, right? so we try to think a lot about how to structure product development for a machine learning age, and what we discovered was that a lot of it is actually in the expectation, and you can go two ways. so let's say that if you set the expectation with the user that this is a discovery product, like Discover Weekly, you're actually setting the expectation that most of what we show you will not be relevant. when you're in the discovery process you're gonna accept that — actually, if you find one gem every Monday that you totally love, you're probably gonna be happy, even though statistically speaking one out of ten is terrible, or one out of twenty is terrible, from a user point of view, because the setting was discovery, so it's fine. can I — sorry to interrupt real quick — I just actually learned about Discover Weekly, which is a feature of Spotify that shows you cool songs to listen to. maybe I can do some issue tracking — I couldn't find it in my Spotify app. it's in your library. it's in the library? because I was like, whoa, this is just cool, I didn't know this existed, and I tried to find it. but I'll show it to you after. yeah, so, just to mention, the expectation there is basically that you're going to discover a new song. yes — so then you can be quite adventurous in the recommendations you do. but we have another product called Daily Mix, which kind of implies that these are only going to be your favorites, so if you have one out of ten that is good and nine out of ten that don't work for you, you're gonna think it's a horrible product. so actually a lot of the product development we learned over the years, it's about setting the right expectation. so for Daily Mix, you know, algorithmically we
would pick among things that feel very safe in your taste space, whereas for Discover Weekly we go kind of wild, because the expectation is most of this is not gonna work. so a lot of the answer to your question there — should you let the user pick or not — is it depends. we have some products where the whole point is that the user can click play, put the phone in the pocket, and it should be really good music for like an hour; we have other products where you probably need to say, like, no, no, no, and it's very interactive. I see, that makes sense. and then the radio product, the Stations product, is one of these click-play, put-it-in-your-pocket-for-hours ones. that's really interesting — so you're thinking of different test sets for different ways users interact, creating products that sort of optimize for those test sets that represent a specific set of users. yes. one thing that I think is interesting is we invested quite heavily in editorial, in people creating playlists using statistical data, and that was successful for us, and then we also invested in machine learning. and for the longest time, you know, within Spotify and within the rest of the industry, there was always this narrative of human versus machine, algo versus editorial, and editors would say, well, if I had that data, if I could see your playlisting history and I made a choice for you, I would have made a better choice — and they would have, because honestly they're much smarter than these algorithms. the human is incredibly smart compared to our algorithms — they can take culture into account and so forth. the problem is that they can't make 200 million decisions, you know, per hour, for every user that logs in. so the algo may be not as sophisticated, but it's much more efficient. so there was this contradiction, but then a few years ago we started focusing on this kind of human-in-the-loop thinking around machine learning, and we actually coined an internal term for it called algo
torial — a combination of algorithms and editors. if we take a concrete example: you think of the editor — there's this paid expert that we have that's really good at something, like hip-hop or EDM, right, a true expert, known in the industry, so they have all the cultural knowledge — you think of them as the product manager. and let's say that you think there's a product need in the world for something like songs to sing in the car, or songs to sing in the shower — and I'm taking that example because it exists; people love to scream songs in the car when they drive, right? yeah. so you want to create that product, and you have this product manager that is a musical expert. they come up with a concept — like, I think this is a missing thing in humanity, a playlist called songs to sing in the car — they create the framing, the image, the title, and they create a test set: they create a group of songs, like a few thousand songs out of the catalogue, that they manually curate, that are known songs that are great to sing in the car. and they can take cultural context into account — they understand things that our algorithms do not at all. so they have this huge set of tracks, and then when we deliver that to you, we look at your taste vectors and you get the 20 tracks that are songs to sing in the car in your taste. so you have personalization and editorial input in the same process, if that makes sense. yeah, it makes total sense, and I have several questions around that — this is fascinating. okay, so first, it is a little bit surprising to me that world-expert humans are outperforming machines at specifying songs to sing in the car, so maybe you could talk to that a little bit. I don't know if you can put it into words, but how difficult is this problem? I guess what I'm trying to ask is, how difficult is it to encode the cultural references, the context of the song, the artists, all those things together — can machine learning really not do that?
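The algotorial flow described above — an editor curates a pool for a concept, then the algorithm picks the tracks from that pool closest to each user's taste vector — can be sketched in a few lines. Everything here (track names, the tiny toy embeddings, the cosine-similarity ranking) is invented for illustration and is not Spotify's actual system:

```python
# Sketch of "algotorial": an editor-curated pool of candidates, personalized
# per user by ranking against a taste vector. All names/vectors are made up.
import numpy as np

def personalize(editorial_pool, user_taste, k=20):
    """Rank an editor-curated pool by cosine similarity to the user's taste."""
    names = list(editorial_pool)
    vecs = np.array([editorial_pool[n] for n in names])
    sims = vecs @ user_taste / (
        np.linalg.norm(vecs, axis=1) * np.linalg.norm(user_taste) + 1e-9)
    order = np.argsort(-sims)  # most similar first
    return [names[i] for i in order[:k]]

pool = {  # editor-picked "songs to sing in the car" candidates, toy embeddings
    "track_a": [0.9, 0.1, 0.0, 0.2],
    "track_b": [0.1, 0.8, 0.3, 0.0],
    "track_c": [0.7, 0.2, 0.1, 0.1],
}
user = np.array([1.0, 0.1, 0.0, 0.1])  # this listener's taste vector
print(personalize(pool, user, k=2))  # the pool, narrowed to this user's taste
```

The editorial pool bounds what can ever be recommended — the cultural judgment stays human — while the ranking inside the pool is where the personalization lives.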
I mean, I think machine learning is great at replicating patterns, if you have the patterns. but if you try to write with me a spec of what the definition of a great song to sing in the car is — is it loud, does it have many choruses, has it been in movies — it quickly gets incredibly complicated, right? yeah. and a lot of it may not be in the structure of the song or the title — it could be cultural references, because, you know, it has a history. so the definition problems quickly get hard, and I think that was the insight of Andrew Ng when he said the job of the product manager is to understand these things that algorithms don't, and then define what that looks like — then you have something to train towards, right? then you have kind of the test set. so today the editors create this pool of tracks and then we personalize. you could easily imagine that once you have this set, you could have some automatic exploration on the rest of the catalogue, because then you understand what it is. and then the other side of it, where machine learning does help, is this taste vector — how hard is it to construct a vector that represents the things an individual human likes, this human preference? because music isn't like Amazon, like things you usually buy — music seems more amorphous, it's this thing that's hard to specify. like, if you look at my playlists, what is the music that I love? it seems to be much more difficult to specify concretely. so how hard is it to build the taste vector? it is very hard, in the sense that you need a lot of data, and I think what we found was that it's not a stationary problem — it changes over time. and so we've gone through the journey of — if you've done a lot of computer vision — obviously, I've done a bunch of computer vision in my past — and we
started kind of with handcrafted heuristics — you know, this is kind of in this genre, and if you consume this you probably like this — so we started there, and we have some of that still. then, what was interesting about the playlist data was that you could find these latent things that wouldn't necessarily even make sense to you, that could even capture maybe cultural references, because they co-occurred — things that wouldn't have appeared mechanistically, either in the content or so forth. so I think the core assumption is that there are patterns in almost everything, and if there are patterns, these embedding techniques are getting better and better. now, like everyone else, we're also using kind of deep embeddings, so you can encode binary values and so forth, and what I think is interesting is this process of trying to find things that you wouldn't actually have guessed. so it is very hard in an engineering sense to find the dimensions — it's an incredible scalability problem to do for hundreds of millions of users and to update it every day — but in theory, embeddings aren't that complicated. the fact that you try to find some principal components or some dimensionality reduction like that — the theory, I guess, is easy; the practice is very, very hard, and it's a huge engineering challenge, but fortunately we have some amazing research and engineering teams in this space.
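A toy version of the latent-structure idea he mentions — tracks that co-occur in playlists end up close together after a dimensionality reduction, with no metadata at all — might look like this. The playlists are made up, and a real system would use far larger corpora and learned (e.g. word2vec-style) embeddings rather than a raw SVD:

```python
import numpy as np

# made-up playlists: two disjoint "scenes" that never share a playlist
playlists = [
    ["song_a", "song_b", "song_c"],
    ["song_a", "song_b"],
    ["song_d", "song_e"],
    ["song_d", "song_e"],
    ["song_d", "song_e", "song_f"],
]
tracks = sorted({t for p in playlists for t in p})
idx = {t: i for i, t in enumerate(tracks)}

# co-occurrence matrix: how often two tracks share a playlist
C = np.zeros((len(tracks), len(tracks)))
for p in playlists:
    for a in p:
        for b in p:
            if a != b:
                C[idx[a], idx[b]] += 1

# truncated SVD as the dimensionality reduction: each track gets a small vector
U, S, _ = np.linalg.svd(C)
emb = U[:, :2] * S[:2]

def dist(a, b):
    return np.linalg.norm(emb[idx[a]] - emb[idx[b]])

# song_a and song_b share playlists; song_a and song_d never co-occur
print(dist("song_a", "song_b") < dist("song_a", "song_d"))  # True
```

The embeddings fall out of pure listening behavior, which is how co-occurrence can pick up cultural signal that never appears in the audio or the metadata.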
yeah, I guess it's similar to what I deal with in the autonomous vehicle space — there the question is how hard is driving, and here basically the question is one of edge cases. so embedding probably — not probably, I would imagine — works well in a lot of cases, so there's a bunch of questions that arise then. do song preferences, does your taste vector, depend on context, like mood? right, there's different moods. absolutely. so is it possible to take that into consideration, or do you just leave that as an interface problem that allows the user to control it — so when I'm looking for workout music I kind of specify it by choosing certain playlists, doing certain searches? yeah, so that's a great point — it's back to the product development. you could try to spend a few years trying to predict which mood you're in automatically when you open Spotify, or you create a tab which is happy and one which is sad, right, and you're gonna be right 100% of the time with one click. now, it's probably much better to let the user tell you if they're happy or sad, or if they want to work out. on the other hand, if the interface becomes 2,000 tabs, you're introducing so much friction that no one will use the product, so then you have to get better. so it's this thing where — I can't remember who coined it, but it's called fault-tolerant UX, right — you build the UI that is tolerant of being wrong, and then you can be much less right in your algorithms. so, you know, we've had to learn a lot of that, building the right UI that fits where the machine learning is. and a great discovery there, which was by the teams during one of our hack days, was this thing of taking discovery, packaging it into a playlist, and saying these are new tracks that we think you might like — and setting the right expectation made it a great product. so I think we have this benefit that, for example, Tesla doesn't have: we can change the expectation, we can build a fault-tolerant setting. it's very hard to be fault-tolerant when you're driving at, you know, 100 miles per hour or something, and we have the luxury of being able to be wrong, if we have the right UI, which gives us different abilities to take more risk. so I actually think the self-driving problem is much harder. oh yeah, for sure, and it's much less fun because people die. exactly. and in Spotify it's just
such a more fun problem, because failure is, in a way, beautiful — at least it's exploration. so it's a really fun reinforcement learning project — the worst-case scenario is to get these WTF tweets, like, how did I get this song, which is a lot better than the self-driving failure. so what's the feedback, the signal, that a user provides into the system? you mentioned skipping — what is, like, the strongest signal? you didn't mention clicking like. so we have a few signals that are important. obviously playing through — one of the benefits of music, actually, even compared to podcasts or movies, is that the object itself is really only about three minutes, so you get a lot of chances to recommend, and the feedback loop is every three minutes instead of every two hours or something — you actually get kind of noisy but quite fast feedback. and so you can see if people played through, which is, you know, the inverse of skip really — that's an important signal. on the other hand, much of the consumption happens when your phone is in your pocket — maybe you're running or driving, or you're playing on a speaker — and so you not skipping doesn't mean that you love that song; it might be that it wasn't bad enough that you would walk up and skip it, so it's a noisy signal. then we have the equivalent of the like, which is you saved it to your library — that's a pretty strong signal of affection. and then we have the more explicit signal of playlisting — like, you took the time to create a playlist and you put it in there; there's a very small chance that, if you took all that trouble, this is not a really important track to you, and then we understand also what the tracks it relates to are. so we have the playlisting, we have the like, and then we have the listened-through or skipped, and you have very different approaches to all of them, because they have different levels of noise — one is very voluminous but noisy, and the other is rare but you can probably trust it.
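One hedged way to read the above: each signal gets a weight that reflects how trustworthy it is, so a rare save or playlist-add outweighs many noisy play-throughs. The weights below are illustrative guesses, not Spotify's actual values:

```python
# Toy combination of feedback signals with different volume and noise levels.
# Plays are plentiful but noisy (phone in pocket); saves and playlist-adds
# are rare but trustworthy. All weights are invented for illustration.
SIGNAL_WEIGHT = {
    "played_through": 0.1,   # voluminous, noisy
    "skipped":       -0.2,   # noisy negative
    "saved":          1.0,   # strong signal of affection
    "playlisted":     2.0,   # strongest explicit signal
}

def affinity(events):
    """events: list of (track, signal) pairs -> dict of track -> score."""
    scores = {}
    for track, signal in events:
        scores[track] = scores.get(track, 0.0) + SIGNAL_WEIGHT[signal]
    return scores

events = [
    ("track_a", "played_through"), ("track_a", "played_through"),
    ("track_a", "saved"),
    ("track_b", "played_through"), ("track_b", "skipped"),
]
print(affinity(events))  # track_a's save outweighs track_b's noisy plays
```

A real system would learn such weights from data rather than hand-set them, but the asymmetry between the two kinds of signal is the point.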
yeah, it's interesting — I wonder if between those signals you capture all the information you'd want to capture. I mean, there's a feeling, a shallow feeling, for me, that sometimes I'll hear a song and, like, yes — this was the right song for the moment — but there's really no way to express that fact except by listening through it all the way. yeah. and maybe playing it again at that time or something. yeah — there's no button that says this was the best song I could have heard at this moment. well, we're playing around with that, with kind of the thumbs-up concept, saying, like, I really like this — just kind of talking to the algorithm. it's unclear if that's the best way for humans to interact. maybe it is — maybe you should think of Spotify as a person, an agent, sitting there trying to serve you, and you can say, like, bad Spotify, good Spotify. right now the analogy we've had is more that you shouldn't think of us — we should be invisible — and the feedback is, if you save it, it's kind of like you're working for yourself: you make a playlist because you think it's great, and we can learn from that. it's kind of back to Tesla, how they have that shadow mode where they sit and watch you drive — we kind of took the same analogy: we sit and watch you playlist, and then maybe we can offer you an autopilot where we can take over for a while or something like that, and then back off if you say, like, that's not good enough. but I think it's interesting to figure out what your mental model is — if Spotify is an AI that you talk to, which I think might be a bit too abstract for many consumers, or if you still think of it as my music app, but it's just more helpful — and it depends on the device it's running on. which brings us to smart speakers. so a lot of the Spotify listening I do is on devices you can talk to, whether it's from Amazon, Google or Apple. what's the role of Spotify on those devices — how do you think of it differently
than on the phone or on the desktop? there are a few things to say about that. first of all, it's incredibly exciting — they're growing like crazy, especially here in the US, and it's solving a consumer need. you can think of it as just remote interactivity — you can control this thing from across the room, and it may feel like a small thing, but it turns out that friction matters to consumers; being able to say play, pause and so forth from across the room is very powerful. so basically you made the living room interactive. and what we see in our data is that the number-one use case for these speakers is music — music and podcasts. so fortunately for us, it's been important to these companies to have those use cases covered, so they wanted Spotify on these — we have very good relationships with them, and we're seeing tremendous success with them. what I think is interesting about them is — we kind of had this epiphany many years ago, back when we started using Sonos: if you went through all the trouble of setting up your Sonos system, you had this magical experience where you had all the music ever made in your living room. and we made this assumption about the home — everyone used a CD player at home, but they never managed to get their files working in the home; having this network-attached storage was too cumbersome for most consumers. so we made the assumption that the home would skip from the CD all the way to streaming boxes, where you would buy the stereo with all the music built in. that took longer than we thought, but with the voice speakers, that was the unlocking that made kind of the connected speaker happen in the home. so it really exploded, and we saw this engagement that we predicted would happen. what I think is interesting, though, is where it's going from now. right now you think of them as voice speakers, but I think if you look at Google
I/O, for example, they just added a camera to it, where, you know, when the alarm goes off, instead of saying hey Google, stop, you can just wave your hand. so I think they're gonna think more of it as an agent or as an assistant — truly an assistant — and an assistant that can see you is gonna be much more effective than a blind assistant. so I think these things will morph, and we won't necessarily think of them as quote-unquote voice speakers anymore, just as interactive access to the internet in the home. but I still think that the biggest use case for those will be audio, so for that reason we're investing heavily in it, and we've built our own NLU stack. the challenge here is, how do you innovate in that? well, it lowers friction for consumers, but it's also much more constrained — you have no pixels to play with; in an audio-only world, it's really the vocabulary that is the interface. so we started investing and playing around quite a lot with that, trying to understand what the future will be of speaking and gesturing and waving at your music. and actually you're nudging closer to the autonomous vehicle space, because from everything I've seen, the level of frustration people experience upon failure of natural language understanding is much higher than failure in other contexts — people get frustrated really fast, so if you screw that experience up even just a little bit, they give up really quickly. yeah, and I think you see that in the data. while it's tremendously successful, the most common interactions are play, pause, and, you know, next — the things where, if you compare it to taking up your phone, unlocking it, bringing up the app and clicking skip, it was much lower friction. but then for longer, more complicated things, like a specific song, you see people picking up the phone, searching, and then playing it on their speaker. so we tried, again, to build a fault-tolerant UI, where for the more
complicated things you can still pick up your phone and have powerful full-keyboard search, and then try to optimize for whether it's actually lower friction. it's kind of like the Tesla Autopilot thing — you have to be at the level where you're helpful; if you're trying to be too smart and just getting in the way, people are gonna get frustrated. and, first of all, I'm not obsessed with Stairway to Heaven, it's just a good song, but let me mention it as a use case, because it's an interesting one. I've literally told one of — I don't want to say the name of the speaker, because when people are listening it'll make their speaker go off — but I talk to the speaker and I say play Stairway to Heaven, and not every time, but a large percent of the time, it plays the wrong Stairway to Heaven — it plays, like, some cover of the song. and for that part of the experience, I actually wonder, from a business perspective, does Spotify control that entire experience, or no? it seems like the NLU, the natural language stuff, is controlled by the speaker, and then Spotify sits at a layer below that. it is a good and complicated question, some of which is dependent on the partner, so it's hard to comment on the specifics, but the question is the right one. the challenge is if you can't use the personalization — I mean, we know which Stairway to Heaven you want, and the truth is, maybe for one person it is exactly the cover that they want, and they will be very frustrated if we play the original. I think we default to the right version, but you actually want to be able to do the cover for the person that just played the cover 50 times, or Spotify is just gonna seem stupid. so you want to be able to leverage the personalization, but you have this stack where you have the ASR and this thing called the n-best list, so the guesses are here, and then the presentation comes in at the end — you actually want the personalization to be in there, when you're guessing about what they actually meant.
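A minimal sketch of what personalizing the n-best list could look like — blending the recognizer's confidence with the listener's play history, so the version they actually play wins ties. The scores, play counts and blend weight are invented for illustration; the real pipeline he describes is split across two companies:

```python
# Hypothetical re-ranking of an ASR n-best list using user play history.
# Confidence values, play counts and the alpha blend are all made up.
def rerank(n_best, play_counts, alpha=0.5):
    """Blend ASR confidence with the user's normalized play history."""
    total = sum(play_counts.values()) or 1  # avoid division by zero
    def score(hyp):
        familiarity = play_counts.get(hyp["track_id"], 0) / total
        return (1 - alpha) * hyp["confidence"] + alpha * familiarity
    return sorted(n_best, key=score, reverse=True)

n_best = [  # the recognizer's guesses for "play stairway to heaven"
    {"track_id": "stairway_original", "confidence": 0.48},
    {"track_id": "stairway_cover",    "confidence": 0.52},
]
# this listener has played the original many times, the cover never
history = {"stairway_original": 50, "other_song": 10}
print(rerank(n_best, history)[0]["track_id"])  # → stairway_original
```

With an empty history, the raw ASR confidence decides, and the cover would win, which is exactly the frustrating behavior he describes when the personalization can't reach into the guessing stage.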
meant so we're working with these partners and it's a complicated thing where you want to first of all you want to be very careful with your users' data you don't share your users' data without their permission but you want to share some data so that their experience gets better so that these partners can understand enough but not too much and so forth so really the trick is that it's like a business driven relationship where you're doing product development across companies together yeah which is really complicated but this is exactly why we built our own NLU so that we actually can make personalized guesses because this is the biggest frustration from a user point of view they don't understand about ASRs and n-best lists and business deals they're like how hard can it be I've told this thing 50 times I want this version and it still plays the wrong thing it can't be hard so we try to take the user approach if the user is not going to understand the complications of the business we have to solve it let's talk about sort of a complicated subject that I myself am quite torn about the idea of sort of paying artists right I saw as of August 31st 2018 over 11 billion dollars were paid to rights holders and further distributed to artists from Spotify so a lot of money is being paid to artists first of all the whole time as a consumer for me when I look at Spotify I'm not sure I'm remembering correctly but I think you said exactly how I feel which is this is too good to be true like when I started using Spotify I assumed you guys would go bankrupt in like a month it's like this is too good a lot of people did like this is amazing so one question I have is sort of the bigger question how do you make money in this complicated world how do you deal with the relationship with record labels who are complicated these big labels you essentially have the task of herding cats but like rich and powerful cats and also
have the task of paying artists enough and paying those labels enough and still making money in the internet space where people are not willing to pay hundreds of dollars a month so how do you navigate the space that's a beautiful description herding rich cats yeah it is very complicated and I think certainly actually betting against Spotify has been statistically a very smart thing to do just looking at the line of roadkill in music streaming services I think if I had understood the complexity when I joined Spotify fortunately I didn't know enough about the music industry to understand the complexities because then I would have made a more rational guess that it wouldn't work so you know ignorance is bliss but I think there have been a few distinct challenges I think as I said one of the things that made it work at all was that Sweden and the Nordics was a lost market so there was no risk for labels to try this I don't think it would have worked if the market was healthy so that was the initial condition then we had this tremendous challenge with the model itself so now most people were pirating but for the people who bought a download or CD the artist would get all the revenue for all the future plays then right so you got it all up front whereas the streaming model was like almost nothing day one almost nothing day two and then at some point this curve of incremental revenue would intersect with your day one payment and that took a long time to play out before the music labels understood that but on the other side it took a lot of time to understand that actually if I have a big hit that is gonna be played for many years this is a much better model because I get paid based on how much people use the product not how much they thought they would use it day one or so forth so it was a complicated model to get across but time helped
with that right and now the revenues to the music industry actually are bigger again it's gone through this incredible dip and now they're back up and so we're really proud of having been a part of that so there have been distinct problems I think when it comes to the labels we have taken the painful approach some of our competition at the time kind of looked at it and said instead if we just ignore the rights and we get really big really fast we're gonna be too big for the labels kind of too big to fail they're not gonna kill us we didn't take that approach we went legal from day one and we negotiated and negotiated and negotiated it was very slow it was very frustrating we were angry at seeing other companies taking shortcuts and seeming to get away with it it was this game theory thing where over many rounds of playing the game this would be the right strategy and even though clearly there's a lot of frustration at times during negotiations there is this weird trust where we have been honest and fair we never screwed them they never screwed us in ten years there's this trust in like they know that if music doesn't get really big if lots of people do not want to listen to music and want to pay for it Spotify has no business model so we actually are incredibly aligned right other companies not to name them but other companies have other business models where even if they make no money from music they'd still be profitable companies but Spotify won't so I think the industry sees that we are actually aligned business-wise so there is this trust that allows us to do product development even if it's scary you know taking risks the free model itself was an incredible risk for the music industry to take that they should get credit for now some of it was that they had nothing to lose in Sweden but frankly a lot of the labels also took risk
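the repeated-game intuition described here can be sketched with a toy iterated prisoner's dilemma (the payoffs and strategies below are the standard textbook ones, not a model of the actual label negotiations): defecting wins a single round, but over many rounds sustained cooperation pays more

```python
# Toy iterated prisoner's dilemma: a sketch of why repeated play rewards
# cooperation. Payoffs are standard textbook values, not a model of any
# real negotiation. Each round both players pick "C" (cooperate) or "D"
# (defect); (D, C) pays the defector most in one round, but (C, C) pays
# both players the most over time.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=100):
    """Play repeated rounds; each strategy sees the opponent's history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_b)
        b = strategy_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

mutual_trust, _ = play(tit_for_tat, tit_for_tat)      # 100 rounds of (C, C)
defector_score, _ = play(always_defect, tit_for_tat)  # one big win, then (D, D)
```

over 100 rounds mutual tit-for-tat earns 300 points each, while always-defect against tit-for-tat earns only 104: the one-shot move of defecting loses to the cooperation cell over repeated play, which is roughly the dynamic being described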
and so I think we built up that trust I think herding cats sounds a bit what's the word it sounds like I'm dismissive of the cats dismissive no no yeah they matter they're all beautiful and very important exactly they've taken a lot of risks and certainly it's been frustrating at times yeah it's really like playing it's game theory if you play the game many times then you can have this statistical outcome that you bet on and it feels very painful when you're in the middle of that thing I mean there's risk there's trust there's relationships from just having read the biography of Steve Jobs a similar kind of relationship was discussed with iTunes the idea of selling a song for a dollar was very uncomfortable for labels exactly and it was the same kind of thing it was trust it was game theory as is a lot of our relationships that had to be built and it's really a terrifyingly difficult process that Apple could go through a little bit easier because they could afford for that process to fail for Spotify it seems terrifying because you can't initially I think a lot of it comes down to honestly Daniel and his tenacity in negotiating which seems like an impossibly difficult task because he was completely unknown and so forth but maybe that was also the reason that it worked but I think yeah I think game theory is probably the best way to think about it you could go straight for this like Nash equilibrium where someone is going to defect or you play many times and try to actually go for the top left the cooperation cell is there any magical reason why Spotify seems to have won this a lot of people have tried to do what Spotify tried to do and Spotify has come out well so the answer is that there's no magical reason because I don't believe in magic but I think there are reasons and I think some of them are that people have misunderstood a lot of what we actually do
the actual Spotify model is very complicated people have looked at the premium model and said it seems like you can charge $9.99 for music and people are gonna pay but that's not what happened actually when we launched the original mobile product everyone said they would never pay what happened was they started on the free product and then their engagement grew so much that eventually they said maybe it is worth $9.99 right your propensity to pay grows with your engagement so we had this super complicated business model we operate two different business models advertising and premium at the same time and I think that is hard to replicate I struggle to think of other companies that run large-scale advertising and subscription products at the same time so I think the business model is actually much more complicated than people think it is and so some people went after just the premium part without the free part and ran into a wall where no one wanted to pay some people went after just music should be free just ads which doesn't give you enough revenue and doesn't work for the music industry so I think that combination is kind of opaque from the outside so maybe I shouldn't say it here and reveal the secret but that turns out to be harder to replicate than you would think so there's a lot of brilliant business strategy here brilliance or luck probably more luck but it doesn't really matter it looks brilliant in retrospect let's call it brilliant yeah when the books are written it'll be brilliant you've mentioned that your philosophy is to embrace change so how will the music streaming and music listening world change over the next 10 years 20 years you look out into the far future what do you think I think that music and for that matter audio podcasts audiobooks I think it's one of the few core human needs there is no good reason to me why it shouldn't be at the scale of something like messaging or
social networking I don't think it's a niche thing to listen to music or news or something so I think scale is obviously one of the things that I really hope for I hope that it's going to be billions of users I hope eventually everyone in the world gets access to all the world's music ever made so obviously I think it's going to be a much bigger business otherwise we wouldn't be betting this big now if you look more at how it is consumed what I'm hoping is back to this analogy of the software tool chain where I sometimes internally make this analogy to text messaging text messaging was also based on standards in the era of mobile carriers you had the SMS the 140 character SMS and it was great because everyone agreed on the standard so as a consumer you got a lot of distribution and interoperability but it was a very constrained format and when the industry wanted to add pictures to that format to do MMS I looked it up and I think it took from the late 80s to the early 2000s this is like a 15 20 year product cycle to bring pictures into that now once that entire value chain of creation and consumption got wrapped in one software stack within something like snapchat or whatsapp like the first week they had disappearing messages then two weeks later they added stories like the pace of innovation when you're on one software stack and you can affect both creation and consumption I think is going to be rapid so with these streaming services we now for the first time in history have enough people on one of these services whether it's Spotify or Amazon or Apple or YouTube and hopefully enough creators that you can actually start working with the format again and that excites me I think being able to change these constraints from a hundred years that could really do something interesting I really hope it's not just going to be iteration on
the same thing for the next ten to twenty years as well yeah changing the creation of music the creation of audio the creation of podcasts is a really fascinating possibility I myself don't understand what it is about podcasts that's so intimate it just is I listen to a lot of podcasts I think it touches on a deep human need for connection that people do feel connected to the person they listen to I don't understand what the psychology of that is but in this world that's becoming more and more disconnected it feels like this is fulfilling a certain kind of need and empowering the creator as opposed to just the listener is really interesting so I'm really excited that you're working on it yeah I think one of the things that is inspiring for our teams to work on podcasts is exactly that what do you think it is I think like you I don't fully understand it there's something biological about perceiving to be in the middle of the conversation that makes you listen in a different way whatever it is people seem to perceive it differently and there was this narrative for a long time that if you look at video everything kind of inevitably got shorter and shorter and shorter because of financial pressures monetization and so forth and in the end there's only like 20 second clips of people just screaming something and I feel really good about the fact that you could have interpreted that as people have no attention span anymore they don't want to listen to things they're not interested in deeper stories like people are getting dumber but then podcasts came along and it's almost like no no the need still existed but maybe it was the fact that you're not prepared to look at your phone like this for two hours but if you can drive at the same time it seems like people really want to dig and they want to hear the more complicated version so to me that is very inspiring that podcasts are actually
long-form it gives me a lot of hope for humanity that people seem really interested in hearing deeper more complicated discussions I don't understand it it's fascinating so the majority of listeners for this podcast listen to the whole thing this whole conversation we've been talking for an hour and 45 minutes and somebody will I mean most people will be listening to these words I'm speaking right now you wouldn't have thought that 10 years ago with where the world seemed to be going that's very positive I think that's really exciting and empowering the creator is really exciting last question you also have a passion for mobile in general how do you see the smartphone world the digital space of smartphones and everything that's on the move whether it's internet of things and so on changing over the next 10 years I think that one way to think about it is that computing might be moving out of these multi-purpose devices the computer and the phone into specific-purpose devices and it will be ambient you know at least in my home you just shout something and there's always one of these speakers close enough and so you start behaving differently it's as if you have the internet ambiently around you and you can ask it things so I think computing will kind of get more integrated and we won't necessarily think of it as connected to a device in the same way that we do today I don't know the path to that maybe we used to have these desktop computers and then we partially replaced that with laptops that we left you know at home and at work and then we got these phones and we started leaving the laptop at home for a while and maybe for stretches of time you're gonna start using the watch and you can leave your phone at home like for a run or something and we're on this progressive path where I think what is happening with
the voice is that you have an interaction paradigm that doesn't require as large physical devices so I definitely think there's a future where you can have your airpods and your watch and you can do a lot of computing and I don't think it's gonna be this binary thing I think it's gonna be like many of us still have a laptop we just use it less and so you shift your consumption over and I don't know about AR glasses and so forth I'm excited about it I spend a lot of time in that area but I still think it's quite far away AR or VR oh yes VR is happening and working I think the recent oculus quest is quite impressive I think AR is further away at least that type of AR but I do think your phone or watch or glasses understanding where you are and maybe what you're looking at and being able to give you audio cues or you can say like what is this and it tells you what it is that I think might happen you know you use your watch or your glasses as a mouse pointer on reality I think it might be a while I might be wrong I hope I'm wrong but I think it might be a while before we walk around with these big lab glasses that project things I agree with you it's actually really difficult when you have to understand the physical world enough to project onto it well I lied about the last question because I just thought of audio and my favorite topic which is the movie her do you think whether it's part of Spotify or not we'll have I don't know if you've seen the movie her absolutely there audio is the primary form of interaction and the connection with another entity that you can actually have a relationship with actually fall in love with based on voice alone audio alone how far do you think that's possible first of all based on audio alone to fall in love with somebody or well yeah let's go with somebody just have a relationship based on audio alone and second
question to that can we create an artificial intelligence system that allows one to fall in love with it and it with you like in her so my personal answer speaking for me as a person the answer is quite unequivocally yes on both I think what we just said about podcasts and the feeling of being in the middle of a conversation if you could have an assistant that feels like a very personal setting so if you walk around with these headphones and you're speaking with this thing all of the time it feels like it's in your brain I think it's gonna be much easier to fall in love with than something that would be on your screen I think that's entirely possible and then as for the concept of whether it's going to be possible to build a machine that can achieve that whether you think of it as if you can fake it the philosophical zombie that simulates it enough or it somehow actually is I think it's only a question of time if you ask me about when I'd hate to guess but if you say given some half infinite time absolutely I think it's just atoms and arrangement of information well I personally think that love is a lot simpler than people think so we started with true romance and ended in love I don't see a better place to end beautiful Gustav thanks so much for talking today thank you so much it was a lot of fun
Chris Urmson: Self-Driving Cars at Aurora, Google, CMU, and DARPA | Lex Fridman Podcast #28
the following is a conversation with Chris Urmson he was the CTO of the Google self-driving car team a key engineer and leader behind the Carnegie Mellon University autonomous vehicle entries in the DARPA grand challenges and the winner of the DARPA urban challenge today he's the CEO of Aurora Innovation an autonomous vehicle software company he started with Sterling Anderson who was the former director of Tesla autopilot and Drew Bagnell Uber's former autonomy and perception lead Chris is one of the top roboticists and autonomous vehicle experts in the world and a longtime voice of reason in a space that is shrouded in both mystery and hype he both acknowledges the incredible challenges involved in solving the problem of autonomous driving and is working hard to solve it this is the artificial intelligence podcast if you enjoy it subscribe on YouTube give it five stars on iTunes support it on patreon or simply connect with me on Twitter at Lex Fridman spelled f-r-i-d-m-a-n and now here's my conversation with Chris Urmson you were part of both the DARPA grand challenge and the DARPA urban challenge teams at CMU with Red Whittaker what technical or philosophical things have you learned from these races I think the high order bit was that it could be done I think that was the thing that was incredible about the first of the grand challenges I remember you know I was a grad student at Carnegie Mellon and there was kind of this dichotomy of it seemed really hard so it'd be cool and interesting but you know at the time we were the only robotics institute around and so if we went into it and fell on our faces that would be embarrassing so I think just having the will to go do it to try to do this thing that at the time was marked as darn near impossible and then after a couple of tries be able to actually make it happen I think that was really exciting but at which point did you believe it was
possible did you from the very beginning did you personally because you're one of the lead engineers you actually had to do a lot of the work yeah I was the technical director there and did a lot of the work along with a bunch of other really good people did I believe it could be done yeah of course right like why would you go do something you thought was completely impossible we thought it was gonna be hard we didn't know how we were gonna be able to do it we didn't know if we'd be able to do it the first time turns out we couldn't yeah I think there's a certain benefit to naivete right that if you don't know how hard something really is you try different things and it gives you an opportunity that others who are wiser maybe don't have what were the biggest pain points mechanical sensors hardware software algorithms for mapping localization just general perception control the hardware software first of all I think that's the joy of this field is that it's all hard and that you have to be good at each part of it so for the first two grand challenges if I look back at it from today it should be easy today it was a static world there weren't other actors moving through it that's what that means and it was out in the desert so you get really good GPS so that went in our favor and we could map it roughly and so in retrospect now it's within the realm of things we could do back then just actually getting the vehicle there's a bunch of engineering work to get the vehicle so that we could control and drive it that's still a pain today but it was even more so back then and then the uncertainty of exactly what they wanted us to do was part of the challenge as well right you didn't actually know the track heading in you knew approximately but you didn't actually know the route the route that was gonna be taken that's right we
didn't know the route we didn't even really the way the rules had been described you had to kind of guess so if you think back to that challenge the idea was that DARPA would give us a set of waypoints and kind of the width that you had to stay within around the line that went between each of those waypoints and so the most devious thing they could have done is set a kilometer wide corridor across a field of scrub brush and rocks and said you know go figure it out fortunately it really turned into basically driving along a set of trails which is much more relevant to the application they were looking for but it was a hell of a thing back in the day so the legend Red was kind of leading that effort just broadly speaking so you're a leader now what have you learned from Red about leadership I think there's a couple things one is go and try those really hard things that's where there is incredible opportunity I think the other big one though is to see people for who they can be not who they are that's one of the deepest lessons I learned from Red he would look at undergraduates or graduate students and empower them to be leaders to have responsibility to do great things where I think another person might look at them and think oh that's just another graduate student what could they know and so I think that kind of trust and confidence in what people can become I think is a really powerful thing so through that just fast-forward through the history can you maybe talk through the technical evolution of autonomous vehicle systems from the first two grand challenges to the urban challenge to today are there major shifts in your mind or is it the same kind of technology just made more robust I think there's been some big steps so
for the grand challenge the real technology that unlocked that was HD mapping prior to that a lot of the off-road robotics work had been done without any real prior model of what the vehicle was going to encounter and so that innovation the fact that we could get decimeter resolution models was really a big deal and that allowed us to kind of bound the complexity of the driving problem the vehicle had and allowed it to operate at speed because we could assume things about the environment that it was going to encounter so that was one of the big steps there for the urban challenge one of the big technological innovations there was the multi-beam lidar and being able to generate high resolution mid to long range 3d models of the world and use that for understanding the world around the vehicle and that was really a kind of game-changing technology in parallel with that we saw a bunch of other technologies that had been kind of converging have their day in the sun so Bayesian estimation slam had been a big field in robotics you would go to a conference a couple years before that and every paper would effectively have slam somewhere in it and so seeing those Bayesian estimation techniques play out on a very visible stage I thought that was pretty exciting to see and mostly slam was done based on lidar at that time well yeah and in fact we weren't really doing slam per se in real time because we had a model ahead of time we had a road map but we were doing localization and we were using the lidar or the cameras depending on who exactly was doing it to localize to a model of the world and I thought that was a big step from kind of naively trusting GPS and INS before that and again like lots of work had been going on in this field certainly this was not
doing anything particularly innovative in slam or in localization but it was seeing that technology necessary in a real application on a big stage I thought was very cool so for the urban challenge those were maps constructed offline yes in general okay and did individual teams do that individually so they had their own different approaches or did they share that information at least intuitively so DARPA gave all the teams a model of the world a map and then one of the things that we had to figure out back then and it's still one of these things that trips people up today is actually the coordinate system so you get a latitude longitude to so many decimal places and you don't really care about kind of the ellipsoid of the earth that's being used but when you want to get to ten centimeter or centimeter resolution you care whether the coordinate system is NAD83 or WGS84 you know these are different ways to describe both the kind of non-sphericalness of the earth but also in I think I can't remember which one the tectonic shifts that are happening and how to transform the global datum as a function of that so you're getting a map and then actually matching it to reality at centimeter resolution that was kind of interesting and fun back then so how much work was the perception doing there how much were you relying on localization based on maps without using perception to register to the maps and I guess the question is how advanced was perception at that point it's certainly behind where we are today right we're more than a decade since the grand challenges and the urban challenge but the core of it was there we were tracking vehicles we had to do that at a hundred plus meter range because we had to merge with other traffic we were using Bayesian estimates for the state
of these vehicles we had to deal with a bunch of the problems that you think of today of predicting where that vehicle is going to be a few seconds into the future we had to deal with the fact that there were multiple hypotheses for that because a vehicle at an intersection might be going right or it might be going straight or it might be making a left turn and we had to deal with the challenge of the fact that our behavior was going to impact the behavior of that other operator and we did a lot of that in relatively naive ways but we still had to have some kind of answer yeah and so ten years later where does that take us today from that artificial city construction to real cities to the urban environment yeah I think the biggest thing is that the actors are truly unpredictable that most of the time the drivers on the road the other road users are out there behaving well but every once in a while they're not and the variety of other vehicles and of intended behaviors is much bigger than what we had back then we didn't have to deal with cyclists we didn't have to deal with pedestrians didn't have to deal with traffic lights the scale over which you have to operate is now much larger than what we were thinking about back then so easy question what do you think is the hardest part about driving easy question yeah no I'm joking I'm sure nothing really jumps out at you as one thing but in the jump from the urban challenge to the real world is there something that you foresee as a very serious difficult challenge I think the most fundamental difference is that we're doing it for real whereas that environment was a limited complexity environment because certain actors weren't there because the roads were maintained there were barriers keeping people
separate from robots at the time and it only had to work for 60 miles which looking at it from 2006 it had to work for 60 miles yeah right looking at it from now we want things that will go and drive for half a million miles and it's just a different game so how important you said lidar came into the game early on and it's really the primary driver of autonomous vehicles today as a sensor so how important is the role of lidar in the sensor suite in the near term so I think it's essential you know I believe that but I also believe the cameras are essential and I believe the radar is essential I think that you really need to use the composition of data from these different sensors if you want the thing to really be robust the question I want to ask let's see if we can untangle it is what are your thoughts on the Elon Musk provocative statement that lidar is a crutch that it's kind of I guess a growing pain and that much of the perception task can be done with cameras so I think it is undeniable that people walk around without lasers in their foreheads and they can get into vehicles and drive them and so there's an existence proof that you can drive using passive vision no doubt can't argue with that in terms of sensors yeah in terms of sensors right so there's an example that many of us do every day in terms of lidar being a crutch sure but in the same way that the combustion engine was a crutch on the path to an electric vehicle in the same way that any technology ultimately gets replaced by some superior technology in the future and really the way that I look at this is that the way we get around on the ground the way that we use transportation is broken and that we have you know what was I think the number I saw this morning 37,000 Americans killed last year on
our roads and that's just not acceptable and so any technology that we can bring to bear that accelerates this self-driving technology coming to market and saving lives is technology we should be using and it feels just arbitrary to say well you know I'm not okay with using lasers because that's whatever but I am okay with using an 8 megapixel camera or a 16 megapixel camera you know like these are just bits of technology and we should be taking the best technology from the tool bin that allows us to go and you know solve a problem the question I often talk to well obviously you do as well to sort of automotive companies and you know if there's one word that comes up more often than anything it's cost and trying to drive cost down so while it's true that it's a tragic number the 37,000 the question is and I'm not the one asking these questions I hate this question but we want to find the cheapest sensor suite that creates a safe vehicle so in that uncomfortable trade-off do you foresee lidar coming down in cost in the future or do you see a day where level four autonomy is possible without lidar I see both of those but it's really a matter of time and I think really maybe I would talk to the question you asked about you know the cheapest sensor certainly I don't think that's actually what you want what you want is a sensor suite that is economically viable and then after that everything is about margin and driving cost out of the system what you also want is a sensor suite that works and so it's great to tell a story about how it'd be better to have a self-driving system with a $50 sensor instead of a $500 sensor but if the $500 sensor makes it work and the $50 sensor doesn't work you know who cares as long as you can actually you know there's an economic opportunity there and the economic opportunity is important because that's how you actually
have a sustainable business and that's how you can actually see this come to scale and be out in the world and so when I look at lidar I see a technology that has no underlying fundamental expense to it it's going to be more expensive than an imager because you know CMOS processes or you know fab processes are dramatically more scalable than mechanical processes but we still should be able to drive cost out substantially on that side and then I also do think that with the right business model you can absorb certainly more cost on the bill of materials yeah if the sensor suite works extra value is provided thereby you don't need to drive costs down to zero it's basic economics you've talked about your intuition that level two autonomy is problematic because of the human factor of vigilance that can lead to complacency over-trust and so on just us being human yeah we trust the system we start doing even more so partaking in the secondary activities like smartphone use and so on have your views evolved on this point in either direction can you speak to it so and I want to be really careful because sometimes this gets twisted in a way that I certainly didn't intend so active safety systems are a really important technology that we should be pursuing and integrating into vehicles and there's an opportunity in the near term to reduce accidents reduce fatalities and we should be pushing on that level two systems are systems where the vehicle is controlling two axes so braking and throttle slash steering and I think there are variants of level two systems that are supporting the driver that absolutely we should encourage to be out there where I think there's a real challenge is in the human factors part around this and the misconception from the public around the capability set that that enables and the trust they should have in it and that
is where I you know I am actually incrementally more concerned around level three systems and you know how exactly a level two system is marketed and delivered and you know how much effort people have put into those human factors so I still believe several things around this one is people will over-trust the technology we've seen over the last few weeks you know a spate of people sleeping in their Tesla you know I watched an episode last night of Trevor Noah talking about this and you know this is a smart guy who has a lot of resources at his disposal describing a Tesla as a self-driving car and why shouldn't people be sleeping in their Tesla and it's like well because it's not a self-driving car and it is not intended to be and you know these people will almost certainly you know die at some point or hurt other people and so we need to really be thoughtful about how that technology is described and brought to market I also think that because of the economic issues the economic challenges we were just talking about that technology path these level two driver assistance systems that technology path will diverge from the technology path that we need to be on to actually deliver truly self-driving vehicles ones where you can get in and sleep and have the equivalent or better safety than you know a human driver behind the wheel because again the economics are very different in those two worlds and so that leads to you know divergent technology so you just don't see the economics of gradually increasing from level two and doing so quickly enough to where it doesn't cause critical safety concerns you believe that it needs to diverge at this point into basically two different routes and really that comes back to what are those L2 and L1 systems doing and they are driver assistance functions where the people that are marketing that responsibly are being very clear and
putting human factors in place such that the driver is actually responsible for the vehicle and that the technology is there to support the driver and the safety cases that are built around those are dependent on that driver attention and attentiveness and at that point you can kind of give up to some degree for economic reasons you can give up on say false negatives and the way to think about this is for a collision mitigation braking system if half the time the driver missed a vehicle in front of it it hit the brakes and brought the vehicle to a stop that would be an incredible advance in safety on our roads right that would be equivalent to seatbelts but it would mean that if that vehicle wasn't being monitored it would hit one out of two cars and so economically that's a perfectly good solution for a driver assistance system what you should do at that point if you can get it to work 50 percent of the time is drive the cost out of that so you can get it on as many vehicles as possible but driving the cost out of it doesn't drive up performance on the false negative case and so you'll continue to not have a technology that could you know really be available for a self-driven vehicle so clearly the communication and this probably applies to all vehicles as well the marketing and communication of what the technology is actually capable of how hard it is how easy it is all that kind of stuff is highly problematic so let's say everybody in the world was perfectly communicated with and were made to be completely aware of every single technology out there and what it's able to do what's your intuition and now maybe getting into philosophical ground is it possible to have a level two vehicle where we don't over-trust it I don't think so if people truly understood the risks and internalized it then sure you could do that safely but that's a world that doesn't exist people are going to you know if
the facts are put in front of them they're gonna then combine that with their experience and you know let's say they're using an L2 system and they go up and down the 101 every day and they do that for a month and it just worked every day for a month like that's pretty compelling at that point you know even if you know the statistics it's like well I don't know maybe there's something funny about those maybe they're you know driving in difficult places I've seen it with my own eyes it works and the problem is that sample size that they have so it's 30 miles up and back so 60 miles times 30 days so 1,800 miles that's a drop in the bucket compared to the you know what eighty-five million miles between fatalities and so they don't really have a true estimate based on their personal experience of the real risks but they're gonna trust it anyway because it's hard not to it worked for a month what's gonna change so even if you start with a perfect understanding of the system your own experience will make it drift and that's a big concern you know over a year over two years even it doesn't have to be months and I think that as this technology moves from what I'd say is kind of the more technology-savvy ownership group to the mass market you may be able to have some of those folks who are really familiar with technology they may be able to internalize it better and you know your kind of immunization against this kind of false risk assessment might last longer but for folks who aren't as savvy about that you know who read the material and compare that to their personal experience I think there you know it's gonna move more quickly so your work the program that you created at Google and now at Aurora is focused more on the second path of creating full autonomy it's such a fascinating I think it's one of the most interesting AI problems of the century right I just talked
to a lot of people just regular people I don't know my mom about autonomous vehicles and you begin to grapple with ideas of giving your life control over to a machine it's philosophically interesting it's practically interesting so let's talk about safety how do you think we demonstrate you've spoken about metrics in the past how do you think we demonstrate to the world that an autonomous vehicle an Aurora system is safe this is one where it's difficult because there isn't a soundbite answer we have to show a combination of work that was done diligently and thoughtfully and this is where something like a functional safety process is part of that it's like here's the way we did the work that means that we were very thorough so you know if you believe what we said about this is the way we did it then you can have some confidence that we were thorough in the engineering work we put into the system and then on top of that to kind of demonstrate that we weren't just thorough we were actually good at what we did there'll be a kind of a collection of evidence in terms of demonstrating that the capabilities work the way we thought they did you know statistically and to whatever degree we can demonstrate that both in some combination of simulation some combination of unit testing and decomposition testing and then some part of it will be on-road data and I think the way we will ultimately convey this to the public is there'll be clearly some conversation with the public about it but we'll you know kind of invoke the trusted nodes and we'll spend more time being able to go into more depth with folks like NHTSA and other federal and state regulatory bodies and given that they are operating in the public interest and they're trusted if we can show enough work to them that they're convinced then you know I think we're in a pretty good place that means you work with people that
are essentially experts at safety to try to discuss and show do you think the answer is probably no but just in case do you think there exists a metric so currently people have been using number of disengagements yeah and it quickly turns into a marketing scheme to just sort of alter the experiments you run I think you've spoken that you don't like that no in fact I was on the record telling the DMV that I thought this was not a great metric do you think it's possible to create a metric a number that could demonstrate safety outside of fatalities so I do and I think that it won't be just one number so as we are internally grappling with this and at some point we'll be able to talk more publicly about it is how do we think about human performance in different tasks say detecting traffic lights or safely making a left turn across traffic and what do we think the failure rates are for those different capabilities for people and then demonstrating to ourselves and then ultimately folks in the regulatory role and then ultimately the public that we have confidence that our system will work better than that and so these individual metrics can tell a compelling story ultimately I do think at the end of the day what we care about in terms of safety is lives saved and injuries reduced and then ultimately you know kind of the casualty dollars that people aren't having to pay to get their car fixed and I do think that in aviation they look at a kind of an event pyramid where you know a crash is at the top of that and that's the worst event obviously and then there's injuries and you know near-miss events and whatnot and you know violations of operating procedures and you kind of build a statistical model of the relevance of the low severity things to the high severity things I think that's something we'll be able to look at as well because you know an event per 85 million miles is you know
statistically a difficult thing even at the scale of the US to kind of compare directly and that event the fatality that's connected to an autonomous vehicle is significantly at least currently magnified in the amount of attention it gets so that speaks to public perception I think the most popular topic about autonomous vehicles in the public is the trolley problem formulation right which let's not get into that too much but it's misguided in many ways but it speaks to the fact that people are grappling with this idea of giving control over to a machine so how do you win the hearts and minds of the people that autonomy is something that could be a part of their lives I think you let them experience it I think it's right I think people should be skeptical I think people should ask questions I think they should doubt because this is something new and different they haven't touched it yet and I think it's perfectly reasonable but at the same time it's clear there's an opportunity to make the roads safer it's clear that we can improve access to mobility it's clear that we can reduce the cost of mobility and once people try it and you know understand that it's safe and are able to use it in their daily lives I think it's one of those things that will just be obvious and I've seen this practically in demonstrations that I've given where I've had people come in and you know they're very skeptical they get in a vehicle you know my favorite one is taking somebody out on the freeway and we're on the 101 driving at 65 miles an hour and after ten minutes they kind of turn and ask is that all it does and you're like yeah it's a self-driving car not sure exactly what they thought it would be right but you know it becomes mundane which is exactly what you want a technology like this to be right when I turn the light switch on in here I don't really think about the
complexity of you know those electrons being pushed down a wire from wherever they were being generated I just get annoyed if it doesn't work right and what I value is the fact that I can do other things in this space I can you know see my colleagues I can read stuff on a paper I can you know not be afraid of the dark and I think that's what we want this technology to be like it's in the background and people get to have those life experiences and do so safely so putting this technology in the hands of people speaks to scale of deployment all right so what do you think the dreaded question about the future because nobody can predict the future but just maybe speak poetically about when do you think we'll see a large-scale deployment of autonomous vehicles ten thousand those kinds of numbers we will see that within ten years I'm pretty confident what's an impressive scale what moment so you've done the DARPA Challenge where there's one vehicle at which moment does it become wow this is serious scale so I think the moment it gets serious is when we really do have a driverless vehicle operating on public roads and we can do that kind of continuously without a safety driver in the vehicle I think at that moment we've kind of crossed the zero to one threshold and then it is about how do we continue to scale that how do we build the right business models how do we build the right customer experience around it so that it is actually you know a useful product out in the world and I think that is really at that point it moves from a you know what is this kind of mixed science engineering project into engineering and commercialization and really starting to deliver on the value that we all see here and you know actually making that real in the world what do you think that deployment looks like where do we first see the inkling of no safety driver one or two cars
here and there is it on the highway is it in specific roads in the urban environment I think it's going to be urban suburban type environments you know with Aurora when we thought about how to tackle this it was kind of in vogue to think about trucking as opposed to urban driving and you know again the human intuition around this is that freeways are easier to drive on because everybody's kind of going in the same direction and you know lanes are wider etc and I think that intuition is pretty good except we don't really care about most of the time we care about all of the time and when you're driving on a freeway with a truck say at 70 miles an hour and you've got a 70,000 pound load with you that's just an incredible amount of kinetic energy and so when that goes wrong it goes really wrong and those challenges that you see occur more rarely so you don't get to learn as quickly and so they're you know incrementally more difficult than urban driving they're not easier than urban driving so I think this happens in moderate speed urban environments because if two vehicles crash at 25 miles per hour it's not good but probably everybody walks away and those events where there's the possibility for that occurring happen frequently so we get to learn more rapidly we get to do that with lower risk for everyone and then we can deliver value to people that they need to get from one place to another and then once we've got that solved the freeway driving part of this just falls out but we were able to learn it more safely more quickly in the urban environment so ten years and then scale twenty thirty years I mean who knows if a sufficiently compelling experience is created it could be faster or slower do you think there could be breakthroughs and what kind of breakthroughs might there be that completely change that timeline again not only am I asked to predict the future oh yeah I'm asking you
to predict breakthroughs that haven't happened yet so I think another way to ask that would be if I could wave a magic wand what part of the system would I make work today to accelerate it as quickly as possible don't say infrastructure please don't say infrastructure no it's definitely not infrastructure it's really that core perception forecasting capability so if tomorrow you could give me a perfect model of what's happened what is happening and what will happen for the next five seconds around a vehicle on the roadway that would accelerate things pretty dramatically in terms of staying up at night are you mostly bothered by cars pedestrians or cyclists so I worry most about the vulnerable road users about the combination of cyclists and pedestrians because you know they're not in armor with the cars they're bigger they've got protection for the people and so the ultimate risk is lower there whereas a pedestrian or cyclist they're out on the road and they don't have any protection and so you know we need to pay extra attention to that do you think about the very difficult technical challenge of the fact that pedestrians if you try to protect pedestrians by being careful and slow they'll take advantage of that so the game theoretic dance yeah does that worry you how from a technical perspective we solve that because as humans the way we solve that is we kind of nudge our way through the pedestrians which doesn't feel from a technical perspective like an appropriate algorithm but do you think about how we solve that problem yeah I think there's actually two different concepts there so one is am I worried that because these vehicles are self-driving people will kind of step in the road and take advantage of them and I've heard this and I don't really believe it because if I'm driving down the road and somebody steps in
front of me I'm going to stop right even if I'm annoyed I'm not going to just drive through a person standing in the road and so I think today people can take advantage of this and you do see some people do it I guess there's an incremental risk because maybe they have lower confidence that I'm going to see them than they might have for an automated vehicle and so maybe that shifts it a little bit but I think people don't want to get hit by cars and so I'm not that worried about people walking out onto the 101 and you know creating chaos more than they would today regarding kind of nudging through a big stream of pedestrians leaving a concert or something I think that is further down the technology pipeline I think you're right that's tricky but I don't think it's necessarily I think the algorithm people use for this is pretty simple right it's kind of just move forward slowly and if somebody's really close then stop and I think that can probably be replicated pretty easily and particularly given that you don't do this at 30 miles an hour you do it at one so that even in those situations the risk is relatively minimal but you know it's not something we're thinking about in any serious way and probably that's less an algorithm problem and more a problem of creating a human experience so the HCI people that create a visual display so that you're pleasantly as a pedestrian nudged out of the way yes that's a yeah that's an experience problem not an algorithm problem who's the main competitor to Aurora today and how do you out-compete them in the long run so we really focus a lot on what we're doing here I think that you know I've said this a few times this is a huge difficult problem and it's great that a bunch of companies are tackling it because I think it's so important for society that somebody gets there so you know we don't spend a whole lot of time thinking tactically about who's out there and
how do we beat that person individually what are we trying to do to go faster ultimately well part of it is the leadership team we have has got pretty tremendous experience and so we kind of understand the landscape and understand where the cul-de-sacs are to some degree and you know we try and avoid those I think part of it is just this great team we've built this is a technology and a company that people believe in the mission of and so it allows us to attract just awesome people to come work here we've got a culture I think that people appreciate that allows them to focus allows them to really spend time solving problems and I think that keeps them energized and then we've invested heavily in the infrastructure and architectures that we think will ultimately accelerate us so because of the folks we were able to bring in early on and because of the great investors we have you know we don't spend all of our time doing demos and kind of leaping from one demo to the next we've been given the freedom to invest in infrastructure to do machine learning infrastructure to pull data from our on-road testing infrastructure to use that to accelerate engineering and I think that that early investment and continuing investment in those kinds of tools will ultimately allow us to accelerate and do something pretty incredible Chris beautifully put it's a good place to end thank you so much for talking today oh thank you very much really enjoyed it
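As a footnote to the conversation above the sample-size arithmetic Urmson walks through (a month of commuting on the 101 versus the quoted rate of roughly one fatality per 85 million miles) can be checked directly both figures are the ones quoted in the episode

```python
# A driver's personal sample of L2 miles vs. the fatality base rate,
# using the figures quoted in the conversation above.
daily_miles = 60                       # 30 miles each way on the 101
days_observed = 30                     # one month of flawless operation
personal_sample = daily_miles * days_observed

miles_between_fatalities = 85_000_000  # rough US figure quoted above

print(personal_sample)                             # 1800
print(personal_sample / miles_between_fatalities)  # ~2.1e-05
```

in other words a month of flawless personal experience covers about two hundred-thousandths of a single fatality interval which is why personal experience is such a poor estimator of the real risk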
Kai-Fu Lee: AI Superpowers - China and Silicon Valley | Lex Fridman Podcast #27
the following is a conversation with Kai-Fu Lee he's the chairman and CEO of Sinovation Ventures that manages a two billion dollar dual currency investment fund with a focus on developing the next generation of Chinese high-tech companies he's the former president of Google China and the founder of what is now called Microsoft Research Asia an institute that trained many of the artificial intelligence leaders in China including CTOs or AI execs at Baidu Tencent Alibaba Lenovo and Huawei he was named one of the 100 most influential people in the world by Time magazine he's the author of seven best-selling books in Chinese and most recently the New York Times bestseller called AI Superpowers China Silicon Valley and the New World Order he has unparalleled experience in working across major tech companies and governments on applications of AI and so he has a unique perspective on global innovation and the future of AI that I think is important to listen to and think about this is the artificial intelligence podcast if you enjoy it subscribe on YouTube and iTunes support it on Patreon or simply connect with me on Twitter at lex friedman and now here's my conversation with Kai-Fu Lee I emigrated from Russia to the US when I was 13 you emigrated to the US at about the same age the Russian people the American people the Chinese people each have a certain soul a spirit that permeates throughout the generations so maybe it's a little bit of a poetic question but could you describe your sense of what defines the Chinese soul I think the Chinese soul of people today right we're talking about people who have had centuries of burden because of the poverty that the country has gone through and suddenly shined with hope of prosperity in the past 40 years as China opened up and embraced market economy and undoubtedly there are two sets of pressures on the people that of the tradition that of facing difficult situations and that of hope of wanting to be the first to become successful and
wealthy so there's a very strong hunger and strong desire and strong work ethic that drives China forward and are there roots to that not just in this generation but before is there something deeper than just the new economic development is there something that's unique to China that you could speak to that's in the people yeah well the Chinese tradition is about excellence dedication and results and the Chinese exams and study subjects in schools have traditionally started from memorizing ten thousand characters not an easy task to start with and further by memorizing the historic philosophers literature poetry so it really is probably the strongest rote learning mechanism created to make sure people had good memory and remembered things extremely well that I think at the same time suppresses breakthrough innovation and also enhances the speed of execution getting results and that I think characterizes the historic basis of China that's interesting because there are echoes of that in Russian education as well the rote memorization to memorize a lot of poetry I mean there's just the emphasis on perfection in all forms that's not conducive to perhaps what you're speaking to which is creativity but do you think that kind of education holds back the innovative spirit that you might see in the United States well it holds back the breakthrough innovative spirit that we see in the United States but it does not hold back the valuable execution-oriented result-oriented value-creating engines which we see China being very successful with so is there a difference between a Chinese AI engineer today and an American AI engineer perhaps rooted in the culture that we just talked about or the education or the very soul of the people or no and what would your advice be to each if there's a difference well there's a lot that's similar because AI is about mastering science it's about using known technologies and trying new things but it's also about picking from many types of
possible networks to use and different types of parameters to tune and that part is somewhat rote and it is also as anyone who's built AI products can tell you a lot about cleansing the data because AI runs better with more data and data is generally unstructured errorful and unclean and the effort to clean the data is immense so I think the better part of the American AI engineering process is to try new things to do things people haven't done before and to use technology to solve most if not all problems so to make the algorithm work despite not so great data find you know error-tolerant ways to deal with the data the Chinese way would be to basically enumerate to the fullest extent all the possible ways buy a lot of machines try lots of different ways to get it to work and spend a lot of resources and money and time cleaning up data that means the AI engineer may be writing data cleansing algorithms working with thousands of people who label or correct or do things with the data that is the incredibly hard work that might lead to better results so the Chinese engineer would rely on and ask for more and more data and find ways to cleanse it and make it work in the system and probably spend less time thinking about new algorithms that can overcome data or other issues so where's your intuition what do you think the biggest impact in the next 10 years lies is it in some breakthrough algorithms or is it in just this at-scale rigorous approach to data cleaning and organizing data onto the same algorithms well if you're really in a company and you have to deliver results using known techniques and enhancing data seems like the more expedient approach that's very low risk and likely to generate better and better results and that's why the Chinese approach has done quite well now there are a lot of more challenging startups and problems such as autonomous vehicles medical diagnosis that
existing algorithms probably won't solve and that would make the Chinese approach more challenged and give the more breakthrough-innovation approach more of an edge on those kinds of problems so let me talk to that a little more you know my intuition personally is that data can take us extremely far so you brought up autonomous vehicles and medical diagnosis so your intuition is that huge amounts of data might not be able to completely help us solve that problem right so breaking that down further in autonomous vehicles I think huge amounts of data probably will solve trucks driving on highways which will deliver significant value and China will probably lead in that and full L5 autonomy is likely to require new technologies we don't yet know and that might require academia and great industrial research both innovating and working together and in that case the US has an advantage so the interesting question in there is I don't know if you're familiar with the autonomous vehicle space and the developments with Tesla and Elon Musk I am where they are in fact full steam ahead into this mysterious complex world of full autonomy L4 L5 and they're trying to solve that purely with data so the same kind of thing that you're saying is just for highway which is what a lot of people share your intuition about yeah they're trying to solve it with data so just to linger on that moment do you think it's possible for them to achieve success with simply just a huge amount of this training on edge cases on difficult cases in urban environments not just highway and so on I think it'll be very hard one could characterize Tesla's approach as kind of a Chinese-strength approach right gather all the data you can and hope that will overcome the problems but in autonomous driving clearly a lot of the decisions aren't merely solved by aggregating data and having a feedback loop there are things that are more akin to human thinking and how would those be integrated and built there
has not yet been a lot of success integrating human intelligence, or, you know, call it expert systems, even though that's a taboo word, with machine learning, and the integration of the two types of thinking hasn't yet been demonstrated. The question is how much can you push a purely machine learning approach, and of course Tesla also has an additional constraint that they don't have all the sensors. I know that they think it's foolish to use lidars, but that's clearly one less very valuable and reliable source of input that they're forgoing, which may also have consequences. I think the advantage, of course, is capturing data no one has ever seen before, and in some cases, such as computer vision and speech recognition, I have seen Chinese companies accumulate data that's not seen anywhere in the Western world, and they have delivered superior results. But then speech recognition and object recognition are relatively suitable problems for deep learning and don't potentially need the human-intelligence, analytical, planning elements. Same on the speech recognition side: your intuition is that speech recognition and the machine learning approaches to speech recognition won't take us to a conversational system that can pass the Turing test, which is sort of maybe akin to what driving is, so it needs to have something more than just simple language understanding, simple language generation? Roughly right. I would say that, based on purely machine learning approaches, it's hard to imagine it could lead to a full conversational experience across arbitrary domains, which is akin to L5. I'm a little hesitant to use the word Turing test, because the original definition was probably too easy; we can probably do that. Yeah, the spirit of the Turing test, that's what I was referring to, of course. So you've had major leadership research positions at Apple, Microsoft, Google. Continuing on the discussion of the American, Russian, and Chinese soul and culture and so on, what is the
culture of Silicon Valley, in contrast to China and maybe the US broadly, and what is the unique culture of each of these three major companies, in your view? I think in aggregate, Silicon Valley companies, and we could probably include Microsoft in that even though they're not in the valley, really dream big and have visionary goals, and believe that technology will conquer all, and also have the self-confidence and the self-entitlement that whatever they produce, the whole world should use and must use. And those are historically important. I think, you know, Steve Jobs' famous quote that he doesn't do focus groups, he looks in the mirror and asks the person in the mirror, what do you want? And that really is an inspirational comment that says a great company shouldn't just ask users what they want, but develop something that users will know that they want when they see it, but could never come up with themselves. I think that is probably the most exhilarating description of what the essence of Silicon Valley is, that a brilliant idea could cause you to build something that couldn't come out of focus groups or A/B tests. The iPhone would be an example of that: no one in the age of BlackBerry would write down that they want an iPhone or multi-touch. A browser might be another example: no one would say they want that in the days of FTP, but once they see it, they want it. So I think that is what Silicon Valley is best at, but it also came with a lot of success. These products became global platforms, and there were basically no competitors anywhere, and that has also led to the belief that these are the only things that one should do, that companies should not tread on other companies' territory. So, you know, Groupon and Yelp and OpenTable and GrubHub each feel, okay, I'm not going to do the other company's business, because that would not carry the pride of innovating whatever each of these four companies innovated. But I think the Chinese approach is do whatever it takes to
win, and it's a winner-take-all market. In fact, in the internet space, the market leader will get predominantly all the value extracted out of the system, and the system isn't just defined as one narrow category but gets broader and broader. So there's amazing ambition for success and domination of increasingly larger product categories, leading to clear market-winner status and the opportunity to extract tremendous value, and that develops a practical, results-oriented, ultra-ambitious, winner-take-all, gladiatorial mentality. If what it takes is to build what the competitors built, essentially a copycat, that can be done without infringing laws. If what it takes is to satisfy a foreign country's need by forking the codebase and building something that looks really ugly and different, they'll do it. So it contrasts very sharply with the Silicon Valley approach, and I think the flexibility and the speed and execution have helped the Chinese approach. I think the Silicon Valley approach is potentially challenged if every Chinese entrepreneur is learning from the whole world, US and China, and the American entrepreneurs only look internally and write off China as a copycat. And the second part of your question, about the three companies? The unique elements of the three companies, perhaps. Yeah. I think Apple represents, wow the user, please the user, and the essence of design and brand, and it's the one company, and perhaps the only tech company, that draws people with a strong, serious desire for the product, and the need and the willingness to pay a premium, because of the halo effect of the brand, which came from the attention to detail and great respect for user needs. Microsoft represents a platform approach that builds giant products that become very strong moats that others can't replicate, because it's well architected at the bottom level, and the work is efficiently delegated to individuals, and then the whole product is built by adding small parts that sum together,
so it's probably the most effective high-tech assembly line that builds a very difficult product, and the whole process of doing that is kind of a differentiation and something competitors can't easily repeat. Are there elements of the Chinese approach in the way Microsoft went about assembling those little pieces and essentially dominating the market for a long time, or do you see this as distinct? I think there are elements that are the same. I think the three American companies that had, or have, Chinese characteristics, as well as obviously American characteristics, are Microsoft, Facebook, and Amazon. Yes, that's right, Amazon. Because these are companies that will tenaciously go after adjacent markets, build up strong product offerings, and find ways to extract greater value from a sphere that's ever-increasing, and they understand the value of platforms. So that's the similarity. And then Google, I think, is a genuinely value-oriented company that does have a heart and soul, and that wants to do great things for the world by connecting information, and that has also very strong technology genes and wants to use technology, and has found out-of-the-box ways to use technology to deliver incredible value to the end user. We can look at Google, for example. You mentioned heart and soul. There seems to be an element where Google is after making the world better; there's a more positive view. I mean, they used to have the slogan, don't be evil. And Facebook has a little bit more of a negative tint to it, at least in the perception of privacy and so on. Do you have a sense of how these different companies can achieve, because you've talked about how much we can make the world better in all these kinds of ways with AI, what is it about a company that can give it a heart and soul, gain the trust of the public, and actually just not be evil and do good for the world? It's really hard, and I think Google has struggled with that. First, that don't-be-evil
mantra is very dangerous, because every employee's definition of evil is different, and that has led to some difficult employee situations for them. So I don't necessarily think that's a good value statement. But just watching the kinds of things Google or its parent company Alphabet does in new areas like healthcare, like, you know, eradicating mosquitoes, things that are really not in the business of an internet tech company, I think that shows that there is a heart and soul, and a desire to do good, and a willingness to put in the resources. When they see something is good, they will pursue it. That doesn't necessarily mean it has all the trust of the users. I realize, while most people would view Facebook as the primary target of their resentment and unhappiness about Silicon Valley companies, many would put Google in that category, and some have named Google's business practices as predatory also. So it's kind of difficult to have the two parts of a body: the brain wants to do what it's supposed to do for shareholders, maximize profit, and then the heart and soul wants to do good things that may run against what the brain wants to do. So in this complex balancing that these companies have to do, you've mentioned that you're concerned about a future where too few companies like Google, Facebook, Amazon are controlling our data, are controlling too much of our digital lives. Can you elaborate on this concern, and perhaps do you have a better way forward? I think I'm hardly the most vocal complainer about this; there are a lot louder complainers out there. I do observe that having a lot of data does perpetuate their strength and limit competition in many spaces, but I also believe AI is much broader than the internet space, so the entrepreneurial opportunities still exist in using AI to empower financial, retail, manufacturing, education applications. So I don't think it's quite a case of full monopolistic dominance that totally stifles innovation, but I do believe in their
areas of strength it's hard to dislodge them. I don't know if I have a good solution. Probably the best solution is let the entrepreneurial VC ecosystem work well and find all the places that can create the next Google, the next Facebook, so there will always be an increasing number of challengers. In some sense that has happened a little bit: you see Uber, Airbnb having emerged despite the strength of the big three, and I think China as an environment may be more interesting for that emergence, because if you look at companies between, let's say, 50 to 300 billion dollars, China has produced more of such companies than the US in the last three to four years, because of the larger marketplace, because of the more fearless nature of the entrepreneurs, and the Chinese giants are just as powerful as the American ones. Tencent, Alibaba, very strong, but ByteDance has emerged, worth 75 billion, and Ant Financial, well, it's Alibaba-affiliated, but it's nevertheless independent and worth 150 billion. So I do think if we start to extend to traditional businesses, we will see very valuable companies, so it's probably not the case that in five or ten years we'll still see the whole world with these five companies having such dominance. So you've mentioned a couple times this fascinating world of entrepreneurship in China, the fearless nature of the entrepreneur. So can you maybe talk a little bit about what it takes to be an entrepreneur in China? What are the strategies that are undertaken, what are the ways to achieve success, what is the dynamic of VC funding, of the way the government helps companies, and so on? What are the interesting aspects here that are distinct from the Silicon Valley world of entrepreneurship? Well, many of the listeners probably still would brand Chinese entrepreneurs as copycats, and no doubt ten years ago that would not be an inaccurate description. Back ten years ago, an entrepreneur probably could not get funding if he or she could
not describe what product he or she is copying from the US. The first question is, who has proven this business model, which is a nice way of asking, who are you copying? And that is understandable, because China had a much lower internet penetration and didn't have enough indigenous experience to build innovative products. And secondly, as the internet was emerging, lean startup was the way to do things: building a first minimum viable product and then expanding was the right way to go, and the American successes had given shortcuts, so if you build your minimum viable product based on an American product, it's guaranteed to be a decent starting point, then you tweak it afterwards. So as long as there's no IP infringement, which as far as I know there hasn't been in the mobile and AI spaces, that's a much better shortcut. And I think Silicon Valley would view that as still not very honorable, because that's not your own idea to start with, but you can't really at the same time believe every idea must be your own and believe in the lean startup methodology, because lean startup is intended to try many, many things and then converge on one that works, and it's meant to be iterated and changed. So finding a decent starting point without legal violations, there should be nothing morally dishonorable about that. So just a quick pause on that, it's fascinating. Why is that not honorable, right? Exactly as you formulated it, it seems like a perfect way to start a business, to take, you know, look at Amazon and say, okay, we'll do exactly what Amazon is doing, let's start there, in this particular market, and then innovate from that starting point, come up with new ways. I mean, is it wrong to be, to accept the word, a copycat? It sounds bad, but is it wrong to be a copycat? It just seems like a smart strategy. But yes, it doesn't have a heroic nature to it, like, as I said, a Steve Jobs, an Elon Musk, sort of coming up
with something completely new. Yeah, I like the way you describe it: it's a non-heroic, acceptable way to start a company, and maybe more expedient. So that's, I think, a baggage for Silicon Valley, that if it doesn't let go, then it may limit the ultimate ceiling of the company. Take Snapchat as an example: I think, you know, Evan's brilliant, he built great products, but he's very proud that he wants to build his own features, not copy others, while Facebook was more willing to copy his features, and you see what happens in the competition. So I think putting that handcuff on a company would limit its ability to reach its maximum potential. So back to the Chinese environment: copying was merely a way to learn from the American masters. Just like if you learn to play piano or to paint, you start by copying; you don't start by innovating when you don't have the basic skill sets. So, very amazingly, the Chinese entrepreneurs about six years ago started to branch off, with these lean startups built on American ideas, to build better products than the American products. But they did start from the American idea, and today WeChat is better than WhatsApp, Weibo is better than Twitter, and Zhihu is better than Quora, and so on. So that, I think, is the Chinese entrepreneurs going to step two, and then step three is, once these entrepreneurs have done one or two of these companies, they now look at the Chinese market and the opportunities, and come up with ideas that didn't exist elsewhere. So products like Ant Financial, which includes Alipay, which is mobile payments, and also the financial products for loans built on that; and also, in education, VIPKid; and in social video and social networks, TikTok; and in social e-commerce, Pinduoduo; and then in bike sharing, Mobike. These are all innovative Chinese products that now are being copied elsewhere. And the additional interesting observation is that some of these products are built on unique Chinese demographics, which may not work in
the US but may work very well in Southeast Asia, Africa, and other developing worlds that are a few years behind China, and a few of these products may be universal and are getting traction even in the United States, such as TikTok. So this whole ecosystem is supported by VCs as a virtuous cycle, because a large market with innovative entrepreneurs will draw a lot of money that then invests in these companies. As the market gets larger and larger, and the China market is easily three, four times larger than the US, they will create greater value and greater returns for the VCs, thereby raising even more money. So at Sinovation Ventures, our first fund was fifteen million, our last fund was five hundred million, so it reflects the valuation of the companies, and our going multi-stage, and things like that. It also has government support, but not in the way most Americans would think of it. The government actually leaves the entrepreneurial space as private enterprise, so it's self-regulating, and the government builds infrastructure around it to make it work better. For example, the mass entrepreneurship, mass innovation plan built eight thousand incubators, so the pipeline to the VCs is very strong. For autonomous vehicles, the Chinese government is building smart highways with sensors, smart cities that separate pedestrians from cars, that may allow an initially inferior autonomous vehicle company to launch a car with lower casualties, because the road or the city is smart. And the Chinese government at local levels would have these guiding funds acting as LPs, passive LPs, in funds, and when the fund makes money, part of the money made is given back to the GPs and potentially the other LPs, to increase everybody's return at the expense of the government's return. So that's an interesting incentive that entrusts the task of choosing entrepreneurs to the VCs, who are better at it than the government, by letting some of the profits move that way.
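The guiding-fund incentive just described can be sketched with toy numbers. This is an illustrative model only, not the terms of Sinovation's funds or any real guiding fund: all partner names, commitments, and the giveback fraction below are hypothetical. The idea is simply that the government LP takes its pro-rata share of profit, then hands a fraction of that share back to the other partners.

```python
# Illustrative sketch (hypothetical numbers): a government "guiding fund"
# acts as a passive LP and gives back part of its own profit share to the
# other partners, boosting their returns at the expense of its own.

def guiding_fund_returns(commitments, profit, gov_lp, gov_giveback=0.5):
    """Split fund profit pro rata by commitment, then redistribute a
    fraction of the government LP's profit to the other partners,
    again pro rata by their commitments."""
    total = sum(commitments.values())
    # Step 1: plain pro-rata split of the profit.
    share = {lp: profit * c / total for lp, c in commitments.items()}
    # Step 2: the government LP gives back part of its share.
    subsidy = share[gov_lp] * gov_giveback
    share[gov_lp] -= subsidy
    others_total = total - commitments[gov_lp]
    for lp, c in commitments.items():
        if lp != gov_lp:
            share[lp] += subsidy * c / others_total
    return share

# A $100M fund: the government commits $40M, private partners $60M,
# and the fund earns $50M of profit.
split = guiding_fund_returns({"government": 40, "private": 60},
                             profit=50, gov_lp="government")
# Pro rata the government would earn 20; after giving back half,
# the private partners earn 40 instead of 30.
```

With these made-up numbers, the private partners' multiple on profit rises from 0.5x of their commitment to 0.67x, which is the whole point of the mechanism: the government deliberately under-earns to make backing entrepreneurs more attractive for everyone else.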
So this is really fascinating, right? I look at the Russian government as a case study, where, let me put it this way, there is no such government-driven, large-scale support of entrepreneurship, and probably the same is true in the United States, but the entrepreneurs themselves kind of find a way. So maybe in the form of advice or explanation, how did the Chinese government arrive to be this way, so supportive of entrepreneurship, in this particular way, so forward-thinking at such a large scale? And also, perhaps, how can we copy it in other countries? How can we encourage other governments, for example the United States government, to support infrastructure for autonomous vehicles in that same kind of way, perhaps? Yes. So these techniques are the result of several key things, some of which may be learnable, some of which may be very hard. One is just trial and error, and watching what everyone else is doing. I think it's important to be humble and not feel like you know all the answers. The guiding funds idea came from Singapore, which came from Israel, and China made a few tweaks and scaled it up, because the Chinese cities and government officials kind of compete with each other: they all want to make their city more successful so they can get to the next level in their political career, and it's somewhat competitive. So the central government made it a bit of a competition. Everybody has a budget; they can put it on AI, or they can put it on bio, or they can put it on energy, and then whoever gets the results, the city shines, the people are better off, the mayor gets a promotion. So it's almost like an entrepreneurial environment for local governments, to see who can do a better job. And also many of them tried different experiments. Some have given awards to very smart researchers, just given them money and hoped they'll start companies. Some have given money to academic research labs, maybe government research labs, to see if they can spin off some
companies from the science lab, or something like that. Some have tried to recruit overseas Chinese to come back and start companies, and they've had mixed results. The one that worked the best was the guiding funds, so it's almost like a lean startup idea, where people try different things, and what works sticks, and everybody copies, so now every city has a guiding fund. So that's how that came about. The autonomous vehicle and the massive spending on highways and smart cities, that's a Chinese way: it's about building infrastructure to facilitate. It's a clear division of the government's responsibility from the market. The market should do everything in a private, free way, but there are things the market can't afford to do, like infrastructure, so the government always appropriates large amounts of money for infrastructure building. This happens not only with autonomous vehicles and AI, but it happened with 3G and 4G. You'll find that Chinese wireless reception is better than in the US, because of massive spending that tries to cover the whole country, whereas in the US it may be a little spotty. It's government-driven, because I think they view the coverage of cell access and 3G/4G access to be governmental infrastructure spending, as opposed to capitalistic. Of course, those are state-owned enterprises, also publicly traded, but they also carry a government responsibility to deliver infrastructure to all. So it's a different way of thinking that may be very hard to inject into Western countries, to say starting tomorrow bandwidth infrastructure and highways are going to be governmental spending with some characteristics. What's your sense, and sorry to interrupt, but because it's such a fascinating point: do you think, in the autonomous vehicle space, it's possible to solve the problem of full autonomy without significant investment in infrastructure? Well, that's really hard to speculate. I think it's not a yes/no question, but a how-long-does-it-take question. You
know, 15 years, 30 years, 45 years. Clearly, with infrastructure augmentation, whether it's the road, the city, or whole city planning, building a new city, I'm sure that will accelerate the day of L5. I'm not knowledgeable enough, and it's hard to predict even when we're knowledgeable, because a lot of it is speculative. But in the US, I don't think people would consider building a new city the size of Chicago to make it an AI slash autonomous city. There are smaller ones being built, I'm aware of that. But is infrastructure spend really impossible for the US or Western countries? I don't think so. The US highway system was built, was that during President Eisenhower or Kennedy? It was Eisenhower. Yeah, so maybe historians can study how President Eisenhower got the resources to build this massive infrastructure that surely gave us a tremendous amount of prosperity over the next decade, if not century. If I may comment on that, it takes us to artificial intelligence a little bit, because in order to build infrastructure, it creates a lot of jobs. So I'd actually be interested, you talk in your book about all kinds of jobs that could not be automated. I wonder if building infrastructure is one of the jobs that would not be easily automated. Something you could think about, because I think you mentioned somewhere in the talk, or the book, that there might be, as jobs are being automated, a role for government to create jobs that can't be automated. Yes, I think that's a possibility. Back in the last financial crisis, China put a lot of money in to basically give the economy a boost, and a lot of it went into infrastructure building, and I think that's a legitimate way, at the government level, to deal with the employment issues as well as build out the infrastructure, as long as the infrastructure is truly needed, and as long as there is an employment problem, which now we don't know. So maybe taking a little step back: you've been a leader and a researcher
in AI for several decades, at least 30 years. So how has AI changed, in the West and the East, as you've observed it, as you've been deep in it over the past 30 years? Well, AI began as the pursuit of understanding human intelligence, and the term itself represents that, but it kind of drifted into the one sub-area that worked extremely well, which is machine intelligence, and that's actually more using pattern recognition techniques to basically do incredibly well on a limited domain with a large amount of data, but on relatively simple kinds of planning tasks, and not very creative. So we didn't end up building human intelligence; we built a different machine that was a lot better than us on some problems, but nowhere close to us on other problems. So today, I think a lot of people still misunderstand, when we say artificial intelligence, what various products can do. People still think it's about replicating human intelligence, but the products out there really are closer to having invented the internet, or the spreadsheet, or the database, and getting broader adoption. And speaking further to the fears, near-term fears, that people have about AI: so you're commenting on the sort of general intelligence that people in popular culture, from sci-fi movies, have a sense about with AI, but there are practical fears about AI, the kind of narrow AI that you're talking about, of automating particular kinds of jobs, and you talk about them in the book. So what are the kinds of jobs, in your view, that you see in the next 5-10 years beginning to be automated by AI systems, algorithms? Yes. This is also maybe a little bit counterintuitive, because it's the routine jobs that will be displaced the soonest, and they may not be displaced entirely, maybe 50%, 80% of a job, but when the workload drops by that much, employment will come down. And also, another part of the misunderstanding is most people think of AI replacing routine jobs, and then they think of the assembly line, the workers. Well, that will have some effect, but it's
actually the routine white-collar workers that are easier to replace, because to replace a white-collar worker you just need software; to replace a blue-collar worker you need robotics, mechanical excellence, and the ability to deal with dexterity, and maybe even unknown environments, which is very, very difficult. So if we were to categorize the most endangered white-collar jobs, there would be things like back-office people who copy and paste and deal with simple computer programs and data, and maybe paper and OCR, and they don't make strategic decisions; they basically facilitate the process when the software and the paper don't work together. So you have people dealing with new employee orientation, searching for past lawsuits and financial documents, doing reference checks, basically searching and managing data. That's the most in danger of being lost. In addition to the white-collar repetitive work, a lot of simple interaction work can also be taken care of, such as telesales, telemarketing, customer service, as well as many physical jobs that are in the same location and don't require a high degree of dexterity. So fruit picking, dishwashing, assembly line inspection are jobs in that category. So altogether, back office is a big part, and the blue-collar part may be smaller initially, but over time AI will get better, and when we start to get, over the next 15, 20 years, the ability to actually have the dexterity of doing assembly line work, that's a huge chunk of jobs. And when autonomous vehicles start to work, initially starting with truck drivers but eventually all drivers, that's another huge group of workers. So I see modest numbers in the next five years but increasing rapidly after that. I'm worried about the jobs that are in danger and the gradual loss of jobs. I'm not sure if you're familiar with Andrew Yang. Yes, I am. So there's a candidate for president of the United States, Andrew Yang, whose platform is based in part around job loss due to automation, and also, in addition, the
need, perhaps, of universal basic income to support folks who lose their jobs due to automation, and, in general, to support people under a complex, unstable job market. So what are your thoughts about his concerns, him as a candidate, his ideas in general? I think his thinking is generally in the right direction, but his approach as a presidential candidate may be a little bit ahead of his time. I think the displacements will happen, but will they happen soon enough for people to agree to vote for him? The unemployment numbers are not very high yet, and I think, you know, he and I have the same challenge: if I want to theoretically convince people this is an issue, and he wants to become the president, people have to see it, and how can this be the case when unemployment numbers are low? So that is the challenge. And I do agree with him on the displacement issue. On universal basic income, at a very vanilla level, I don't agree with it, because I think the main issue is retraining. People need to be incented not by just being given a monthly $2,000 check or $1,000 check and doing whatever they want, because they may not know what to retrain for, what type of job to go into. Guidance is needed, and retraining is needed, because historically, in technology revolutions, when routine jobs were displaced, new routine jobs came up, so there was always room for that. But with AI and automation, the whole point is replacing all routine jobs eventually, so there will be fewer and fewer routine jobs, and AI will create jobs, but it won't create routine jobs, because if it creates routine jobs, why wouldn't AI just do them? So therefore the people who are losing jobs are losing routine jobs, and the jobs that are becoming available are non-routine jobs. So the social stipend that needs to be put in place is for the routine workers who lost their jobs to be retrained, maybe in six months, maybe in three years, it takes a while to retrain for a non-routine job, and then
take on a job that will last for that person's lifetime. Now, having said that, if you look deeply into Andrew's document, he does cater for that, so I'm not disagreeing with what he's trying to do, but for simplification sometimes he just says UBI, and simple UBI wouldn't work. And I think you've mentioned elsewhere that, I mean, the goal isn't necessarily to give people enough money to survive, or live, or even to prosper; the point is to give them a job that gives them meaning. That meaning is extremely important, that our employment, at least in the United States and perhaps as it carries across the world, provides something that's, forgive me for saying, greater than money: it provides meaning. So now, what kind of jobs do you think can't be automated? You talk a little bit about creativity and compassion in your book. What aspects do you think are difficult to automate for an AI system? Because an AI system is currently merely optimizing, it's not able to reason, plan, or think creatively or strategically. It's not able to deal with complex problems. It can't come up with a new problem and solve it; a human needs to find the problem and pose it as an optimization problem, then have the AI work at it. So an AI would have a very hard time discovering a new drug, or discovering a new style of painting, or dealing with complex tasks such as managing a company, which isn't just about optimizing the bottom line but also about employee satisfaction, corporate brand, and many, many other things. So that is one category of things, and because these things are challenging, creative, complex, doing them creates a high degree of satisfaction, and therefore appeals to our desire for working, which isn't just to make money and make ends meet, but also that we've accomplished something that others maybe can't do, or can't do as well. Another type of job that is much more numerous would be compassionate jobs, jobs that require compassion, empathy, human touch, human trust. AI can't do that, because AI is cold, calculating, and
even if it can fake that to some extent, it will make errors, and that will make it look very silly. And also, I think even if it did okay, people would want to interact with another person, whether it's for some kind of service, or a teacher, or a doctor, or a concierge, or a masseuse, or a bartender. There are so many jobs where people just don't want to interact with a cold robot or software. I've had an entrepreneur who built an elderly care robot, and they found that the elderly really only used it for customer service, and not to service the product: they click on customer service, and the video of a person comes up, and then the person says, how come my daughter didn't call me, let me show you the grandkids. So people long for that person-to-person interaction. So even if the robots improved, people just don't want them, and those jobs are going to be increasing, because AI will create a lot of value, 16 trillion dollars for the world in the next 11 years, according to PwC, and that will give people money to enjoy services, whether it's eating a gourmet meal, or tourism and traveling, or having concierge services. The services revolving around, you know, every dollar of that 16 trillion will be tremendous; it will create more opportunities to serve the people who did well through AI. But at the same time, the entire society is very much in need of many service-oriented, compassion-oriented jobs. The best example is probably in healthcare services: there's going to be 2 million new jobs, not counting replacement, just brand-new incremental jobs, in the next six years in healthcare services. That includes nurses, orderlies in the hospital, elderly care, and also at-home care, which is particularly lacking, and those jobs are not likely to be filled, so there's likely to be a shortage. And the reason they're not filled is simply because they don't pay very well and the social status of these jobs is not very good. They pay
about half as much as a heavy equipment operator which will be replaced a lot sooner and they pay probably comparably to someone on the assembly line so if we ignore all the other issues and just think about satisfaction from one's job someone repetitively doing the same manual action on an assembly line can't get a lot of job satisfaction but someone taking care of a sick person and getting a hug and thank you from that person and the family I think is quite satisfying so if only we could fix the pay for service jobs there are plenty of jobs that require some training or a lot of training for the people coming off the routine jobs to take we can easily imagine someone who was maybe a cashier at the grocery store as stores become automated learning to become a nurse or an at-home care worker also one more point the blue-collar jobs are going to stay around a bit longer some of them quite a bit longer you know AI cannot be told go clean an arbitrary home that's incredibly hard arguably an L5 level of difficulty right and AI cannot be a good plumber because a plumber is almost like a mini detective that has to figure out where the leak came from yet AI probably can do assembly line work and be an auto mechanic and so on so one has to study which blue-collar jobs are going away and facilitate retraining for the people to go into the ones that won't go away or maybe even will increase I mean it is fascinating that it's easier to build a world champion chess player than it is to build a mediocre plumber yes right very true and that goes counterintuitive to a lot of people's understanding of what artificial intelligence is so it sounds I mean you're painting a pretty optimistic picture about retraining about the number of jobs and actually the meaningful nature of those jobs once we automate repetitive tasks so overall are you optimistic about the future where much of the repetitive tasks are automated and there is a lot of room for humans
for the compassionate for the creative input that only humans can provide I am optimistic if we start to take action if we take no action in the next five years I think it's going to be hard to deal with the devastating losses that will emerge so if we start thinking about retraining maybe with the low-hanging fruits explaining to vocational schools why they should train more plumbers and other mechanics maybe starting with some government subsidy for corporations to have more training positions if we start to explain to people why retraining is important if we start to think about the future of education and how that needs to be tweaked for the era of AI if we start to make incremental progress and a greater number of people understand then there's no reason to think we can't deal with this because this technological revolution is arguably similar to what electricity the industrial revolution and the internet brought about do you think there's a role for policy for governments to step in to help with policy to create a better world absolutely and the governments don't have to believe unemployment will go up and they don't have to believe automation will be this fast to do something revamping vocational schools would be one example another is if there is a big gap in health care service employment and we know that a country's population is aging with more longevity people living older because people over 80 require five times as much care as those under 80 then it is a good time to incentivize training programs for elderly care to find ways to improve the pay maybe one way would be to offer as part of Medicare or the equivalent program for people over 80 to be entitled to a few hours of elderly care at home and then that might be reimbursable and that will stimulate the service industry around the policy do you have concerns about large entities whether it's governments or companies controlling the future of AI development in general so we talked about companies do you have
a better sense that governments can better represent the interests of the people than companies or do you believe companies are better at representing the interests of the people or is there no easy answer I don't think there's an easy answer because it's a double-edged sword companies and governments can provide better services with more access to data and more access to AI but that also leads to greater power which can lead to uncontrollable problems whether it's monopolies or corruption in the government so I think one has to be careful to look at how much data companies and governments have and some kind of checks and balances would be helpful so again I come from Russia there's something called the Cold War so let me ask a difficult question here looking at conflict Steven Pinker has written a great book about how conflict all over the world is decreasing in general but do you have a sense having written the book AI superpowers do you see a major international conflict potentially arising between major nations whatever they are whether it's Russia China European nations United States or others in the next 10 20 50 years around AI around the digital space cyberspace do you worry about that is that something we need to think about and try to alleviate or prevent I believe in greater engagement a lot of the worries about more powerful AI are based on an arms race metaphor and when you extrapolate into military kinds of scenarios AI can automate you know autonomous weapons that need to be controlled somehow and autonomous decision-making can lead to not enough time to fix international crises so I actually believe a Cold War mentality would be very dangerous because should two countries rely on AI to make certain decisions and they don't talk to each other they do their own scenario planning then something could easily go wrong I think engagement interaction some protocols to avoid inadvertent disasters is actually
needed so it's natural for each country to want to be the best whether it's in nuclear technologies or AI or bio but I think it's important to realize if each country has a black box AI and they don't talk to each other that probably presents greater challenges to humanity than if they interacted I think there can still be competition but with some degree of protocol for interaction just like when there was a nuclear competition there were some protocols for deterrence among the US Russia and China and I think that engagement is needed so of course we're still far from AI presenting that kind of danger but what I worry the most about is the level of engagement seems to be coming down the level of distrust seems to be going up especially from the US towards other large countries such as China and of course Russia and Russia yes is there a way to make that better that's beautifully put level of engagement and even just basic trust and communication as opposed to sort of you know making artificial enemies out of particular countries do you have a sense of how we can make it better actionable items that as a society we can take on I'm not an expert at geopolitics but I would say that we look pretty foolish as humankind when we are faced with the opportunity to create sixteen trillion dollars for humanity and yet we're not solving fundamental problems with parts of the world still in poverty and for the first time we have the resources to overcome poverty and hunger we're not using them on that but we're fueling competition among superpowers and that's a very unfortunate thing if we become utopian for a moment imagine a benevolent world government that has this 16 trillion dollars and maybe some AI to figure out how to use it to deal with diseases and problems and hate and things like that the world would be a lot better off so what is wrong with the current world I think people with more skill than I should just think about this and
the geopolitics issue with superpower competition is one side of the issue there's another side which I worry about maybe even more which is as the 16 trillion dollars all gets made by the US and China and a few other developed countries the poorer countries will get nothing because they don't have technology and the wealth disparity and inequality will increase so a poorer country with a large population will not only not benefit from the AI boom or other technology booms but their workers who previously had hoped they could do the China model of outsourced manufacturing or the India model of outsourced processes or call centers all those jobs are going to be gone in 10 or 15 years so the individual citizen may be a net liability I mean financially speaking to a poorer country and not an asset to claw itself out of poverty so in that kind of situation these large countries with not much tech are going to be facing a downward spiral and it's unclear what could be done and then when we look back and say there's 16 trillion dollars being created and it's all being kept by the US China and other developed countries it just doesn't feel right so I hope people who know about geopolitics can find solutions that's beyond my expertise so different countries that we've talked about have different value systems if you look at the United States to an almost extreme degree there is an absolute desire for freedom of speech if you look at the country where I was raised that desire just amongst the people is not as elevated as it is basically at a fundamental level to the essence of what it means to be American right and the same is true with China there are different value systems and there is some censorship of Internet content that China and Russia and many other countries undertake do you see that having effects on innovation and other aspects of the tech and AI development we talked about and
maybe from another angle do you see that changing in different ways over the next 10 years 20 years 50 years as China continues to grow as it does now in its tech innovation there's a common belief that full freedom of speech and expression is correlated with creativity which is correlated with entrepreneurial success I think empirically we have seen that is not true and China has been successful that's not to say the fundamental values are not right or not the best but it's just that that perfect correlation isn't there it's hard to read the tea leaves on an opening up or not in any country and I've not been very good at that in my past predictions but I do believe every country shares some fundamental values a lot of fundamental values for the long term so you know China is drafting its privacy policy for individual citizens and they don't look that different from the American or European ones so people do want to protect their privacy and have the opportunity to express themselves and I think the fundamental values are there the question is in the execution and timing how soon or when will that start to open up so as long as each government knows ultimately people want that kind of protection there should be a plan to move towards that as to when or how I'm not an expert on the point of privacy to me it's really interesting so AI needs data to create a personalized awesome experience yeah I'm just speaking generally in terms of products and then we have currently depending on the age and depending on the demographics of who we're talking about some people are more or less concerned about the amount of data they hand over so in your view how do we get this balance right that we provide an amazing experience to people that use products you look at Facebook you know the more Facebook knows about you it's scary to say but the better experience it could probably create so in your view how do we get that balance right yes I think a
lot of people have a misunderstanding that it's okay and possible to just rip all the data out from a provider and give it back to you so you can deny them access to further data and still enjoy the services we have if we take back all the data all the services will give us nonsense we will no longer be able to use products that function well in terms of you know the right ranking the right products the right user experience so yet I do understand we don't want to permit misuse of the data from a legal policy standpoint I think there can be severe punishment for those who have egregious misuse of the data that's I think a good first step actually China on this aspect has very strong laws about people who sell or give data to other companies and over the past few years since that law came into effect it pretty much eradicated the illegal distribution and sharing of data additionally I think technology is often a very good way to solve technology misuse so can we come up with new technologies that will let us have our cake and eat it too people are looking into homomorphic encryption which is letting you keep the data have it encrypted and train on encrypted data of course we haven't solved that one yet but that kind of direction may be worth pursuing also federated learning which would allow one hospital to train on its hospital's patient data fully because they have a license for that and then hospitals would share their models not their data to create a super AI and that also maybe has some promise so I would want to encourage us to be open-minded and think of this as not just a policy binary yes/no but letting the technologists try to find solutions to let us have our cake and eat it too or have most of our cake and eat most of it too finally I think giving each end user a choice is important and having transparency is important also I think that's universal but the choice you give to the user should not be at
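the federated learning idea described here can be sketched in a few lines of Python everything below the function names the toy one-parameter model and the numbers is purely illustrative not any hospital system's real code the point is only the mechanics each site runs a training step on its own data and only the model weights ever leave the site to be averaged

```python
# Minimal sketch of federated averaging: sites share model weights,
# never raw records. Toy model: predict y = w * x by least squares.
# All names and data here are made up for illustration.

def local_update(weights, local_data, lr=0.1):
    """One gradient-descent step on a single site's own data."""
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in local_data) / len(local_data)
    return [w - lr * grad]

def federated_average(weight_sets):
    """Average the weight vectors contributed by all sites."""
    n = len(weight_sets)
    return [sum(ws[i] for ws in weight_sets) / n
            for i in range(len(weight_sets[0]))]

# Each hospital holds its own (x, y) records; the data never moves.
hospital_a = [(1.0, 2.0), (2.0, 4.0)]
hospital_b = [(1.0, 2.2), (3.0, 5.8)]

global_weights = [0.0]
for _ in range(50):  # communication rounds
    local_a = local_update(global_weights, hospital_a)
    local_b = local_update(global_weights, hospital_b)
    global_weights = federated_average([local_a, local_b])

print(round(global_weights[0], 2))  # prints 1.97, near the shared slope
```

the design choice to average weights rather than pool data is exactly the have-most-of-our-cake compromise being discussed the global model benefits from every site's patients while each site keeps its license-restricted data local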
a granular level that the user cannot understand GDPR today causes all these pop-ups of yes/no will you give this site the right to use this part of your data I don't think any user understands what they're saying yes or no to and I suspect most are just saying yes because they don't understand so while GDPR in its current implementation has lived up to its promise of transparency and user choice it implemented it in such a way that really didn't deliver the spirit of GDPR it fit the letter but not the spirit so again I think we need to think about is there a way to fit the spirit of GDPR by using some kind of technology can we have a slider that's an AI trying to figure out how much you want to slide between perfect protection and security of your personal data versus a high degree of convenience with some risks of not having full privacy each user should have some preference and that gives you the user choice but maybe we should turn the problem on its head and ask can there be an AI algorithm that can customize this because we can understand the slider but we sure cannot understand every pop-up question and I think getting that right requires getting the balance between what we talked about earlier which is heart and soul versus profit driven decisions and strategy I think from my perspective the best way to make a lot of money in the long term is to keep your heart and soul intact I think getting that slider right in the short term may feel like you'll be sacrificing profit but in the long term you are gaining user trust and providing a great experience do you share that kind of view in general yes absolutely I sure would hope there is a way we can do long term projects that really do the right thing I think a lot of people who embrace GDPR have their hearts in the right place I think they just need to figure out how to build a solution I've heard utopians talk about solutions that get me excited but I'm not sure how in the current
funding environment they can get started right people talk about imagine this crowd-sourced data collection that we all trust and then we have these agents we ask that trusted agent on that trusted platform that we all believe is trustworthy to give us all the closed-loop personal suggestions by the new social network new search engine new e-commerce engine that has access to even more of our data not directly but indirectly so I think that general concept of licensing data into some trusted engine and finding a way to trust that engine seems like a great idea but if you think how long it's gonna take to implement and tweak and develop it right as well as to collect all the trust and the data from the people it's beyond the current cycle of venture capital right so how do you do that it's a big question you've recently had a fight with cancer stage four lymphoma and at a sort of deep personal level what did it feel like in the darker moments to face your own mortality well I've been a workaholic my whole life and I've basically worked 996 9:00 a.m. to 9:00 p.m.
six days a week roughly and I didn't really pay a lot of attention to my family friends and people who loved me and my life revolved around optimizing for work while my work was not routine my optimization really made my life basically a very mechanical process but I got a lot of highs out of it because of accomplishments that I thought were really important and dear and the highest priority to me but when I faced mortality and possible death in a matter of months I suddenly realized that this really meant nothing to me that I didn't feel like working for another minute that if I had six months left in my life I would spend it all with my loved ones you know thanking them giving them love back and apologizing to them that I lived my life the wrong way so that moment of reckoning caused me to really rethink why we exist in this world we might be too much shaped by society to think that success and accomplishments are why we live but while that can get you periodic successes and satisfaction it's really in facing death that you see what's truly important to you so as a result of going through the challenges with cancer I've resolved to live a more balanced lifestyle I'm now in remission knock on wood and I'm spending more time with my family my wife travels with me when my kids need me I spend more time with them before I used to prioritize everything around work when I had a little bit of time I would dole it out to my family now when my family really needs something I drop everything at work and go to them and then in the time remaining I allocate to work but one's family is very understanding it's not like they will take 50 hours a week from me so I'm actually able to still work pretty hard maybe ten hours less per week so I realized the most important thing in my life is really love and the people I love and I give that the highest priority it isn't the only thing I do but when
that is needed I put it at the top priority and I feel much better and I feel much more balanced and I think this also gives a hint as to a life of routine work a life of pursuit of numbers while my job was not routine it was a pursuit of numbers pursuit of can I make more money can I fund more great companies can I raise more money can I make sure our VC is ranked higher and higher every year this competitive nature of driving for bigger numbers and better numbers became an endless pursuit that's mechanical and bigger numbers really didn't make me happier and faced with death I realized the bigger numbers really meant nothing and what was important is that people who have given their heart and their love to me deserve for me to do the same there's deep profound truth in that that everyone should hear and internalize and that's really powerful for you to say I have to ask sort of a difficult question here so I've competed in sports my whole life looking historically I'd like to challenge some aspect of that a little bit on the point of hard work it feels that one of the greatest the most beautiful aspects of human nature is the ability to become obsessed to become extremely passionate to the point where yes flaws are revealed and just giving yourself fully to a task that is in another sense you mentioned love being important but this kind of obsession this pure exhibition of passion and hard work is truly what it means to be human what lessons should we take from this because you've accomplished incredible things you say it was chasing numbers but really there's some incredible work there so how do you think about that when you look back at your 20s 30s what would you do differently would you really take back some of that incredible hard work I would but it's in percentages right we're both computer scientists so I think when one balances one's life when one is younger you might
give a smaller percentage to family but you will still give them high priority and when you get older you would give a larger percentage to them and still the high priority and when you're near retirement you give most of it to them and the highest priority so I think the key point is not that we would work 20 hours less for our whole life and just spend it aimlessly with the family but that when the family has a need when your wife is having a baby when your daughter has a birthday or when they're depressed or when they're celebrating something or when they have a get-together or we have family time it's important for us to put down our phone and PC and be a hundred percent with them and that priority on the things that really matter isn't going to be so taxing that it would eliminate or even dramatically reduce our accomplishments it might have some impact but it might also have other impact because if you have a happier family maybe you fight less and if you fight less you don't spend time taking care of all the aftermath of a fight so it's unclear that it would take more time and if it did I'd be willing to take that reduction it's not a dramatic number but it's a number that I think would give me a greater degree of happiness and knowing that I've done the right thing and still have plenty of hours to get the success that I want so given the many successful companies that you've launched and much success throughout your career what advice would you give to young people today or it doesn't have to be young people but people today looking to launch and to create the next one billion-dollar tech startup or even AI-based startup I would suggest that people understand technology waves move quickly what worked two years ago may not work today and that is very much the case in point for AI I think two years ago or maybe three years ago you certainly could say I have a couple of super-smart PhDs and we're not sure what we're gonna do but here's how
we're gonna start and get funding at a very high valuation those days are over because AI is going from rocket science towards mainstream not yet commodity but more mainstream so first the creation of any company to a venture capitalist has to be creation of business value and monetary value and when you have a very scarce commodity VCs may be willing to accept greater uncertainty but now the number of people who have the equivalent of a PhD from three years ago has grown because that can be learned more quickly platforms are emerging the cost to become an AI engineer is much lower and there are many more AI engineers so the market is different so I would suggest someone who wants to build an AI company be thinking about the normal business questions what customer cases are you trying to address what kind of pain are you trying to address how does that translate to value how will you extract value and get paid through what channel and how much business value will get created that today needs to be thought about much earlier up front than it did three years ago the scarcity question of AI talent has changed the number of AI talent has changed so now you need not just AI but also understanding of business customers and the marketplace I also think you should have a more reasonable valuation expectation and growth expectation there's going to be more competition but the good news though is that AI technologies are now more available in open source TensorFlow PyTorch and such tools are much easier to use so you should be able to experiment and get results iteratively faster than before so take more of a business mindset to this think less of this as taking a laboratory into a company because we've gone beyond that stage the only exception is if you truly have a breakthrough in some technology that really no one has then the old way still works but I think that's harder and harder now so I know you believe as many do that we're far from creating an artificial general
intelligence system but say once we do and you get to ask it one question what would that question be what is it that differentiates you and me beautifully put Kai-Fu thank you so much for your time today thank you
Sean Carroll: The Nature of the Universe, Life, and Intelligence | Lex Fridman Podcast #26
the following is a conversation with Sean Carroll he's a theoretical physicist at Caltech specializing in quantum mechanics gravity and cosmology he's the author of several popular books one on the arrow of time called from eternity to here one on the Higgs boson called the particle at the end of the universe and one on science and philosophy called the big picture on the origins of life meaning and the universe itself he has an upcoming book on quantum mechanics that you can pre-order now called something deeply hidden he writes one of my favorite blogs at his website preposterousuniverse.com I recommend clicking on the greatest-hits link that lists accessible interesting posts on the arrow of time dark matter dark energy the Big Bang general relativity string theory quantum mechanics and the big meta questions about the philosophy of science God ethics politics academia and much much more finally and perhaps most famously he's the host of a podcast called mindscape that you should subscribe to and support on patreon along with the Joe Rogan experience Sam Harris's making sense and Dan Carlin's hardcore history sean's mindscape podcast is one of my favorite ways to learn new ideas or explore different perspectives on ideas that I thought I understood it was truly an honor to meet and spend a couple hours with Sean it's a bit heartbreaking to say that for the first time ever the audio recorder for this podcast died in the middle of our conversation there are technical reasons for this having to do with phantom power that I now understand and will avoid it took me one hour to notice and fix the problem so much like the universe is 60 percent dark energy roughly the same amount of this conversation was lost except in the memories of the two people involved and in my notes I'm sure we'll talk again and continue this conversation on this podcast or on Sean's and of course I look forward to it this is the artificial intelligence podcast if you enjoy it subscribe on YouTube
iTunes support it on patreon or simply connect with me on Twitter at Lex Friedman and now here's my conversation with Sean Carroll what do you think is more interesting and impactful understanding how the universe works at a fundamental level or understanding how the human mind works you know of course this is a crazy meaningless unanswerable question in some sense because they're both very interesting and there's no absolute scale of interestingness that we can rate them on there's the glib answer that says the human brain is part of the universe right and therefore understanding the universe is more fundamental than understanding the human brain but do you really believe that once we understand the fundamental way the universe works at the particle level the forces we would be able to understand how the mind works no certainly not we cannot understand how ice cream works just from understanding how particles work right so you're a big believer in emergence I'm a big believer that there are different ways of talking about the world beyond just the most fundamental microscopic one you know when we talk about tables and chairs and planets and people we're not talking the language of particle physics and cosmology so but understanding the universe you didn't say just at the most fundamental level right so understanding the universe at all levels is part of that I do think you have to be a little bit more fair to the question there probably are general principles of complexity biology information processing memory knowledge creativity that go beyond just the human brain right and maybe one could count understanding those as part of understanding the universe the human brain as far as we know is the most complex thing in the universe so it's certainly absurd to think that by understanding the fundamental laws of particle physics you get any direct insight on how the brain works but then there's this step from the fundamentals of particle physics to information
processing that a lot of physicists and philosophers maybe a little bit carelessly take when they talk about artificial intelligence do you think of the universe as a kind of computational device no to be like the honest answer there is no there's a sense in which the universe processes information clearly there's a sense in which the universe is like a computer clearly but in some sense I think I tried to say this once on my blog and no one agreed with me but the universe is more like a computation than a computer because the universe happens once a computer is a general-purpose machine right that you can ask different questions even a pocket calculator right and it's set up to answer certain kinds of questions the universe isn't that so information processing happens in the universe but it's not what the universe is because I know your MIT colleague Seth Lloyd feels very differently about this well you're thinking of the universe as a closed system I am so what makes a computer more like a PC like a computing machine is that there's a human that comes up to it and moves the mouse around yeah gives it input gives it input and that's why you're saying it's just a computation a deterministic thing that's just unrolling but the immense complexity of it is nevertheless like processing there's a state and then it changes with rules and there's a sense for a lot of people that if the human brain operates within that world then it's simply just a small subset of that and so there's no reason we can't build arbitrarily great intelligences yeah do you think of intelligence in this way intelligence is tricky I don't have a definition of it offhand so I remember this panel discussion that I saw on YouTube I wasn't there but Seth Lloyd was on the panel and so was Martin Rees the famous astrophysicist and Seth gave his shtick for why the universe is a computer and explained this and Martin Rees
said so what is not a computer that's a good question I'm not sure because if you have a sufficiently broad definition of what a computer is then everything is right and the simile or the analogy gains force when it excludes some things you know is the moon going around the earth performing a computation I can come up with definitions in which the answer is yes but it's not a very useful computation I think that it's absolutely helpful to think about the universe in certain situations certain contexts as an information processing device I am even guilty of writing a paper called quantum circuit cosmology where we modeled the whole universe as a quantum circuit as a circuit yeah with qubits kind of with qubits basically right yeah so qubits becoming more and more entangled so do you want to digress a little bit this is kind of fun so here's a mystery about the universe that is so deep and profound that nobody talks about it space expands right and we talk about in a certain region of space a certain number of degrees of freedom a certain number of ways that the quantum fields and the particles in that region can arrange themselves that number of degrees of freedom in a region of space is arguably finite we actually don't know how many there are but there's a very good argument that says it's a finite number so as the universe expands and space gets bigger are there more degrees of freedom if it's an infinite number it doesn't really matter infinity times two is still infinity but if it's a finite number then there's more space so there's more degrees of freedom so where did they come from that would mean the universe is not a closed system there's more degrees of freedom popping into existence so what we suggested was that there are more degrees of freedom and it's not that they're not there to start but they're not entangled to start so the universe that you and I know of the three dimensions around us that we
see we said those are the entangled degrees of freedom making up space-time and as the universe expands there's a whole bunch of qubits in their zero state that become entangled with the rest of space-time through the action of these quantum circuits so what does it mean that there's now more degrees of freedom as they become more entangled yeah so as the universe expands that's right so there's more and more degrees of freedom that are entangled that are playing the role of part of the entangled space-time structure so the basic the underlying philosophy is that space-time itself arises from the entanglement of some fundamental quantum degrees of freedom Wow okay so at which point is most of the entanglement happening are we talking about close to the Big Bang are we talking about throughout time right yeah so the idea is that at the Big Bang almost all the degrees of freedom that the universe could have were there but they were unentangled with anything else and that's a reflection of the fact that the Big Bang had a low entropy it was very simple very small place and as space expands more and more degrees of freedom become entangled with the rest of the world well I have to ask Sean Carroll what do you think of the thought experiment from Nick Bostrom that we're living in a simulation so let me contextualize that a little bit more I think people don't actually take this thought experiment seriously I think it's quite interesting it's not very useful but it's quite interesting from the perspective of AI a lot of the learning that can be done usually happens in simulation from artificial examples and so it's a constructive question to ask how difficult is our real world to simulate right which is kind of the dual question of living in a simulation where somebody built that simulation now if you were to try to do it yourself how hard would it be so obviously we could be living in a simulation if you just want the physical
possibility then I completely agree that it's physically possible I don't think that we actually are so take this one piece of data into consideration you know we live in a big universe okay there's two trillion galaxies in our observable universe with 200 billion stars in each galaxy etc it would seem to be a waste of resources to have a universe that big going on just to do a simulation so in other words I want to be a good Bayesian I want to ask under this hypothesis what do I expect to see so the first thing I would say is I wouldn't expect to see a universe that was that big okay the second thing is I wouldn't expect the resolution of the universe to be as good as it is so it's always possible that if the superhuman simulators only have finite resources that they don't render the entire universe right that the part that is out there the two trillion galaxies isn't actually being simulated fully okay but then the obvious extrapolation of that is that only I am being simulated fully like the rest of you are just non-player characters right I'm the only thing that is real the rest of you are just chatbots beyond this wall I see the wall but there is literally nothing on the other side of the wall that is sort of the Bayesian prediction that's what it would be like to do an efficient simulation of me so like none of that seems quite realistic I don't see I hear the argument that it's just possible and easy to simulate lots of things I don't see any evidence from what we know about our universe that we look like a simulated universe now maybe you could say well we don't know what it would look like but that's just abandoning your Bayesian responsibilities like your job is to say under this theory here's what you would expect to see yes certainly if you think about the simulation as a thing that's like a video game where only a small subset is being rendered but say the entirety of the laws of physics the entire closed system
of the quote-unquote universe mm-hmm it had a creator yeah it's always possible all right so that's not useful to think about when you're thinking about physics the way Nick Bostrom phrases it is if it's possible to simulate a universe eventually we'll do it right you could use that by the way for a lot of things well yeah but I guess the question is how hard is it to create a universe I wrote a little blog post about this and maybe I'm missing something but there's an argument that says not only that it might be possible to simulate a universe but probably if you imagine that you actually attribute consciousness and agency to the little things that we're simulating to our little artificial beings there's probably a lot more of them than there are ordinary organic beings in the universe or there will be in the future right so there's an argument that not only is being in a simulation possible it's probable because in the space of all living consciousnesses most of them are being simulated right most of them are not at the top level I think that argument must be wrong because it follows from that argument that you know if we're simulated we can also simulate other things well but if we can simulate other things they can simulate other things right if we give them enough power and resolution and ultimately we'll reach a bottom because the laws of physics in our universe have a bottom we're made of atoms and so forth so there will be the cheapest possible simulations and if you believe the original argument you should conclude that we should be in the cheapest possible simulation because that's where most people are but we don't look like that it doesn't look at all like we're at the edge of resolution you know with 16-bit you know things it seems much easier to make much lower level things than we are so and also I question the whole approach to the anthropic principle that says we are typical observers in the universe I think that that's
not actually right I think that there's a lot of selection going on we're typical in things we already know but not typical within all the universe so do you think there's intelligent life however you would like to define intelligent life out there in the universe my guess is that there is not intelligent life in the observable universe other than us simply on the basis of the fact that the likely number of other intelligent species in the observable universe there's two likely numbers zero or billions and if there had been billions we would have noticed already for there to be literally like a small number like you know Star Trek there's you know a dozen intelligent civilizations in our galaxy but not a billion that's weird that's sort of bizarre to me it's easy for me to imagine that there are zero others because there's just a big bottleneck to making multicellular life or technological life or whatever it's very hard for me to imagine that there's a whole bunch out there that have somehow remained hidden from us the question I'd like to ask is what would intelligent life look like what I mean by that question where that's going is what if intelligent life is fundamentally in some very big ways different than the one that has evolved here on earth that there's all kinds of intelligent life that operates at different scales of both size and time right that's a great possibility because I think we should be humble about what intelligence is what life is we don't even agree on what life is much less what intelligent life is right so that's an argument for humility saying there could be intelligent life of a very different character right like you could imagine that dolphins are intelligent but never invent space travel because they live in the ocean and they don't have thumbs right so they never invent technology they never invent smelting maybe the universe is full of intelligent species that just don't make technology right that
that's compatible with the data I think and I think maybe what you're pointing at is even more out there versions of intelligence you know intelligence in molecular clouds or on the surface of a neutron star or in between the galaxies in giant things where the equivalent of a heartbeat is 100 million years on the one hand yes we should be very open-minded about those things on the other hand all of us share the same laws of physics there might be something about the laws of physics even though we don't currently know exactly what that thing would be that makes meters and years the right length and time scales for intelligent life maybe not but you know we're made of atoms atoms have a certain size we orbit stars stars have a certain lifetime it's not impossible to me that there's a sweet spot for intelligent life that we find ourselves in so I'm open-minded either way I won't mind either being humble and there's all sorts of different kinds of life or no there's a reason we just don't know it yet why life like ours is the kind of life that's out there yeah I'm of two minds too but I often wonder if our brains are just designed quite obviously to operate and see the world in these timescales and we're almost blind and the tools we've created for detecting things are blind to the kind of observation needed to see intelligent life at other scales well I'm totally open to that but so here's another argument I would make you know we have looked for intelligent life but we've looked for it in the dumbest way we can right by turning radio telescopes to the sky and why in the world would a super advanced civilization randomly beam out radio signals wastefully in all directions into the universe that just doesn't make any sense especially because in order to think that you would actually contact another civilization you would have to do it forever you have to keep doing it for millions of years that sounds like a waste of
resources if you thought that there were other solar systems with planets around them where maybe intelligent life didn't yet exist but might someday you wouldn't try to talk to it with radio waves you would send a spacecraft out there and you would park it around there and it would be like from our point of view be like 2001 you know a monolith right so there could be an artifact in fact the other way works also right there could be artifacts in our solar system that have been put there by other technologically advanced civilizations and that's how we will eventually contact them and we just haven't explored the solar system well enough yet to find them the reason why we don't think about that is because we're young and impatient right like it would take more than my lifetime to actually send something to another star system and wait for it and then come back so but if we start thinking on hundreds of thousands of years or million year timescales that's clearly the right thing to do are you excited by the things that Elon Musk is doing with SpaceX and in general space exploration even though our species is young and impatient yeah no I do think that space travel is crucially important long term even to other star systems and I think that many people overestimate the difficulty because they say look if you travel 1% the speed of light to another star system we'll be dead before we get there right and I think that it's much easier and therefore when they write their science fiction stories they imagine we go faster than the speed of light because otherwise they're too impatient right we're not gonna go faster than the speed of light but we could easily imagine that the human lifespan gets extended to thousands of years and once you do that then the stars are much closer effectively right and what's a hundred year trip right so I think that that's gonna be the future the far future not my lifetime once again
but baby steps and unless your lifetime gets extended well it's in a race against time right a friend of mine who actually thinks about these things said you know you and I are gonna die but I don't know about our grandchildren that's right I don't know predicting the future is hard but that's at least a plausible scenario and so yeah no I think that as we discussed earlier there are threats to the earth known and unknown right having spread humanity and biology elsewhere is a really important long-term goal what kind of questions can science not currently answer but might soon when you think about the problems and the mysteries before us that may be within reach of science I think an obvious one is the origin of life we don't know how that happened there's a difficulty in knowing how it happened historically actually you know literally on earth but starting life from non-life is something I kind of think we're close to right you really think so how difficult is it to start life well I've talked to people including on the podcast about this you know life requires three things life as we know it so there's a difference between life which who knows what it is and life as we know it which we can talk about with some intelligence so life as we know it requires compartmentalization you need like a little membrane around your cell metabolism you take in food and eat it and let that make you do things and then replication okay so you need to have some information about who you are that you pass down to future generations in the lab compartmentalization seems pretty easy it's not hard to make lipid bilayers that form into little cellular walls pretty easily metabolism and replication are hard but replication we're close to people have made RNA-like molecules in the lab and I think the state of the art is they're not able to make one molecule that reproduces itself but they're able to make two molecules that reproduce each other yeah so that's
okay that's pretty close metabolism is hard believe it or not even though it's sort of the most obvious thing but you want some sort of controlled metabolism and the actual cellular machinery in our bodies is quite complicated it's hard to see it just popping into existence all by itself it probably took a while but we're making progress and in fact I don't think we're spending nearly enough money on it if I were the NSF I would flood this area with money because it would change our view of the world if we could actually make life in the lab and understand how it was made originally here on earth and I'm sure it would have some ripple effects that help cure disease and so on and so synthetic biology is a wonderful big frontier where we're making cells right now the best way to do that is to borrow heavily from existing biology right Craig Venter several years ago created an artificial cell but all he did was not all he did it was a tremendous accomplishment but all he did was take out the DNA from a cell and put in an entirely new DNA and let it boot up and go what about the leap to creating intelligent life on earth however again we define intelligence of course but let's just even say Homo sapiens the modern intelligence in our human brain do you have a sense of what's involved in that leap and how big of a leap that is so would AI count in this or do you really want life you want a real organism in some sense AI would count okay yeah of course AI would count but well let's say artificial consciousness right so I do not think we are on the threshold of creating artificial consciousness I think it's possible I'm not again very educated about how close we are but my impression is not that we're really close because we understand how little we understand of consciousness and what it is so if we don't have any idea what it is it's hard to imagine we're on the threshold of making it ourselves but
it's doable it's possible I don't see any obstacles in principle so yeah I would hold out some interest in that happening eventually I think in general I think we'll be just surprised how easy consciousness is once we create intelligence you know I think consciousness is the thing that's just something we all fake well good no actually I like this idea that in fact consciousness is way less mysterious than we think yeah because we're all at every moment less conscious than we think we are right we can fool ourselves and I think that plus the idea that you not only have artificial intelligent systems but you put them in a body right give them a robot body that will help the faking a lot yeah I think creating consciousness artificial consciousness is as simple as asking a Roomba to say I'm conscious and refusing to be talked out of it could be it could be and I mean I'm almost being silly but that's what we do yeah that's what we do with each other this is the kind of thing where consciousness is also a social construct and a lot of our ideas of intelligence are a social construct and so reaching that bar involves something that doesn't necessarily involve the fundamental understanding of how you go from electrons to neurons to cognition no actually I think that is a really good point and in fact what it suggests is you know so yeah you referred to Kate Darling who I had on the podcast and who does these experiments with very simple robots but they look like animals and they can look like they're experiencing pain and we human beings react very negatively to these little robots looking like they're experiencing pain and what you want to say is yeah but they're just robots it's not really pain it's just some electrons going around but then you realize you know you and I are just electrons going around and that's what pain is also and so what I would have an easy time imagining is that there is a spectrum
between these simple robots that Kate works with and a human being where there are things that sort of by some strict definition Turing test level thing are not conscious but nevertheless walk and talk like they're conscious and it could be that the future I mean Siri is close right and so it might be that the future has a lot more agents like that and in fact rather than some day going aha we have consciousness we'll just creep up on it with more and more accurate reflections of what we expect and in the future maybe the present for example we haven't met before and you're basically assuming that I'm human it's a high probability at this time yeah but in the future there might be question marks on that right yeah no absolutely certainly videos are almost to the point where you shouldn't trust them already photos you can't trust right videos are easier to trust but we're getting there yeah we're getting better at faking them yeah right getting better yeah so physical embodied people what's so hard about faking that so this conversation is very depressing right to me it's exciting it's exciting to you but it's a sobering thought we're very bad right at imagining what the next 50 years are gonna be like when we're in the middle of a phase transition as we are right now yeah and in general I'm not blind to all the threats I am excited by the power of technology to solve problems to protect us against the threats as they evolve I'm not as optimistic as Steven Pinker about the world but in everything I've seen all the brilliant people in the world that I've met are good people so the army of the good in terms of the development of technology is large okay you're way more optimistic than I am I think that goodness and badness are equally distributed among intelligent and unintelligent people I don't see much of a correlation there interesting neither of us have proof yeah exactly yeah again opinions are
free right and our definitions of good and evil we come without data just opinions so what kind of questions can science not currently answer and may never be able to answer in your view well the obvious one is what is good and bad you know what is right and wrong I think that there are questions that you know science tells us what happens what the world is and what it does it doesn't say what the world should do or what we should do because we're part of the world but we are part of the world and we have the ability to feel like something's right something's wrong and to make a very long story very short I think that the idea of moral philosophy is systematizing our intuitions of what is right what is wrong and science might be able to predict ahead of time what we will do but it won't ever be able to judge whether we should have done it or not so you know you're kind of unique in terms of scientists it doesn't just have to do with the podcast but even just reaching out I think you referred to it as sort of doing interdisciplinary science so you reach out and talk to people that are outside of your discipline which I always hoped that's what science was for in fact I was a little disillusioned when I realized that academia is very siloed yeah and so the question is at your own level how do you prepare for these conversations how do you think about these conversations how do you open your mind enough to have these conversations and maybe a little bit broader how can you advise other scientists to have these kinds of conversations not at the podcast so the fact that you're doing a podcast is awesome other people get to hear them yeah but it's also good to have them without mics right in general it's a good question but a tough one to answer you know a guy I know is a personal trainer and he was asked on a podcast how do we psych ourselves up to do a workout how do we get the discipline to go and work out he's like why
you asking me like I can't stop working out like I don't need to psych myself up so and likewise you know I get asked like how do you get to have interdisciplinary conversations and all sorts of different things with all sorts of different people and like that's what makes me go right like I couldn't stop doing that I did that long before any of them were recorded in fact a lot of the motivation for starting recording it was making sure I would read all these books that I had purchased right like all these books I wanted to read not enough time to read them and now I have motivation because I'm gonna you know interview Pat Churchland I'm gonna finally read her book you know and it's absolutely true that academia is extraordinarily siloed right we don't talk to people we rarely do and in fact when we do it's punished you know like the people who do it successfully generally first became very successful within their little siloed discipline and only then did they start expanding out if you're a young person you know I have graduate students I try to be very candid with them about this that you know most graduate students are not going to become faculty members right it's a tough road and so live the life you want to live but do it with your eyes open about what it does to your job chances and the more broad you are and the less time you spend hyper specializing in your field the lower your job chances are that's just an academic reality it's terrible I don't like it yeah but it's a reality and for some people that's fine like there's plenty of people who are wonderful scientists who have zero interest in branching out and talking to anyone outside their field but it is disillusioning to me some of the you know romantic notion I had of the intellectual academic life is belied by the reality of it the idea that we should reach out beyond our discipline and that is a positive good is just so rare in universities that it
may as well not exist at all but that said even though you're saying you're doing it like the personal trainer because you just can't help it you're also an inspiration to others like I can speak for myself you know I also have a career I'm thinking about right now and without your podcast I may not have been doing this at all right so it makes me realize that these kinds of conversations are kind of what science is about in many ways the reason we write papers is this exchange of ideas it's much harder to do interdisciplinary papers I would say yeah that's right and conversations are easier so conversations are a beginning and in the field of AI especially it's obvious that we should think outside of pure computer vision competitions and particular data sets we should think about the broader impact of how this can be you know reaching out to physics to psychology to neuroscience and having these conversations so you're an inspiration and so thank you well that's very sweet but you never know how the world changes I mean the fact that this stuff is out there and I have a huge number of people come up to me grad students really loving the podcast inspired by it and there'll probably be ripple effects when they become faculty and so on so we can end on a balance between pessimism and optimism and Sean thank you so much for talking it was awesome no Lex thank you very much for this conversation it was great
Jeff Hawkins: Thousand Brains Theory of Intelligence | Lex Fridman Podcast #25
the following is a conversation with Jeff Hawkins he's the founder of the Redwood Center for Theoretical Neuroscience in 2002 and Numenta in 2005 in his 2004 book titled On Intelligence and in the research before and after he and his team have worked to reverse-engineer the neocortex and proposed artificial intelligence architectures approaches and ideas that are inspired by the human brain these ideas include hierarchical temporal memory HTM from 2004 and new work the thousand brains theory of intelligence from 2017 18 and 19 Jeff's ideas have been an inspiration to many who have looked for progress beyond the current machine learning approaches but they have also received criticism for lacking a body of empirical evidence supporting the models this is always a challenge when seeking more than small incremental steps forward in AI Jeff has a brilliant mind and many of the ideas he has developed and aggregated from neuroscience are worth understanding and thinking about there are limits to deep learning as it is currently defined forward progress in AI is shrouded in mystery my hope is that conversations like this can help provide an inspiring spark for new ideas this is the artificial intelligence podcast if you enjoy it subscribe on youtube itunes or simply connect with me on twitter at lex fridman spelled f-r-i-d and now here's my conversation with Jeff Hawkins are you more interested in understanding the human brain or in creating artificial systems that have many of the same qualities but don't necessarily require that you actually understand the underpinning workings of our mind so there's a clear answer to that question my primary interest is understanding the human brain no question about it but I also firmly believe that we will not be able to create fully intelligent machines until we understand how the human brain works so I don't see those as separate problems I think there's limits to what can be done with machine intelligence if you don't
understand the principles by which the brain works and so I actually believe that studying the brain is actually the fastest way to get to machine intelligence and within that let me ask the impossible question how do you not define but at least think about what it means to be intelligent so I didn't try to answer that question first we said let's just talk about how the brain works let's figure out how certain parts of the brain mostly the neocortex but some other parts too the parts most associated with intelligence and let's discover the principles about how they work because intelligence isn't just like some mechanism and it's not just some capabilities it's like okay we don't even know where to begin on this stuff and so now that we've made a lot of progress on how the neocortex works and we can talk about that I now have a very good idea what's going to be required to make intelligent machines I can tell you today you know some of the things that are gonna be necessary I believe to create intelligent machines well so we'll get there we'll get to the neocortex and some of the theories of how the whole thing works and you're saying as we understand more and more about the neocortex about our own human mind we'll be able to start to more specifically define what it means to be intelligent it's not useful to really talk about that until then I don't know if it's not useful look there's a long history of AI as you know right and there's been different approaches taken to it and who knows maybe they're all useful right so you know the good old fashioned AI the expert systems current convolutional neural networks they all have their utility they all have a value in the world but I would think almost everyone agrees that none of them are really intelligent in a sort of deep way that humans are and so the question is how do you get from where those systems are today to where a lot of
people think we're going to go and there's a big gap there a huge gap and I think the quickest way of bridging that gap is to figure out how the brain does that and then we can sit back and look and say oh which of these principles that the brain works on are necessary and which ones are not clearly we don't have to build this in intelligent machines aren't going to be built out of you know organic living cells but there's a lot of stuff that goes on in the brain that's going to be necessary so let me ask maybe before we get into the fun details let me ask maybe a depressing or a difficult question do you think it's possible that we will never be able to understand how our brain works that maybe there's aspects of the human mind that we ourselves cannot introspectively get to the core of that there's a wall you eventually hit yeah I don't believe that's the case I have never believed that's the case there has not been a single thing humans have ever put their minds to where we've said oh we reached the wall we can't go any further people just keep saying that people used to believe that about life you know élan vital right it's like what's the difference between living matter and nonliving matter something special you'll never understand we no longer think that so there's no historical evidence to suggest this is the case and I just never even considered that's a possibility I would also say today we understand so much about the neocortex we've made tremendous progress in the last few years that I no longer think of it as an open question the answers are very clear to me and the pieces we know we don't know are clear to me but the framework is all there and it's like oh okay we're gonna be able to do this this is not a problem anymore it just takes time and effort but there's no big mystery anymore so then let's get into it for people like myself who are not very well versed in the human brain except my own can you describe to me at the highest level
what are the different parts of the human brain and then zooming in on the neocortex the parts of the neocortex and so on a quick overview yeah sure the human brain we can divide roughly into two parts there's the old parts lots of pieces and then there's a new part the new part is the neocortex it's new because it didn't exist before mammals only mammals have a neocortex and in humans and in primates it's very large in the human brain the neocortex occupies about seventy to seventy-five percent of the volume of the brain it's huge and in the old parts of the brain there's lots of pieces there's the spinal cord and there's the brain stem and the cerebellum and the different parts of the basal ganglia and so on in the old parts of the brain you have the autonomic regulation like breathing and heart rate you have basic behaviors so like walking and running are controlled by the old parts of the brain all the emotional centers of the brain are in the old part of the brain when you feel anger or hunger or lust things like that those are all in the old parts of the brain and we associate with the neocortex all the things we think about as sort of high-level perception and cognitive functions anything from seeing and hearing and touching things to language to mathematics and engineering and science and so on those are all associated with the neocortex and they're certainly correlated our abilities in those regards are correlated with the relative size of our neocortex compared to other mammals so that's like the rough division and you obviously can't understand the neocortex completely in isolation but you can understand a lot of it with just a few interfaces to the old parts of the brain and so it gives you a system to study the other remarkable thing about the neocortex compared to the old parts of the brain is that the neocortex is extremely uniform visually and anatomically it's very uniform I always like to say it's like the size of a dinner
napkin, about two and a half millimeters thick, and it looks remarkably the same everywhere you look. within those two and a half millimeters is this detailed architecture, and it looks remarkably the same everywhere, and that's across species: a mouse versus a cat and a dog and a human. if you look at the old parts of the brain, there's lots of little pieces that do specific things. so it's like the old parts of the brain evolved like, this is the part that controls heart rate, and this is the part that controls this, and this is this kind of thing and that's that kind of thing. they evolved over eons, a long, long time, and they have their specific functions. and all of a sudden mammals come along and they get this thing called the neocortex, and it got large by just replicating the same thing over and over and over again. it's like, wow, this is incredible. so all the evidence we have, and this is an idea that was first articulated in a very cogent and beautiful argument by a guy named Vernon Mountcastle in 1978, is that the neocortex all works on the same principle. so language, hearing, touch, vision, engineering, all these things are basically built on the same computational substrate; they're really all the same problem. so the building blocks all look similar? yeah, and they're not even that low-level. we're not talking about neurons; we're talking about this very complex circuit that exists throughout the neocortex and is remarkably similar. it is. yes, you see variations of it here and there, more of this cell type, less of that cell type, and so on. but what Mountcastle argued was, he says, you know, if you take a section of neocortex, why is one a visual area and one an auditory area? and his answer was, it's because one is connected to eyes and one is connected to ears. literally, you mean it's just closest, in terms of number of connections, to the sensor? literally. if you took the optic nerve and attached it to a different part of the
neocortex that part would become a visual region this actually this experiment was actually done by Mercosur oh boy and uh in in developing I think it was lemurs I can't remember there was some animal and and there's a lot of evidence to this you know if you take a blind person the person is born blind at Birth they they're born with a visual neocortex it doesn't may not get any input from the eyes because of some congenital defect or something and that region become does something else it picks up another task so and it's it's so it's just it's this very complex thing it's not like oh they're all built on neurons no they're all built in this very complex circuit and and somehow that circuit underlies everything and so this is the it it's called the common cortical algorithm if you will some scientists just find it hard to believe and they decide can't really that's true but the evidence is overwhelming in this case and so a large part of what it means to figure out how the brain creates intelligence and what is intelligence in the brain is to understand what that circuit does if you can figure out what that circuit does as amazing as it is then you can then you then you understand what all these other cognitive functions are so a few words to sort of put neural cortex outside of your book on intelligence you look if you wrote a giant tome a textbook on the neocortex and you look maybe a couple centuries from now how much of what we know now would still be two centuries from now so how close are we in terms of understand I have to speak from my own particular experience here so I run a small research lab here it's like yeah it's like I need other research lab I'm the sort of the principal investigator there was actually two of us and there's a bunch of other people and this is what we do we started the neocortex and we published our results and so on so about three years ago we had a real breakthrough in this in this film just tremendous spectrum we started we've 
now published, I think, three papers on it, and so I have a pretty good understanding of all the pieces and what we're missing. I would say that almost all the empirical data we've collected about the brain, which is enormous (if you don't know the neuroscience literature, it's just incredibly big), is for the most part correct. it's facts and experimental results and measurements and all kinds of stuff, but none of it has been assimilated into a theoretical framework. it's data without a framework. in the language of Thomas Kuhn, the historian of science, it would be a sort of pre-paradigm science: lots of data but no way to fit it together. I think almost all of that's correct; there are going to be some mistakes in there. but for the most part there aren't really good cogent theories about how to put it together. it's not like we have two or three competing good theories and we're asking which ones are right and which ones are wrong; people are just scratching their heads. some people have given up on trying to figure out what the whole thing does. in fact there are very, very few labs that do what we do, which is focus really on theory and all this unassimilated data and try to explain it. so it's not that we've got it wrong, it's just that we haven't got it at all. so it's really, I would say, pretty early days in terms of understanding the fundamental theories of the way our mind works? I don't think so. I would have said that's true five years ago, but we've had some really big breakthroughs on this recently and we've started publishing papers on it. you know, I'm an optimist, and from where I sit today, most people would disagree with this, but from where I sit, from what I know, it's not super early days anymore. the way these things go, it's not a linear path, right? you don't just start accumulating and get better and better and better. no, all the stuff you've
collected, none of it makes sense, all these different things are just floating around, and then you get to some breaking point where all of a sudden, oh my god, now we've got it. that's how it goes in science, and I feel like we passed that point a couple of years ago. time will tell if I'm right, but I feel very confident about it. that's my moment to say it on tape like this. at least you're very optimistic. so, before those few years ago, let's take a step back to HTM, the hierarchical temporal memory theory, which you first proposed in On Intelligence and which went through a few different generations. can you describe what it is and how it evolved through the three generations since you first put it on paper? yeah. so one of the things that neuroscientists just sort of missed for many, many years, especially people who were thinking about theory, was the nature of time in the brain. brains process information through time. the information coming into the brain is constantly changing. the patterns from my speech right now, if you're listening at normal speed, would be changing in your ears about every 10 milliseconds or so; you have this constant flow. when you look at the world your eyes are moving constantly, three to five times a second, and the input is changing completely. if I were to touch something, like a coffee cup, as I move my fingers the input changes. so this idea that the brain works on time-changing patterns was almost completely missing from a lot of the basic theories, like theories of vision. it's like, oh no, we're going to put this image in front of you and flash it and say, what is it? convolutional neural networks work that way today, right? classify this picture. but that's not what vision is like; vision is this sort of crazy time-based pattern that's going all over the place, and so is touch, and so is hearing. so the first part of hierarchical temporal memory was
the temporal part: you won't understand the brain, or really understand intelligent machines, unless you're dealing with time-based patterns. the second thing was the memory component. it was to say that we aren't just processing input, we learn a model of the world; that's what the memory stands for, that model. the point of the brain, the point of the neocortex, is that it learns a model of the world. we have to store things, our experiences, in a form that leads to a model of the world, so we can move around the world, pick things up and do things, navigate, know what's going on. so that's what the memory referred to. many people were just thinking about certain processes without memory at all, just processing things. and finally, the hierarchical component was a reflection of the fact that the neocortex, although it's a uniform sheet of cells, has different parts that project to other parts, which project to other parts, and there is a sort of rough hierarchy in terms of that. so hierarchical temporal memory was just saying, look, we should be thinking about the brain as time-based, memory-based, hierarchical processing, and that was a placeholder for a bunch of components that we would then plug into it. we still believe all those things I just said, but we now know so much more that I've stopped using the term hierarchical temporal memory, because it's insufficient to capture the stuff we know. so again, it's not incorrect, but I now know more and I would rather describe it more accurately. yeah, so basically we can think of HTM as emphasizing that there are three aspects of intelligence that are important to think about, whatever the eventual theory converges to. so in terms of time, how do you think of the nature of time across different time scales? you mentioned sensory inputs changing every 10 milliseconds or so. what about every few minutes, every few
months? yeah. well, if you think about a neuroscience problem, the brain problem: neurons themselves can stay active for certain periods of time, and parts of the brain can stay active for minutes. so you could hold a certain perception or activity for a certain period of time, but most of them don't last that long. and so if you think about your thoughts as the activity of neurons, if you want to recall something that happened a long time ago, or even just this morning, for example, the neurons haven't been active throughout that time, so you have to store that. if I asked you what you had for breakfast today, that is memory. that is something you've built into your model of the world now; you remember that, and that memory is in the synapses, it's basically in the formation of synapses. and so you're alluding to two different time scales. there's the time scale at which we're understanding my language and moving about and seeing things rapidly; that's the time scale of the activity of neurons. but if you want to get to longer time scales, then it's more about memory, and we have to invoke those memories to say, oh yes, now I can remember what I had for breakfast, because I stored that someplace. I may forget it tomorrow, but I have it stored for now. so the hierarchical aspect of reality is not just about concepts, it's also about time. do you think of it that way? yeah, time is infused in everything. you really can't separate it out. if I ask, how is the brain learning a model of this coffee cup here (I have a coffee cup, and I'm looking at the coffee cup), well, time is not an inherent property of the model I have of this cup, whether it's a visual model or a tactile model. I can sense it through time, but the model itself doesn't really have much time in it. but if I asked you, what is the model of my cell phone? my brain has learned a model
of the cell phone. so if you have a smartphone like this, it has time aspects to it. I have expectations: when I turn it on, what's going to happen, how long it's going to take to do certain things, what sequences occur if I bring up an app. it's like melodies in the world, you know; melody has a sense of time. so many things in the world move and act, and there's a sense of time related to them. some don't, but most things do. so it's sort of infused throughout the models of the world. you build a model of the world, you're learning the structure of the objects in the world, and you're also learning how those things change through time. so it really is just a fourth dimension that's infused deeply, and you have to make sure your models of intelligence incorporate it. so, like you mentioned, the state of neuroscience is deeply empirical, a lot of data collection; that's where it is, you were mentioning Thomas Kuhn, right? and here you're proposing a theory of intelligence, which is really the next step, the really important step to take. but why is HTM, or what we'll talk about soon, the right theory? is it backed by intuition, is it backed by evidence, is it backed by a mixture of both? is it kind of closer to string theory in physics, where the mathematical components show that it seems to fit together too well not to be true, which is where string theory is? is that where you're at? it's a mixture of all those things, although it's definitely much more on the empirical side than, let's say, string theory. the way this goes: we're theorists, right? so we look at all this data and we're trying to come up with some sort of model that explains it, basically. and unlike string theory, there are vastly greater amounts of empirical data here than, I think, most physicists deal with. and
so our challenge is to sort through that and figure out what kinds of constructs would explain it. and when we have an idea, when we come up with a theory of some sort, we have lots of ways of testing it. first of all, there are a hundred years of unassimilated empirical data from neuroscience, so we go back and read papers. we say, oh, did someone find this already? we can predict x, y, and z, and maybe no one's even talked about it since 1972 or something, but we go back and find it, and either it can support the theory or it can invalidate it, and we say, okay, we have to start over again, or, no, it supports it, let's keep going with that one. so the way I view it, when we do our work, we look at all this empirical data, and it's what I call a set of constraints. we're not interested in something that's merely biologically inspired; we're trying to figure out how the actual brain works. so every piece of empirical data is a constraint on a theory. if you have the correct theory, it needs to explain every piece, right? so we have this huge number of constraints on the problem, which initially makes it very, very difficult. if you don't have any constraints, you can make up stuff all day: here's an answer, you can do this, you can do that. but if you consider all of biology as a set of constraints, all of neuroscience as a set of constraints, even if you're working on one little part of the neocortex, there are hundreds and hundreds of constraints. these are empirical constraints, so it's very, very difficult initially to come up with a theoretical framework for them. but when you do, and it solves all those constraints at once, you have high confidence that you've got something close to correct. it's just mathematically almost impossible not to be. so that's the curse and the advantage of what we have. the curse is we have to meet all these constraints, which is really hard, but
when you do meet them, then you have great confidence that you've discovered something. in addition, we work with scientific labs. so we'll say, oh, there's something we can't find; we can predict something, but we can't find it anywhere in the literature. the people we collaborate with will sometimes say, you know, I have some collected data which I didn't publish, but we can go back and look at it and see if we can find that, which is much easier than designing a new experiment. new neuroscience experiments take a long time, years, although some people are doing that now too. so between all of these things, I think it's actually a very, very good approach. we are blessed with the fact that we can test our theories out the yin-yang here, because there's so much unassimilated data, and we can also falsify our theories very easily, which we do often. it's kind of reminiscent of, whenever that was, Copernicus: you know, when you figure out that the sun is at the center of the solar system, as opposed to the earth, the pieces just fall into place. yeah, I think that's the general nature of aha moments in history: Copernicus, and you could say the same thing about Darwin, and about the double helix. people had been working on a problem for so long, and they have all this data, and they can't make sense of it. but when the answer comes to you, and everything falls into place, it's like, oh my gosh, that's it, that's got to be right. I asked both Jim Watson and Francis Crick about this. I asked them, when you were working on trying to discover the structure of the double helix, and you came up with the structure that ended up being correct, but it was sort of a guess, you know, it wasn't really verified yet, did you know that it was right? and they both said, we absolutely knew it was right. and it doesn't
matter if other people didn't believe it or not; we knew it was right, and they'd come around and agree with it eventually anyway. and that's the kind of thing you hear a lot from scientists who really are studying a difficult problem, and I feel that way too about our work. did you talk to Crick or Watson about the problem you're trying to solve, of finding the DNA of the brain? yeah. in fact, Francis Crick was very interested in this in the latter part of his life. in fact, I got interested in brains by reading an essay he wrote in 1979 called Thinking About the Brain, and that is when I decided I was going to leave my profession of computers and engineering and become a neuroscientist, just from reading that one essay by Francis Crick. I got to meet him later in life. I spoke at the Salk Institute and he was in the audience, and then I had tea with him afterwards. he was interested in a different problem, though; he was focused on consciousness. the hard problem, right. well, I think it's a red herring, and so we weren't really overlapping a lot there. Jim Watson, who's still alive, is also interested in this problem. when he was director of Cold Spring Harbor Laboratory, he was really sort of behind moving it in the direction of neuroscience, so he had a personal interest in this field, and I have met with him numerous times. in fact, the last time was a little over a year ago. I gave a talk at Cold Spring Harbor labs about the progress we were making in our work, and it was a lot of fun, because he said, well, you wouldn't be coming here unless you had something important to say, so I'm going to attend your talk. so he sat in the very front row, and next to him was the director of the lab, Bruce Stillman. so these guys are in the front row of this auditorium, and nobody else in the auditorium wants to sit in the front row, because Jim Watson is sitting there. and I gave the talk, and I had dinner with Jim afterwards.
there's a great picture my colleague Subutai Ahmad took where I'm up there sort of screaming the basics of this new framework we have, and Jim Watson is on the edge of his chair, literally on the edge of his chair, intently staring up at the screen. and when he discovered the structure of DNA, the first public talk he gave was at Cold Spring Harbor labs, and there's a famous picture of Jim Watson standing at the board with a pointer, pointing at a model of the double helix, and it actually looks a lot like the picture of me. so it was funny: there I am talking about the brain, and there's Jim Watson staring intently, and of course, you know, sixty years earlier he was standing there pointing at the double helix, one of the great discoveries in all of science. yeah, so it's funny that there are echoes of that in your presentation. do you think, in terms of the evolutionary timeline in history, the development of the neocortex was a big leap, or was it just a small step? like, if we ran the whole thing over again from the birth of life on earth, how likely is it that we'd develop this mechanism? okay, well, those are two separate questions: one, was it a big leap, and two, how likely is it? they're not necessarily related, maybe correlated. we don't really have enough data to make a judgment about that. I would say it definitely was a big leap, and I can tell you why; I don't think it was just another incremental step. I don't really have any idea how likely it is. if we look at evolution, we have one data point, which is earth. life formed on earth billions of years ago, whether it was introduced here or created here, we don't really know, but it was here early. it took a long, long time to get to multicellular life, and then from multicellular life it took a
long, long time to get to the neocortex, and we humans have only had our neocortex for a few hundred thousand years, so that's like nothing. okay, so is it likely? well, it certainly isn't something that happened right away on earth, and there were multiple steps to get there, so I would say it's probably not something that would happen instantaneously on other planets that might have life; it might take several billion years on average. is it likely? I don't know, but you'd have to survive for several billion years to find out, probably. is it a big leap? yeah, I think it is a qualitative difference from all other evolutionary steps. I can try to describe that if you'd like. sure, in which way? yeah, I can tell you. I'll start with a little preface: many of the things that humans are able to do do not have obvious survival advantages at present. you know, we create music; is there really a survival advantage to that? maybe, maybe not. what about mathematics? is there a real survival advantage to mathematics? it's a stretch; you can try to figure these things out. but through most of evolutionary history, everything had immediate survival advantages. so I'll tell you a story, which I like but which may not be true. the story goes as follows: organisms have been evolving since the beginning of life here on earth, adding this sort of complexity onto that sort of complexity, and the brain itself evolved this way. in fact there are old parts, and older parts, and really old parts of the brain; it kind of just keeps adding new things, and we keep adding capabilities. by the time we got to the neocortex, initially it had a very clear survival advantage, in that it produced better vision and better hearing and better touch and so on. but what I think happened is that evolution took a mechanism (this is in our recent theories) that evolved a long time ago for navigating in the world, for knowing where you
are. these are the so-called grid cells and place cells of an old part of the brain. it took that mechanism for building maps of the world, for knowing where you are in those maps and how to navigate those maps, and turned it into a sort of slimmed-down, idealized version of it, and that idealized version could now apply to building maps of other things: maps of coffee cups, maps of phones, maps of these concepts. yes, and not just... almost exactly. and so it just started replicating this stuff, making more and more copies of it. so we went from sort of dedicated-purpose neural hardware, which solved certain problems that were important to survival, to a general-purpose neural hardware that could be applied to all problems, and now it's out of the orbit of survival. we are now able to apply it to things in which we find enjoyment, you know, but which aren't really clearly survival characteristics, and that seems to have happened to a large extent only in humans. so that's what's going on: we've sort of escaped the gravity of evolutionary pressure, in some sense, in the neocortex, and it now does things which are really interesting, like discovering models of the universe which may not really help us survive. how does it help us survive to know the age of the universe, or how various stellar things occur? it doesn't really help us survive at all, but we enjoy it, and that's what happened. or at least not in the obvious way. perhaps it is required, if you look at the entire universe in an evolutionary way: it's required for us to do interplanetary travel and therefore survive past our own sun. but, you know, evolution works at one time frame: survival, if you think of survival of the phenotype, survival of the individual. what you're talking about there spans well beyond that, so there's nothing genetic; I'm not
transferring any genetic traits to my children that are going to help them survive better on Mars. right, it's a totally different mechanism. so let's get into the new idea, as you've mentioned; I don't know if you have a nice name. a thousand... we call it the thousand brains theory of intelligence. I like it. so can you talk about this idea of the spatial view of concepts and so on? yeah. so can I just describe the underlying core discovery, which everything else comes from? it's very simple. this is really what happened: we were deep into problems about understanding how we build models of stuff in the world and how we make predictions about things, and I was holding a coffee cup just like this in my hand. my index finger was touching the side, and I moved it to the top, and I was going to feel the rim at the top of the cup, and I asked myself a very simple question. first of all, I have to say: I know that my brain predicts what it's going to feel before it touches it. you can just think about it and imagine it, and so we know that the brain is making predictions all the time. so the question is, what does it take to predict that? and there's a very interesting answer. first of all, it says the brain has to know it's touching a coffee cup, it has to have a model of the coffee cup, and it needs to know where the finger currently is on the cup, relative to the cup, because when I make a movement it needs to know where the finger is going to be on the cup after the movement is completed, relative to the cup, and then it can make a prediction about what it's going to sense. so this told me that the neocortex, which is making this prediction, needs to know that it's touching a cup, and it needs to know the location of my finger relative to that cup, in a reference frame of the cup. it doesn't matter where the cup is relative to my body, it doesn't matter its orientation; none of that matters. it's where my finger is relative to the cup,
which tells me that the neocortex has a reference frame that's anchored to the cup, because otherwise I wouldn't be able to say the location, and I wouldn't be able to predict my new location. and then very quickly, instantly, you can say, well, every part of my skin could touch this cup, and therefore every part of my skin is making predictions, and every part of my skin must have a reference frame that it's using to make predictions. so the big idea is that throughout the neocortex, everything is being stored and referenced in reference frames. you can think of them like xyz reference frames, but they're not quite like that; we know a lot about the neural mechanisms for this. the brain thinks in reference frames, and if you're an engineer, this is not surprising. you'd say, if I wanted to build a CAD model of the coffee cup, I would bring it up in some CAD software, and I would assign some reference frame and say this feature is at this location, and so on. but the idea that this is occurring throughout the neocortex, everywhere, was a novel idea, and then a zillion things fell into place after that. so now we think about the neocortex as processing information quite differently than we used to. we used to think about the neocortex as processing sensory data and extracting features from that sensory data, and then extracting features from the features, very much like a deep learning network does today. but that's not how the brain works at all. the brain works by assigning everything, every input, to reference frames, and there are hundreds of thousands of them active at once in your neocortex. it's a surprising thing to think about, but once you internalize this, you understand that it explains almost all the mysteries we've had about this structure. so one of the consequences is that every small part of the
neocortex (say a square millimeter; there are a hundred and fifty thousand of those, so it's about 150,000 square millimeters) has some input coming into it, and it's going to have reference frames that it assigns that input to, and each square millimeter can learn complete models of objects. so what do I mean by that? if I'm touching the coffee cup in just one place, I can't learn what this coffee cup is, because I'm only feeling one part. but if I move my finger around the cup and touch it in different areas, I can build up a complete model of the cup, because I'm now filling in that three-dimensional map which is the coffee cup: I can say, oh, what am I feeling at all these different locations? that's the basic idea; it's more complicated than that, but through time (and we talked about time earlier), even a single column, which is only looking at a small part of the world, can build up a complete model of an object. and so if you think about the part of the brain which is getting input from all my fingers (they're spread across the somatosensory cortex, and there are columns associated with all the different areas of my skin), what we believe is happening is that all of them are building models of this cup, every one of them. well, not all of them; not every column, not every part of the cortex builds models of everything, but they're all building models of something. and so when I touch this cup with my hand, there are multiple models of the cup being invoked. if I look at it with my eyes, there again many models of the cup are being invoked, because each part of the visual system... the brain doesn't process an image.
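the idea of a column storing features at locations in an object-anchored reference frame, and predicting the next sensation before a movement, can be sketched in a toy way. this is purely illustrative: the class, names, and string "locations" below are my invention for clarity, not Numenta's actual model, which uses neural representations rather than dictionaries.

```python
# Toy sketch: a single "column" stores an object as features at
# object-relative locations, then predicts what it will sense at the
# next location before the movement happens. Illustrative only.

class Column:
    def __init__(self):
        self.models = {}  # object name -> {location: feature}

    def learn(self, obj, location, feature):
        # store what was sensed at this location on the object
        self.models.setdefault(obj, {})[location] = feature

    def predict(self, obj, location):
        # predict the sensation at a new location, before moving there
        return self.models.get(obj, {}).get(location)

col = Column()
# move a finger over the cup, filling in the object-centric map
col.learn("coffee cup", "side", "smooth cylinder")
col.learn("coffee cup", "rim", "rounded edge")
col.learn("coffee cup", "handle", "curved loop")

# before the finger reaches the rim, the column can predict the feeling
print(col.predict("coffee cup", "rim"))  # -> rounded edge
```

note that the prediction needs only the object identity and the object-relative location, not where the cup is relative to the body, which mirrors the point about reference frames above.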
that's a misleading idea. it's just like your fingers touching the cup: different parts of my retina are looking at different parts of the cup, and thousands and thousands of models of the cup are being invoked at once, and they're all voting with each other, trying to figure out what's going on. so that's why we call it the thousand brains theory of intelligence: there isn't one model of a cup, there are thousands of models of this cup. there are thousands of models of your cell phone, and of cameras and microphones and so on. it's a distributed modeling system, which is very different from how people have thought about it. so this is a really compelling and interesting idea, and I have two first questions. one, on the ensemble part of everything coming together: you have these thousand brains; how do you know which one has done the best job of forming the model? great question, let me explain. there's a problem known in neuroscience called the sensor fusion problem. the idea is something like: the image comes from the eye, there's a picture on the retina, and it gets projected to the neocortex, and by now it's all spread out all over the place, and it's kind of squirrelly and distorted, and pieces are all over; it doesn't look like a picture anymore. when does it all come back together again? or you might say, well, yes, but I also have sounds or touches associated with the cup, so I'm seeing the cup and touching the cup; how do they get combined together? so it's called the sensor fusion problem, as if all these disparate parts have to be brought together into one model someplace. that's the wrong idea. the right idea is that you get all these guys voting. there are auditory models of the cup, there are visual models of the cup, there are tactile models of the cup. within the visual system there might be ones that are more focused on black and white and ones focused on color; it doesn't really matter. there are just thousands and thousands of
models of this cup, and they vote. They don't actually come together in one spot. Literally, think of it this way: imagine you have these columns, like little pieces of spaghetti — about two and a half millimeters tall and about a millimeter wide. They're not physically like that, but you can think of them that way. And each one is trying to guess what this thing is that it's touching. Now, they can do a pretty good job if they're allowed to move over time. So I could reach my hand into a black box and move my finger around an object, and if I touch enough spaces — oh, okay, now I know what it is. But often we don't do that; often I can just reach and grab something with my hand all at once and I get it. Or, if I had to look at the world through a straw — that's like invoking just one little column — I can only see part of something, and I have to move the straw around. But if I open my eyes, I see the whole thing at once. So what we think is going on is that all these little pieces of spaghetti, all these little columns in the cortex, are each trying to guess what it is they're sensing. They'll make a better guess if they have time and can move — if I move my eyes or move my fingers. But if they can't, they have a poor guess; it's a probabilistic guess of what they might be touching. Now imagine they can each post their guess at the top of their little piece of spaghetti. Each one of them says, I think — and it's not really a probability distribution, it's more like a set of possibilities; in the brain it doesn't work as a probability distribution, it works more like what we call a union. So one column says, I think it could be a coffee cup or a can or a water bottle, and another column says, I think it could be a coffee cup or a telephone or a camera, or whatever. And all these guys are saying what they think it might be. And there are these long-range connections in certain layers in the cortex — certain layers, certain cell types in each column send their projections across the brain, and that's where the voting occurs. And so there's a simple associative memory mechanism — we've described this in a recent paper, and we've modeled it — that says they can all quickly settle on the one best answer for all of them, if there is a single best answer. They all vote and say, yeah, it's got to be the coffee cup. And at that point they all know it's the coffee cup, and at that point everyone acts as if it's the coffee cup. Yeah, we know it's a coffee cup, even though I've only seen one little piece of this world — I know it's a coffee cup I'm touching, or seeing, or whatever. And so you can think of all these columns as looking at different parts, in different places, with different sensory input, at different locations — they're all different. But this layer that's doing the voting solidifies; it just crystallizes and says, oh, we all know what we're sensing. And so you don't bring these models together in one model; you just vote, and there's a crystallization of the vote. Great, that's at least a compelling way to think about how you form a model of the world. Now, you talk about a coffee cup — do you see, as far as I understand you're proposing this as well, that this extends to much more than coffee cups? It does. Or at least the physical world — it extends to the world of concepts? Yeah, it does. And the primary evidence for that is that the regions of the neocortex that are associated with language, or high-level thought, or mathematics, or things like that, look like the regions of the neocortex that process vision, hearing, and touch. They don't look any different, or they look only marginally different. And so one would say, well, if Vernon Mountcastle — who proposed that all the parts of the neocortex are doing the same thing — if he's right, then the parts that are doing language or mathematics or physics are working on
the same principle. They must be working on the principle of reference frames. So that's a bold hypothesis. But of course, we had no prior idea how these things happened, so let's go with that. And in our recent paper we talked a little bit about that. I've been working on it more since; I have better ideas about it now. I'm sitting here very confident that that's what's happening, and I can give you some examples to help you think about it. It's not that we understand it completely, but I understand it better than I've described it in any paper so far. But we did put that idea out there: okay, this is a good place to start, the evidence would suggest this is how it's happening, and then we can start tackling the problem one piece at a time. What does it mean to do high-level thought? What does it mean to do language? How would that fit into a reference frame framework? Yes — and you can tell me if there's a connection — there's an app called Anki that helps you remember different concepts, and they talk about a memory palace, which helps you remember completely random concepts by trying to put them in a physical space in your mind and putting them next to each other. The method of loci. Okay, yeah. For some reason that seems to work really well. Now, that's a very narrow kind of application, just remembering some facts, but it's a very telling one. Yes, exactly. So it seems like you're describing a mechanism for why this works. Yeah, so basically what we think is going on is that all the things you know — all concepts, all ideas, words, everything you know — are stored in reference frames. And so if you want to remember something, you have to basically navigate through a reference frame, the same way a rat navigates through a maze, the same way my finger navigates to this coffee cup. You are moving through some space. And so if you have a random list of things you were asked to remember, by assigning them to a reference frame you already know very well — say, your house — the idea of the method of loci is you can say, okay, in my lobby I'm going to put this thing, and in the bedroom I put this one, and I go down the hall and I put this thing there. And then when you want to recall those facts, you just mentally walk through your house — you're mentally moving through a reference frame that you already had. And that tells us two things that are really important: it tells us the brain prefers to store things in reference frames, and that the method of recalling things — of thinking, if you will — is to move mentally through those reference frames. You could move physically through some reference frames, like I could physically move through the reference frame of this coffee cup; I can also mentally move through the reference frame of the coffee cup, imagining me touching it. But I can also mentally move through my house. And so now we can ask ourselves, are all concepts stored this way? There's some recent research using human subjects in fMRI — and I'm going to apologize for not knowing the names of the scientists who did this — but what they did is they put humans in an fMRI machine, one of these imaging machines, and gave them tasks to think about birds. So they had different types of birds — big ones and small ones, long necks and long legs, things like that. And what they could tell from the fMRI — it was a very clever experiment — is that when humans were thinking about the birds, the knowledge of birds was arranged in a reference frame similar to the ones that are used when you navigate in a room. These are called grid cells, and there were grid-cell-like patterns of activity in the neocortex when they did this. So it's a very clever experiment, and what it basically says is that even when you're thinking about something abstract, and you're not really thinking about it as a
reference frame, it tells us the brain is actually using a reference frame, and it's using the same neural mechanisms — these grid cells. That's the same basic neural mechanism that we propose: grid cells exist in an old part of the brain, the entorhinal cortex, and we propose that a similar mechanism is used throughout the neocortex. It's as if nature preserved this interesting way of creating reference frames. And so now there's empirical evidence that when you think about concepts like birds, you're using reference frames that are built on grid cells. So that's similar to the method of loci, but in this case the birds are related, so they create their own reference frame, one that's consistent with bird space, and when you think about something, you move through that space. You can make the same argument for mathematics. Let's say you want to prove a conjecture. What is a conjecture? A conjecture is a statement you believe to be true but haven't proven. And so it might be an equation: I want to show that this is equal to that. And you have some places you start from: well, I know this is true, and I know this is true, and I think maybe to get to the final proof I need to go through some intermediate results. What I believe is happening is that literally these equations, these points, are assigned to a reference frame — a mathematical reference frame. And when you do mathematical operations — a simple one might be multiply or divide, but it might be a Laplace transform or something else — that is like a movement in the reference frame of the math. So you're literally trying to discover a path from one location to another location in a space of mathematics. And if you can get to these intermediate results, then you know your map is pretty good and you know you're using the right operations. Much of what we think of as solving hard problems is designing the correct reference frame for that problem — figuring out how to organize the information, and what behaviors I want to use in that space to get me there. Yeah. So to dig into this idea of the reference frame — whether it's the math, where you start with a set of axioms to try to prove the conjecture — can you describe, maybe taking a step back, how you think of the reference frame in that context? Is it the reference frame that the axioms are mapped into? Is it a reference frame that might contain everything? Is it a changing thing? There are many reference frames. In fact, the way the theory — the thousand brains theory of intelligence — works is that every single thing in the world has its own reference frame. Every word has its own reference frame, and we can talk about how the mathematics works out; it's no problem for neurons to do this. But how many reference frames does the coffee cup have? Well, it's on a table — let's say you asked, how many reference frames could the column in my finger that's touching the coffee cup have? Because there are many, many models of the coffee cup. There is no one model of the coffee cup; there are many models of the coffee cup. And you could say, well, how many different things can my finger learn? That's the question you want to ask. Imagine — I'd say every concept, every idea, everything you've ever known that you can say "I know that thing" about has a reference frame associated with it. And what we do when we build composite objects is we assign reference frames to points in another reference frame. So my coffee cup has multiple components to it: it's got a lip, it's got a cylinder, it's got a handle. Those things have their own reference frames, and they're assigned to a master reference frame, which is this cup. And now I have this Numenta logo on it. Well, that's something that exists elsewhere in the world; it's its own thing, so it has its own reference frame. And we now have to say, how can I assign the Numenta logo reference frame onto the cylinder, or
onto the coffee cup? We talked about this in the paper that came out in December of last year — the idea of how you can assign reference frames to reference frames, and how neurons could do this. So my question is — even though you mention reference frames a lot, I feel it would be really useful to dig into how you think of what a reference frame is. It was already helpful for me to understand that you think of reference frames as something there can be a lot of. Sure. Okay, so let's just say that we're going to have some neurons in the brain — not many, actually, ten thousand, twenty thousand — that are going to create a whole bunch of reference frames. What does that mean? What is a reference frame in this case? First of all, these reference frames are different from the ones you might be used to. You know lots of reference frames. For example, we know Cartesian coordinates — x, y, z — that's a type of reference frame. We know longitude and latitude — that's a different type of reference frame. If I look at a printed map, it might have columns A through M and rows 1 through 20 — that's a different type of reference frame; it's kind of a Cartesian coordinate frame. The interesting thing about the reference frames in the brain — and we know this because they've been established through neuroscience, studying the entorhinal cortex, so I'm not speculating here, this is known neuroscience about an old part of the brain — is that the way these cells create reference frames, they have no origin. It's more like: you have a point in some space, you give it a particular movement, and you can then tell what the next point should be, and what the next point after that would be, and so on. You can use this to calculate how to get from one point to another — how do I get from one place in my house to another, how do I get my finger from the side of the cup to the top of the cup, how do I get from the axioms to the conjecture? So it's a different type of reference frame, and if you want, I can describe it in more detail — I can paint a picture of how you might want to think about it. That would be really helpful. So it's something you can move through. Yeah. But is it helpful to think of it as spatial in some sense, or is it something more abstract? It's definitely spatial — spatial in the mathematical sense. How many dimensions? Can it be a crazy number of dimensions? Well, that's an interesting question. In the old part of the brain, the entorhinal cortex, they studied rats, and initially it looked like, oh, this is just two-dimensional — the rat is in some box or maze or whatever, and they know where the rat is using these two-dimensional reference frames; they know where it is in the maze. Okay. But what about bats? That's a mammal, and they fly in three-dimensional space. How do they do that? They seem to know where they are, right? So this is a current area of active research, and it seems like somehow the neurons in the entorhinal cortex can learn three-dimensional space. Two members of our team, along with Ila Fiete at MIT, just released a paper — literally last week; it's on bioRxiv — where they show how these things work. I won't get into the details unless you want me to, but grid cells can represent any n-dimensional space — it's not inherently limited. You can think of it this way: the way these things work is you have a whole bunch of two-dimensional slices, a whole bunch of two-dimensional models, and you can slice up any n-dimensional space with two-dimensional projections. And you could also have one-dimensional models. So there's nothing inherent about the mathematics, about the way the neurons do this, which constrains the dimensionality of the space — which I think is important. So obviously I have a three-dimensional map of this cup
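The flavor of reference frame described here — no origin, just a point plus a movement that tells you the next point — can be sketched with a toy path-integration example. This is purely my own illustration, not Numenta's model or the code from the paper mentioned; the module periods and the function name are invented for the example. Each "grid module" tracks 2D position only as a phase modulo its own spatial period, and the combination of phases across modules pins down where you are:

```python
# Toy sketch of grid-cell-style path integration (illustrative only, not
# Numenta's model): each "grid module" tracks 2D position as a phase modulo
# its own spatial period. No module has an origin or a global coordinate,
# yet the combined phases across modules identify the location.

PERIODS = [3.0, 4.0, 5.0]  # spatial periods of three modules (arbitrary units)

def integrate_path(movements):
    """Accumulate (dx, dy) movements into per-module phases, wrapping each."""
    phases = [(0.0, 0.0) for _ in PERIODS]
    for dx, dy in movements:
        phases = [((x + dx) % p, (y + dy) % p)
                  for (x, y), p in zip(phases, PERIODS)]
    return phases

# Walk a closed loop: out and back. Every module's phase returns to where it
# started, i.e. the system "knows" it is back at its starting point.
loop = [(1.0, 2.0), (6.0, -1.0), (-7.0, -1.0)]
print(integrate_path(loop))  # [(0.0, 0.0), (0.0, 0.0), (0.0, 0.0)]
```

Because each module wraps at a different period, the combination of phases disambiguates many more locations than any single module could, and nothing in the arithmetic limits the movement vectors to two dimensions — consistent with the point above that the mechanism doesn't inherently constrain dimensionality.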
maybe it's even more than that, I don't know, but it's clearly a three-dimensional map of the cup — I don't just have a projection of the cup. But when I think about birds, or when I think about mathematics, perhaps it's more than three dimensions, who knows. So in terms of each individual column building up more and more information over time — do you think that mechanism is well understood in your mind? You've proposed a lot of architectures there. Is that a key piece, or is the big piece the thousand brains theory of intelligence, the ensemble of it all? Well, I think they're both big. I mean, clearly, as a theorist, the concept that's most exciting is the high-level concept: this is a totally new way of thinking about how the neocortex works. So that is appealing; it has all these ramifications, and with that as a framework for how the brain works, you can make all kinds of predictions and solve all kinds of problems. Now we're trying to work through many of those details. How do the neurons actually do this? Well, it turns out, if you think about grid cells and place cells in the old parts of the brain, there's a lot known about them, but there are still some mysteries — there's a lot of debate about exactly how they work, in the details. And we have that same level of detail, that same level of concern, here. What we spend most of our time doing is trying to make a very good list of the things we don't understand yet. That's the key part: what are the constraints? It's not like, oh, this thing seems to work, we're done. No, it's like, okay, it kind of works, but these are the other things we know it has to do, and it's not doing all of those yet. I would say we're well on the way, but we're not done yet. There's a lot of trickiness to this system. The basic principles about how different layers in the neocortex are doing much of this, we understand, but there are some fundamental parts that we don't understand. So what
would you say is one of the harder open problems — one of the ones that have been bothering you, keeping you up at night the most? Oh, well, right now this is a detailed thing that wouldn't apply to most people. Okay. Yeah, please. We've talked about it as if, to predict what you're going to sense on this coffee cup, I need to know where my finger is going to be on the coffee cup. That is true, but it's insufficient. Think about when my finger touches the edge of the coffee cup: my finger can touch it at different orientations — I can rotate my finger around here — and that doesn't change things; I can still make that prediction, somehow. So it's not just the location; there's an orientation component to this as well. This is known in the old parts of the brain too: there are things called head direction cells, which represent which way the rat is facing. It's the same basic idea. For my finger — if my finger were a rat, so to speak — in three dimensions, I have a three-dimensional orientation and a three-dimensional location. If I were a rat, I would have a two-dimensional location and a one-dimensional orientation, like which way it's facing. So how the two components work together — how it is that I combine the orientation of my sensor with the location — is a tricky problem, and I think I've made progress on it. That's super interesting, but super specific. Yeah, it's really good. There's a more general version of that: do you think context matters — the fact that we are in a building in North America, that we live in a day and age where we have mugs? I mean, there's all this extra information that you bring to the table about everything else in the room that's outside of just the coffee cup. How does it get incorporated, do you think? Yeah, and that is another really interesting question. I'm going to throw that under the rubric, or the name, of attentional problems. First of all, we have these models — I have
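The nesting described a bit further on — room contains table, table holds cup, cup carries logo, logo contains a letter — can be sketched as objects placed inside other objects' reference frames, with attention as a walk down the tree that keeps the path back up. This is a toy illustration of the idea only; the class and function names are my own invention, not anything from Numenta's code:

```python
# Toy sketch of composite objects: each object contains child objects placed
# in its own reference frame. "Attending" drills down into a child while the
# stack of enclosing objects (the context) is retained, so popping back up
# is always possible. (Illustrative only; names are invented.)

class Obj:
    def __init__(self, name, children=None):
        self.name = name
        self.children = {c.name: c for c in (children or [])}

letter_e = Obj("letter E")
logo = Obj("logo", [letter_e])
cup = Obj("coffee cup", [logo])
table = Obj("table", [cup])
room = Obj("conference room", [table])

def attend(root, path):
    """Drill down through nested objects, returning the full context stack."""
    stack, node = [root], root
    for name in path:
        node = node.children[name]
        stack.append(node)
    return [n.name for n in stack]  # context from outermost to innermost

print(attend(room, ["table", "coffee cup", "logo", "letter E"]))
# ['conference room', 'table', 'coffee cup', 'logo', 'letter E']
```

The point of the sketch is that attending to the letter E never discards the enclosing cup, table, and room — the context is the whole stack, not just the leaf.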
and then you can attend to subcomponents of subcomponents — you can move up and down, and we do that all the time. Now that I'm aware of it, I'm very conscious of it, but most people don't even think about this. You just walk into a room; you don't say, oh, I looked at the chair, and I looked at the board, and I looked at that word on the board, and I looked over here — what's going on, right? So what percent of your day are you deeply aware of this, and what part can you actually relax and just be Jeff? Me personally? Like, my personal day? Yeah. Unfortunately, I'm afflicted with too much of the former. [Laughter] Fortunately, or unfortunately? Is it useful? Totally useful. I think about this stuff almost all the time. And one of my primary ways of thinking is when I'm asleep at night — I always wake up in the middle of the night, and then I stay awake for at least an hour with my eyes shut, in sort of a half-sleep state, thinking about these things. I come up with answers to problems very often in that half-sleeping state. I think about it on my bike rides, I think about it on walks — I'm just constantly thinking about this. I almost have to schedule time to not think about this stuff, because it's mentally taxing. When you think about this stuff, are you thinking introspectively — like, almost stepping outside yourself and trying to figure out what your mind is doing? I do that all the time, but that's not all I do. I'm constantly observing myself. So as soon as I started thinking about grid cells, for example, and getting into that, I started saying, oh, grid cells give you a sense of place in the world — that's how you know where you are — and essentially we always have a sense of where we are, unless we're lost. And so at night, when I got up to go to the bathroom, I would start trying to navigate completely with my eyes closed the whole time. I would test my sense of grid cells: I would walk, you know, five feet and say, okay, I think I'm here — am I really? What's my error? And then I would continue and estimate my error again, and see how the errors accumulate. So even something as simple as getting up in the middle of the night to go to the bathroom, I'm testing these theories out. It's kind of fun — the coffee cup is an example of that too. So I find that these everyday introspections are actually quite helpful. It doesn't mean you can ignore the science — I mean, I spend hours every day reading ridiculously complex papers. That's not nearly as much fun, but you have to build up those constraints and the knowledge about the field: who's doing what, and what exactly they think is going on. And then you can sit back and say, okay, let's try to piece this all together. The people in this group here, they know I do this all the time. I come in with these introspective ideas and say, well, have you ever thought about this? Now watch — let's all do this together. And it's helpful, as long as that's not all you do. If all you did was that, then you'd just be making stuff up, right? But if you're constraining it by the reality of the neuroscience, then it's really helpful. So let's talk a little bit about deep learning and the successes in the applied space of neural networks — the idea of training a model on data with these simple computational units, artificial neurons, with backpropagation, as a statistical way of generalizing from the training set onto data that's similar to that training set. Where do you think are the limitations of those approaches, and what do you think are their strengths, relative to your major efforts of constructing a theory of human intelligence? Well, I'm not an expert in this field; I'm somewhat knowledgeable. But I'd love it — even just your intuition. Well, I have a little bit more than
intuition. But you know, one of the things you asked me earlier — do I spend all my time thinking about neuroscience? I do. That's to the exclusion of thinking about things like convolutional neural networks, but I try to stay current. So look, I think it's great, the progress they've made — it's fantastic, and as I mentioned earlier, it's highly useful for many things. The models that we have today are actually derived from a lot of neuroscience principles: they are distributed processing systems and distributed memory systems, and that's how the brain works. They use things that we might call neurons, but they're really not neurons at all — they're just distributed processing systems. And the nature of hierarchy — that came also from neuroscience. And the learning rules — basically not backprop, but other ones. So — I don't know, I'd be curious: you say they're not neurons at all. Can you describe in which way? I mean, some of it is obvious, but I'd be curious about the specific ways in which you think the differences are biggest. Yeah, we had a paper in 2016 called Why Neurons Have Thousands of Synapses, and if you read that paper, you'll know what I'm talking about here. A real neuron in the brain is a complex thing. Let's just start with the synapses on it, which are the connections between neurons. Real neurons can have anywhere from five to thirty thousand synapses on them. The ones near the cell body — the ones close to the soma — those are like the ones people model in artificial neurons. There are a few hundred of those; they can affect the cell, they can make the cell become active. Ninety-five percent of the synapses can't do that — they're too far away. If you activate one of those synapses, it just doesn't affect the cell body enough to make any difference — any one of them individually, or even a bunch of them. What real neurons do is the following: if you get 10 to 20 of them active at the same time — meaning they're all receiving an input at the same time, and those 10 to 20 synapses are within a very short distance of each other on the dendrite, like 40 microns, a very small area — so if you activate a bunch of them right next to each other at some distal place, what happens is it creates what's called a dendritic spike. The dendritic spike travels through the dendrites and can reach the soma, the cell body. Now, when it gets there, it changes the voltage — which is sort of like it's going to make the cell fire, but it's never enough to make the cell fire. We say it depolarizes the cell: you raise the voltage a little bit, but not enough to do anything, and then it goes back down again. It's like, well, what good is that? So we proposed a theory — which I'm very confident in, the basics of it — that what's happening there is that those ninety-five percent of the synapses are recognizing dozens to hundreds of unique patterns, you know, 10 to 20 synapses at a time, and they're acting like predictions. So the neuron actually is a predictive engine on its own. It can fire when it gets enough of what they call proximal input from those synapses near the cell body, but it can get ready to fire from dozens to hundreds of patterns that it recognizes from the other ones. And the advantage of this to the neuron is that when it actually does produce a spike, an action potential, it does so slightly sooner than it would have otherwise. And what good is slightly sooner? Well, the excitatory neurons in the brain are surrounded by these inhibitory neurons, and they're very fast, the inhibitory neurons — the basket cells. And if I get my spike out a little bit sooner than someone else, I inhibit all my neighbors around me, right? And what you end up with is a different
representation. You end up with a representation that matches your prediction. It's a sparser representation, meaning fewer neurons are active, but it's much more specific. And we showed how networks of these neurons can do very sophisticated temporal prediction. So to summarize: real neurons in the brain are time-based prediction engines, and there's no concept of this at all in artificial — what we call point — neurons. I don't think you could model the brain without them; I don't think you could build intelligence without them, because it's where a large part of the time comes from. These are predictive models, and the time is inherent — there's a prior, and a prediction, and an action — and it's inherent to every neuron in the neocortex. So I would say point neurons model a piece of that, and not very well at that, either. You know, for example, synapses are very unreliable, and you cannot assign any precision to them — even one digit of precision is not possible. So the way real neurons work is they don't change these weights accurately, like artificial neural networks do; they basically form new synapses. And what you're always trying to do is detect the presence of some 10 to 20 active synapses at the same time, and they're almost binary, because you can't really represent anything much finer than that. So these are the kinds of things — and I think that's actually another essential component, because the brain works on sparse patterns, and all of that mechanism is based on sparse patterns, and I don't actually think you could build real brains, or machine intelligence, without incorporating some of those ideas. It's hard to even think about the complexity that emerges from the fact that the timing of the firing matters in the brain, the fact that you form new synapses, and — I mean, everything you just mentioned — Okay, trust me: if you spend time on it, you can get your mind around it. It's no longer a mystery to me. But sorry — as a function, in a mathematical way, can you get an intuition about what gets it excited and what doesn't? It's not as easy as with many other types of neural networks, which are more amenable to pure analysis — especially very simple networks: you know, I have four neurons and they're doing this; can we describe mathematically what they're doing? Even the complexity of convolutional neural networks today is sort of a mystery — we can't really describe what the whole system is doing. And so it's different. My colleague Subutai Ahmad did a nice paper on this — you can get all this stuff on our website, if you're interested — talking about the mathematical properties of sparse representations. So what we can do is we can tell mathematically, for example, why 10 to 20 synapses to recognize a pattern is the correct number — it's the right number you'd want to use. And by the way, that matches biology. We can show mathematically some of these concepts, like why the brain is so robust to noise and error and failures — we can show that mathematically, as well as empirically in simulations. But the system can't be analyzed completely — no complex system can — so that's out of the realm. But there are mathematical benefits and intuitions that can be derived from mathematics, and we try to do that as well; most of our papers have a section about that. So I think it's refreshing and useful for me to be talking to you about deep neural networks, because your intuition basically says that we can't achieve anything like intelligence with artificial neural networks. Well, not in their current form. In their ultimate form, sure. So let me dig into that and see what your thoughts are there, a little bit. I'm not sure if you've read this little blog post called The Bitter Lesson by Richard Sutton recently — he's a reinforcement
learning pioneer, I'm not sure if you're familiar with him. His basic idea is that all the stuff we've done in AI in the past 70 years — he's one of the old-school guys — the biggest lesson learned is that all the tricky things we've done, you know, they benefit in the short term, but in the long term what wins out is a simple, general method that just relies on Moore's law, on computation getting faster and faster. So this is what he's saying: this is what has worked up to now. This is what has worked up to now if you're trying to build a system that works. He's not concerned about intelligence; he's concerned about a system that works, in terms of making predictions, applied to narrow AI problems, right? That's what the discussion is about: that you just try to go as general as possible and wait years or decades for the computation to make it actually — Do you think that is a criticism, or is he saying this is the prescription of what we ought to be doing? Well, it's very difficult. He's saying this is what has worked, and yes, it's a prescription, but it's a difficult prescription, because it says all the fun things you guys are trying to do — we are trying to do, he's part of the community — it's only going to be short-term gains. So this all leads up to a question, I guess, on artificial neural networks and maybe our own biological neural networks: do you think, if we just scale things up significantly — so take these dumb artificial neurons, the point neurons, I like that term — if we just have a lot more of them, do you think some of the elements that we see in the brain may start emerging? No, I don't think so. We can do bigger problems of the same type. I mean, it's been pointed out by many people that today's convolutional neural networks aren't really much different than the ones we had quite a while ago; they're just bigger and trained more, and we have more labeled data and so on. But I don't think you can get to the kind of things I know the brain can do, and that
we think about as intelligence, by just scaling it up. So that may be a good description of what's happened in the past, what's happened recently with the re-emergence of artificial neural networks. It may be a good prescription for what's going to happen in the short term, but I don't think that's the path — I've said that earlier — there's an alternate path. I should mention to you, by the way, that we've made sufficient progress on our whole cortical theory in the last few years that last year we decided to start actively pursuing how we get these ideas embedded into machine learning. Again, that's being led by my colleague Subutai Ahmad, and he's more of a machine learning guy; I'm more of a neuroscience guy. So this is now — I wouldn't say our focus, but it is now an equal focus here, because we need to proselytize what we've learned, and we need to show how it's beneficial to the machine learning world. So we have a plan in place right now. In fact, we just did our first paper on this — I can tell you about that. But, you know, one of the reasons I want to talk to you is because I'm trying to get more people in the machine learning community to say, I need to learn about this stuff, and maybe we should just think a bit more about what we've learned about the brain, and what has Numenta done — what have they done, is that useful for us? Yeah, so are there elements of the cortical theory, the things we've been talking about, that may be useful in the short term? Yes, in the short term, yes. This is — sorry to interrupt — the open question. It certainly feels, from my perspective, that in the long term some of the ideas we've been talking about will be extremely useful. The question is whether in the short term. Well, this is always what I would call the entrepreneur's dilemma. So you have this long-term vision — oh, we're all going to be driving electric cars, or all using computers,
or whatever — and you're at some point in time, and you say, I can see that long-term vision, I'm sure it's going to happen; how do I get there without killing myself, you know, without going out of business, right? That's the challenge, that's the dilemma. It's a really difficult thing to do. So we're facing that right now. Ideally what you'd want to do is find some steps along the way, so you can get there incrementally; you don't have to throw it all out and start over again. The first thing that we've done is we focused on sparse representations. So, just in case you don't know what that means, or some of the listeners don't: in the brain, if I have like 10,000 neurons, what you would see is maybe 2 percent of them active at a time. You don't see 50 percent, you don't see 30 percent; you might see 2 percent. And it's always like that, for any set of sensory inputs, it doesn't matter, in just about any part of the brain. But which neurons differ — which neurons are active? Yes. So take the 10,000 neurons that are representing something — they're sitting there in a little block together, a teeny little block of around 10,000 neurons — and they're representing a location, they're representing a cup, they're representing input from my sensors, I don't know, it doesn't matter; they're representing something. The way the representation occurs, it's always a sparse representation, meaning it's a population code: which 200 cells are active tells me what's going on. It's not individual cells — individual cells aren't that important at all; it's the population code that matters. And when you have sparse population codes, then all kinds of beautiful properties come out of them. So the brain uses sparse population codes, and we've written about and described these benefits in some of our papers. They give tremendous robustness to the system — brains are incredibly robust. Neurons are dying all the time, and spasming, and synapses are falling apart all the time, and it keeps
working. So what Subutai and Luiz, one of our other engineers here, have done and shown is they're introducing sparseness into convolutional neural networks. Other people are thinking along these lines, but we're going about it in a more principled way, I think, and we're showing that if you enforce sparseness throughout these convolutional neural networks — in both the activations, sort of which neurons are active, and the connections between them — you get some very desirable properties. So one of the current hot topics in deep learning right now is adversarial examples. You know, you can give me any deep learning network, and I can give you a picture that looks perfect, and it's going to say, you know, the monkey is an airplane. That's a problem, and DARPA just announced some big thing — they're trying to have some contest around this. But if you enforce sparse representations here, many of these problems go away; they're much more robust and they're not easy to fool. So we've already shown some of those results — it was literally just in January or February, just last month, we did that — and I think it's on bioRxiv right now, or on arXiv, you can read about it. So that's like a baby step, okay? That's taking something from the brain: we know about sparseness, we know why it's important, we know what it gives the brain, so let's try to enforce that onto this. What's your intuition why sparsity leads to robustness? Because it feels like it would be less robust. Why would it feel less robust to you? It just feels like if fewer neurons are involved, the more fragile the representation. But there are lots of neurons involved — I said it's like 200. That's a lot. Is that a lot? Yes. So here's an intuition for it. This is a bit technical — for engineers and machine learning people this will be easy, but for all the listeners, maybe not. If you're trying to classify something, you're trying to divide some very high-dimensional
space into different pieces, A and B, and you're trying to create some line where you say all these points in this high-dimensional space are A and all these points in this high-dimensional space are B. And if you have points that are close to that line, it's not very robust — it works for all the points you know about, but it's not very robust, because you just move a little bit and you've crossed over the line. When you have sparse representations, imagine I'm going to pick 200 cells active out of 10,000. Okay, so I have 200 cells active. Now let's say I pick randomly another, different representation of 200. The overlap between those is going to be very small — just a few. I can pick millions of samples randomly of 200 neurons, and not one of them will overlap more than just a few. So one way to think about it is: if I want to fool one of these representations to look like one of those other representations, I can't move just one cell, or two cells, or three cells, or four cells — I have to move a hundred cells. And that makes them robust. So you mentioned sparsity — what would be the next thing? Yeah, okay, so we've picked one; we don't know if it's going to work well yet. Again, we're trying to come up with incremental ways of moving from brain theory to adding pieces to the current machine learning world, one step at a time. The next thing we're going to try to do is sort of incorporate some of the ideas of the thousand brains theory — that you have many, many models, and they're voting. Now, that idea is not new; mixture models have been around for a long time. But the way the brain does it is a little different, the way it votes is different, and the way it represents uncertainty is different. So we're just starting this work, but we're going to try to see if we can incorporate some of the principles of voting, or the principles of the thousand brains theory: lots of simple models that talk to each other in a certain way.
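The overlap argument from a moment ago is easy to check numerically. Here is a rough sketch of my own (not Numenta's code), using the same 200-active-out-of-10,000 numbers from the conversation:

```python
import random

random.seed(0)
n, k = 10_000, 200  # 10,000 cells, 200 active: a 2% sparse code

# One stored representation: a set of 200 active cells.
a = set(random.sample(range(n), k))

# Draw many random 200-cell representations and measure their overlap with `a`.
overlaps = [len(a & set(random.sample(range(n), k))) for _ in range(10_000)]

print(sum(overlaps) / len(overlaps))  # expected overlap is about k*k/n = 4 cells
print(max(overlaps))  # even the worst case is far below the ~100 cells needed to fool a match
```

With 2 percent sparsity, two random codes collide on only about four cells on average, so confusing one representation for another really does require moving on the order of a hundred cells — which is the robustness being described.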
And can we build machine learning systems that learn faster, and also, mostly, are multimodal and robust to multimodal types of issues? So one of the challenges there is that the machine learning and computer vision communities have certain sets of benchmarks, sets of tests based on which they compete, and I would argue, especially from your perspective, that those benchmarks are not that useful for testing the aspects that the brain is good at, or intelligence. They're testing a very narrow slice. And it's been extremely useful for developing specific mathematical models, but it's not useful in the long term for creating intelligence. So do you think you also have a role in proposing better tests? Yeah, you've identified a very serious problem. First of all, the tests that they have are the tests that they want, not tests of the other things that we're trying to do, right? The second thing is, sometimes to be competitive in these tests you have to have huge data sets and huge computing power, and we don't have that here; we don't have it the way other big teams and big companies do. So there are numerous issues there. Our approach to this is all based on, in some sense you might argue, elegance. We're coming at it from a theoretical base where we think, oh my god, this is so clearly elegant, this is how brains work, this is what intelligence is. But the machine learning world has gotten into this phase where they think it doesn't matter — it doesn't matter what you think, as long as you do 0.1 percent better on this benchmark, that's all that matters. And that's a problem. We have to figure out how to get around that; that's a challenge for us, one of the challenges we have to deal with. So I agree, you've identified a big issue, and it's difficult for those reasons. But, you know, part of the
reasons I'm talking here today is I hope I'm going to get some machine learning people to say, read those papers, those might be some interesting ideas, instead of doing this 0.1 percent improvement stuff. Well, that's why I'm here as well, because I think machine learning as a community is at a place where the next step needs to be orthogonal to what has achieved success in the past. Oh, you see other leaders saying this — machine learning leaders, you know, Geoff Hinton with his capsules idea. Many people have gotten up and said, we're going to hit a roadblock, maybe we should look at the brain, things like that. So hopefully that thinking will occur organically, and then we're in a nice position for people to come and look at our work and say, well, what can we learn from these guys? Yeah, MIT is launching a billion-dollar computing college that's centered on this idea. On this idea of what? Well, the idea that the humanities, psychology, and neuroscience have to all work together to build intelligence. And Stanford just did this human-centered AI initiative. Yeah, I'm a little disappointed in these initiatives, because their focus is sort of on the human side of it, and it could very easily slip into how humans interact with intelligent machines — which, there's nothing wrong with that, but that is orthogonal to what we're trying to do. We're trying to say, what is the essence of intelligence? I don't care — I think I want to build intelligent machines that aren't emotional, that don't smile at you, that aren't trying to tuck you in at night. Yeah, there is that pattern: when you talk about understanding humans being important for understanding intelligence, you start slipping into topics of ethics, or, like you said, the interactive elements, as opposed to — no, no, no, let's zoom in on the brain, study what the brain does, what the human brain actually does,
and then we can decide which parts of that we want to recreate in some system. But if you don't have a theory about what the brain does, what's the point? You're just going to be wasting time, right? Just to linger on the artificial neural network side — and maybe you could speak to the biological side too — there's the process of learning versus the process of inference. Maybe you can explain: in artificial neural networks there's a difference between the learning stage and the inference stage — do you see the brain as something different? One of the big distinctions that people often draw — I don't know how correct it is — is that artificial neural networks need a lot of data; they're very inefficient learners. Do you see that as a correct distinction from the biology of the human brain — that the human brain is very efficient — or is that just something we deceive ourselves with? No, it is efficient, obviously. We can learn new things almost instantly. And what elements do you think — Yeah, I can talk about that. You brought up two issues there. So remember I talked earlier about the constraints we always feel. Well, one of those constraints is the fact that brains are continually learning. That's not something we said, oh, we can add that later; that's something that was upfront, it had to be there from the start, and it made our problems harder. But we showed — going back to the 2016 paper on sequence memory — we showed how that happens, how brains infer and learn at the same time. And our models do that; they're not two separate phases, or two separate sets of time. I think that's a big, big problem in AI, at least for many applications, not for all. So I can talk about that — it gets detailed. There are some parts of the neocortex in the brain where there are these cycles of activity, and there's very strong evidence that you're doing more of inference on
one part of the phase and more of learning on the other part of the phase, so the brain can actually sort of separate different populations of cells, or go back and forth like this. But in general, I would say that's an important problem. All of the networks we've come up with do both; they're continuous-learning networks. And you mentioned benchmarks earlier — well, there are no benchmarks about that. Exactly. So we have to get on our little soapbox and say, hey, by the way, this is important, and here's a mechanism for doing it. But until you can prove it to someone in some commercial system or something, it's a little harder. So, one of the things I wanted to linger on: in some ways, to learn the concept of a coffee cup, you only need this one coffee cup and maybe some time alone in a room with it. The first thing is: imagine I reach my hand into a black box, and I'm reaching in, trying to touch something. I don't know upfront if it's something I already know or if it's a new thing, right? And I'm doing both at the same time. I don't say, oh, let's see if it's a new thing; oh, let's see if it's an old thing. I don't do that. As I go, my brain says, oh, it's new, or it's not new, and if it's new, I start learning what it is. And by the way, it starts learning from the get-go, even before we've recognized it. So they're not separate problems. So that's the one thing. The other thing you mentioned was the fast learning. So I was describing the continuous learning, but there's also fast learning. Literally, I can show you this coffee cup and say, here's a new coffee cup, it's got the logo on it, take a look at it. Done. You're done. You can predict what it's going to look like in different positions. So I can talk about that too. Yes. In the brain, the way learning occurs — I mentioned this earlier, but I'll mention it again.
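The black-box example — inference and learning happening at the same time rather than in separate phases — can be caricatured in a few lines. This is a toy sketch of my own, with a made-up overlap threshold, not the actual HTM algorithms:

```python
# Stored patterns, each a frozenset of active-cell indices.
memory = []

def sense(active_cells, threshold=15):
    """Match the input against stored sparse patterns; if nothing matches,
    learn it on the spot. There is no separate training phase."""
    for pattern in memory:
        if len(pattern & active_cells) >= threshold:
            return "known"
    memory.append(frozenset(active_cells))  # learning starts from the get-go
    return "new"

cup = frozenset(range(0, 20))
handle = frozenset(range(100, 120))
print(sense(cup))     # new: learned immediately on first touch
print(sense(cup))     # known: recognized on the second touch
print(sense(handle))  # new: a different pattern, learned in turn
```

The point of the sketch is only that recognition and learning are one loop: every input is simultaneously a query against memory and, if unfamiliar, a training example.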
I'm imagining a section of a dendrite of a neuron, and I want to learn something new. It doesn't matter what it is — I need to recognize a new pattern. So what I'm going to do is form new synapses. New synapses — we're going to rewire the brain onto that section of the dendrite. Once I've done that, everything else that neuron has learned is not affected by it. That's because it's isolated to that small section of the dendrite; they're not all being added together like in a point neuron. So if I learn something new on this segment here, it doesn't change anything occurring anywhere else in that neuron. I can add something without affecting previous learning, and I can do it quickly. Now, let's talk about the quickness, how it's done in real neurons. You might say, well, doesn't it take time to form synapses? Yes, it can take maybe an hour to form a new synapse. But we can form memories quicker than that, and I can explain that too if you want — it's getting a bit neuroscience-y. Oh, that's great. But is there an understanding of these mechanisms at every level? From short-term memories to the forming — Well, this idea of synaptogenesis, the growth of new synapses, that's well described, that's well understood, and that's an essential part of learning. That is learning. That is learning, okay. You know, going back many, many years — what's-his-name, the psychologist who proposed it — Hebb, Donald Hebb — he proposed that learning was the modification of the strength of a connection between two neurons. People interpreted that as the modification of the strength of a synapse. He didn't say that; he just said there's a modification of the effect of one neuron on another. So synaptogenesis is totally consistent with what Donald Hebb said. But anyway, there are these mechanisms of growth of new synapses — you can go online and watch a video of a synapse growing in real time. It's literally — you can see this little finger —
it's pretty impressive, yeah. So those mechanisms are known. Now, there's another thing that we've speculated about and written about, which is consistent with known neuroscience but less proven, and this is the idea: how do I form a memory really, really quickly — like, instantaneously? If it takes an hour to grow a synapse, that's not instantaneous. So there are types of synapses called silent synapses. They look like a synapse, but they don't do anything; they're just sitting there. An action potential comes in, and it doesn't release any neurotransmitter. Some parts of the brain have more of these than others. For example, the hippocampus has a lot of them, which is where we associate most short-term memory. So what we speculated, again in that 2016 paper — we proposed that the way we form very quick memories, very short-term memories, is that we convert silent synapses into active synapses. It's like saying a synapse has a zero weight and a one weight, but the long-term memory has to be formed by synaptogenesis. So you can remember something really quickly by just flipping a bunch of these guys from silent to active. It's not going from 0.1 to 0.15; it's going from doesn't-do-anything to releases-transmitter. And if I do that over a bunch of these, I've got a very quick short-term memory. So I guess the lesson behind this is that most neural networks today are fully connected — every neuron connects to every other neuron from layer to layer. That's not correct in the brain. We don't want that; we actually don't want that. It's bad. You want very sparse connectivity, so that any neuron connects to just some subset of the neurons in the other layer, and it does so on a dendrite-by-dendrite-segment basis. So it's a very sparsely laid-out type of thing. And then learning is not adjusting all these weights; learning is just saying, okay, connect to these 10 cells here right now.
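A cartoon of the silent-synapse speculation (my sketch, not from the paper): fast memories flip existing zero-weight synapses to one, while slow synaptogenesis would grow genuinely new connections later.

```python
class Cell:
    """A cell with a fixed pool of potential synapses, all initially silent."""
    def __init__(self, potential_inputs):
        # Silent synapses exist anatomically but release no transmitter: weight 0.
        self.weight = {pre: 0 for pre in potential_inputs}

    def quick_memorize(self, active_inputs):
        # One-shot memory: flip silent synapses onto the active inputs to 1.
        # Nothing moves from 0.1 to 0.15; it's binary, off to on.
        for pre in active_inputs:
            if pre in self.weight:
                self.weight[pre] = 1

    def drive(self, active_inputs):
        # How strongly this input pattern drives the cell.
        return sum(self.weight.get(pre, 0) for pre in active_inputs)

cell = Cell(potential_inputs=range(100))
pattern = {3, 17, 42, 56, 71, 88}
print(cell.drive(pattern))    # 0: every synapse is still silent
cell.quick_memorize(pattern)  # instantaneous, no synapse growth required
print(cell.drive(pattern))    # 6: the pattern now drives the cell
```

Note the sparse wiring: the cell can only ever connect to its pool of potential inputs, a subset of the whole population, rather than being fully connected layer to layer.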
Now, in that process, you know, with artificial neural networks it's a very simple process of backpropagation that adjusts the weights. The process of synaptogenesis — Synaptogenesis is even easier. It's even easier? It's even easier. Backpropagation requires something that really can't happen in brains: this backpropagation of an error signal really can't happen. People are trying to make it happen in brains, but it doesn't look plausible. This is pure Hebbian learning — well, synaptogenesis is pure Hebbian learning. It's basically saying, there's a population of cells over here that are active right now, and there's a population of cells over here active right now; how do I form connections between those active cells? And it's literally saying, these 100 neurons here became active before this neuron became active, so form connections to those ones. That's it. There's no propagation of error, nothing. All the networks we build, all the models we have, work almost completely on Hebbian learning, but on dendritic segments, and with multiple synapses at the same time. So, nonetheless, let me return to a question that you already answered, and maybe you can answer it again. If you look at the history of artificial intelligence, where do you think we stand? How far are we from solving intelligence? You said you were very optimistic — can you elaborate on that? Yeah, it's always a crazy question to ask, because no one can predict the future. Absolutely. So I'll tell you a story. I used to run a different neuroscience institute, called the Redwood Neuroscience Institute, and we would hold these symposiums — we'd get like 35 scientists from around the world to come together, and I used to ask them all the same question. I would say, how long do you think it'll be before we understand how the neocortex works? And everyone went around the room; they introduced themselves and they had to answer that question. The typical answer was 50 to 100 years; some
people would say 500 years, some people said never. And I said, you're a scientist, and you say never? But it doesn't work like that. As I mentioned earlier, these are step functions: things happen, and then bingo, they happen. You can't predict that. I feel I've already passed a step function. So if I can do my job correctly over the next five years — meaning I can proselytize these ideas, I can convince other people they're right, we can show other people, machine learning people, that they should pay attention to these ideas — then we're definitely in an under-20-year time frame, if I can do those things. If I'm not successful in that, and this is the last time anyone talks to me, and no one reads our papers, and I'm wrong or something like that, then I don't know. But it's not 50 years. It's the same thing as electric cars — how quickly are they going to populate the world? It'll probably take about a 20-year span. It'll be something like that, I think, if I can do what I said. We're starting it. And of course, there could be other — you said step functions — it could be everybody gives up on your ideas for 20 years, and then all of a sudden somebody picks it up again: wait, that guy was onto something. Yeah, that would be a failure on my part, right? You know, think about Charles Babbage. Charles Babbage invented the computer back in the 1800s, and everyone forgot about it until, you know — he was ahead of his time. Like I said, I recognize this is part of any entrepreneur's challenge. I use entrepreneur broadly in this case; I don't mean I'm building a business, trying to sell something. I mean I'm trying to sell ideas, and this is the challenge: how do you get people to pay attention to you, how do you get them to give you positive or negative feedback, how do you get people to act differently based on your ideas? So we'll see how we do on that. So you know
that there's a lot of hype behind artificial intelligence currently. As you look to spread the ideas of the neocortical theory, the things you're working on, do you think there's some possibility we'll hit an AI winter once again? It's certainly a possibility, no question about it. Is that something you worry about? Well, I guess — do I worry about it? I haven't decided yet if that's good or bad for my mission. That's true, yeah, very true, because it's almost like you need the winter to refresh the palate. Yeah, so here's what you want to have: to the extent that everyone is so thrilled about the current state of machine learning and AI, and they don't imagine they need anything else, that makes my job harder. If everything crashed completely, and every student left the field, and there was no money for anybody to do anything, and it became an embarrassment to talk about machine intelligence and AI, that wouldn't be good for us either. You want the soft-landing approach, right? You want enough people, the senior people in AI and machine learning, to say, you know, we need other approaches. We really need other approaches; maybe we should look at the brain. Okay, let's look at the brain. Who's got some brain ideas? Okay, let's start a little project on the side here doing brain-idea-related stuff. That's the ideal outcome we would want. So I don't want a total winter, and yet I don't want it to be sunny all the time. So what do you think it takes to build a system with human-level intelligence, where, once demonstrated, you would be very impressed? Does it have to have a body? Does it have to have the C-word we used before — consciousness — as an entirety, in a holistic sense? First of all, I don't think the goal is to create a machine that is human-level intelligence. I think it's a false goal. Back to Turing — I think that was a false statement. We want to understand what intelligence is, and then we can
build intelligent machines of all different scales, all different capabilities. You know, a dog is intelligent. That'd be pretty good to have, you know, a dog. But what about something that doesn't look like an animal at all, in different spaces? So my thinking about this is that we want to define what intelligence is, agree upon what makes an intelligent system. We can then say, okay, we're now going to build systems that work on those principles, or some subset of them, and we can apply them to all different types of problems. It's the same kind of idea as computing. If I take a little one-chip computer, I don't say, well, that's not a computer because it's not as powerful as this big server over here. No — because we know what the principles of computing are, and I can apply those principles to a small problem or to a big problem. Same with intelligence: we have to say, these are the principles. I can make a small one, a big one, I can make them distributed, I can put them on different sensors. They don't have to be human-like at all. Now, you did bring up a very interesting question about embodiment. Does it have to have a body? It has to have some concept of movement. It has to be able to move through these reference frames I talked about earlier. Whether it's physically moving — like, if I'm going to have an AI that understands coffee cups, it's going to have to pick up the coffee cup and touch it and look at it, with its eyes and hands, or something equivalent to that. If I have a mathematical AI, maybe it needs to move through mathematical spaces. I could have a virtual AI that lives in the internet, and its movements are traversing links and digging into files, but it's got a location, and it's traveling through some space. You can't have an AI that just takes some flash of input — here's a pattern, done. No, it's movement: moving
pattern, moving pattern, moving pattern, attention, digging in, building structure, figuring out the model of the world. So some sort of embodiment, whether it's physical or not, has to be part of it. So self-awareness, in the sense of being able to answer, where am I? Well, you bring up self-awareness — that's two different topics. No, the very narrow definition of self, meaning knowing a sense of self enough to know, where am I in the space? Yeah. Basically, the system needs to know its location — each component of the system needs to know where it is in the world at that point in time. So what about self-awareness and consciousness? Do you think, from the perspective of neuroscience and the neocortex, these are interesting topics, solvable topics? Do you have any ideas of why the heck it is that we have a subjective experience at all? Is it useful, or is it just a side effect? It's interesting to think about. I don't think it's useful as a means to figure out how to build intelligent machines. It's something that systems do, and we can talk about what it is. If I build the system like this, then it would be self-aware, and if I build it like this, it wouldn't be self-aware. So that's a choice I can have. It's not like, oh my god, it's self-aware, I can't turn it off. I heard an interview recently with this philosopher from Yale — I can't remember his name, I apologize for that — but he was talking about, well, if these computers are self-aware, then it would be a crime to unplug them. And I'm like, oh, come on. I unplug myself every night — I go to sleep. Is that a crime? I plug myself in again in the morning, and there I am. So people get kind of bent out of shape about this. I have very different, very detailed understandings, or opinions, about what it means to be conscious and what it means to be self-aware. I don't think it's that interesting a problem. You've talked about Christof Koch — he thinks that's the only problem. I didn't actually listen to your interview with
him, but I know him, and I know that's his thing. He also thinks intelligence and consciousness are disjoint — I mean, you don't have to have one or the other. So I agree with that; I just totally agree with that. So where do your thoughts go — where does consciousness emerge from? Well, then we have to break it down into the two parts, okay? Because consciousness isn't one thing — that's part of the problem with that term; it means different things to different people, and there are different components of it. There is the concept of self-awareness, okay? That can be very easily explained. You have a model of your own body — the neocortex models the things in the world, and it also models your own body — and then it has a memory. It can remember what you've done. It can remember what you did this morning, it can remember what you had for breakfast, and so on. And so I can say to you, okay, Lex, were you conscious this morning when you had your bagel? And you'd say, yes, I was conscious. Now, what if I could take your brain and revert all the synapses back to the state they were in this morning? And then I said to you, Lex, were you conscious when you ate the bagel? You'd say, no, I wasn't conscious. Here's a video of you eating the bagel. And you'd say, I wasn't there. That's not possible, because I must have been unconscious at that time. So we can make this one-to-one correlation between the memory of your body's trajectory through the world over some period of time — the memory of that, and the ability to recall that memory — and what you would call being conscious: I was conscious of that. It's self-awareness. And any system that can recall, can memorize, what it's done recently, and bring that back and invoke it again, would say, yeah, I'm aware; I remember what I did. All right, I got it. That's an easy one, although some people think that's the hard one. The more challenging part of consciousness is the one that sometimes goes by the word
qualia, which is, you know, why does an object seem red? Or what is pain, and why does pain feel like something? Why do I feel redness, or feel pain-ness? And then I could say, well, why does sight seem different than hearing? It's really the same problem: these are all just neurons. So how is it that looking at you feels different than hearing you? It feels different, but it's all just noise in my head; the neurons are all doing the same thing. So that's an interesting question. The best treatise I've read about this is by a guy named O'Regan. He wrote a book called Why Red Doesn't Sound Like a Bell. It's not a trade book, not an easy read, but it's an interesting question. Take something like color. Color really doesn't exist in the world; it's not a property of the world. The property of the world that exists is light frequency, and that gets turned into... we have certain cells in the retina that respond to different frequencies differently than others, and so when those signals enter the brain, you have a bunch of axons that are firing at different rates, and from that we perceive color. But there is no color in the brain. There's no color coming in on those synapses; it's just a correlation between some axons and some property of frequency. And that isn't even color itself; frequency doesn't have a color. It's just what it is. So then the question is, well, why does it even appear to have a color at all?

Just as you're describing it, there seems to be a connection to those ideas of reference frames. It just feels like consciousness, the subject assigning the feeling of red to the actual color, or to the wavelength, is useful for intelligence.

That's a good way of putting it. It's useful as a predictive mechanism, or useful as a generalization. It's a way of grouping things together, to say it's useful to have a model like this. Think about it: there's
a well-known syndrome that people who've lost a limb experience, called phantom limbs. What they claim is that their arm has been removed, but they still feel the arm. Not only do they feel it, they know it's there; they'll swear to you that it's there. And then they can feel pain in the arm, in its fingers, and if they move the non-existent arm behind their back, they feel the pain behind their back. So this whole idea that your arm exists is a model in your brain; the arm may or may not really exist. But it's useful to have a model of something that correlates to things in the world, so you can make predictions about what will happen when those things occur.

It's a little bit fuzzy, but I think you're getting quite close to the answer there. It's useful for the model to express things in certain ways, so that we can map them into these reference frames and make predictions about them. I need to spend more time on this topic.

It doesn't bother me. Do you really need to spend more time?

Yeah. It does feel special that we have subjective experience, but I'm yet to know why. I'm just personally curious.

It's not necessary for the work we're doing here. I don't think I need to solve that problem to build intelligent machines at all.

Not at all. But there is... so the silly notion that you described briefly, which doesn't seem so silly to us humans, is this: if you're successful at building intelligent machines, it feels wrong to then turn them off, because if you're able to build a lot of them, it feels wrong to then be able to turn them off.

Well, let's break it down a bit. As humans, why do we fear death? There are two reasons we fear death. Well, first of all, when you're dead, it doesn't matter.

Okay.

So why do we fear death? We fear death for two reasons. One is because we are programmed genetically to fear death; that's a
survival-and-propagating-the-genes thing. And we're also programmed to feel sad when people we know die. We don't feel sad when someone we don't know dies. There are people dying right now, and you don't say, I feel so bad about them, because you don't know them. If you knew them, you'd feel really bad. So again, these are old-brain, genetically embedded things. We fear death, but outside of those uncomfortable feelings, there's nothing else to worry about.

Wait a second. Do you know The Denial of Death by Becker?

No.

There's a thought that our whole conception of our world model kind of assumes immortality, and then death is this terror that underlies it all.

Well, some people's world model. Not mine.

Okay, so what Becker would say is that you're just living in an illusion. You've constructed an illusion for yourself because the terror is so great.

An illusion about what? That death isn't going to happen?

That you're still not coming to grips with it. You're operating, even though you say you've accepted it, without really accepting the notion that you're going to die, is what he would say.

So it sounds like you disagree with that notion.

Yeah. Every night I go to bed, it's like dying a little death, and if I didn't wake up, it wouldn't matter to me. Only if I knew that was going to happen would it bother me, and if I didn't know it was going to happen, how would I know? Well, then I would worry about my wife. So imagine I was a loner and I lived in Alaska, out there where no animals, nobody, knew I existed, and I was just eating roots all the time, and one day I didn't wake up. What pain would there exist in the world?

Well, most people who think about this problem would say that you're either deeply enlightened or completely delusional, but I would say that's a very enlightened
way to see the world. That's the rational way.

That's right.

But the fact is, we really don't have an understanding of why the heck it is we're born, and why we die, and what happens after.

Well, maybe there isn't a reason, and maybe there is. So, I mentioned those big problems too, right? You interviewed Max Tegmark, and there are people like that. I'm into those big problems as well. In fact, when I was young, I made a list of the biggest problems I could think of. First, why does anything exist? Second, why do we have the laws of physics that we have? Third, is life inevitable, and why is it here? Fourth, is intelligence inevitable, and why is it here? I stopped there, because I figured, if you can make a truly intelligent system, that will be the quickest way to answer the first three questions. I'm serious.

Yeah.

And so I said... you asked me earlier; my first mission is to understand the brain, but I felt that was the shortest way to get to true machine intelligence. And I want to get to true machine intelligence because, even if it doesn't occur in my lifetime, other people will benefit from it. I think it'll occur in my lifetime, but, you know, twenty years, you never know. That would be the quickest way for us to... we can make super mathematicians, we can make super space explorers, we can make super physicist brains that do these things, and that can run experiments that we can't run. We don't have the ability to manipulate things and so on, but we can build intelligent machines that do all those things, with the ultimate goal of finding out the answers to the other questions.

Let me ask the depressing and difficult question, which is: once we achieve that goal of creating, no, of understanding intelligence, do you think we would be happier, more fulfilled as a species?

Understanding intelligence? Understanding the answers to the big questions? Oh, totally,
totally. It would be a far more fun place to live.

You think so?

Oh yeah. I mean, put aside this Terminator nonsense, and just think about... well, we can talk about the risks of AI if you want.

I'd love to, so let's.

I think the world is far better knowing things; we're always better knowing things than not knowing them. Is it a better place to live in, now that I know that our planet is one of many in the solar system, and our solar system is one of many in the galaxy? I sometimes think, God, what would it be like to live three hundred years ago? I'd be looking at the sky going, I can't understand anything. Oh my god, I'd be going crazy wondering what's going on here.

Well, in some sense I agree with you, but I'm not exactly sure. I'm also a scientist, so I share your views, but it's like we're rolling down a hill together.

What's down the hill? I feel like we're climbing a hill.

Whatever, getting closer to enlightenment.

We're climbing; we're getting pulled up the hill.

The way I'd put it: we're pulling ourselves up the hill by our curiosity.

Yeah, and Sisyphus was doing the same thing with the rock.

Yeah. But okay, our happiness aside, do you have concerns like the ones Sam Harris or Elon Musk talk about, existential threats of intelligence?

No, I'm not worried about existential threats. There are some things we really do need to worry about, even today, things we have to worry about. We have to worry about privacy, and about how AI impacts false beliefs in the world. We have real problems and things to worry about with today's AI, and that will continue as we create more intelligent systems. There's no question that the whole issue of making intelligent armaments and weapons is something we really have to think about carefully. I don't think of those as existential threats; those are the kinds of threats we always face, and we all have
to face them and hope to deal with them. We could talk about what people think are the existential threats, but when I hear people talking about them, they all sound hollow to me. They're based on ideas, on people who really have no idea what intelligence is, and if they knew what intelligence was, they wouldn't say those things.

So those are not experts in the field, then.

Yeah.

So there's two, right? One is superintelligence, a system that becomes far, far superior in reasoning ability to us humans. How is that an existential threat?

So there's a lot of ways in which it could be. One way is that us humans are actually irrational and inefficient, and get in the way of, not happiness, but whatever the objective function is, of maximizing that objective function. The superintelligent paperclip problem, things like that.

The paperclip problem, but with a superintelligence. Yeah. So we already face this threat in some sense. They're called bacteria. These are organisms in the world that would like to turn everything into bacteria, and they're constantly morphing, constantly changing, to evade our protections, and in the past they have killed huge swaths of the human populations of this planet. So if you want to worry about something that's going to multiply endlessly, we already have it, and I'm far more worried in that regard. I'm far more worried that some scientist in a laboratory will create a super virus or a super bacteria that we cannot control. That is more of an existential threat. Putting an intelligence layer on top of it actually seems to make it less existential to me: it limits its power, it limits where it can go, it limits the number of things it can do, in many ways. A bacteria is something you can't even see.

So that's only one of those problems.

Yes, exactly.

So the other one... just your intuition about intelligence. When you think about intelligence as humans have it, do you think of it as something
that, if you look at intelligence on a spectrum from zero to us humans, you can scale to something far superior?

All the mechanisms, yeah. But I want to make another point here, Lex, before I get there.

Sure.

Intelligence is the neocortex; it is not the entire brain. The goal is not to make a human. The goal is not to make an emotional system. The goal is not to make a system that wants to have sex and reproduce. Why would I build that? If I want a system that wants to reproduce and have sex, make bacteria, make computer viruses. Those are bad things; don't do that. Those are really bad, don't do those things, regulate those. But if I just say I want an intelligent system, why does it have to have any human-like emotions? Why does it even care if it lives? Why does it even care if it has food? It doesn't care about those things. It's just in a trance, thinking about mathematics, or it's out there just trying to build a space plant on Mars. That's a choice we make: don't make human-like things, don't make replicating things, don't make things that have emotions. Just stick to the neocortex.

So that's a view I actually share, but not everybody shares, in the sense that you have faith and optimism about us humans, as builders of systems, not to put in stupid things.

Okay, so this is why I mentioned the bacteria one. Because you might say, well, some person's going to do that. Well, some person today could create a bacteria that's resistant to all antibacterial agents. So we already have that threat; we already know this is going on. It's not a new threat. So just accept that, and then we have to deal with it, right? My point has nothing to do with intelligence. Intelligence is a separate component that you might apply to a system that wants to reproduce and do stupid things. Let's not do that. In fact, it's a mystery why people haven't done that already.

Yeah. My dad
is a physicist. He believes that the reason, for example, nuclear weapons haven't proliferated amongst evil people... so, one belief, which I share, is that there are not that many evil people in the world who would use, whether it's bacteria or nuclear weapons or maybe future AI systems, a weapon to do bad. The fraction is small. And the second is that it's actually really hard, technically, so the intersection between evil and competent is small.

And otherwise, to really annihilate humanity, you'd have to have, you know, the nuclear winter phenomenon, which is not one person shooting off a bomb, or even ten bombs; you'd have to have some automated system that detonates a million bombs, or however many thousands.

Extreme evil combined with extreme competence.

And it's just like, only some stupid system would do that automatically, a Dr. Strangelove type of thing. I mean, look, we could have some nuclear bomb go off in some major city in the world. I think that's actually quite likely, even in my lifetime; I don't think that's unlikely. It would be a tragedy, but it won't be an existential threat. And it's the same with the virus of 1918, or whenever it was, the influenza. These bad things can happen, the plague and so on. We can't always prevent them; we always try, but we can't. But they're not existential threats, until we combine all those crazy things together at once.

So, on the spectrum of intelligence from zero to human, do you have a sense of whether it's possible to create something several orders of magnitude beyond, or at least double, human intelligence, say in the neocortex?

I think it's the wrong thing to say, double the intelligence. Break it down into different components. Can I make something that's a million times faster than a human brain? Yes, I can do that. Could I make something that has a lot more storage than a human brain? Yes: more columns, more copies of columns.
Can I make something that attaches to different sensors than a human brain? Yes, I can do that. Could I make something that's distributed? As we talked about earlier, the voting between columns in the neocortex, well, the columns don't have to be co-located; they can be all around the place. I could do that too. Those are the levers I have. But is it more intelligent? It depends on what I train it on, what it's doing.

Okay, so here's the thing. Let's say a larger neocortex, of whatever size, allows higher and higher hierarchies to form, reference frames and concepts. Could you have something that's a super physicist or a super mathematician?

Yes.

And the question is, once you have a super physicist, will we be able to understand what it does? Do you have a sense that it'll be orders of magnitude beyond us, like us compared to... will we ever understand it?

Yeah, well, most people cannot understand general relativity, right? It's a really hard thing to get. I mean, you can paint a fuzzy picture, stretchy space, you know? But the field equations, and the deep intuitions, are really, really hard, and I've tried; I've been unable to do it. It's easy to get special relativity, but general relativity, man, that's too much. So we already live with this to some extent: the vast majority of people can't understand what the vast majority of other people actually know. Either we don't have the time, or we can't, or we're just not smart enough, whatever. But we have ways of communicating. Einstein has spoken in a way that I can understand. He's given me analogies that are useful, and I can use those analogies in my own work and think about concepts that are similar. It's not like he existed on some other plane with no connection to my plane in the world here. So that will occur; it already has occurred. That's my point: it already has occurred, and we live with it every day.

One could argue that if we create a machine intelligence that
can think a million times faster than us, it'll be so far beyond us that we can't make the connections. But, you know, at the moment, everything that seems really, really hard to figure out in the world, once you actually figure it out, isn't that hard. Almost everyone can understand the multiverse, and almost everyone can understand quantum physics; we can understand these basic things, even though hardly anybody could have figured them out.

Yeah, but really understand?

Only a few people really understand.

You only need to understand the projections, the sprinkles, the useful parts. My example was Einstein, right? His general theory of relativity is something that very, very, very few people can get. And what if we just said those few people are also artificial intelligences? How bad is that?

In some sense they are, right?

Yeah, in some sense they already are. I mean, Einstein wasn't a very normal person; he had a lot of weird quirks, and so did the other people who worked with him. So maybe they already were sort of on this astral plane of intelligence that we live with already. It's not a problem; it's still useful.

So do you think we are the only intelligent life out there in the universe?

I would say that intelligent life has existed and will exist elsewhere in the universe. There's a question about contemporaneous intelligent life, which is hard to even answer when we think about relativity and the nature of space-time; you can't say what exactly is happening at this moment someplace else in the universe. But I do worry a lot about the filter idea, which is that perhaps intelligent species don't last very long. We haven't been around very long as a technological species, almost nothing, you know, what, two hundred years, something like that, and we don't have any good data point on whether it's likely that we'll survive or not. So, do I think that there has been intelligent life elsewhere in the universe? Almost certain
that, of course, there has been, in the past; and in the future, yes. Does it survive for a long time? I don't know. But this is another reason I'm excited about our work, our work meaning the general world of AI. I think we can build intelligent machines that outlast us, and they don't have to be tied to Earth. I'm not saying we're recreating, you know, aliens. I'm just saying, if I ask myself, and this might be a good point to end on, if I ask myself, what's special about our species? We're not particularly interesting physically. We don't fly, we're not good swimmers, we're not very fast, we're not very strong. It's our brain; that's the only thing. And we are the only species on this planet that has built a model of the world that extends beyond what we can actually sense. We're the only ones who know about the far side of the Moon, and about other galaxies, and other stars, and what happens in an atom. That knowledge doesn't exist anywhere else; it's only in our heads. Cats don't do it, dogs don't do it, monkeys don't do it. That is what we've created that's unique. Not our genes: it's knowledge. And if you ask me, what is the legacy of humanity, what should our legacy be? It should be knowledge. We should preserve our knowledge in a way that it can exist beyond us, and I think the best way of doing that, in fact the way you have to do it, is for it to go along with intelligent machines that understand that knowledge. It's a very broad idea, but we should be thinking about it. I call it estate planning for humanity. We should be thinking about what we want to leave behind when, as a species, we're no longer here, and that'll happen sometime, sooner or later.

And understanding intelligence and creating intelligence gives us a better chance to prolong it.

It does give us a better chance of prolonging life, yes. It gives us a chance to live on other planets. But even beyond
that, I mean, our solar system will disappear one day, given enough time. I don't know if we'll ever be able to travel to other star systems ourselves, but we could send intelligent machines to do that.

So you have an optimistic, hopeful view of our knowledge, of the echoes of human civilization living on through the intelligent systems we create?

Oh, totally. I think the intelligent systems we create are, in some sense, the vessel for bringing that knowledge beyond Earth, for making it last beyond humans themselves.

And how do you feel about the fact that they won't be, quote-unquote, human?

What is human? Our species is changing all the time. Human today is not the same as human just fifty years ago. What is human? Do we care about our genetics? Why are those important? As I point out, our genetics are no more interesting than a bacterium's genetics, no more interesting than, you know, a monkey's genetics. What we have, what's unique and what's valuable, is our knowledge, what we've learned about the world. That is the rare thing; that's the thing we want to preserve. It's not our genes, it's knowledge.

The knowledge. That's a really good place to end. Thank you so much for talking today.
Rosalind Picard: Affective Computing, Emotion, Privacy, and Health | Lex Fridman Podcast #24
the following is a conversation with Rosalind Picard. She's a professor at MIT, director of the Affective Computing Research Group at the MIT Media Lab, and co-founder of two companies, Affectiva and Empatica. Over two decades ago she launched the field of affective computing with her book of the same name. This book described the importance of emotion in artificial and natural intelligence, and the vital role emotional communication plays in the relationship between people in general, and in human-robot interaction. I really enjoyed talking with Roz about so many topics, including emotion, ethics, privacy, wearable computing, her recent research in epilepsy, and even love and meaning. This conversation is part of the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube or iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N. And now, here's my conversation with Rosalind Picard.

More than twenty years ago you coined the term affective computing and have led a lot of research in this area since then. As I understand it, the goal is to make the machine detect and interpret the emotional state of a human being, and adapt the behavior of the machine based on that emotional state. So how has your understanding of the problem space defined by affective computing changed in the past twenty-four years? The scope, the applications, the challenges, what's involved: how have those evolved over the years?

Yeah, actually, originally, when I defined the term affective computing, it was a bit broader than just recognizing and responding intelligently to human emotion, although those are probably the two pieces we've worked on the hardest. The original concept also encompassed machines that would have mechanisms that function like human emotion does inside them. It would be any computing that relates to, arises from, or deliberately influences human emotion. The human-computer interaction part is the part that people tend to see, like, if I'm really ticked off at my computer and I'm
scowling at it and cursing at it, and it just keeps acting smiley and happy, like that little paperclip used to do, dancing and winking, that kind of thing just makes you even more frustrated, right? And I thought, that stupid thing needs to see my affect, and if it's going to be intelligent, and Microsoft researchers had worked really hard on it, it actually had some of the most sophisticated AI in it at the time, if that thing's going to actually be smart, it needs to respond to me, and you and I can send it very different signals.

By the way, just a quick interruption: Clippy, maybe it was in Word 95 or 98, I remember when it was born. Do many people, when you use that reference, still recognize what you're talking about, to this point?

I don't expect the newest students to these days, but I've mentioned it to a lot of audiences, like, how many of you know this Clippy thing? And still the majority of people seem to know it.

So Clippy kind of looked at, maybe with natural language processing, what you were typing, and tried to help you complete it. I think... I don't even remember what Clippy was, except annoying.

Yeah, well, some people actually liked it. I would hear those stories.

They miss it? Why?

They miss the annoyance. They felt like there was an element of, somebody was there, and we're in it together, like a puppy that just doesn't get it. And in fact, they could have done it smarter, like a puppy: if, when you yelled at it or cursed at it, it had put its little ears back and its tail down and slunk off, probably people would have wanted it back. But instead, when you yelled at it, what did it do? It smiled, it winked, it danced. If somebody comes to my office and I yell at them and they start smiling, winking, and dancing, I'm like, I never want to see you again. So Bill Gates got a standing ovation when he said it was going away, because people were so ticked. It was
so emotionally unintelligent, right? It was intelligent about whether you were writing a letter, and what kind of help you needed in that context; it was completely unintelligent about, hey, if you're annoying your customer, don't smile in their face while you do it. That kind of mismatch was something the developers just didn't think about. And intelligence at the time was really all about math and language and chess and games, problems that could be pretty well defined. Social-emotional interaction is much more complex than chess or Go or any of the games people are trying to solve, and understanding it required skills that most people in computer science were actually lacking.

Well, let's talk about computer science. Have things gotten better since then, since the message, since you really launched the field with a lot of research work in the space? I still find, and I'm sorry to say it, that as a person who, like yourself, is deeply passionate about human beings, and yet is in computer science, there still seems to be a lack of empathy in us computer scientists.

Well, let's just say there's a lot more variety among computer scientists these days. Computer scientists are a much more diverse group today than they were twenty-five years ago, and that's good. We need all kinds of people to become computer scientists, so that computer science reflects more of what society needs, and, you know, there's brilliance in every personality type, so it need not be limited to people who prefer computers to other people.

How hard do you think it is to recognize emotion, or to create a deeply emotionally intelligent interaction? Has it gotten easier or harder as you've explored it further? And how far away are we from cracking this, if you think of the Turing test, and solving it for emotional intelligence?

I think it is as difficult as I thought it was going to be. I think my
prediction of its difficulty is spot-on. The time estimates are always hard, because they're always a function of society's love and hate of a particular topic. If society gets excited, you get thousands of researchers working on a certain application, and that application gets solved really quickly. But general intelligence, the computer's complete lack of awareness of what it's doing, the fact that it's not conscious, that there are no signs of it becoming conscious, that it doesn't read between the lines, those kinds of things that we have to teach it explicitly, while people pick them up implicitly, we don't see that changing yet. There aren't breakthroughs yet that lead us to believe it's going to go any faster, which means it's still going to be stuck with a lot of limitations, where it's probably only going to do the right thing in very limited, narrow, pre-specified contexts, where we can prescribe pretty much what's going to happen. So it's hard to predict the date, because when people don't work on something, the time to solve it is infinite; when everybody works on it, you get a nice piece of it well solved in a short amount of time. But I actually think there's a more important issue right now than the difficulty, and it's causing some of us to put the brakes on a little bit. Usually we're all just, step on the gas, let's go faster. This is causing us to pull back and put the brakes on, and that's the way some of this technology is being used in places like China right now. That worries me so deeply that it's causing me to pull back on a lot of the things we could be doing, and to try to get the community to think a little bit more about: okay, if we're going to go forward with this, how can we do it in a way that puts in place safeguards that protect people?

The technology we're referring to is when a computer senses the human
being, like the human face?

Yeah, right.

So there are a lot of exciting things there, like forming a deep connection with the human being. What are your worries about how that could go wrong? Is it in terms of privacy? Is it in terms of other kinds of harm?

So, here in the US, if I'm watching a video of, say, a political leader, we're quite free, as we all know, to criticize even the president of the United States. That's not a shocking thing here; it happens about every five seconds. But in China, what happens if you criticize the leader of the government? And so people are very careful not to do that. However, what happens if you're simply watching a video, and you make a facial expression that shows a little bit of skepticism? Well, here you're completely free to do that; in fact, you're free to fly off the handle and say anything you want. Usually. I mean, there are some restrictions; when an athlete does this as part of a national broadcast, maybe the teams get a little unhappy about them picking that forum to do it, right? But that's more a question of judgment. We have these freedoms. In places that don't have those freedoms, what if our technology can read your underlying affective state? What if our technology can read it even without contact? What if our technology can read it without your prior consent? Here in the US, at my first company, Affectiva, we have worked super hard to turn away money and opportunities that try to read people's affect without their prior informed consent. Even for the software that is licensable, you have to sign things saying you will only use it in certain ways, which essentially is: get people's buy-in, right? Don't do this without people agreeing to it. There are other countries where they're not interested in people's buy-in. They're just going to use it, they're going to inflict it on you, and if you don't like it, you'd better not scowl in the direction of any
sensors. So let me just comment on a small tangent: do you know the idea of adversarial examples and deepfakes and so on? Yeah. What you bring up is actually — in one sense, deepfakes provide a comforting protection: you can no longer really trust that the video of your face was legitimate, and therefore you always have an escape clause. If a government — if a stable, balanced, ethical government — is trying to accuse you of something, at least you have protection; you could say it was fake news, as is a popular term now. Speaking of it, we know how to go into the video and see, for example, your heart rate and respiration, and whether or not they've been tampered with, and we also can put fake heart rate and respiration in your video now too. We decided we needed to do that after we developed a way to extract it; we decided we also needed a way to jam it. And so the fact that we took time to do that other step, right, that was time that I wasn't spending making the machine more affectively intelligent, and there's a choice in how we spend our time, which is now being swayed a little bit less by this goal and a little bit more by concern about what's happening in society and what kind of future we want to build. And as we step back and say, okay, we don't just build AI to build AI, to make Elon Musk more money or to make Amazon's Jeff Bezos more money — gosh, you know, that's the wrong ethic. Why are we building it? What is the point of building AI? It used to be it was driven by researchers in academia to get papers published and to make a career for themselves and to do something cool, right? Like, cuz maybe it could be done. Now we realize that this is enabling rich people to get vastly richer, the poor — the divide is even larger, and is that the kind of future that we want? Maybe we want to think about, maybe we want to rethink AI, maybe we want to rethink the problems in society that are causing the greatest
inequity, and rethink how to build AI that's not about a general intelligence, but that's about extending the intelligence and capability of the have-nots, so that we close these gaps in society. Do you hope that kind of stepping on the brakes happens organically? Because I think still the majority of the force behind AI is the desire to publish papers, is to make money, without thinking about the why. Do you hope it happens organically? Is there room for regulation? Yeah, great questions. I prefer — you know, they talk about the carrot versus the stick; I definitely prefer the carrot to the stick. And in our free world, there's only so much stick, right? You'll find a way around it. I generally think less regulation is better. That said, even though my position is classically carrot, no stick, no regulation, I think we do need some regulations in this space. I do think we need regulations around protecting people's data — that you own your data, not Amazon, not Google. I would like to see people own their own data. I would also like to see the regulations that we have right now around lie detection being extended to emotion recognition in general. Right now you can't use a lie detector on an employee, or on a candidate when you're interviewing them for a job. I think similarly we need to put in place protections around reading people's emotions without their consent, and in certain cases, like characterizing them for a job and other opportunities. I also think that when we're reading emotion that's predictive around mental health, that should — even though it's not medical data — that should get the kinds of protections that our medical data gets. What most people don't know yet is, right now, with your smartphone use, and if you're wearing a sensor and you want to learn about your stress and your sleep and your physical activity and how much you're using your phone and your social interaction, all of that non-medical data, when we put it together
with machine learning — now called AI, even though the founders of AI wouldn't have called it that — that capability can not only tell that you're calm right now, or that you're getting a little stressed, but it can also predict how you're likely to be tomorrow: if you're likely to be sick or healthy, happy or sad, stressed or calm, especially when you're tracking data over time, especially when we're tracking a week of your data or more. Do you have an optimism towards — you know, a lot of people on our phones are worried about this camera that's looking at us. For the most part, on balance, are you optimistic about the benefits that can be brought from that camera that's looking at billions of us, or should we be more worried? I think we should be a little bit more worried about who's looking at us and listening to us. The device sitting on your countertop in your kitchen, whether it's Alexa or Google Home or Apple Siri, these devices want to listen, ostensibly to help us. And I think there are great people in these companies who do want to help people — let me not brand them all as bad. I'm a user of products from all of these companies; I'm naming all the A companies: Alphabet, Apple, Amazon. They are awfully big companies, right? They have incredible power. And what if China were to buy them, right? And suddenly all of that data were not part of free America, but all of that data were part of somebody who just wants to take over the world, and you submit to them. And guess what happens if you so much as smirk the wrong way when they say something that you don't like? Well, they have re-education camps, right? That's a nice word for them. By the way, they have a surplus of organs for people who have surgery these days; they don't have an organ donation problem, because they take your blood and they know you're a match, and the doctors are on record of taking organs from people who are perfectly healthy and not prisoners — they're just simply not the favored ones of the
government. And, you know, that's a pretty freaky, evil society, and we can use the word evil there. I was born in the Soviet Union; I can certainly connect to the worry that you're expressing. At the same time, probably both you and I — and you very much so — you know, there's an exciting possibility that you can have a deep connection with a machine. Yeah, yeah, right? So I have students who, when you list, you know, who do you most wish you could have lunch with or dinner with, say, "I don't like people, I just like computers." And one of them said to me once, when I had this party at my house, "I want you to know, this is my only social event of the year, my one." Okay? Now this is a brilliant machine learning person, right? And we need that kind of brilliance in machine learning, and I love that computer science welcomes people who love people, and people who are very awkward around people. I love that this is a field that anybody could join. We need all kinds of people, and you don't need to be a social person; I'm not trying to force people who don't like people to suddenly become social. At the same time, if most of the people building the AIs of the future are the kind of people who don't like people, we've got a little bit of a problem. Hold on a second, so let me push back on that. So don't you think a large percentage of the world — you know, there's loneliness, there is a huge problem with loneliness, and it's growing, and so there's a longing for connection. If you're lonely, you're part of a big and growing group. Yes. So we're in it together, I guess. If you're lonely, join the group — you're not alone. That's a good line. But do you think there's — you talked about some worry, but do you think there's an exciting possibility that something like Alexa, when these kinds of tools, can alleviate that loneliness in a way that other humans can't? Yeah, yeah, definitely. I mean, a great book can kind of alleviate loneliness, right, because you just get
sucked into this amazing story and you can't wait to go spend time with that character, right? And they're not a human character — there is a human behind it — but yeah, it can be an incredibly delightful way to pass the hours, and it can meet needs. Even, you know, I don't read those trashy romance books, but somebody does, right? And what are they getting from this? Well, probably some of that feeling of being there, right — being there in that social moment, that romantic moment, or connecting with somebody. I've had a similar experience reading some science fiction books and connecting with the characters — Orson Scott Card, you know, just amazing writing in Ender's Game and Speaker for the Dead (terrible title) — but those kinds of books that pull you into a character, and you feel like you're connected, you feel very social, it's very connecting, even though it's not responding to you. And a computer, of course, can respond to you, so it can deepen it, right? You can have a very deep connection, much more than the movie Her plays up, right? Much more. I mean, the movie Her is already a pretty deep connection, right? Well, but it's just a movie, right? It's scripted, it's just, you know — but I mean, there can be a real interaction, where the character can learn and you can learn. You could imagine it not just being you and one character; you can imagine a group of characters, you can imagine a group of people and characters, humans and AIs, connecting — where maybe a few people can't sort of be friends with everybody, but a few people and their AIs can befriend more people. There can be an extended human intelligence in there, where each human can connect with more people that way. But it's still very limited — but there are, what I mean is, there are many more possibilities than what's in that movie. So there's a tension here: you expressed a really serious concern about privacy, about how governments can misuse the information, and there's a possibility of this connection.
So let's look at Alexa. So personal assistants, for the most part, as far as I'm aware, they ignore your emotion. They ignore even the context or the existence of you, the intricate, beautiful, complex aspects of who you are, except maybe aspects of your voice that help with speech recognition. Do you think they should move towards trying to understand your emotion? All of these companies are very interested in understanding human emotion. More people are telling Siri every day that they want to kill themselves. Apple wants to know the difference between if a person is really suicidal versus if a person is just kind of fooling around with Siri, right? The words may be the same; the tone of voice, and what surrounds those words, is pivotal to understand if they should respond in a very serious way, bring help to that person, or if they should kind of jokingly tease back, "Ah, you just want to, you know, sell me on something else," right? Like, how do you respond when somebody says that? Well, you do want to err on the side of being careful and taking it seriously. People want to know if the person is happy or stressed, in part — well, let me give you an altruistic reason and a business, profit-motivated reason, and there are people and companies that operate on both principles. The altruistic people really care about their customers and really care about helping you feel a little better at the end of the day, and it would just make those people happy if they knew that they made your life better — if you came home stressed, and after talking with their product, you felt better. There are other people who maybe have studied the way affect affects decision-making and the prices people pay. They know — like the work of Jennifer Lerner on heartstrings and purse strings — if we manipulate you into a slightly sadder mood, you'll pay more, right? You'll pay more to change your situation, you'll pay more for something you
don't even need, to make yourself feel better. So, you know, if they sound a little sad, maybe I don't want to cheer them up; maybe first I want to help them get something — a little shopping therapy, right? — that helps them. Which is really difficult for a company that's primarily funded on advertisement — so they're encouraged to offer you products — or primarily funded by you buying things from their store. So I think, you know, maybe we need regulation in the future to put a little bit of a wall between these agents that have access to our emotion and agents that want to sell us stuff; maybe there needs to be a little bit more of a firewall in between those. So maybe digging in a little bit on the interaction with Alexa: you mentioned, of course, a really serious concern about recognizing emotion if somebody is speaking of suicide or depression and so on, but what about the actual interaction itself? Do you think — you mentioned Clippy being annoying — what is the objective function we're trying to optimize? Is it minimize annoyingness, or maximize happiness, or both? If we look at human-to-human relations, I think that push and pull, the tension, the dance, you know, the annoying, the flaws — that's what makes it fun. So is there room for a little push and pull? Think of kids sparring, right? You know, I see my sons, and one of them wants to provoke the other to be upset, and that's fun, and it's actually healthy to learn where your limits are, to learn how to self-regulate. You can imagine a game where it's trying to make you mad and you're trying to show self-control, and so if we're doing an AI-human interaction that's helping build resilience and self-control — whether it's to learn how to not be a bully, or how to turn the other cheek, or how to deal with an abusive person in your life — then you might need an AI that pushes your buttons, right? But in general, do you want an AI that pushes
your buttons? Mmm, probably depends on your personality. I want one that's respectful, that is there to serve me, and that is there to extend my ability to do things. I'm not looking for a rival; I'm looking for a helper, and that's the kind of AI I'd put my money on. Your sense is that, for the majority of people in the world, in order to have a rich experience, that's what they're looking for as well. So they're not looking — if you look at the movie Her, spoiler alert, I believe the program, the woman in the movie Her, leaves the person for somebody else, right? Because they don't want to be dating anymore, right? Like — well, to use your sense: if Alexa said, "You know what, I've actually had enough of you for a while, so I'm gonna shut myself off," you don't see that as — I'd say, "You're trash, cuz I paid for you," right? Okay, yeah. We've got to remember — and this is where this blending of human and AI as if we're equals is really deceptive — because AI is something, at the end of the day, that my students and I are making in the lab, and we're choosing what it's allowed to say, when it's allowed to speak, what it's allowed to listen to, what it's allowed to act on given the inputs that we choose to expose it to, what outputs it's allowed to have. It's all something made by a human, and if we want to make something that makes our lives miserable, fine — I wouldn't invest in it as a business, you know, unless it's just there for self-regulation training. But I think we need to think about what kind of future we want, and actually, your question — I really like the "what is the objective function?" Is it to calm people down sometimes? Is it to always make people happy and calm them down? Well, there was a book about that, right? Brave New World: make everybody happy, take your soma if you're unhappy, take your happy pill, and if you refuse to take your happy pill, well, we'll threaten you by sending you to Iceland to live there. I lived in Iceland three years; it's a great place — don't think you're
scaring me so much — a little TV commercial there. I was a child there for a few years, this wonderful place, so that part of the book never scared me. But really, like, do we want AI to manipulate us into submission, into making us happy? Well, if you are, you know, a power-obsessed, sick dictator, an individual who only wants to control other people to get your jollies in life, then yeah, you want to use AI to extend your power and your scale, to force people into submission. If you believe that the human race is better off being given freedom and the opportunity to do things that might surprise you, then you want to use AI to extend people's ability, to build AI that extends human intelligence, that empowers the weak and helps balance the power between the weak and the strong, not that gives more power to the strong. So in this process of empowering people and sensing people, what is your sense on emotion, in terms of recognizing emotion — the difference between emotion that is shown and emotion that is felt? So, emotion that is expressed on the surface through your face, your body, and various other things, and what's actually going on deep inside, on the biological level, on the neuroscience level, or some kind of cognitive level. Yeah — whoa, no easy questions here. Boy, yeah. I'm sure there's no definitive answer, but what's your sense? How far can we get by just looking at the face? We're very limited when we just look at the face, but we can get further than most people think we can get. People think, hey, I have a great poker face, therefore all you're ever gonna get from me is neutral. Well, that's naive. We can read, with the ordinary camera on your laptop or on your phone, from a neutral face, if your heart is racing. We can read from a neutral face if your breathing is becoming irregular and showing signs of stress. We can read, under some conditions — and maybe I won't give you details on how — if your heart rate variability power is changing. That could
be a sign of stress, even when your heart rate is not necessarily accelerating. Sorry — from physiological sensing, from the face? From the color changes that you cannot even see, but the camera can see. That's amazing. So you can get a lot of signal — so we get things people can't see, using a regular camera, and from that we can tell things about your stress. So if you are just sitting there with a blank face, thinking nobody can read my emotion — well, you're wrong, right? So that's really interesting, but that's from sort of visual information from the face. That's almost like cheating your way to the physiological state of the body, by being very clever with signal processing. So that's really impressive. But if you just look at the stuff we humans can see — the smiles, the smirks, the subtle movements all over the face — then you can hide that on your face for a limited amount of time. Now, if you're just going in for a brief interview and you're hiding it, that's pretty easy for most people. If you are, however, surveilled constantly, everywhere you go, then it's gonna say, gee, you know, Lex used to smile a lot, and now I'm not seeing so many smiles, and Roz used to laugh a lot and smile a lot very spontaneously, and now I'm only seeing these not-so-spontaneous-looking smiles, and only when she's asked these questions. You know, that's something we could look at too. So now I have to be a little careful: when I say "we" — you think "we can't read your emotion," and "we can" — it's not that binary. What we're reading is more some physiological changes that relate to your activation. Now, that doesn't mean that we know everything about how you feel. In fact, we still know very little about how you feel. Your thoughts are still private, your nuanced feelings are still completely private — we can't read any of that. So there's some relief that we can't read that; even brain imaging can't read that, wearables can't read that. However, as we read your body state
changes, and we know what's going on in your environment, and we look at patterns of those over time, we can start to make some inferences about what you might be feeling. And that is where it's not just the momentary feeling, but it's more your stance towards things — and that could actually be a little bit more scary with certain kinds of governmental, control-freak people who want to know more about, are you on their team or are you not, and getting that information over time. So you're saying there's a lot of things by looking at the change over time. Yeah. So you've done a lot of exciting work, both in computer vision and physiological sensing, like wearables. What do you think is the best modality — what's the best window into the emotional soul? Is it the face? Is it the voice? Is it the body? You want to know — everything is informative. Everything we do is informative. For health and well-being and things like that, I find the wearables measuring physiological signals are the best for health-based stuff. So here I'm gonna answer empirically, with data and studies we've been doing. Studies are currently running with lots of different kinds of people, but where we've published data, and I can speak publicly to it, the data are limited right now to New England college students. So that's a small group. Among New England college students, when they are wearing a wearable, like the Empatica Embrace here, that's measuring skin conductance, movement, temperature — and when they are using a smartphone that is collecting the time of day of when they're texting, who they're texting, their movement around, their GPS, the weather information based upon their location — and when it's using machine learning and putting all of that together, and looking not just at right now, but looking at your rhythm of behaviors over about a week: when we look at that, we are very accurate at forecasting tomorrow's stress, mood, and happy-sad
mood, and health. And when we look at which pieces of that are most useful: first of all, if you have all the pieces, you get the best results. If you have only the wearable, you get the next best results, and that's still better than 80% accurate at forecasting tomorrow's levels. That's exciting, because the wearable stuff, with physiological information — it feels like it violates privacy less than the non-contact, face-based methods. Yeah, it's interesting. I think what people sometimes don't — you know, in the early days people would say, oh, wearing something or giving blood is invasive, right, whereas a camera is less invasive because it's not touching you. I think, on the contrary, the things that are not touching you are maybe the scariest, because you don't know when they're on or off, and you don't know who's behind it, right? A wearable, depending upon what's happening to the data on it — if it's just stored locally, or if it's streaming, and what it is being attached to — in a sense, you have the most control over it, because it's also very easy to just take it off. Take it off — right now it's not sensing me. So if I'm uncomfortable with what it's sensing, now I'm free, right? If I'm comfortable with what it's sensing — and I happen to know everything about this one and what it's doing with the data, so I'm quite comfortable with it — then I have control, I'm comfortable. Control is one of the biggest factors for an individual in reducing their stress. If I have control over it, if I know all there is to know about it, then my stress is a lot lower, and I'm making an informed choice about whether to wear it or not, or when to wear it or not. I want to wear it sometimes, maybe not others, right? So I'm with you — that control, even if you had the ability to turn it off, is really important, and maybe, you know, if there's regulations, maybe that's number one to protect: people's ability — make it as easy to opt out as to opt in.
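The earlier claim about reading a racing heart from color changes the camera can see but the eye cannot can be sketched in miniature: average a color channel over the face region frame by frame, then find the dominant frequency in the plausible human pulse band. This is a toy illustration only, run here on a synthetic signal — the frame rate, band limits, and signal are all assumptions, not the actual method discussed:

```python
import numpy as np

def estimate_bpm(green_means, fps):
    """Estimate pulse rate from per-frame mean green-channel values.

    Remove the mean (the steady skin tone), then pick the dominant FFT
    frequency inside the plausible human pulse band of 0.7-4 Hz
    (42-240 beats per minute)."""
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_hz = freqs[band][np.argmax(power[band])]
    return peak_hz * 60.0  # Hz -> beats per minute

# Stand-in for ten seconds of face video at 30 fps: a faint 1.2 Hz
# (72 bpm) oscillation buried in sensor noise, invisible to the eye.
fps = 30
t = np.arange(fps * 10) / fps
rng = np.random.default_rng(0)
green = 100 + 0.05 * np.sin(2 * np.pi * 1.2 * t) + 0.01 * rng.standard_normal(t.size)
print(estimate_bpm(green, fps))
```

A real pipeline would also need face tracking, detrending against motion and lighting, and much longer windows; the point is only that a periodic color fluctuation far below visual threshold is easy for a camera plus signal processing to recover.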
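The forecasting pipeline just described — collapsing about a week of wearable and phone signals into rhythm features and predicting tomorrow's state — might look roughly like this minimal sketch. The feature names, the synthetic labels, and the plain logistic-regression stand-in are all invented for illustration; the published models and features are more sophisticated than this:

```python
import numpy as np

def weekly_features(days):
    """Collapse 7 days of raw per-day signals into one feature vector.

    Rows are days; columns are hypothetical raw signals (say, mean skin
    conductance, hours of sleep, screen-time hours). Keep both the weekly
    level and the variability, since the rhythm over about a week matters."""
    days = np.asarray(days, dtype=float)
    return np.concatenate([days.mean(axis=0), days.std(axis=0)])

def fit_logistic(X, y, lr=0.5, steps=3000):
    """Tiny plain-numpy logistic regression as a stand-in classifier."""
    Xb = np.column_stack([np.ones(len(X)), X])  # prepend a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)       # gradient step
    return w

def predict(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return (Xb @ w > 0).astype(int)

# Synthetic cohort: 200 person-weeks, 3 raw signals per day.
rng = np.random.default_rng(1)
raw = rng.normal([4.0, 7.0, 3.0], 1.0, size=(200, 7, 3))
X = np.array([weekly_features(week) for week in raw])
# Invented ground truth: "stressed tomorrow" when mean screen time
# outruns mean sleep by more than the cohort's typical margin.
y = (X[:, 2] - X[:, 1] > -4.0).astype(int)
X = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize features
w = fit_logistic(X, y)
accuracy = (predict(w, X) == y).mean()
```

Because the synthetic label is a simple linear rule over the weekly features, even this tiny model recovers it; the real forecasting problem is far noisier, which is what makes the better-than-80% figure notable.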
Right. So you've studied a bit of neuroscience as well. How has looking at our own minds — the biological stuff, the neurobiological, the neuroscience, the signals in our brain — helped you understand the problem and the approach of affective computing? So originally I was a computer architect. I was building hardware and computer designs, and I wanted to build ones that worked like the brain, so I've been studying the brain as long as I've been studying how to build computers. Have you figured out anything yet? It's so amazing. You know, they used to think, like, oh, if you remove this chunk of the brain and you find this function goes away, well, that's the part of the brain that did it. And then later they realize, if you remove this other chunk of the brain, that function comes back, and, oh no, we really don't understand it. Brains are so interesting, and changing all the time, and able to change in ways that will probably continue to surprise us. When we were measuring stress, you may know the story where we found an unusually big skin conductance pattern on one wrist in one of our kids with autism. In trying to figure out how on earth you could be stressed on one wrist and not the other — like, how can you get sweaty on one wrist, right? When you get stressed, with that sympathetic fight-or-flight response, you kind of should sweat more in some places than others, but not more on one wrist than the other. That didn't make any sense. We learned that what had actually happened was a part of his brain had unusual electrical activity, and that caused an unusually large sweat response on one wrist and not the other. And since then we've learned that seizures cause this unusual electrical activity, and depending where the seizure is — if it's in one place and it's staying there — you can have a big electrical response that we can pick up with a wearable at one part of the body. You can also have a seizure that spreads over the whole brain, a generalized grand mal seizure, and
that response spreads, and we can pick it up pretty much anywhere. As we learned this, and then later built Embrace — that's now FDA-cleared for seizure detection — we have also built relationships with some of the most amazing doctors in the world, who not only help people with unusual brain activity or epilepsy, but some of them are also surgeons, and they're going in and they're implanting electrodes, not just to momentarily read the strange patterns of brain activity that we'd like to see return to normal, but also to read out continuously what's happening in some of these deep regions of the brain during most of life, when these patients are not seizing. Most of the time they're not seizing; most of the time they're fine. And so we are now working on mapping those deep brain regions that you can't even usually get with EEG scalp electrodes, because the changes deep inside don't reach the surface. But, interestingly, when some of those regions are activated, we see a big skin conductance response. Who would have thunk it, right? Like, nothing here, but something here. In fact, right after seizures that we think are the most dangerous ones — that precede what's called SUDEP, sudden unexpected death in epilepsy — there's a period where the brain waves go flat, and it looks like the person's brain has stopped, but it hasn't. The activity has gone deep into a region that can make the cortical activity look flat, like a quick shutdown signal here. It can, unfortunately, cause breathing to stop if it progresses long enough. Before that happens, we see a big skin conductance response in the data that we have: the longer this flattening, the bigger our response here. So we have been trying to learn — you know, initially, like, why are we getting a big response here when there's nothing here? Well, it turns out there's something much deeper. So we can now go inside the brains of some of these individuals — fabulous people who usually aren't seizing — and get this data and start to map it. So that's active research that
we're doing right now with top medical partners. So this wearable sensor, this looking at skin conductance, can capture sort of the ripples of the complexity of what's going on in our brain. So with this little device, you have a hope that you can start to get the signal from the interesting things happening in the brain. Yeah, we've already published the strong correlations between the size of this response and the flattening that happens afterwards — and, unfortunately, also in a real SUDEP case, where the patient died, because — well, we don't know why. We don't know if, had somebody been there, it would have definitely been prevented, but we know that most SUDEPs happen when the person's alone. SUDEP is an acronym, S-U-D-E-P, and it stands for sudden unexpected death in epilepsy — the number two cause of years of life lost, actually, among all neurological disorders: stroke is number one, SUDEP is number two. But most people haven't heard of it. Actually, I'll plug my TED talk — it's on the front page of TED right now — that talks about this, and we hope to change that. I hope everybody who's heard of SIDS and stroke will now hear of SUDEP, because we think in most cases it's preventable, if people take their meds and aren't alone when they have a seizure. Not guaranteed to be preventable — there are some exceptions — but we think most cases probably are. So you have this Embrace now, in the version two wristband, right, for epilepsy management. That's the one that's FDA-approved? Yes — and which is kind of a weird word. Yes, that's okay; it essentially means it's approved for marketing. Got it. Just a side note: how difficult is that to do? It's essentially getting FDA approval for computer science technology. It's so agonizing. It's much harder than publishing multiple papers in top medical journals. Yeah, we published, peer-reviewed, in the top medical journal, Neurology, the best results, and that's not good enough for the FDA. So if we look at the peer review of medical journals, there's flaws, there's strengths — does the FDA
approval process — how does it compare to the peer-review process? Does it have strengths? I'd take peer review over the FDA any day. But is that a good thing? Is that a good thing for the FDA? Are you saying it stops some amazing technology from getting through? Yeah, it does. The FDA performs a very important, good role in keeping people safe. They put you through tons of safety testing, and that's wonderful, and that's great — I'm all in favor of the safety testing. Sometimes they put you through additional testing that they don't have to explain why they put you through, and you don't understand why you're going through it, and it doesn't make sense, and that's very frustrating. Maybe they have really good reasons; it would just do people a service to articulate those reasons, be more transparent. So as part of Empatica, you have sensors. What kind of problems can we crack, what kind of things — from seizures to autism to, I think I've heard you mention, depression — what kind of things can we alleviate, can we detect? What's your hope for how we can make the world a better place with this wearable tech? I would really like to see my fellow brilliant researchers step back and say, what are the really hard problems that we don't know how to solve, that come from people maybe we don't even see in our normal life, because they're living in the poorer places? They're stuck on the bus; they can't even afford the Uber or the Lyft or the data plan or all these other wonderful things we have that we keep improving on. Meanwhile, there's all these folks left behind in the world, and they're struggling with horrible diseases — with depression, with epilepsy, with diabetes, with just awful stuff — that maybe a little more time and attention hanging out with them, and learning: what are their challenges in life, what are their needs, how do we help them have job skills, how do we help them have hope and a future and a chance to have the
great life that so many of us building technology have? And then, how would that reshape the kinds of AI that we build? How would that reshape the new apps that we build? Or maybe we need to focus on how to make things more low-cost and green, instead of thousand-dollar phones. I mean, come on — why can't we be thinking more about things that do more with less for these folks? Quality of life is not related to the cost of your phone. It's been shown that above about seventy-five thousand dollars of income, happiness is the same. However, I can tell you, you get a lot of happiness from helping other people — a lot more than seventy-five thousand dollars buys. So how do we connect up the people who have real needs with the people who have the ability to build the future, and build the kind of future that truly improves the lives of all the people that are currently being left behind? So let me return just briefly to a point in the movie Her. You've said so much of the benefit from making our technology more empathetic to us human beings would make them better tools, empower us, make our lives better. If we look farther into the future, do you think we'll ever create an AI system that we can fall in love with, and that loves us back, on a level that is similar to human-to-human interaction, like in the movie Her or beyond? I think we can simulate it in ways that could, you know, sustain engagement for a while. Would it be as good as another person? I don't think so — if you're used to good people. Now, if you've just grown up with nothing but abuse and you can't stand human beings, can we do something that helps you there, that gives you something through a machine? Yeah. But that's a pretty low bar, right, if you've only encountered pretty awful people. If you've encountered wonderful, amazing people, we're nowhere near building anything like that, and I would not bet on
building it I would bet instead on building the kinds of AI that kind of helps raise all boats that helps all people be better people helps all people figure out if they're getting sick tomorrow and helps give them what they need to stay well tomorrow that's the kind of AI I want to build that improves human lives not the kind of AI that just walks on The Tonight Show and people go wow look how smart that is you know and then it goes back in a box you know so on that point if we continue looking a little bit into the future do you think an AI that's empathetic and does improve our lives needs to have a physical presence a body and even let me cautiously say the C word consciousness and even fear of mortality so some of those human characteristics do you think it needs to have those aspects or can it remain simply a machine learning tool that learns from data of behavior that learns to make us feel better based on previous patterns or does it need those elements of consciousness it depends on your goals if you're making a movie it needs a body it needs a gorgeous body it needs to act like it has consciousness it needs to act like it has emotion right because that's what sells that's what's gonna get me to show up and enjoy the movie okay in real life does it need all that well if you've read Orson Scott Card's Ender's Game or Speaker for the Dead you know it could just be like a little voice in your earring right and you could have an intimate relationship and it could get to know you and it doesn't need to be a robot but that doesn't make as compelling of a movie right I mean we already think it's kind of weird when a guy looks like he's talking to himself on the train you know even though it's earbuds so embodied is more powerful when you compare interactions with an embodied robot versus a video of a robot versus no robot the robot is more engaging the robot gets our attention more the robot when you walk in your
house is more likely to get you to remember to do the things that you asked it to do because it's kind of got a physical presence you can avoid it if you don't like it it can see you're avoiding it there's a lot of power to being embodied there will be embodied AIs they have great power and opportunity and potential there will also be AIs that aren't embodied that are just little software assistants that help us with different things that may get to know things about us will they be conscious there will be attempts to program them to make them appear to be conscious we can already write programs that make it look like what do you mean of course I'm aware that you're there right I mean it's trivial to say stuff like that it's easy to fool people but does it actually have conscious experience like we do nobody has a clue how to do that yet that seems to be something that is beyond what any of us knows how to build now will it have to have that I think you can get pretty far with a lot of stuff without it will we accord it rights well that's more a political game than it is a question of real consciousness yeah can you go to jail for turning off Alexa is the question for an election maybe a few decades from now well Sophia the robot's already been given rights as a citizen in Saudi Arabia right even before women had full rights and then the robot was still put back in the box to be shipped to the next place where it would get a paid appearance right yeah dark and almost comedic if not absurd so I've heard you speak about your journey in finding faith and how you discovered some wisdom about life and beyond from reading the Bible and you've said that scientists often assume that nothing exists beyond what can be currently measured materialism and scientism yes in some sense this assumption enables the near-term scientific method assuming that we can uncover the mysteries of this world by the mechanisms of measurement that we currently have
but we easily forget that we've made this assumption so what do you think we missed out on by making that assumption hmm it's fine to limit the scientific method to things we can measure and reason about and reproduce that's fine I think we have to recognize that sometimes we scientists also believe in things that happened historically you know like I believe the Holocaust happened I can't prove events from past history scientifically you prove them with historical evidence right with the impact they had on people with eyewitness testimony and things like that so a good thinker recognizes that science is one of many ways to get knowledge it's not the only way and there's been some really bad philosophy and bad thinking recently you can call it scientism where people say science is the only way to get to truth and it's not it just isn't there are other ways that work also like knowledge of love with someone you don't prove your love through science right so history philosophy love a lot of other things in life show us that there's more ways to gain knowledge and truth if you're willing to believe there is such a thing and I believe there is than science I do I am a scientist however and in my science I do limit my science to the things that the scientific method can do but I recognize that it's myopic to say that that's all there is right just like you listed there's all the why questions and really if we're being honest with ourselves the percent of what we really know is basically zero relative to the full mystery in terms of measure theory a set of measure zero if I have a finite amount of knowledge which I do so you said that you believe in truth so let me ask that old question what do you think this thing is all about life on Earth life the universe and everything that was Douglas Adams yeah 42 my favorite number my street address my husband and I right yes the exact same number for our house we got to pick it
there's a reason we picked 42 yeah so is it just 42 or do you have other words that you can put around it well I think there's a grand adventure and I think this life is a part of it I think there's a lot more to it than meets the eye and the heart and the mind and the soul here I think we see but through a glass dimly in this life we see only a part of all there is to know if people haven't read the Bible they should if they consider themselves educated and you could read Proverbs and find immense wisdom in there that cannot be scientifically proven but when you read it there's something in you like a musician knows when the instrument's played right and it's beautiful there's something in you that comes alive and knows that there's a truth there that like your strings are being plucked by the master instead of by me right when I pluck it but probably when you play it it sounds spectacular right and when you encounter those truths there's something in you that sings and knows that there is more than what I can prove mathematically or program a computer to do don't get me wrong the math is gorgeous the computer programming can be brilliant it's inspiring right we want to do more none of this squashes my desire to do science or to get knowledge through science I'm not dissing the science at all I grow even more in awe of what the science can do because I'm more in awe of all there is we don't know and really at the heart of science you have to have a belief that there's truth that there's something greater to be discovered and some scientists may not want to use the faith word but it's faith that drives us to do science it's faith that there is truth that there's something to know that we don't know that it's worth knowing that it's worth working hard and that there is meaning that there is such a thing as meaning which by the way science can't prove either we have to kind of start with some assumptions that there's
things like truth and meaning and these are really questions philosophers own right this is the space of philosophers and theologians at some level so when people claim that science will tell you all truth there's a name for that it's its own kind of faith it's scientism and it's very myopic yeah there's a much bigger world out there to be explored in ways that science may not at least for now allow us to explore yeah and there's meaning and purpose and hope and joy and love and all these awesome things that make it all worthwhile too I don't think there's a better way to end it thank you so much for talking today pleasure great questions
Rajat Monga: TensorFlow | Lex Fridman Podcast #22
the following is a conversation with Rajat Monga he's an engineering director of Google leading the TensorFlow team TensorFlow is an open source library at the center of much of the work going on in the world of deep learning both the cutting-edge research and the large-scale application of learning-based approaches but it's quickly becoming much more than a software library it's now an ecosystem of tools for the deployment of machine learning in the cloud on the phone in the browser on both generic and specialized hardware TPU GPU and so on plus there's a big emphasis on growing a passionate community of developers Rajat Jeff Dean and a large team of engineers at Google Brain are working to define the future of machine learning with TensorFlow 2.0 which is now in alpha I think the decision to open-source TensorFlow is a definitive moment in the tech industry it showed that open innovation can be successful and inspired many companies to open-source their code to publish and in general engage in the open exchange of ideas this conversation is part of the artificial intelligence podcast if you enjoy it subscribe on youtube itunes or simply connect with me on Twitter at Lex Fridman spelled F-R-I-D and now here's my conversation with Rajat Monga you were involved with Google Brain since its start in 2011 with Jeff Dean it started with DistBelief the proprietary machine learning library and turned into TensorFlow in 2014 the open source library so what were the early days of Google Brain like what were the goals the missions how do you even proceed forward once there's so much possibility before you it was interesting back then you know when I started or even just talking about it the idea of deep learning was interesting and intriguing in some ways it hadn't yet taken off but it held some promise it had shown some very promising early results I think the idea where Andrew and Jeff had started was what if we can take this what people are doing in
research and scale it to what Google has in terms of the compute power and also put that kind of data together what does it mean and so far the results had been if you scale the compute scale the data it does better and would that work and so that was the first year or two can we prove that out right and with DistBelief we started the first year and we got some early wins which is always great what were the wins like what were the ones where you were like there's some promise to this this is gonna be good I think the two early wins were one was speech where we collaborated very closely with the speech research team who was also getting interested in this and the other one was on images where we you know the cat paper as we call it that was covered by a lot of folks and the birth of Google Brain was around neural networks so it was deep learning from the very beginning that was the whole mission so in terms of scale what was the sort of dream of what this could become like were there echoes of this open-source TensorFlow community that might be brought in was there a sense of TPUs was there a sense of like machine learning is going to be at the core and the entire company is going to grow into that direction yeah I think so that was interesting like if I think back to 2012 or 2011 the first question was can we scale it and in the year or so we had started scaling it to hundreds and thousands of machines in fact we had some runs even going to 10,000 machines and all of those showed great promise in terms of machine learning at Google the good thing was Google's been doing machine learning for a long time deep learning was new but as we scaled this up we were pretty sure that yes this was possible and it was going to impact lots of things we started seeing real products wanting to use this again speech was the first there were image things that Photos came out of and many other products as well so that was exciting as we went on with that a couple of
years externally also academia started to you know there was lots of push on okay deep learning is interesting we should be doing more and so on and so by 2014 we were looking at okay this is a big thing it's gonna grow and not just internally externally as well yes maybe Google's ahead of where everybody is but there's a lot to do so we wanted this all to start to make sense and come together so the decision to open-source I was just chatting with Chris Lattner about this the decision to go open-source with TensorFlow I would say for me personally it seems to be one of the big seminal moments in all of software engineering ever I think that's when a large company like Google decides to take a large project that many lawyers might argue has a lot of IP and just decides to go open-source with it and in so doing lead the entire world in saying you know what open innovation is a pretty powerful thing and it's okay to do that that was I mean that's an incredible moment in time so do you remember those discussions happening whether open source should be happening what was that like I would say I think the initial idea came from Jeff who was a big proponent of this I think it came off of two big things one was research his view was we're a research group we were putting all our research out there we were building on others' research and we wanted to push the state of the art forward and part of that was to share the research that's how I think deep learning and machine learning have really grown so fast so the next step was okay would software help with that and it seemed like there were a few libraries out there Theano being one Torch being another and a few others but they were all done by academia and the level was significantly different the other one was from a software perspective Google had done lots of software that we used internally you know and we published papers and often there was an open source project that
came out of that where somebody else picked up that paper and implemented it and they were very successful back then it was like okay there's Hadoop which has come off of tech that we've built we know the tech we've built is better for a number of different reasons we've you know invested a lot of effort in it and it turns out we have Google Cloud and we are now not really providing our tech but we are saying okay we have Bigtable which was sort of the original thing and we're going to now provide HBase APIs on top of that which isn't as good but that's what everybody's used to so there's like can we make something that is better and really just helps the community in lots of ways but also helps push the right standard forward so how does cloud fit into that there's the TensorFlow open source library and how does the fact that you can use so many of the resources that Google provides in the cloud fit into that strategy so TensorFlow itself is open and you can use it anywhere right and we want to make sure that continues to be the case on Google Cloud we do make sure that there's lots of integrations with everything else and we want to make sure that it works really really well there so you're leading the TensorFlow effort can you tell me the history and the timeline of the TensorFlow project in terms of major design decisions so like the open source decision but really you know what to include and not there's this incredible ecosystem that I'd like to talk about there's all these parts but what are some sample moments that defined what TensorFlow eventually became through its I don't know if you're allowed to say history when it's just but in deep learning everything moves so fast that just a few years is already history yes yes so looking back we were building TensorFlow I guess we open-sourced it in November 2015 we started on it in summer of 2014 I guess and somewhere like three to six months later late 2014 by then we had decided that okay
there's a high likelihood we'll open source it so we started thinking about that and making sure we were heading down that path at that point by that point we had seen you know lots of different use cases at Google so there were things like okay yes you want to run at a large scale in the data center yes we need to support different kinds of hardware we had GPUs at that point we had our first TPU at that point or it was about to come out you know roughly around that time so the design sort of included those we had started to push on mobile so we were running models on mobile at that point people were customizing code so we wanted to make sure TensorFlow could support that as well so that sort of became part of the overall design when you say mobile you mean like pretty complicated algorithms running on the phone that's correct so then you have a model that you deploy on the phone and run it there right so already at that time there was the idea of running machine learning on the phone that's correct we already had a couple of products that were doing that by then and in those cases we had basically customized handcrafted code or some internal libraries that we were using so I was actually at Google during this time in a parallel I guess universe but we were using Theano and Caffe was there some degree to which you were bouncing ideas off like trying to see what Caffe was offering people trying to see what Theano was offering that you wanted to make sure you were delivering on whatever that is perhaps the Python part of things maybe did that influence any design decisions um totally so when we built DistBelief some of that was in parallel with some of these libraries coming up I mean Theano itself is older but we were building DistBelief focused on our internal thing because our systems were very different by the time we got to this we looked at a number of libraries that were out there Theano there were folks in the group who had experience with Torch with Lua
there were folks here who had seen Caffe I mean actually Yangqing was here as well there were one or two other libraries I think we looked at a number of things might even have looked at Chainer back then I'm trying to remember if it was there in fact we did discuss ideas around okay should we have a graph or not and so supporting all of these together was definitely you know there were key decisions that we wanted we had seen limitations in our prior DistBelief things a few of them were just in terms of research was moving so fast we wanted the flexibility and the hardware was changing fast we expected to change there so those probably were two things and yeah I think the flexibility in terms of being able to express all kinds of crazy things was definitely a big one then so on the graph decisions though with moving towards TensorFlow 2.0 there's more by default eager execution so sort of hiding the graph a little bit you know because it's less intuitive in terms of the way people develop and so on what was that discussion like in terms of using graphs it seemed kind of the Theano way did it seem like the obvious choice so I think where it came from was our like DistBelief had a graph-like thing as well much more simple it wasn't a general graph it was more like a straight-line you know thing more like what you might think of Caffe I guess in that sense but the graph was and we always cared about the production stuff like even with DistBelief we were deploying a whole bunch of stuff in production so the graph did come from that when we thought of okay should we do that in Python we experimented with some ideas where it looked a lot simpler to use but not having a graph meant okay how do you deploy now so that was probably what triggered the balance for us and eventually we ended up with a graph and I guess the question there is did you I mean the production seems to be the really good thing to focus on but did you even
anticipate the other side of it where there could be what is it what are the numbers something crazy forty-one million downloads yep I mean was that even like a possibility in your mind that it would be as popular as it became so I think we did see a need for this a lot from the research perspective and like the early days of deep learning in some ways 41 million no I don't think I imagined this number then it seemed like there's a potential future where lots more people would be doing this and how do we enable that I would say this kind of growth I probably started seeing somewhat after the open-sourcing where it was like okay you know deep learning is actually growing way faster for a lot of different reasons and we are in just the right place to push on that and leverage that and deliver on a lot of things that people want so what changed once it was open-sourced like with this incredible amount of attention from a global population of developers how did the project start changing I don't know if you actually remember it during those times but looking now there's really good documentation there's an ecosystem of tools there's a community there's a blog there's a YouTube channel now yeah it's very very community driven back then I guess was it the 0.1 version I think we called it 0.6 or 0.5 something like that what changed leading into 1.0 it's interesting you know I think we've gone through a few things there when we started out when we first came out people loved the documentation we had because it was just a huge step up from everything else because all of those were academic projects people doing you know they don't think about documentation I think what that changed was instead of deep learning being a research thing some people who were just developers could now suddenly take this out and do some interesting things with it right who had no clue what machine learning was before then and that I think really changed how things started to scale
up in some ways and pushed on it over the next few months as we looked at you know how do we stabilize things as we look at not just researchers now we want stability for people who want to apply things that's how we started planning for 1.0 and there are certain needs from that perspective and so again documentation comes up design more kinds of things to put that together and so that was exciting to get to a stage where more and more enterprises wanted to buy in and really get behind that and I think post 1.0 and you know with the next few releases that enterprise adoption also started to take off I would say between the initial release and 1.0 it was okay researchers of course then a lot of hobbyists and early-interest people excited about this who started to get on board and then over the 1.x releases lots of enterprises I imagine with anything that's you know below 1.0 an enterprise probably wants something that's stable exactly and do you have a sense now that TensorFlow is stable like it feels like deep learning in general is an extremely dynamic field so much is changing and TensorFlow has been growing incredibly do you have a sense of stability at the helm of it I know you're in the midst of it but yeah I think in the midst of it it's often easy to forget what an enterprise wants and what some of the people on that side want there are still people running models that are three years old four years old so Inception is still used by tons of people even ResNet-50 is what a couple of years old now or more but tons of people use that and they're fine they don't need the last couple of bits of performance or quality they want some stability and things that just work and so there is value in providing that kind of stability and in making it really simple because that allows a lot more people to access it and then there's the research crowd which wants okay they want to do
these crazy things exactly like you're saying right not just deep learning in the straight-up models that used to be there they want RNNs and even RNNs are maybe old there are transformers now and now it needs to combine with RL and GANs and so on so there's definitely that area like the boundary that's shifting and pushing the state of the art but I think there's more and more of the past that's much more stable and even stuff that was two three years old is very very usable by lots of people so that part makes it all easier so I imagine maybe you can correct me if I'm wrong one of the biggest use cases is essentially taking something like ResNet-50 and doing some kind of transfer learning on a very particular problem that you have it's basically probably what the majority of the world does and you want to make that as easy as possible that's right so I would say from the hobbyist perspective that's the most common case right in fact the apps on phones and stuff that you'll see the early ones that's the most common case I would say there are a couple of reasons for that one is that everybody talks about that it looks great on slides yeah that's a visual presentation you know exactly what enterprises want that is part of it but that's not the big thing enterprises really have data that they want to make predictions on what they used to do with the people who were doing ML was just regression models linear regression logistic regression linear models or maybe gradient boosted trees and so on some of them still benefit from deep learning but that's the bread and butter like the structured data and so on so depending on the audience you look at they're a little bit different and the best case enterprise probably just has a very large data set where deep learning can really shine that's correct right and then I think the other piece is that they want again with 2.0 at that developer
summit we put together there's the whole TensorFlow Extended piece which is the entire pipeline they care about stability across their entire thing they want simplicity across the entire thing I don't need to just train a model I need to do that every day again over and over again I wonder to what degree you have a role in I don't know so I teach a course on deep learning and I have people like lawyers come up to me and say you know when is machine learning gonna enter the legal realm the same thing in all kinds of disciplines immigration insurance often when I see what it boils down to is these companies are often a little bit old-school in the way they organize the data so the data is just not ready yet it's not digitized do you also find yourself being in the role of an evangelist for like let's get your data organized folks and then you'll get the big benefit of TensorFlow do you have those conversations so yeah I you know I get all kinds of questions there from okay what can I do what do I need to make this work to do we really need deep learning I mean there are all these things I already use this linear model why would this help I don't have enough data let's say you know or I want to use machine learning but I have no clue where to start so it's all the way from people just starting out to the experts who ask very specific things it's interesting is there a good answer it often boils down to digitizing data so whatever you want automated whatever data you want to make predictions based on you have to make sure that it's in an organized form and within the sense of the ecosystem you're now providing more and more data sets and more pre-trained models are you finding yourself also the organizer of data sets yes I think the TensorFlow Datasets that we just released that's definitely come up people want these data sets can we organize them and can we make that easier so that's
definitely one important thing the other related thing I would say is I often tell people you know what don't think of the fanciest thing the newest model that you see make something very basic work and then you can improve it there's just lots of things you can do in there yeah start with the basics one of the big things that made TensorFlow even more accessible was the appearance whenever that happened of Keras the Keras standard sort of outside of TensorFlow I think it was Keras on top of Theano at first only and then Keras became on top of TensorFlow do you know when Keras chose to also add a TensorFlow backend was it just the community that drove that initially do you know if there were discussions conversations yeah so François started the Keras project before he was at Google and the first backend was Theano I don't remember if that was after TensorFlow was created or way before and then at some point when TensorFlow started becoming popular there were enough similarities that he decided to okay create this interface and put TensorFlow as the backend I believe that might still have been before he joined Google so you know we weren't really talking about that he decided on his own and thought that was interesting and relevant to the community in fact I didn't find out about him being at Google until a few months after he was here he was working on some research ideas and doing Keras as his nights and weekends project and things so he wasn't like part of TensorFlow he had joined to do research and he's done some amazing research there are some papers on research he's done he's a great researcher as well and at some point we realized oh he's doing this good stuff people seem to like the API and he's right here so we talked to him and he said okay why don't I come over to your team and work with you for a quarter and let's make that integration happen and we talked to his manager and he said sure my
quarter's fine and that quarter's been something like two years now so Keras got integrated into TensorFlow like in a deep way yeah and now with TensorFlow 2.0 sort of Keras is kind of the recommended way for a beginner to interact with TensorFlow which makes that initial sort of transfer learning or the basic use cases even for an enterprise super simple right that's right so what was that decision like that seems like it's kind of a bold decision as well we did spend a lot of time thinking about that one we had a bunch of APIs some built by us there was a parallel layers API that we were building and when we decided to do Keras in parallel so there were like okay two things that we were looking at and the first thing we were trying to do is just have them look similar like be as integrated as possible share all of that stuff there were also like three other APIs that others had built over time because we didn't have a standard one but one of the messages that we kept hearing from the community was okay which one do we use and they kept saying like okay here's a model in this one and here's a model in this one which should I pick so we had to address that straight on with 2.0 the whole idea is you need to simplify you had to pick one based on where we were we were like okay let's see what do the people like and Keras was clearly one that lots of people loved there were lots of great things about it so we settled on that organically that's kind of the best way to do it and it was great because it was surprising to sort of bring in an outside I mean there was a feeling like Keras might be almost like a competitor in a certain kind of way to TensorFlow and in a sense it became an empowering element of TensorFlow that's right yeah it's interesting how you can put two things together which can align and in this case I think François the team and I you know a bunch of us
We have chatted, and I think we all want to see the same kind of things; we all care about making it easier for the huge set of developers out there, and that makes a difference. So Python has Guido van Rossum, who until recently held the position of Benevolent Dictator for Life, right? So does a huge, successful open source project like TensorFlow need one person who makes the final decisions? You did a pretty successful TensorFlow Dev Summit just now, the last couple of days; there's clearly a lot of different new features being incorporated into an amazing ecosystem, and so on. How are those design decisions made? Is there a BDFL in TensorFlow, or is it more distributed and organic? I think it's somewhat different, I would say. I've always been involved in the key design directions, but there are lots of things that are distributed, where a number of people, Martin Wicke being one, have really driven a lot of our open source stuff, a lot of the APIs in there; there are a number of other people who have pushed and been responsible for different parts of it. We do have regular design reviews. Over the last year we've really spent a lot of time opening up to the community and adding transparency; we're setting more processes in place, so RFCs, special interest groups, to really grow that community and scale it. At the scale that ecosystem is at, I don't think we could manage with me as the sole decision-maker. Yeah. So the growth of that ecosystem, maybe you can talk about that a little bit. First of all, when Andrej Karpathy first came out with ConvNetJS, the fact that you could train a neural network in the browser, in JavaScript, was incredible. Yep. So now TensorFlow.js is really making that a serious, like, a legit way to operate, whether it's in the back end or the front end. Then there's TensorFlow Extended, like you mentioned; there's TensorFlow Lite for mobile; and all of it, as far as I can tell, is really
converging towards being able to, you know, save models in the same kind of way; you can move them around, you can train on the desktop and then move it to mobile, and so on. There's that cohesiveness. So maybe give me, whatever I missed, a bigger overview of the mission of the ecosystem that's trying to be built, and where it's moving forward. Yeah, so in short, the way I like to think of this is, our goal is to enable machine learning, and in a couple of ways. One is, there are lots of exciting things going on in ML today. We started with deep learning, but we now support a bunch of other algorithms too. So one is, on the research side, keep pushing on the state of the art: how do we enable researchers to build the next amazing thing? So BERT came out recently; it's great that people are able to do new kinds of research, and there's lots of amazing research that happens across the world. So that's one direction. The other is, how do you take that to all the people outside who want to take that research, do some great things with it, and integrate it to build real products, to have a real impact on people? So that's the other axis, in some ways. At a high level, one way I think about it is, there's a crazy number of compute devices across the world, and we often used to think of ML and training and all of this as, okay, something you do either on the workstation, or in the data center, or the cloud. But we see things running on phones, we see things running on really tiny chips; I mean, we had some demos at the developer summit. And so the way I think about this ecosystem is, how do we help get machine learning onto every device that has the compute capability? And that continues to grow. In some ways this ecosystem has looked at various aspects of that and grown over time to cover more of them, and we continue to push the boundaries. In some areas we've built more tooling and things around that to help you.
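The "train on the desktop, move it to mobile" story rests on the SavedModel format. Here is a minimal sketch of that round trip, assuming TensorFlow 2.x; the toy Scaler module and the temp-directory path are invented for illustration:

```python
import os
import tempfile

import tensorflow as tf

class Scaler(tf.Module):
    """A toy model: multiplies its input by a single learned scalar."""
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(2.0)

    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def __call__(self, x):
        return self.w * x

# Export to SavedModel, the shared artifact the ecosystem converges on:
# the same directory can be loaded back in Python, served with
# TF Serving, or converted for TensorFlow Lite / TensorFlow.js.
export_dir = os.path.join(tempfile.mkdtemp(), "scaler")
tf.saved_model.save(Scaler(), export_dir)
reloaded = tf.saved_model.load(export_dir)
```

After loading, reloaded(tf.constant([1.0, 3.0])) reproduces the original module's behavior without needing the original Python class at all, which is what makes the format portable across runtimes.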
The first tool we started with was TensorBoard, when you wanted to see just the training piece; then TFX, or TensorFlow Extended, to really do your entire ML pipelines, if you care about all that production stuff; but then going to the edge, going to different kinds of things. And it's not just us now; it's a place where there are lots of libraries being built on top. So there are some for research, maybe things like TensorFlow Agents or TensorFlow Probability, that started as research things, or for researchers focusing on certain kinds of algorithms, but they're also being deployed or productionized by, you know, production folks. Some have come from within Google, just teams across Google who wanted to build these things; others have come from the community, because there are different pieces that different parts of the community care about. And I see our goal as enabling even that, right? We cannot and won't build every single thing; that just doesn't make sense. But if we can enable others to build the things that they care about, and there's a broader community that cares about those, and we can help encourage that, that's great. That really helps the entire ecosystem, not just us. One of the big things about 2.0 that we're pushing on is, okay, we have these so many different pieces, right? How do we help make all of them work well together? So there are a few key pieces there that we're pushing on, one being the core format, and how we share the models themselves, through SavedModel and TensorFlow Hub and so on, and a few other pieces that really put this together. I was very skeptical when, you know, TensorFlow.js came out, or deeplearn.js, yeah, as it was first called. It seems like a technically very difficult project. As a standalone it's not as difficult, but as a thing that integrates into the ecosystem, it's very difficult. Yeah, I mean, there are a lot of aspects of this. You're making it look easy, but on the technical side, how
many challenges have had to be overcome here? A lot, and still have to be, yes; that's the other question here too. There are lots of steps to it; we've iterated over the last few years, so there's a lot we've learned. Yeah, often when things come together well, things look easy, and that's exactly the point: it should be easy for the end user, but there are lots of things that go on behind that. If I think about the challenges still ahead: we have a lot more devices coming on board, for example from the hardware perspective. How do we make it really easy for these vendors to integrate with something like TensorFlow? So there's a lot of compiler stuff that others are working on; there are things we can do in terms of our APIs, and so on. You know, TensorFlow started as a very monolithic system, and to some extent it still is. There are lots of tools around it, but the core is still pretty large and monolithic. One of the key challenges for us, to scale that out, is how do we break that apart with clear interfaces? In some ways it's software engineering 101, but for a system that's now four years old, I guess, or more, that's still rapidly evolving, and that we're not slowing down with, it's hard to change and modify and really break apart. It's sort of like, as people say, changing the engine with the car running. That's exactly what we're trying to do. So there's a challenge here, because the downside of so many people being excited about TensorFlow, and coming to rely on it in many of their applications, is that you're kind of responsible; it's the technical debt; you're responsible for previous versions, to some degree, still working. So when you're trying to innovate, I mean, it's probably easier to just start from scratch every few months. Absolutely. So do you feel the pain of that? 2.0 does break some backward compatibility, but not too much; it seems like the conversion is pretty straightforward. And do you think
that's still important, given how quickly deep learning is changing? Can you just throw out the things you've learned, can you just start over, or is there pressure not to? It's a tricky balance. So if it was just a researcher writing a paper, who a year later will not look at that code again, sure, it doesn't matter. But there are a lot of production systems that rely on TensorFlow, both at Google and across the world, and people worry about this. I mean, these systems run for a long time, so it is important to keep that compatibility and so on. And yes, it does come with a huge cost; we have to think about a lot of things as we do new things and make new changes. I think it's a trade-off, right? You might slow certain kinds of things down, but the overall value you're bringing because of that is much bigger, because it's not just about not breaking the person from yesterday; it's also about telling the person coming tomorrow that, you know what, this is how we do things, you're not going to be broken when you come on board, because there are lots of new people who are also going to come on board. You know, one way I like to think about this, and I always push the team to think about it as well: when you want to do new things, you want to start with a clean slate. Design with a clean slate in mind, and then we'll figure out how to make sure all the other things work. And yes, we do make compromises occasionally, but unless you've designed with a clean slate, and not worried about all that, you'll never get to a good place. That's brilliant. So even though you are responsible, when you're in the idea stage, when you're thinking of something new, you just put all that behind you. Yeah, that's right. Okay, that's really well put. So I have to ask this, because a lot of students and developers ask me how I feel about PyTorch versus TensorFlow. I've recently completely switched my research group to TensorFlow. I wish everybody would just use the same thing, and TensorFlow is as close to that, I believe, as we have.
But do you enjoy competition? TensorFlow is leading in many ways, on many dimensions: in terms of the ecosystem, in terms of the number of users, momentum, power, production level, and so on. But, you know, a lot of researchers are now also using PyTorch. Do you enjoy that kind of competition, or do you just ignore it and focus on making TensorFlow the best that it can be? So, just like in research, or anything people are doing, right, it's great to get different kinds of ideas. When we started with TensorFlow, like I was saying earlier, it was very important for us to also have production in mind. We didn't want just research, right, and that's why we chose certain things. Now PyTorch came along and said, you know what, I only care about research, this is what I'm trying to do, what's the best thing I can do for this? And it started iterating, and said, okay, I don't need to worry about graphs, let me just run things. I don't care if it's not as fast as it can be, but let me just make this part easy. And there are things you can learn from that, right? They, again, had the benefit of seeing what had come before, but also of exploring certain different kinds of spaces, and they had some good things there, building on, say, things like Chainer and so on before that. So competition is definitely interesting. This is an area that we had thought about, like I said, very early on. Over time we had revisited it a coupleple of times: should we add this? At some point we said, you know what, it seems like this can be done well, so let's try it again, and that's when we started pushing on eager execution: how do we combine those two together? Which has finally come together very well in 2.0, but it took us a while to get all the things together, and so on. So let me ask it another way. I think eager execution is a really powerful thing that was added. Do you think it wouldn't have been, you know, Muhammad Ali versus Frazier, right, do you think it wouldn't have been added
as quickly if PyTorch wasn't there? It might have taken longer. Yeah, longer. I mean, we had tried some variants of that before, so I'm sure it would have happened, but it might have taken longer. I'm grateful that TensorFlow responded the way it did; it's been doing some incredible work the last couple of years. What are the things that we didn't talk about that you're looking forward to in 2.0? We talked about some of the ecosystem stuff, making it easily accessible through Keras, eager execution; are there other things that we missed? Yeah, I would say one is just where 2.0 is. With all the things that we've talked about, I think as we look beyond that, there are lots of other things that it enables us to do, and that we're excited about. So what it's setting us up for: okay, here are these really clean APIs; we've cleaned up the surface for what the users want. What it also allows us to do is a whole bunch of stuff behind the scenes, once we are ready with 2.0. So, for example, in TensorFlow with graphs, and all the things you could do, you could always get a lot of good performance if you spent the time to tune it, right, and we have clearly shown that; lots of people do that. With 2.0, with these APIs, where we are, we can give you a lot of performance just with whatever you do, because with what we see, it's much cleaner, we know most people are going to do things this way, and we can really optimize for that and get a lot of those things out of the box. And it really allows us, both for single machine and distributed and so on, to really explore other spaces behind the scenes, in 2.0 and future versions as well. So right now the team is really excited about that; over time I think we'll see that. The other piece that I was talking about, in terms of restructuring the monolithic thing into more pieces and making it more modular, I think that's going to be really important for a lot of the other people in the ecosystem.
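The eager-plus-graphs combination can be made concrete with a small sketch (mine, not from the transcript), assuming TensorFlow 2.x: ops run immediately by default, and tf.function traces the same Python code into a graph for the behind-the-scenes optimization being described:

```python
import tensorflow as tf

# Eager by default in 2.0: this executes immediately, with no session
# or hand-built graph; y.numpy() is available right away.
x = tf.constant([1.0, 2.0])
y = tf.square(x)

# tf.function traces the Python function into a graph on first call,
# so later calls get graph-level optimizations while the code still
# reads like ordinary Python.
@tf.function
def mse(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

loss = mse(tf.constant([1.0, 2.0]), tf.constant([1.5, 1.5]))
```

The same function body works with or without the decorator, which is the "you're not programming something else" point: the graph is recovered from ordinary Python rather than built by hand.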
Organizations and so on that want to build things. Can you elaborate a little bit on what you mean by making TensorFlow more modular for the ecosystem? So the way it's organized today is, there are lots of repositories in the tensorflow organization on GitHub. The core one, where we have core TensorFlow: it has the execution engine, it has the key backends for CPUs and GPUs, it has the code to do distributed stuff, and all of these just work together in a single library, or binary. There's no way to split them apart easily. There are some interfaces, but they're not very clean. In a perfect world you would have clean interfaces where, okay, I want to run it on my fancy cluster with some custom networking, just implement this and do that. I mean, we kind of support that, but it's hard for people today. I think as we start to see more interesting things in some of these spaces, having that clean separation will really start to help. And again, given the large size of the ecosystem and the different groups involved, enabling people to evolve and push on things more independently just allows it to scale better. And by people you mean individual developers and organizations? And organizations, that's right. So the hope is that everybody, sort of, major, I don't know, Pepsi or something, like major corporations, go to TensorFlow and do this kind of thing? Yeah, if you look at enterprises like Pepsi, or these, I mean, a lot of them are already using TensorFlow. They are not the ones that do the development or changes in the core. Some of them do, but a lot of them don't; they use small pieces. But there are lots of others: some of them being, let's say, hardware vendors who are building their custom hardware and want their own pieces in there, or some of them being bigger companies, say, IBM. I mean, they're involved in some of our special interest groups, and they see a lot of users who want certain things, and they want to optimize for that. So folks like that, often. Autonomous vehicle companies, perhaps? Exactly, yes. So, yeah, like I mentioned, TensorFlow has been downloaded 41 million times, 50,000 commits, almost 10,000 pull requests, and 1,800 contributors. I'm not sure if you can explain it, but what does it take to build a community like that? In retrospect, what do you think was the critical thing that allowed for this growth to happen, and how does that growth continue? Yeah, that's an interesting question. I wish I had all the answers there, I guess, so you could replicate it. I think there are a number of things that need to come together, right? One, just like with any new thing, there's a sweet spot of timing: what's needed, does it grow with what's needed. So in this case, for example, TensorFlow has not just grown because it was a good tool; it's also grown with the growth of deep learning itself. So those factors come into play. Other than that, though, I think it's just hearing, listening to the community, what they need, and being open to external contributions. We've spent a lot of time making sure we can accept those contributions well, and that we can help the contributors in adding them: putting the right processes in place, getting the right kind of community, welcoming them, and so on. Like, over the last year we've really pushed on transparency; that's important for an open source project. People want to know where things are going, and we're like, okay, here's a process where you can do that, here are RFCs, and so on. So there are lots of community aspects that come into it that you can really work on. As a small project, it's maybe easy to do, because there are, like, two developers in it, and you can just do those things. As you grow, it's putting more of these processes in place, thinking about the documentation, thinking about what developers care about, what kind of tools they would want to use; all of these come into play, I think. So one of the big things, I think, that feeds the TensorFlow fire is people building
something on TensorFlow, you know, implementing a particular architecture that does something cool and useful, and putting it out there on GitHub, and so it just feeds this growth. Do you have a sense that with 2.0 and 1.0 there may be a little bit of a partitioning, like there is with Python 2 and 3, where there will be code bases on the older versions of TensorFlow that will not be compatible easily? Or are you pretty confident that this kind of conversion is pretty natural and easy to do? So we're definitely working hard to make that very easy to do. There's lots of tooling that we talked about at the developer summit this week, and we'll really continue to invest in that tooling. You know, when you think of these significant version changes, that's always a risk, and we are really pushing hard to make that transition very, very smooth. I think, at some level, people want to move when they see the value in the new thing. They don't want to move just because it's a new thing; some people do, but most people want a really good thing. And I think over the next few months, as people start to see the value, we'll definitely see that shift happening. So I'm pretty excited and confident that we will see people moving. As you said earlier, this field is also moving rapidly, so that'll help, because we can do more things, and all the new things will clearly happen in 2.x, so people will have lots of good reasons to move. So what do you think TensorFlow 3.0 looks like? Is everything happening so crazily that even the end of this year seems impossible to plan for, or is it possible to plan for the next five years? I think it's tricky. There are some things that we can expect: in terms of, okay, change, yes, change is going to happen. Are some things going to stick around and some things not going to stick around? I would say the basics of deep learning, you know, say, convolutional models, or the basic kinds of things, they'll probably be
around in some form still in five years. Will RNNs and GANs stay? Very likely, based on where they are. We'll have new things, probably, but those are hard to predict. Directionally, some things that we can see, and things that we're starting to do right now with some of our projects, is with 2.0 combining eager execution and graphs, where we're starting to make it more like just your natural programming language; you're not trying to program something else. Similarly with Swift for TensorFlow we're taking that approach: can you do something from the ground up, right? So some of those ideas seem like, okay, that's the right direction; in five years we expect to see more in that area. Other things we don't know: will hardware accelerators be the same? Will we be able to train with four bits instead of 32 bits? And I think the TPU side of things is exploring that; I mean, TPUs are already on version three. It seems that the evolution of TPUs and TensorFlow are sort of co-evolving, almost, in the sense that both are learning from each other, and from the community, and from the applications where the biggest benefit is achieved. That's right. You've been trying, sort of, with eager, with Keras, to make TensorFlow as accessible and easy to use as possible. What do you think, for beginners, is the biggest thing they struggle with? Have you encountered that? Or is what Keras is solving, and eager, like we talked about, basically it? Yeah, for some of them, like you said, right, beginners want to just be able to take some image model, they don't care if it's Inception or ResNet or something else, and do some training or transfer learning on that kind of model. Being able to make that easy is important. So in some ways, if you do that by providing them simple models, with, say, TensorFlow Hub or so on, they don't care about what's inside that box, but they want to be able to use it. So we are pushing on, I think, different levels. There's the component that you get, which has the layers already smushed in together.
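The transfer-learning pattern being described, take a component with the layers already assembled and reuse it, looks roughly like this. A sketch of mine, rather than anything from the conversation: the 5-class head is invented, and weights=None is used here only so the example needs no download; in practice you would pass weights="imagenet" to get the pre-trained backbone:

```python
import tensorflow as tf

# Grab a standard image backbone with its layers already assembled.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights=None)
base.trainable = False  # freeze it; only the new head will train

# Add a small classification head for the new task.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])
```

From here, model.compile(...) and model.fit(...) on a small labeled dataset is the whole loop a beginner needs, which is what a Colab notebook plus a pre-trained model makes nearly trivial.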
Beginners probably just want that. Then the next step is, okay, look at building things with layers. If you go out to research, then they're probably writing custom layers themselves. So there's a whole spectrum there. And then providing the pre-trained models seems to really decrease the time from when you're trying to start, so you could basically, in a Colab notebook, achieve what you need. So I'm basically answering my own question, because I think what TensorFlow delivered on recently is that it's trivial for beginners. I was just wondering if there were other pain points you tried to ease, but I'm not sure there would be. No, those are probably the big ones. I see high schoolers doing a whole bunch of things now; it's pretty amazing. It's both amazing and terrifying. Yes. In the sense that when they grow up, some incredible ideas will be coming from them. So there's certainly a technical aspect to your work, but you also have a management aspect to your role with TensorFlow, leading the project, a large number of developers and people. So what do you look for in a good team? Google has been at the forefront of exploring what it takes to build a good team, and TensorFlow is one of the most cutting-edge technologies in the world, so in this context, what do you think makes for a good team? It's definitely something I think a fair bit about. I think, in terms of the team being able to deliver something well, one of the things that's important is cohesion across the team, so being able to execute together on things. At this scale, an individual engineer can only do so much; there's a lot more they can do together, even though we do have some amazing superstars across Google and on the team. But often the way I see it is, the product of what the team generates is much larger than the individual pieces put together. And so how do we have all of them work together: the culture
of the team itself. Hiring good people is important, but part of it is that it's not just, okay, we hire a bunch of smart people, throw them together, and let them do things. People have to care about what they're building; people have to be motivated for the right kinds of things; that's often an important factor. And finally, how do you put that together with a somewhat unified vision of where we want to go? Are we all looking in the same direction, or are we going all over? And sometimes it's a mix. Google's a very bottom-up organization in some sense, research even more so, and that's how we started. But as we've become this larger product and ecosystem, I think it's also important to combine that well with a mix of, okay, here's the direction we want to go in; there is exploration we'll do around that, but let's keep heading in that direction, not just going all over the place. And is there a way you monitor the health of the team? Sort of, like, is there a way you know you did a good job, the team is good? Like, I mean, you're saying nice things, but it's sometimes difficult to determine how aligned things are. Yes, because it's not binary; there are tensions and complexities and so on. And the other element of this, the mess of it, is superstars. You know, even at Google, such a large percentage of the work is done by individual superstars too. And sometimes those superstars can work against the dynamic of a team, and there are those tensions. I mean, I'm sure with TensorFlow it might be a little bit easier, because the mission of the project is so beautiful; you're at the cutting edge, it's exciting. Yeah. Have you struggled with that? Have there been challenges? There are always people challenges, in different kinds of ways. That said, I think we've been good at getting people who care and, you know, have the same kind of culture, and that's Google in general to a large extent. But also, like you said, given that the project has had so many exciting things to do, there's been room for lots of people to do different kinds of things and grow, which does make the problem a bit easier, I guess. It allows people, depending on what they're doing, to find room around them, and that's fine. But yes, we do care that, superstar or not, they need to work well with the team, across Google. That's interesting to hear. So superstar or not, the productivity, broadly, is about the team. Yeah, yeah. I mean, they might add a lot of value, but if they're hurting the team, then that's a problem. So, in hiring engineers, it's so interesting, right, the hiring process. What do you look for? How do you determine a good developer, or a good member of a team, from just a few minutes or hours together? Again, no magic answers, I'm sure. Yeah, I mean, Google has a hiring process that we've refined over the last 20 years, I guess, and that you've probably heard and seen a lot about. So we do work with the same hiring process, and that's really helped. For me in particular, I would say, in addition to the core technical skills, what does matter is their motivation in what they want to do. Because if that doesn't align well with where we want to go, that's not going to lead to long-term success for either them or the team. And I think that becomes more important the more senior the person is, but it's important at every level. Even the junior-most engineer, if they're not motivated to do well at what they're trying to do, however smart they are, it's going to be hard for them to succeed. Does the
Google hiring process touch on that passion? Because, as far as I understand, maybe you can speak to it, the Google hiring process sort of helps with the initial part, like, determining the skill set: there's your puzzle-solving ability, problem-solving ability, good. But it seems that determining whether the person has, like, fire inside them that burns to do something, it doesn't really matter what, just some cool stuff, I'm going to do it; I don't know, is that something that ultimately ends up being determined when they have a conversation with you, or once it gets closer to the team? So one of the things we do have as part of the process is a culture fit, like, part of the interview process itself. In addition to just the technical skills, each engineer, or whoever the interviewer is, is supposed to rate the person on culture, and the culture fit with Google, and so on. So that is definitely part of the process. Now, there are various kinds of projects and different kinds of things, so there might be variance in the kind of culture you want there, and so on, and yes, that does vary. So, for example, TensorFlow has always been a fast-moving project, and we want people who are comfortable with that. But at the same time, now, for example, we are at a place where we are also a very full-fledged product, and we want to make sure things that work really, really work right; you can't cut corners all the time. So balancing that out, and finding the people who are the right fit for those things, is important, and those kinds of things do vary a bit across projects and teams and product areas across Google. So you'll see some differences there in the final checklist, but a lot of the core culture comes along with just the engineering excellence and so on. What is the hardest part of your job? I guess it's fun, I would say, right? Hard, yes. I mean, lots of things at different times; I think that does vary. So
let me clarify: difficult things are fun. Yeah, when you solve them, right? Yes, it's fun in that sense. I think the key to a successful thing across the board, and in this case it's a large ecosystem now, but even for a small product, is striking that fine balance across different aspects of it. Sometimes that's how fast do you go versus how perfect it is. Sometimes it's how do you involve this huge community: who do you involve, or where do you decide, okay, now is not a good time to involve them, because it's not the right fit. Sometimes it's saying no to certain kinds of things. Those are often the hard decisions. Some of them you make quickly, because you don't have the time; some of them you get time to think about; but they're always hard, when both choices are pretty good. It's making those decisions. What about deadlines? Do you find TensorFlow to be driven by deadlines to the degree that a product might be, or is there still a balance? I mean, it's less deadline-driven, yet you had the Dev Summit, and it came together incredibly; there were clearly a lot of moving pieces and so on. So did that deadline make people rise to the occasion, releasing TensorFlow 2.0 alpha? I'm sure that was done last minute as well. I mean, likely, up to the last point, yes. Again, it's one of those things where you need to strike a good balance. There's some value that deadlines bring; they do bring a sense of urgency, to get the right things together. Instead of getting the perfect thing out, you need something that's good and works well, and the team definitely did a great job putting that together, so I was very amazed and excited by how it came together. That said, across the board, we try not to set artificial deadlines. We focus on the key things that are important, figure out how much of it is important, and we are developing in the open; internally and externally, everything is available to
everybody, so you can pick and look at where things are. We do releases at a regular cadence, so, fine, if something doesn't necessarily end up in this month's release, it'll end up in the next release, in a month or two, and that's okay. But we want to keep moving as fast as we can in these different areas, because we can iterate and improve on things. Sometimes it's okay to put things out that aren't fully ready, if you make sure it's clear that, okay, this is experimental, but it's out there if you want to try it and give feedback. That's very, very useful. I think that quick cycle and quick iteration is important; that's what we often focus on, rather than saying, here's a deadline by which you get everything else done. With 2.0, is there pressure to make it stable? Or, for example, WordPress 5.0 just came out, and there was a lot of pressure to deliver it, and it was delivered way too late, but they said, okay, well, we're going to release a lot of updates really quickly to improve it. Do you see TensorFlow 2.0 in that same kind of way? Or is there this pressure that once it hits 2.0, once you get to the release candidate and then to the final, that's going to be the stable thing? So it's going to be stable, in the sense that every API that's exposed is going to remain and work. It doesn't mean we can't change things under the covers; it doesn't mean we can't add things. So there's still a lot more for us to do, and we'll continue to have more releases. So in that sense, there's still, I don't think we'd be done in, like, two months after we release this. I don't know if you can say, but, you know, there are no external deadlines for TensorFlow 2.0; are there internal deadlines, artificial or otherwise, that you try to set for yourself, or is it whenever it's ready? So we want it to be a great product, right, and that's a big, important piece for us. TensorFlow 1.x is already out there; we have, you know, 41 million downloads of it, so it's not
like you have to rush it out there yeah yeah exactly so it's not like a lot of the features that we're you know really polishing and putting together have to be rushed out we don't have to rush that so in that sense we want to get it right and really focus on that that said we have said that we are looking to get this out in the next few months in the next quarter and as far as possible we'll try to make that happen yeah my favorite line was spring is a relative term yes spoken like a true developer so you know something I'm really interested in from your previous line of work is before tensorflow you led a team at Google on search ads I think this is a very interesting topic on every level on a technical level because at their best ads connect people to the things they want and need yep and at their worst they're just these things that annoy the heck out of you to the point of ruining the entire user experience of whatever you're actually doing and so they have a bad rap I guess and at the other end this connecting of users to the things they need and want is a beautiful opportunity for machine learning to shine like huge amounts of data that's personalized and you've got to map it to the thing they actually want and they won't get annoyed so what have you learned at Google that's leading the world in this aspect what have you learned from that experience and what do you think is the future of ads this takes me back a few years yeah it's been a while but I totally agree with what you said I think search ads the way it was always looked at and I believe it still is is as an extension of what search is trying to do the goal is to make the world's information accessible it's not just information but it may be products or you know other things that people care about and so it's really important for them to align with what the user needs and you know in search ads there's
a minimum quality level before an ad will be shown if we don't have an ad that hits that quality bar it will not be shown even if we have one and okay maybe we lose some money there that's fine that is really really important and that is something I really liked about being there advertising is a key part I mean as a model it's been around for ages right it's not a new model it's been adapted to the web and you know became a core part of search and many other search engines across the world I do hope you know like I said there are aspects of ads that are annoying if I go to a website and it just keeps popping up in my face and doesn't let me read what I want that's going to be annoying clearly so I hope we can strike that balance between showing a good ad where it's valuable to the user and provides the monetization to the you know service and this might be search this might be a website all of these do need the monetization for them to provide that service but there's a difference between showing just some random stuff that's distracting versus showing something that's actually valuable so do you see it moving forward as continuing to be a model that you know funds businesses like Google it's a significant revenue stream because that's one of the most exciting things but also limiting things on the Internet is nobody wants to pay for anything and advertisements again coupled at their best are actually really useful and not annoying do you see that continuing and growing and improving or do you see sort of more Netflix type models where you have to start to pay for content I think it's a mix I think it's gonna take a long while for everything to be paid on the internet if at all probably not I mean I think there's always going to be things that are sort of monetized with things like ads but over the last few years I would say we've definitely seen that transition
towards more paid services across the web and people are willing to pay for them because they do see the value I mean Netflix is a great example we have YouTube doing things people pay for the apps they buy and more people I find are willing to pay for newspaper content for the good news websites across the web that wasn't the case even a few years ago I would say and I see that change in myself as well and in lots of people around me so I'm definitely hopeful for a transition to that mixed model where maybe you get to try something out for free maybe with ads but then there's a more clear revenue model that sort of helps go beyond that so speaking of revenue how is it that a person can use the TPU in Google Colab for free so what's the I guess the question is what's the future of tensorflow in terms of empowering say a teacher a class of 300 students say at MIT what is going to be the future of them being able to do their homework in tensorflow like where are they going to train these networks right right what's that future look like with TPUs with cloud services and so on I think a number of things there I mean tensorflow is open source you can run it wherever you can run it on your desktop and your desktops always keep getting more powerful so maybe you can do more my phone is like I don't know how many times more powerful than my first desktop you'll probably train it on your phone though yeah right so in that sense the power you have in your hands is a lot more clouds are actually very interesting from say a student's or a course's perspective because they make it very easy to get started I mean Colab the great thing about it is you go to a website and it just works no installation needed nothing you're just there and things are working that's really the power of cloud as well and so I do expect that to grow again you know Colab is a free service it's great to get started to play with things to explore
things that said you know with free you can only get so much yeah so just like we were talking about you know free versus paid yeah there are services you can pay for and get a lot more great so for a final question a complete beginner interested in machine learning and tensorflow what should they do probably start by going to our website and playing with it there's a lot on tensorflow.org start clicking on things yep check out our tutorials and guides there's stuff you can just click on there and go to a Colab and do things no installation needed you can get started right there okay awesome thank you so much for talking today thank you it was great
Chris Lattner: Compilers, LLVM, Swift, TPU, and ML Accelerators | Lex Fridman Podcast #21
the following is a conversation with Chris Lattner currently he's a senior director at Google working on several projects including CPU GPU TPU accelerators for tensorflow swift for tensorflow and all kinds of machine learning compiler magic going on behind the scenes he's one of the top experts in the world on compiler technologies which means he deeply understands the intricacies of how hardware and software come together to create efficient code he created the LLVM compiler infrastructure project and the clang compiler he led major engineering efforts at Apple including the creation of the Swift programming language he also briefly spent time at Tesla as vice president of autopilot software during the transition from autopilot hardware 1 to hardware 2 when Tesla essentially started from scratch to build an in-house software infrastructure for autopilot I could have easily talked to Chris for many more hours compiling code down across the levels of abstraction is one of the most fundamental and fascinating aspects of what computers do and he is one of the world experts in this process it's rigorous science and it's messy beautiful art this conversation is part of the artificial intelligence podcast if you enjoy it subscribe on youtube itunes or simply connect with me on twitter at lex friedman spelled f-r-i-d-m-a-n and now here's my conversation with Chris Lattner what was the first program you've ever written my first program back when was it I think I started as a kid and my parents got a basic programming book and so when I started it was typing out programs from a book and seeing how they worked and then typing them in wrong and trying to figure out why they were not working right that kind of stuff so basic what was the first language that you remember yourself maybe falling in love with like really connecting with I don't know I mean I feel like I've learned a lot along the way and each of them have a different special thing about them so I started in basic and
then went to like GW-BASIC which was the thing back in the DOS days and then upgraded to QBasic and eventually QuickBasic which are all slightly more fancy versions of Microsoft basic made the jump to Pascal and started doing machine language programming and assembly in Pascal which was really cool Turbo Pascal was amazing for its day eventually going to C C++ and then kind of did lots of other weird things I feel like you took the dark path which is you could have gone Lisp yeah you could have gone a higher-level sort of functional philosophical hippie route instead you went into like the dark arts straight to the machine so started with basic Pascal assembly and then wrote a lot of assembly and eventually I did smalltalk and other things like that but that was not the starting point so what was this journey to C is that in high school is that in college that was in high school yeah and then it was really about trying to be able to do more powerful things than what Pascal could do and also to learn a different world so C was really confusing to me with pointers and the syntax and everything and it took a while but Pascal is much more principled in various ways C is more I mean it has its historical roots but it's not as easy to learn with pointers there's this memory management thing that you have to become conscious of is that the first time you started to understand that there's resources that you're supposed to manage well so you have that in Pascal as well but in Pascal it's like the caret instead of the star and there's some small differences like that but it's not about pointer arithmetic and in C you end up thinking about how things get laid out in memory a lot more and so in Pascal you have allocating and deallocating and owning the memory but the programs are simpler and you don't have to well for example Pascal has a string type and so you can think about a
string instead of an array of characters which are consecutive in memory so it's a little bit of a higher level abstraction so let's get into it let's talk about LLVM clang and compilers sure so can you tell me first what LLVM and clang are and how is it that you find yourself the creator and lead developer of one of the most powerful compiler optimization systems in use today sure so I guess they're different things so let's start with what is a compiler is that a good place to start what are the phases of a compiler what are the parts yeah what is it so what is a compiler even used for so the way I look at this is you have a two-sided problem of you have humans that need to write code and then you have machines that need to run the program that the human wrote and for lots of reasons the humans don't want to be writing in binary and don't want to think about every piece of hardware and so at the same time that you have lots of humans you also have lots of kinds of hardware and so compilers are the art of allowing humans to think at the level of abstraction that they want to think about and then get that program get the thing that they wrote to run on a specific piece of hardware and the interesting and exciting part of all this is that there's now lots of different kinds of hardware chips like x86 and PowerPC and ARM and things like that but also high-performance accelerators for machine learning and other things like that or also just different kinds of hardware GPUs these are new kinds of hardware and at the same time on the programming side of it you obviously have C you have JavaScript you have Python you have lots of other languages that are all trying to talk to the human in a different way to make them more expressive and capable and powerful and so compilers are the thing that goes from one to the other end to end from the very beginning to the very end and so you go from what the human wrote and programming
languages end up being about expressing intent not just for the compiler and the hardware but the programming language's job is really to capture an expression of what the programmer wanted that then can be maintained and adapted and evolved by other humans as well as by the interpreter by the compiler so when you look at this problem you have on one hand humans which are complicated and you have hardware which is complicated and so compilers typically work in multiple phases and so the software engineering challenge that you have here is to try to get maximum reuse out of the amount of code that you write because these compilers are very complicated and so the way it typically works out is that you have something called a front end or a parser that is language specific and so you'll have a C parser and that's what clang is or C++ or JavaScript or Python or whatever that's the front end then you'll have a middle part which is often the optimizer and then you'll have a late part which is hardware specific and so compilers end up there's many different layers often but these three big groups are very common in compilers and what LLVM is trying to do is trying to standardize that middle and last part and so one of the cool things about LLVM is that there are a lot of different languages that compile through to it and so things like swift but also julia rust clang for C C++ objective-c like these are all very different languages and they can all use the same optimization infrastructure which gets better performance and the same code generation infrastructure for hardware support and so LLVM is really that layer that is common that all these different specific compilers can use and is it a standard like a specification or is it literally an implementation it's an implementation and so I think there's a couple different ways of looking at it right because it depends on which angle you're looking at it from LLVM ends up being a bunch of code
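the front end / optimizer / back end split Chris describes can be sketched with a toy example; this is just an illustration in Python, not how clang or LLVM are actually implemented, and the stack-machine instruction names here are made up:

```python
# Toy three-phase compiler: front end (parse), middle end (optimize),
# back end (code generation). Uses Python's own ast module as the
# "front end" for simplicity.
import ast

def front_end(source):
    # front end: turn source text into a tree (an abstract syntax tree)
    return ast.parse(source, mode="eval").body

def middle_end(node):
    # middle end: one toy optimization pass that folds constants, 2 * 3 -> 6
    if isinstance(node, ast.BinOp):
        node.left = middle_end(node.left)
        node.right = middle_end(node.right)
        if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
            ops = {ast.Add: lambda a, b: a + b, ast.Mult: lambda a, b: a * b}
            fold = ops.get(type(node.op))
            if fold:
                return ast.Constant(fold(node.left.value, node.right.value))
    return node

def back_end(node):
    # back end: emit instructions for an imaginary stack machine
    if isinstance(node, ast.Name):
        return [f"LOAD {node.id}"]
    if isinstance(node, ast.Constant):
        return [f"PUSH {node.value}"]
    if isinstance(node, ast.BinOp):
        op = {ast.Add: "ADD", ast.Mult: "MUL"}[type(node.op)]
        return back_end(node.left) + back_end(node.right) + [op]
    raise NotImplementedError(type(node))

code = back_end(middle_end(front_end("x + 2 * 3")))
print(code)  # ['LOAD x', 'PUSH 6', 'ADD']
```

the point of the split is visible even at this scale: the front end is the only part that knows the surface language, the middle-end pass works purely on the tree, and the back end is the only part that knows the target, which is exactly what lets LLVM share the middle and last parts across many front ends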
okay so it's a bunch of code that people reuse and they build compilers with we call it a compiler infrastructure because it's kind of the underlying platform that you build a concrete compiler on top of but it's also a community and the LLVM community is hundreds of people that all collaborate and one of the most fascinating things about LLVM over the course of time is that we've managed somehow to successfully get harsh competitors in the commercial space to collaborate on shared infrastructure and so you have Google and Apple you have AMD and Intel you have Nvidia and AMD on the graphics side you have Cray and everybody else doing these things and like all these companies are collaborating together to make that shared infrastructure really really great and they do this not out of the goodness of their hearts but they do it because it's in their commercial interest to have really great infrastructure that they can build on top of and facing the reality that it's so expensive that no one company even the big companies no one company really wants to implement it all themselves expensive or difficult both that's a great point because it's also about the skill sets right and these skill sets are very hard to find how big is the LLVM community it always seems like with open-source projects you know LLVM is open source yes it's open source it's about 19 years old now so it's fairly old it seems like the magic often happens within a very small circle of people yes at least the early birth and whatever yes so LLVM came from a university project and so I was at the University of Illinois and there it was myself my advisor and then a team of two or three research students in the research group and we built many of the core pieces initially I then graduated and went to Apple and Apple brought it to the products first in the OpenGL graphics stack but eventually to the C compiler realm and eventually built clang and eventually built Swift and these things along the way
building a team of people that are really amazing compiler engineers that helped build a lot of that and so as it was gaining momentum and as Apple was using it being open source and public and encouraging contribution many others for example at Google came in and started contributing and in some cases Google effectively owns clang now because it cares so much about C++ and the evolution of that ecosystem and so it's investing a lot in the C++ world and the tooling and things like that and likewise Nvidia cares a lot about CUDA and so CUDA uses clang and uses LLVM for graphics and GPGPU and so when you first started it as a master's project I guess did you think it was gonna go as far as it went were you crazy ambitious about it no it seems like a really difficult undertaking a brave one yeah no it was nothing like that so I mean my goal when I went to the University of Illinois was to get in and out with a non-thesis masters in a year and get back to work so I was not planning to stay for five years and build this massive infrastructure I got nerd sniped into staying and a lot of it was because LLVM was fun and I was building cool stuff and learning really interesting things and facing real software engineering challenges but also learning how to work in a team and things like that I had worked at many companies as an intern before that but it was really a different thing to have a team of people that were working together and trying to collaborate in version control and it was just a little bit different like I said I just talked to Don Knuth and he believes that 2% of the world population have something weird with their brain that they're geeks they understand computers they're connected with computers he put it at exactly 2 percent okay so it's a specific guy it's very specific he says I can't prove it but it's very empirical is there something that attracts you to the idea of optimizing code it seems like that's one of the biggest coolest
things about it oh yeah that's one of the major things it does so I got into that because of a person actually when I was in my undergraduate I had an advisor a professor named Steve Vegdahl and I went to this little tiny private school there were like seven or nine people in my computer science department students in my class so it was a very tiny very small school it was kind of a wart on the side of the math department kind of a thing at the time I think it's evolved a lot in the many years since then but Steve Vegdahl was a compiler guy and he was super passionate and his passion rubbed off on me and one of the things I like about compilers is that they're large complicated software pieces and so one of the culminating classes that many computer science departments at least at the time did was to say that you take algorithms and data structures and all these core classes but then the compilers class was one of the last classes you take because it pulls everything together and then you work on one piece of code over the entire semester and so you keep building on your own work which is really interesting and it's also very challenging because in many classes if you don't get a project done you just forget about it and move on to the next one and get your you know your B or whatever it is but here you have to live with the decisions you make and continue to reinvest in it and I really like that and so I did an independent study project with him the following semester and he was just really great and he was also a great mentor in a lot of ways and so from him and from his advice he encouraged me to go to graduate school I wasn't super excited about going to grad school I wanted the master's degree but I didn't want to be an academic but like I said I kind of got tricked into staying and was having a lot of fun and I definitely do not regret it what aspects of compilers were the things you connected with so LLVM there's also the other
part that is really interesting if you're interested in languages which is parsing and you know just analyzing the language breaking it down parsing and so on was that interesting to you or were you more into optimization for me it was more the optimization side I'm not really a math person I can do math I understand some bits of it when I get into it but math is never the thing that attracted me and so a lot of the parser part of the compiler has a lot of good formal theory that Don for example knows quite well still waiting for his book on that but I just like building a thing and seeing what it could do and exploring and getting it to do more things and then setting new goals and reaching for them and in the case of LLVM when I started working on that the research advisor that I was working for was a compiler guy and so he and I specifically found each other because we were both interested in compilers and so I started working with him and taking his class and a lot of LLVM initially was the fun of implementing all the standard algorithms and all the things that people had been talking about and were well known and were in the curricula for advanced studies in compilers and so just being able to build that was really fun and I was learning a lot by instead of reading about it just building it and so I enjoyed that so you said compilers are these complicated systems can you even just with language try to describe you know how you turn a C++ program into code like what are the hard parts why is it so hard so I'll give you examples of the hard parts along the way so C++ is a very complicated programming language it's something like 1400 pages in the spec so C++ itself is crazy complicated what makes a language complicated in terms of what's syntactically hard so it's what they call syntax so the actual how the characters are arranged yes it's also semantics how it behaves it's also in the case of C++
there's a huge amount of history C++ was supposed to build on top of C you play that forward and then a bunch of suboptimal in some cases decisions were made and they compound and then more and more and more things keep getting added to C++ and it will probably never stop but the language is very complicated from that perspective and so the interactions between subsystems are very complicated there's just a lot there and when you talk about the front end one of the major challenges which clang as a project the C C++ compiler that I built I and many people built one of the challenges we took on was we looked at GCC okay GCC at the time was like a really good industry standardized compiler that had really consolidated a lot of the other compilers in the world and was a standard but it wasn't really great for research the design was very difficult to work with and it was full of global variables and other things that made it very difficult to reuse in ways that it wasn't originally designed for and so with clang one of the things we wanted to do was push forward on better user interface so make error messages that are just better than GCC's and that's actually hard because you have to do a lot of bookkeeping in an efficient way to do that we wanted to make compile time better and so compile time is about making it efficient which is also really hard when you're keeping track of extra information we wanted to make new tools available so refactoring tools and other analysis tools that GCC never supported also leveraging the extra information we kept but enabling those new classes of tools that then get built into IDEs and so that's been one of the areas that clang has really helped push the world forward in the tooling for C and C++ and things like that but C++ and the front-end piece is complicated and you have to build syntax trees and you have to check every rule in the spec and you have to turn that back into an error message
to the human that the human can understand when they do something wrong but then you start doing what's called lowering so going from C++ and the way that it represents code down to the machine and when you do that there are many different phases you go through often there are I think in LLVM something like 150 different what are called passes in the compiler that the code passes through and these get organized in very complicated ways which affect the generated code and performance and compile time and many other things what are they passing through so after you do the clang parsing what's the graph what does it look like what's the data structure here yeah so in the parser it's usually a tree and it's called an abstract syntax tree and so the idea is you have a node for the plus that the human wrote in their code or the function call you'll have a node for call with the function that they call and the arguments they pass things like that this then gets lowered into what's called an intermediate representation and intermediate representations are like LLVM has one and there it's what's called a control flow graph and so you represent each operation in the program as a very simple like this is gonna add two numbers this is gonna multiply two things maybe we'll do a call but then they get put in what are called blocks and so you get blocks of these straight-line operations where instead of being nested like in a tree it's straight-line operations and so there's a sequence and ordering to these operations and then you're either in the block or outside the block and so it's a straight-line sequence of operations within the block and then you have branches like conditional branches between blocks and so when you write a loop for example in a syntax tree you would have a for node like for a for statement in a C-like language you'd have a for node and you have a pointer to the expression for the initializer a pointer to the
expression for the increment a pointer to the expression for the comparison a pointer to the body okay and these are all nested underneath it in a control flow graph you get a block for the code that runs before the loop so the initializer code then you have a block for the body of the loop and so the body of the loop code goes in there but also the increment and other things like that and then you have a branch that goes back to the top and a comparison and a branch that goes out and so it's more of an assembly level kind of representation but the nice thing about this level of representation is it's much more language independent and so there's lots of different kinds of languages with different kinds of you know JavaScript has a lot of different ideas of what is false for example and all that can stay in the front end but then that middle part can be shared across all those how close is that intermediate representation to neural networks for example are they similar because everything is described as a kind of graph right they're quite different in details but they're very similar in idea so one of the things that neural networks do is they learn representations for data at different levels of abstraction right and then they transform those through layers right so the compiler does very similar things but one of the things the compiler does is it has relatively few different representations where a neural network often as you get deeper for example you get many different representations and each you know layer or set of ops is transforming between these different representations in a compiler often you get one representation and they do many transformations to it and these transformations are often applied iteratively and for programmers there are familiar types of things for example trying to find expressions inside of a loop and pulling them out of the loop so they execute fewer times or find
redundant computation or do constant folding or other simplifications turning you know 2 times x into x shift left by one and things like this these are all examples of the things that happen and compilers end up doing a lot of theorem proving and other kinds of algorithms that try to find higher-level properties of the program that then can be used by the optimizer cool so what's like the biggest bang for the buck with optimization today not even today at the very beginning the 80s I don't know yeah so for the 80s a lot of it was things like register allocation so the idea is in a modern microprocessor what you'll end up having is you have memory which is relatively slow and then you have registers which are relatively fast but registers you don't have very many of them okay and so when you're writing a bunch of code you're just saying like compute this put it in a temporary variable compute this compute this compute this put it in a temporary well I have a loop I have some other stuff going on well now you're running on an x86 like a desktop PC or something well it only has in some modes eight registers right and so now the compiler has to choose what values get put in what registers at what points in the program and this is actually a really big deal so if you think about it you have a loop an inner loop that executes millions of times maybe if you're doing loads and stores inside that loop then it's gonna be really slow but if you can somehow fit all the values inside that loop in registers now it's really fast and so getting that right requires a lot of work because there are many different ways to do that and often what the compiler ends up doing is it ends up thinking about things in a different representation than what the human wrote all right you wrote int x well the compiler thinks about that as four different values each of which have different lifetimes across the function that it's in and each of those
could be put in a register or memory or different memory or maybe in some parts of the code recomputed instead of stored and reloaded and there are many of these different kinds of techniques that can be used so it's adding almost like a time dimension it's trying to optimize across time considering when you're programming you're not thinking in that yeah absolutely and so the RISC era made things more complicated so RISC chips R-I-S-C as opposed to CISC chips the RISC chips made things more complicated for the compiler because what they ended up doing is adding pipelines to the processor where the processor can do more than one thing at a time but this means that the order of operations matters a lot and so one of the classical compiler techniques that you use is called scheduling and so moving the instructions around so that the processor can like keep its pipelines full instead of stalling and getting blocked and so there's a lot of things like that that are kind of bread and butter compiler techniques that have been studied a lot over the course of decades now but the engineering side of making them real is also still quite hard and you talk about machine learning this is a huge opportunity for machine learning because many of these algorithms are full of these like hokey hand-rolled heuristics which work well on specific benchmarks but don't generalize and are full of magic numbers and you know I hear there are some techniques that are good at handling that so what would be the if you were to apply machine learning to this what's the thing you try to optimize is it ultimately the running time you can pick your metric there's running time there's memory use there's lots of different things that you can optimize for code size is another one that some people care about in the embedded space is this like thinking into the future or has somebody actually been crazy enough to try machine
learning based parameter tuning for optimization of compilers so this is something that is I would say research right now there are a lot of research systems that have been applying search in various forums and using reinforcement learning is one form but also brute force search has been tried for a quite a while and usually these are in small small problem spaces so find the optimal way to code generate a matrix multiply for a GPU write something like that where we say there there's a lot of design space of do you unroll uppsala do you execute multiple things in parallel and there's many different confounding factors here because graphics cards have different numbers of threads and registers and execution ports and memory bandwidth and many different constraints to interact in nonlinear ways and so search is very powerful for that and it gets used in in certain ways but it's not very structured this is something that we need we as an industry need to fix these set ATS but like so have there been like big jumps and improvement and optimization yeah yeah yes since then what's yeah so so it's largely been driven by hardware so hartwell hardware and software so in the mid-90s Java totally changed the world right and and I'm still amazed by how much change was introduced by the way or in a good way so like reflecting back Java introduced things like it all at once introduced things like JIT compilation none of these were novel but it pulled it together and made it mainstream and and made people invest in it JIT compilation garbage collection portable code safe code say like memory safe code like a very dynamic dispatch execution model like many of these things which had been done in research systems and had been done in small ways and various places really came to the forefront really changed how things worked and therefore changed the way people thought about the problem javascript was another major world change based on the way it works but also on the hardware side 
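The kind of brute-force search over code-generation parameters described a moment ago can be sketched in a few lines. This is a toy, not any real autotuner: the tile/unroll parameters, the register budget, and the analytical cost model are all invented, where a real system would time generated kernels on the actual hardware.

```python
import itertools

def cost(tile_m, tile_n, unroll):
    """Hypothetical analytical cost model for a generated matmul kernel.
    A real autotuner would compile the candidate and time it on hardware."""
    registers = tile_m * tile_n * unroll
    if registers > 256:          # pretend register-budget constraint
        return float("inf")      # configuration doesn't fit: reject it
    # Fake trade-off: bigger tiles amortize better, unrolling costs code size.
    return 1.0 / registers + 0.01 * unroll

def autotune(search_space):
    """Exhaustively search the configuration space for the cheapest point."""
    return min(search_space, key=lambda params: cost(*params))

space = list(itertools.product([4, 8, 16], [4, 8, 16], [1, 2, 4]))
print(autotune(space))  # → (16, 16, 1)
```

Swapping `cost` for a function that actually builds and times a candidate kernel turns this same loop into the brute-force baseline those research systems start from; reinforcement learning replaces the exhaustive `min` when the space gets too big.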
Multi-core and vector instructions really changed the problem space, and they don't remove any of the problems that compilers faced in the past, but they add new kinds of problems: how do you find enough work to keep a four-wide vector busy, right? Or, if you're doing a matrix multiplication, how do you do different columns out of that matrix at the same time, and how do you maximally utilize the arithmetic compute that one core has, and then how do you take it to multiple cores? And how did the whole virtual machine thing change the compilation pipeline? Yeah, so what the Java virtual machine does is it splits, just like I've talked about before, where you have a front-end that parses the code, and then you have an intermediate representation that gets transformed. What Java did was they said, we will parse the code and then compile to what's known as Java bytecode, and that bytecode is now a portable code representation that is industry-standard and locked down and can't change. And then the back part of the compiler, the part that does optimization and code generation, can now be built by different vendors, okay? And Java bytecode can be shipped around across the wire, it's memory-safe and relatively trusted, and because of that it can run in the browser. And that's why it runs in the browser, yeah, right. And so that way, you know, again, back in the day, you would write a Java applet, and as a web developer you'd build this mini app that would run on a web page. Well, a user of that is running a web browser on their computer; you download that Java bytecode, which can be trusted, and then you do all the compiler stuff on your machine, so that you know you trust that. Was that a good idea or a bad idea? It's a great idea, I mean, it's a great idea for certain problems, and I very much believe the technology itself is neither good nor bad, it's how you apply it. You know, this would be a very, very bad thing for the very low levels of the software stack.
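The split being described, a front-end that emits a portable, locked-down bytecode and a separately built back half that executes it, can be sketched with a toy stack machine. The expression format and opcode names here are invented for illustration; this is not real Java bytecode.

```python
# Front-end: compile a tiny expression language to portable "bytecode".
# Back-end: a separate interpreter consumes it, the way a JVM vendor's
# back half consumes standardized Java bytecode.

def compile_expr(expr):
    """Compile a nested tuple AST like ("+", 1, ("*", 2, 3)) to bytecode."""
    if isinstance(expr, int):
        return [("PUSH", expr)]
    op, lhs, rhs = expr
    return compile_expr(lhs) + compile_expr(rhs) + [("BINOP", op)]

def run(bytecode):
    """A minimal stack-machine back-end executing the portable format."""
    stack = []
    for opcode, arg in bytecode:
        if opcode == "PUSH":
            stack.append(arg)
        elif opcode == "BINOP":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if arg == "+" else a * b)
    return stack.pop()

program = compile_expr(("+", 1, ("*", 2, 3)))
print(run(program))  # → 7
```

The point of the exercise is that `run` could be replaced by a completely different vendor's implementation, an interpreter, a JIT, without touching `compile_expr`; that seam is exactly what the JVM standardized.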
But in terms of solving some of these software portability and transparency problems, I think it's been really good. Now, Java ultimately didn't win out on the desktop, and there are good reasons for that, but it's been very successful on servers, and in many places it's been a very successful thing over decades. So what have been LLVM's and Clang's improvements in optimization throughout their history? What are some moments where you had setbacks, or are really proud of what's been accomplished? Yeah, I think the interesting thing about LLVM is not the innovations in compiler research. It has very good implementations of various important algorithms, no doubt, and a lot of really smart people have worked on it, but I think the thing that's most profound about LLVM is that, through standardization, it made things possible that otherwise wouldn't have happened, okay? And so interesting things have happened with LLVM. For example, Sony has picked up LLVM and used it to do all the graphics compilation in their movie production pipeline, and so now they're able to have better special effects because of LLVM. That's kind of cool. That's not what it was designed for, right? But that's the sign of good infrastructure, when it can be used in ways it was never designed for, because it has good layering and software engineering and it's composable, and things like that. Which is, as you said, where it differs from GCC. Yes, GCC is also great in various ways, but it's not as good as an infrastructure technology. It's, you know, really a C compiler, or a Fortran compiler. It's not infrastructure in the same way. Now, you can tell I don't know what I'm talking about, because I keep mispronouncing Clang. You can always tell how close a person is to something by the way they pronounce it. I don't think, have I ever used Clang? Entirely possible. Have you? Well, you've used code it's generated, probably. So Clang and LLVM are used to compile all the apps on the iPhone, effectively, and the OS. It compiles Google's production server applications. It's used to build GameCube games and PlayStation 4, and things like that. So as a user I have, but everything I've experienced with Linux has been, I believe, always GCC. Yeah, I think Linux still defaults to GCC. And is there a reason for that, is there a big... It's a combination of technical and social reasons. Many Linux developers do use Clang, but the distributions, for lots of reasons, use GCC historically, and they've not switched, yeah. And anecdotally online, it seems that LLVM has either reached the level of GCC or superseded it on different features or whatever. The way I would say it is that they're so close it doesn't matter. Yeah, exactly. Like, one's slightly better in some ways, slightly worse in others, but it doesn't actually really matter anymore, at that level. So in terms of optimization breakthroughs, it's just been solid incremental work. Yeah, yeah, which describes a lot of compilers. The hard thing about compilers, in my experience, is the engineering, the software engineering: making it so that you can have hundreds of people collaborating on really detailed, low-level work, and scaling that. And that's really hard, and that's one of the things I think LLVM has done well, and that kind of goes back to its original design goals, to be modular and things like that. And incidentally, I don't want to take all the credit for this, right? I mean, some of the best parts about LLVM are that it was designed to be modular, and when I started, I would write, for example, a register allocator, and then somebody much smarter than me would come in and pull it out and replace it with something else that they would come up with, and because it's modular, they were able to do that. And that's one of the challenges with GCC, for example: replacing subsystems is incredibly difficult. It can be done, but it wasn't designed for that, and that's one of the reasons LLVM has been very successful in the research world as well. But in the community sense, Guido van Rossum, right, from Python, just retired from, what is it, Benevolent Dictator for Life, right? So in managing this community of brilliant compiler folks, did it, at least for a time, fall to you to approve things? Oh yeah, so I mean, I still have something like an order of magnitude more patches in LLVM than anybody else, and many of those I wrote myself. But you're still, I don't know what the expression is, close to the metal, you still write code? Yes, not as much as I was able to in grad school, but that's an important part of my identity. But the way LLVM has worked over time is that when I was a grad student, I could do all the work and steer everything and review every patch and make sure everything was done exactly the way my opinionated sense felt like it should be done, and that was fine. But as it scales, you can't do that, right? And so what ends up happening is LLVM has a hierarchical system of what's called code owners. These code owners are given the responsibility not to do all the work, not necessarily to review all the patches, but to make sure the patches do get reviewed, and to make sure the right things are happening architecturally in their area. And so what you'll see is that, for example, hardware manufacturers end up owning the hardware-specific parts of their hardware; that's very common. Leaders in the community that have done really good work naturally become the de facto owner of something, and then usually somebody else is like, how about we make them the official code owner, and then we'll have somebody to make sure the whole patch flow does get reviewed in a timely manner, and then everybody's like, yes, that's obvious, and then it happens, right? And usually this is a very organic thing, which is great.
And so I'm nominally the top of that stack still, but I don't spend a lot of time reviewing patches. What I do is help negotiate a lot of the technical disagreements that end up happening, and make sure that the community as a whole makes progress and is moving in the right direction. We also started a nonprofit six years ago, seven years ago, time's gotten away, and the nonprofit, the LLVM Foundation, helps oversee all the business sides of things and makes sure that the events the LLVM community has are funded and set up and run correctly, and stuff like that. But the foundation very much stays out of the technical side of where the project is going. Right, it sounds like a lot of it is just organic. Yeah, well, and LLVM is almost 20 years old, which is hard to believe. Somebody pointed out to me recently that LLVM is now older than GCC was when LLVM started, right? So time has a way of getting away from you. But the good thing about that is it has a really robust, really amazing community of people who, in their professional lives, are spread across lots of different companies, but it's a community of people that are interested in similar kinds of problems and have been working together effectively for years, and have a lot of trust and respect for each other, and even if they don't always agree, you know, we'll find a path forward. So then, in a slightly different flavor of effort, you started at Apple in 2005 with the task of making, I guess, LLVM production-ready, and then eventually, 2013 through 2017, leading the entire developer tools department. We're talking about LLVM, Xcode, Objective-C, to Swift. So, in a quick overview of your time there, what were the challenges, first of all, of leading such a huge group of developers? And what was the big motivator, dream, mission behind creating Swift, the early birth of it from Objective-C, and so on, and Xcode? Well, yeah, so these are different questions. Yeah, I know. I'll stay on the technical side, then we can talk about the big team pieces. Yeah, that's okay. Sure. So, to really oversimplify many years of hard work: LLVM started, I joined Apple, it became a thing, it became successful and got deployed. But then there was a question about how we actually parse the source code. So LLVM is that back part, the optimizer and the code generator, and LLVM was really good for Apple as it went through a couple of hardware transitions. I joined right at the time of the Intel transition, for example, then the 64-bit transitions, and then the transition to ARM with the iPhone. And so LLVM was very useful for some of these kinds of things. But at the same time, there were a lot of questions around developer experience. And so if you were a programmer pounding out Objective-C code at the time, the error messages you got, the compile time, the turnaround cycle, the tooling and the IDE were not great, not as good as they could be. And so, you know, as I occasionally do, I'm like, well, okay, how hard is it to write a C compiler? And so I'm not going to commit to anybody, I'm not going to tell anybody, I'm just going to do it on nights and weekends and start working on it. And then, you know, I built up, well, in C there's a thing called the preprocessor, which people don't like, but it's actually really hard and complicated, and it includes a bunch of really weird things like trigraphs and other stuff like that that are really nasty, and it's the crux of a bunch of the performance issues in the compiler. And I started working on the parser, and kind of got to the point where I'm like, ah, you know what, we could actually do this. Everybody's saying that this is impossible to do, but it's actually just hard, it's not impossible. And eventually I told my manager about it, and he's like, oh wow, this is great, we do need to solve this problem, this is great, we can get you one other person to work with you on this, you know. And slowly a team formed, and it started taking off. And C++, for example, is a huge, complicated language. People always assume that it's impossible to implement, and it's very nearly impossible, but it's just really, really hard, and the way to get there is to build it one piece at a time, incrementally. And that was only possible because we were lucky to hire some really exceptional engineers that knew various parts of it very well and could do great things. Swift was kind of a similar thing. So Swift came from, we were just finishing off the first version of C++ support in Clang, and C++ is a very formidable and very important language, but it's also ugly in lots of ways, and you can't work on C++ without thinking there has to be a better thing, right? And so I started working on Swift, again with no hope or ambition that it would go anywhere, just, let's see what could be done, let's play around with this thing. It was, you know, me in my spare time, not telling anybody about it, kind of a thing. And it made some good progress. I'm like, actually, it would make sense to do this. At the same time, I started talking with the senior VP of software at the time, a guy named Bertrand Serlet, and Bertrand was very encouraging. He was like, well, you know, let's have fun, let's talk about this. And he was a little bit of a language guy, and so he helped guide some of the early work and encouraged me, and got things off the ground. And eventually I told my manager and told other people, and it started making progress. The complicating thing with Swift was that the idea of doing a new language was not obvious to anybody, including myself, and the tone at the time was that the iPhone was successful because of Objective-C. Oh, interesting. Not in spite of, or... In spite of or because of. And you have to understand that, at the time, Apple was hiring software people that loved Objective-C, right? It wasn't that they came despite Objective-C; they loved Objective-C, and that's why they got hired. And so you had a software team where the leadership, in many cases, went all the way back to NeXT, where Objective-C really became real, and so they quote-unquote grew up writing Objective-C, and many of the individual engineers were all hired because they loved Objective-C. And so this notion of, okay, let's do a new language, was kind of heretical in many ways, right? Meanwhile, my sense was that the outside community wasn't really in love with Objective-C. Some people were, and some of the most outspoken people were, but other people were hitting challenges, because it has very sharp corners and it's difficult to learn. And so one of the challenges of making Swift happen that was totally non-technical was the social part of, what do we do? Like, if we do a new language, which, at Apple, many things happen that don't ship, right? So if we ship it, what is the metric of success? Why would we do this? Why wouldn't we make Objective-C better? If Objective-C has problems, let's file off those rough corners and edges. And one of the major things that became the reason to do this was this notion of safety, memory safety. The way Objective-C works is that a lot of the object system and everything else is built on top of pointers in C. Objective-C is an extension on top of C, and so pointers are unsafe, and if you get rid of the pointers, it's not Objective-C anymore. And so, fundamentally, that was an issue: you could not fix safety, or memory safety, without fundamentally changing the language. And so once we got through that part of the mental process and the thought process, it became a design process of saying, okay, well, if we're going to do something new, what is good? Like, how do we think about this, and what do we like, and what are we looking for? And that was a very different phase of it. So what were some design choices early on in Swift? Like, we're talking about braces, are you making a typed language or not, all those kinds of things. Yeah, so some of those were obvious given the context. A typed language, for example: Objective-C is a typed language, and going with an untyped language wasn't really seriously considered. We wanted the performance, and we wanted refactoring tools and other things like that that go with typed languages. Quick, dumb question, I think it would be a dumb question, but was it obvious that the language had to be a compiled language, not... Yes, and that's not a dumb question. Earlier, I think in the late 90s, Apple seriously considered moving its development experience to Java. But Swift started in 2010, which was several years after the iPhone; it was when the iPhone was definitely on an upward trajectory, and the iPhone was still extremely, and is still a bit, memory constrained, right? And so being able to compile the code and then ship it, and have standalone code that is not JIT compiled, is a very big deal, and is very much part of the Apple value system. Now, JavaScript is also a thing, right? I mean, it's not that this is exclusive, and technologies are good depending on how they're applied, right? But in the design of Swift, saying, how can we make Objective-C better, well, Objective-C is statically compiled, and that was the contiguous, natural thing to do. Just to skip ahead a little bit, and we'll go right back, just as a question: as you think about today, in 2019, in your work at Google, TensorFlow and so on, is static compilation still the right thing? Yes. So the funny thing after working on compilers for a really long time is, and this is one of the things that LLVM has helped with, that I don't look at compilation as being static or dynamic or interpreted or not. This is a spectrum, okay? And one of the cool things about Swift is that Swift is not just statically compiled, it's actually dynamically compiled as well, and it can also be interpreted, though nobody's actually done that. And so what ends up happening when you use Swift in a workbook, for example in Colab or Jupyter, is that it's actually dynamically compiling the statements as you execute them. And this gets back to the software engineering problems, right, where if you layer the stack properly, you can actually completely change how and when things get compiled, because you have the right abstractions there. And so the way that a Colab workbook works with Swift is that when you start typing into it, it creates a process, a Unix process, and then each line of code you type in, it compiles through the Swift compiler, the front-end part, and then sends it through the optimizer, JIT compiles machine code, and then injects it into that process. And so as you're typing new stuff, it's, like, squirting in new code and overwriting and replacing and updating code in place. And the fact that it can do this is not an accident: Swift was designed for this. But it's an important part of how the language was set up and how it's layered, and this is a non-obvious piece. And one of the things with Swift that was, for me, a very strong design point was to make it so that you can learn it very quickly. And so, from a language design perspective, the thing that I always come back to is this UI principle of progressive disclosure of complexity. And so in Swift, you can start by saying print("Hello world"), right? And there's no \n, just like Python, one line of code. No main, no... No header files, no public static class void blah blah blah String, like Java has, right? One line of code, right? And you can teach that, and it works great. Then you can say, well, let's introduce variables. So you can declare a variable with var, so var x = 4. What is a variable? You can use x, x + 1, this is what it means. Then you can say, well, how about control flow? Well, this is what an if statement is, this is what a for statement is, this is what a while statement is. Then you can say, let's introduce functions.
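The workbook flow described above, compile each statement as it arrives and inject it into one live process, can be imitated with Python's own `compile` and `exec` standing in for the Swift front-end, JIT, and code injection. The `feed` helper and the shared `namespace` dict are invented for illustration.

```python
# One live namespace plays the role of the running Unix process that the
# workbook keeps injecting freshly compiled code into.
namespace = {}

def feed(statement):
    """Compile one statement on the fly and execute it in the shared state."""
    code = compile(statement, "<workbook>", "exec")
    exec(code, namespace)

feed("x = 4")
feed("x = x + 1")                        # updates live state in place
feed("def double(n): return n * 2")      # injects a new function
feed("y = double(x)")
print(namespace["y"])  # → 10
```

Python of course interprets its bytecode rather than JIT-compiling machine code, but the layering point survives: because "compile one unit" is a first-class operation, when things get compiled becomes a policy choice, which is the spectrum being described.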
And many languages, like Python, have had this kind of notion of, let's introduce small things, and then you can add complexity. Then you can introduce classes, and then you can add generics, in the case of Swift, and then you can add modules, and build out in terms of the things that you're expressing. But this is not very typical for compiled languages. And so this was a very strong design point, and one of the reasons that Swift, in general, is designed with this factoring of complexity in mind, so that the language can express powerful things, you can write firmware in Swift if you want to, but it has a very high-level feel, which is really this perfect blend. Because often you have very advanced library writers that want to be able to use the nitty-gritty details, but then other people just want to use the libraries and work at a higher abstraction level. It's kind of cool that, I saw that you can just, interoperability, I don't think I can pronounce that word, but you can just drag in Python. I saw this in the demo. How do you make that happen? Is it as easy as it looks, or is it... Yes, that's not stage magic or a hack or anything like that. I don't mean from the user perspective, I mean from the implementation perspective, to make it happen. So it's easy once all the pieces are in place. The way it works, so if you think about a dynamically typed language like Python, right, you can think about it in two different ways. You can say it has no types, right, which is what most people would say, or you can say it has one type, right? You could say it has one type, and it's, like, the Python object. And the Python object gets passed around, and because there's only one type, it's implicit, okay. And so what happens with Swift and Python talking to each other? Swift has lots of types, right? It has arrays, and it has strings, and all the classes, and that kind of stuff. But it now also has a Python object type.
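That "one implicit type" model can be mimicked in pure Python: a single proxy type whose member lookups and calls are resolved at runtime, loosely analogous to what Swift's PythonObject does with its dynamic-lookup language features. `DynObject` is an invented name for this sketch, not the real implementation.

```python
import math

class DynObject:
    """Every value is wrapped in one dynamic type; members and calls are
    resolved at runtime by asking the wrapped object, not checked statically."""

    def __init__(self, value):
        self._value = value

    def __getattr__(self, name):
        # "Hey object, I have no idea what you are: give me your member."
        return DynObject(getattr(self._value, name))

    def __call__(self, *args):
        # Unwrap any proxied arguments, make the dynamic call, rewrap.
        unwrapped = [a._value if isinstance(a, DynObject) else a for a in args]
        return DynObject(self._value(*unwrapped))

m = DynObject(math)        # a module, a function, a float: all one type
result = m.sqrt(16.0)      # dynamic member lookup, then a dynamic call
print(result._value)       # → 4.0
```

In Swift the equivalent hooks are the dynamic-member-lookup and dynamic-callable language features mentioned below, which is why the interop layer can stay so small: it just forwards every lookup and call to the Python interpreter.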
So there is one Python object type, and so when you say import numpy, what you get is a Python object, which is the NumPy module. Then you say np.array. It says, okay, hey, Python object, I have no idea what you are: give me your array member, right? Okay, cool. It just uses dynamic stuff, talks to the Python interpreter, and says, hey, Python, what's the array member in that Python object? It gives you back another Python object. And now you say parentheses for the call, and the arguments you're going to pass, and so then it says, hey, Python object that is the result of np.array, call with these arguments, right? Again, calling into the Python interpreter to do that work. And so right now this is all really simple, and if you dive into the code, what you'll see is that the Python module in Swift is something like twelve hundred lines of code or something. It's written in pure Swift, it's super simple, and it's built on top of the C interoperability, because it just talks to the Python interpreter. But making that possible required us to add two major language features to Swift, to be able to express these dynamic calls and the dynamic member lookups. And so what we've done over the last year is we've proposed, implemented, standardized, and contributed new language features to the Swift language in order to make it so this is really trivial, right? And this is one of the things about Swift that is critical to the Swift for TensorFlow work, which is that we can actually add new language features, and the bar for adding those is high, but it's what makes it possible. So, you know, Google is doing incredible work on several things, including TensorFlow. TensorFlow 2.0, or whatever leading up to 2.0, has, by default in 2.0, eager execution, and yet, in order to make code optimized for GPU or TPU or some of these systems, computation needs to be converted to a graph. So what's that process like? What are the challenges there? Yeah, so I'm tangentially involved in this, but the way that it works with autograph is that you mark your function with a decorator, and when Python calls it, that decorator is invoked, and then it says, before I call this function, you can transform it. And so the way autograph works is, as far as I understand, it actually uses the Python parser to go parse that, turn it into a syntax tree, and then apply compiler techniques to, again, transform this down into TensorFlow graphs. And so you can think of it as saying, hey, I have an if statement, I'm going to create an if node in the graph, like you say tf.cond. You have a multiply, well, I'll turn that into a multiply node in the graph. And it becomes that tree transformation. So where does Swift for TensorFlow come in? Which, you know, parallels, for one, Swift is an interface, like Python is an interface, to TensorFlow, but it seems like there's a lot more going on there than just a different language interface. There's an optimization methodology. Yeah, so the TensorFlow world has a couple of different, what I'd call, front-end technologies. And so Swift and Python and Go and Rust and Julia and all these things share the TensorFlow graphs and all the runtime and everything that's lower down. And so Swift for TensorFlow is merely another front-end for TensorFlow, just like any of these other systems are. There's a major difference between, I would say, three camps of technologies here. There's Python, which is a special case, because the vast majority of the community effort goes into the Python interface, and Python has its own approaches for automatic differentiation, it has its own APIs, and all this kind of stuff. There's Swift, which I'll talk about in a second. And then there's kind of everything else. And the everything else are effectively language bindings, so they call into the TensorFlow runtime, but they usually don't have automatic differentiation, or they usually don't provide anything other than APIs that call the C APIs in TensorFlow, and so they're kind of wrappers for that.
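The autograph flow just described, parse the Python function, walk its syntax tree, and lower control flow into graph nodes, can be sketched with the standard `ast` module. This toy only reports which ops the control flow would lower to; a real system rewrites the tree and recompiles it, and the lowering table here is invented for illustration.

```python
import ast

SRC = """
def model(x):
    if x > 0:
        for _ in range(3):
            x = x * 2
    return x
"""

def graph_ops(source):
    """Parse the source with Python's own parser, walk the syntax tree, and
    record which graph ops each piece of control flow would lower to
    (if -> cond, loops -> while_loop), in the spirit of autograph."""
    lowering = {ast.If: "cond", ast.While: "while_loop", ast.For: "while_loop"}
    return [lowering[type(node)]
            for node in ast.walk(ast.parse(source))
            if type(node) in lowering]

print(graph_ops(SRC))  # → ['cond', 'while_loop']
```

The real transformation then has to rebuild the function body so each branch becomes a callable passed to the graph op, which is where the hard compiler work is; this sketch stops at the analysis step.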
Swift is really kind of special, and it's a very different approach. Swift for TensorFlow is a very different approach, because there we're saying, let's look at all the problems that need to be solved in the fullness of the TensorFlow compilation process, if you think about it that way. Because TensorFlow is fundamentally a compiler: it takes models and then it makes them go faster on hardware. That's what a compiler does. And it has a front-end, it has an optimizer, and it has many back-ends. And so if you think about it the right way, or if you look at it in a particular way, like, it is a compiler, okay. And so Swift is merely another front-end, but the design principle is saying, let's look at all the problems that we face as machine learning practitioners, and what is the best possible way we can address those, given the fact that we can change literally anything in this entire stack. And Python, for example, where the vast majority of the engineering and effort has gone into, is constrained by being the best possible thing you can do with a Python library. Like, there are no Python language features that were added because of machine learning that I'm aware of. They added a matrix multiplication operator with @, but that's as close as you get. And so with Swift, it's hard, but you can add language features to the language, and there's a community process for that. And so we look at these things and say, well, what is the right division of labor between the human programmer and the compiler? And Swift has a number of things that shift that balance. So, because it has a type system, for example, that makes certain things possible for analysis of the code, and the compiler can automatically build graphs for you without you thinking about them. Like, that's a big deal for a programmer: you just get free performance, you get clustering and fusion and optimization and things like that, without you as a programmer having to manually do it, because the compiler can do it for you.
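The "build graphs for you" idea can be approximated in plain Python with operator overloading: arithmetic on a wrapped value records graph nodes instead of computing immediately, and the graph is evaluated, or optimized, later. Note this is runtime tracing, the dynamic cousin of the static, type-system-driven analysis described for Swift, and all names here are invented.

```python
class Node:
    """A graph node: an op name, input nodes, and (for constants) a value."""

    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

    def __add__(self, other):
        return Node("add", (self, other))   # record, don't compute

    def __mul__(self, other):
        return Node("mul", (self, other))

def evaluate(node):
    """A trivial back-end: walk the recorded graph and compute it."""
    if node.op == "const":
        return node.value
    a, b = (evaluate(i) for i in node.inputs)
    return a + b if node.op == "add" else a * b

x = Node("const", value=3.0)
y = Node("const", value=4.0)
graph = x * y + x          # no arithmetic happened yet, only graph building
print(graph.op)            # → add
print(evaluate(graph))     # → 15.0
```

An optimizer could rewrite `graph` (fuse ops, cluster subgraphs) before evaluation, which is exactly the free performance being described; the Swift approach gets the same graph from static analysis, so it also covers control flow that tracing alone would miss.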
Automatic differentiation is another big deal, and I think one of the key contributions of the Swift for TensorFlow project is that there's this entire body of work on automatic differentiation that dates back to the Fortran days. People doing a tremendous amount of numerical computing in Fortran used to write these what-they-call source-to-source translators, where you take a bunch of code, shove it into a mini compiler, and it pushes out more Fortran code, but it generates the backwards passes for your functions for you, the derivatives. And so in that world, a tremendous number of optimizations, a tremendous number of techniques for fixing numerical instability and other kinds of problems, were developed. But they're very difficult to port into a world where, in eager execution, you get an op by op at a time: you need to be able to look at an entire function and be able to reason about what's going on. And so when you have a language-integrated automatic differentiation, which is one of the things that the Swift project is focusing on, you can open up all these techniques and reuse them in familiar ways. But the language integration piece has a bunch of design room in it, and it's also complicated. The other piece of the puzzle here that's kind of interesting is TPUs at Google. Yes. So, you know, we're in a new world with deep learning, it's constantly changing, and I imagine, without disclosing anything, I imagine, you know, you're still innovating on the TPU front, too. Indeed. So how much interplay is there between software and hardware in trying to figure out how to together move towards an optimal solution? There's an incredible amount. So our third generation of TPUs, which are now 100 petaflops in a very large liquid-cooled box, a virtual box with no cover, and as you might imagine, we're not out of ideas yet. The great thing about TPUs is that they're a perfect example of hardware/software co-design, and so it's about saying, what hardware do we build to solve certain classes of machine learning problems? Well, the algorithms are changing! Like, the hardware takes, you know, in some cases years to produce, right? And so you have to make bets and decide what is going to happen, and what is the best way to spend the transistors to get the maximum, you know, performance per watt, or area per cost, or, like, whatever it is that you're optimizing for. And so one of the amazing things about TPUs is this numeric format called bfloat16. bfloat16 is a compressed 16-bit floating-point format, but it puts the bits in different places: in numeric terms, it has a smaller mantissa and a larger exponent. That means that it's less precise, but it can represent larger ranges of values, which in the machine learning context is really important and useful, because sometimes you have very small gradients you want to accumulate, and very, very small numbers that are important to move things as you're learning, but sometimes you have very large magnitude numbers as well. And bfloat16 is not as precise, the mantissa is small, but it turns out the machine learning algorithms actually want to generalize, and so there are, you know, theories that this actually increases the ability for the network to generalize across data sets. And regardless of whether it's good or bad, it's much cheaper at the hardware level to implement, because the area and time of a multiplier is n squared in the number of bits in the mantissa, but it's linear with the size of the exponent. So there are big-deal efforts here, both on the hardware and the software side? Yeah, and so that was a breakthrough coming from the research side, from people who were originally working on optimizing network transport of weights across a network, and trying to find ways to compress that, but then it got burned into silicon, and it's a key part of what makes TPU performance so amazing and great. TPUs have many different aspects that are important.
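The bfloat16 layout just described, the same 8-bit exponent as float32 but only 7 mantissa bits, can be demonstrated by truncating a float32's bit pattern to its top 16 bits. This sketch rounds toward zero for brevity; real hardware conversions typically round to nearest.

```python
import struct

def to_bfloat16(x):
    """Simulate bfloat16 by keeping only the top 16 bits of a float32:
    the sign, all 8 exponent bits, and just 7 mantissa bits survive, so
    float32's range is preserved while precision drops."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]   # float32 bit pattern
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

print(to_bfloat16(1.0))                 # exactly representable → 1.0
print(to_bfloat16(3.141592653589793))   # → 3.140625 (precision lost)
print(to_bfloat16(1e38))                # huge magnitudes stay finite
```

Compare with float16, whose 5-bit exponent would overflow to infinity well before 1e38; keeping the full exponent range while shrinking the mantissa is exactly the trade the transcript describes, and it is why the multiplier gets so much cheaper in silicon.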
but the co-design between the low-level compiler bits and the software bits and the algorithms is all super important, and it's this amazing trifecta that only Google can do. yeah, that's super exciting. so can you tell me about the MLIR project, previously the secretive one? yeah, so MLIR is a project that we announced at a compiler conference three weeks ago or something, at the Compilers for Machine Learning conference. basically, again, if you look at TensorFlow as a compiler stack, it has a number of compiler algorithms within it. it also has a number of compilers that get embedded into it, and they're made by different vendors. for example, Google has XLA, which is a great compiler system, NVIDIA has TensorRT, Intel has nGraph. there's a number of these different compiler systems, and they're very hardware-specific, and they're trying to solve different parts of the problems, but they're all kind of similar in the sense that they want to integrate with TensorFlow. now, TensorFlow has an optimizer, and it has these different code generation technologies built in. the idea of MLIR is to build a common infrastructure to support all these different subsystems, and initially it's to be able to make it so that they all plug in together, and they can share a lot more code and it can be reusable. but over time, we hope that the industry will start collaborating and sharing code, and instead of reinventing the same things over and over again, we can actually foster some of that working together to solve common problems, energy that has been useful in the compiler field before. beyond that, MLIR is, some people have joked, kind of LLVM 2. it learns a lot about what LLVM has been good at and what LLVM has done wrong, and it's a chance to fix that. and also there are challenges in the LLVM ecosystem as well, where LLVM is very good at the thing it was designed to do, but, you know, 20 years later the world has changed, and people are trying to solve higher-level problems, and we need some new technology.
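the "common infrastructure" idea, several vendor compilers plugging into one shared representation so that optimization passes can be written once, can be caricatured in a few lines. this is a toy sketch of the concept only; none of these class or function names come from MLIR itself.

```python
from dataclasses import dataclass, field

# A shared IR: a module is just a list of ops.
@dataclass
class Op:
    name: str
    operands: list

@dataclass
class Module:
    ops: list = field(default_factory=list)

def fuse_mul_add(module: Module) -> Module:
    """A shared pass: rewrite a mul followed by an add into a fused
    multiply-add. Every backend benefits without reimplementing it."""
    out, i = [], 0
    while i < len(module.ops):
        a = module.ops[i]
        b = module.ops[i + 1] if i + 1 < len(module.ops) else None
        if b and a.name == "mul" and b.name == "add":
            out.append(Op("fma", a.operands + b.operands[1:]))
            i += 2
        else:
            out.append(a)
            i += 1
    return Module(out)

class Backend:
    def lower(self, module: Module) -> list:
        # Each vendor supplies only its own lowering; passes are shared.
        return [f"{self.target}:{op.name}" for op in module.ops]

class GPUBackend(Backend):
    target = "gpu"

class TPUBackend(Backend):
    target = "tpu"

m = Module([Op("mul", ["x", "y"]), Op("add", ["t", "z"])])
m = fuse_mul_add(m)
print(GPUBackend().lower(m))  # prints: ['gpu:fma']
print(TPUBackend().lower(m))  # prints: ['tpu:fma']
```

the contrast with the status quo Chris describes is that today XLA, TensorRT, and nGraph each carry their own versions of such rewrites; a common infrastructure lets them share the pass and supply only the hardware-specific lowering.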
and what's the future of open source in this context? very soon. so it is not yet open source, but it will be, hopefully. you still believe in the value of open source in this context? oh yeah, absolutely, and I think the TensorFlow community at large fully believes in open source. so I mean, there is a difference between Apple, where you were previously, and Google, now, in spirit and culture, and I would say the open-sourcing of TensorFlow was a seminal moment in the history of software, because here's this large company releasing a very large code base, open-sourcing it. what are your thoughts on that? how happy or not were you to see that kind of degree of open sourcing? so between the two, I prefer the Google approach, if that's what you're saying. the Apple approach makes sense given the historical context that Apple came from, but that was 35 years ago, and I think Apple is definitely adapting. and the way I look at it is that there are different kinds of concerns in the space, right. it is very rational for a business to care about making money. that fundamentally is what a business is about, right. but I think it's also incredibly realistic to say it's not your string library that's the thing that's going to make you money. it's going to be the amazing UI, product differentiating features, and other things like that that you build on top of your string library, and so keeping your string library proprietary and secret and things like that maybe isn't the important thing anymore, right. before, platforms were different, right, and even 15 years ago things were a little bit different, but the world is changing. so Google strikes a very good balance, I think, and I think TensorFlow being open source really changed the entire machine learning field and caused a revolution in its own right, and so I think it's amazingly forward-looking, because I could have imagined, and I wasn't at Google at the time, but I could imagine a different
context, a different world, where a company says, machine learning is critical to what we're doing, we're not going to give it to other people, right. and so that decision is a profoundly brilliant insight that I think has really led to the world being better, and better for Google as well, and has all kinds of ripple effects. I think it is really, I mean, you can't understate how profound that is for software, it's awesome. well, and again, I can understand the concern about, if we release our machine learning software, our competitors could go faster. but on the other hand, I think that open-sourcing TensorFlow has been fantastic for Google, and I'm sure that decision was very non-obvious at the time, but I think it's worked out very well. so let's talk about this real quick. you were at Tesla for five months as the VP of Autopilot software. you led the team during the transition from hardware 1 to hardware 2. I have a couple questions. so one, first of all, to me that's one of the bravest engineering decisions, undertakings really, ever in the automotive industry, to me, software-wise, starting from scratch. it's a really brave decision. so my one question is, what was the challenge of that? do you mean the career decision of jumping from a comfortable good job into the unknown? or that combined. so at the individual level, you making that decision, and then when you show up, you know, it's a really hard engineering process. so you could just stay, maybe slow down, stay with hardware 1, or those kinds of decisions, versus just taking it full on, let's do this from scratch. what was that like? well, so I mean, I don't think Tesla has a culture of taking things slow and seeing how it goes, and one of the things that attracted me about Tesla is it's very much a gung-ho, let's change the world, let's figure it out kind of a place, and so I have a huge amount of respect for that. Tesla has done
very smart things with hardware 1 in particular, and the hardware 1 design was originally designed to be very simple automation features in the car, for like traffic-aware cruise control and things like that, and the fact that they were able to effectively feature-creep it into lane holding and very useful driver assistance features is pretty astounding, particularly given the details of the hardware. hardware 2 built on that in a lot of ways, and the challenge there was that they were transitioning from a third-party-provided vision stack to an in-house-built vision stack, and so the first step, which I mostly helped with, was getting onto that new vision stack, and that was very challenging, and it was time-critical for various reasons, and it was a big leap, but it was fortunate that it built on a lot of the knowledge and expertise and the team that had built hardware 1's driver assistance features. so you spoke in a collected and kind way about your time at Tesla, but it was ultimately not a good fit. Elon Musk, we've talked about on this podcast with several guests, of course, he continues to do some of the most bold and innovative engineering work in the world, at times at the cost of some of the members of the Tesla team. what did you learn about working in this chaotic world with Elon? yeah, so I guess I would say that when I was at Tesla, I experienced and saw the highest degree of turnover I'd ever seen in a company, which was a bit of a shock. but one of the things I learned, and I came to respect, is that Elon is able to attract amazing talent because he has a very clear vision of the future, and he can get people to buy into it because they want that future to happen, right. and the power of vision is something that I have a tremendous amount of respect for, and I think that Elon is fairly singular in the world in terms of the things he's able to get people to believe in. there are many people who stand on the street corner and say, ah,
we're gonna go to Mars, right, but then there are a few people that can get others to buy into it and believe and build the path and make it happen. and so I respect that. I don't respect all of his methods, but I have a huge amount of respect for that. you've mentioned in a few places, including in this context, working hard. what does it mean to work hard? and when you look back at your life, what were some of the most brutal periods of having to really put everything you have into something? yeah, good question. so working hard can be defined a lot of different ways, so, a lot of hours, and that is true. the thing to me that's the hardest is both being short-term focused on delivering and executing and making a thing happen, while also thinking about the longer term and trying to balance that, right. because if you are myopically focused on solving a task and getting that done, and only think about that incremental next step, you will miss the next big hill you should jump over to, right. and so I've been really fortunate that I've been able to kind of oscillate between the two, and historically at Apple, for example, that was made possible because I was able to work with some really amazing people and build up teams and leadership structures and allow them to grow in their careers and take on responsibilities, thereby freeing up me to be a little bit crazy and thinking about the next thing. and so it's a lot of that, but it's also about, you know, with the experience you make connections that other people don't necessarily make, and so I think that's a big part as well. but the bedrock is just a lot of hours, and that's okay with me. there are different theories on work-life balance, and my theory for myself, which I do not project onto the team, but my theory for myself, is that I want to love what I'm doing and work really hard, and my purpose, I feel like, and my goal, is to change the world and make it a better place, and
that's what I'm really motivated to do. so, last question. the LLVM logo is a dragon. you explained that this is because dragons have connotations of power, speed, intelligence, and can also be sleek, elegant, and modular, though you remove the modular part. what is your favorite dragon-related character from fiction, video games, or movies? so those are all very kind ways of explaining it. do you want to know the real reason it's a dragon? yeah. so there is a seminal book on compiler design called the Dragon Book, and so this is a really old, now, book on compilers, and so the dragon logo for LLVM came about because, at Apple, we kept talking about LLVM-related technologies and there was no logo to put on a slide. it's sort of like, what do we do? and somebody's like, well, what kind of logo should a compiler technology have? and I'm like, I don't know. I mean, the dragon is the best thing that we've got, and Apple somehow magically came up with the logo, and it was a great thing and the whole community rallied around it, and then it got better as other graphic designers got involved, but that's originally where it came from, the story. is there a dragon from fiction that you connect with? Game of Thrones, Lord of the Rings, that kind of thing? Lord of the Rings is great. I also like role-playing games and things like computer role-playing games, and so dragons often show up in there, but it really comes back to the book. oh no, we need a thing, yeah. and hilariously, one of the funny things about LLVM is that my wife, who's amazing, runs the LLVM Foundation, and she goes to Grace Hopper and is trying to get more women involved in compilers. she's also a compiler engineer, so she's trying to get other women to get interested in compilers and things like this, and so she hands out the stickers, and people like the LLVM sticker because of Game of Thrones, and so sometimes culture has this whole effect, to like get the next
generation of compiler engineers engaged with the cause. okay, awesome. Chris, thanks so much for your time, great talking with you.
Oriol Vinyals: DeepMind AlphaStar, StarCraft, and Language | Lex Fridman Podcast #20
the following is a conversation with Oriol Vinyals. he's a senior research scientist at Google DeepMind, and before that he was at Google Brain and Berkeley. his research has been cited over 39,000 times. he's truly one of the most brilliant and impactful minds in the field of deep learning. he's behind some of the biggest papers and ideas in AI, including sequence-to-sequence learning, audio generation, image captioning, neural machine translation, and of course reinforcement learning. he's a lead researcher of the AlphaStar project, creating an agent that defeated a top professional at the game of StarCraft. this conversation is part of the artificial intelligence podcast. if you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D. and now, here's my conversation with Oriol Vinyals. you spearheaded the DeepMind team behind AlphaStar that recently beat a top professional player at StarCraft. so you have an incredible wealth of work in deep learning in a bunch of fields, but let's talk about StarCraft first. let's go back to the very beginning, even before AlphaStar, before DeepMind, before deep learning. what came first for you, a love for programming or a love for video games? I think for me it definitely came first, the drive to play video games. I really liked computers. I didn't really code much, but what I would do is I would just mess with the computer, break it and fix it. that was the level of skills, I guess, that I gained in my very early days, I mean, when I was 10 or 11. and then I really got into video games, especially StarCraft, actually the first version. I spent most of my time just playing kind of pseudo-professionally, as professionally as you could play back in 98 in Europe, which was not a very big scene like what's called eSports nowadays, right, of course, in the 90s. so how did you get into StarCraft? what was your favorite race? how did you develop your skill? what was your strategy? all that
kind of thing. so as a player, I tended to try to play not many games, not to kind of disclose the strategies that I had developed, and I liked to play random, actually, not in competitions, but just to... I think in StarCraft there are three main races, and I found it very useful to play with all of them, and so I would choose random many times, even sometimes in tournaments, to gain skill on the three races. because it's not just how you play against someone, but also, if you understand a race because you play it, you also understand what's annoying, and then, when you're on the other side, what to do to annoy that person, to try to gain advantages here and there, and so on. so I actually played random, although I must say, in terms of favorite race, I really liked Zerg. I was probably best at Zerg, and that's probably what I tended to use towards the end of my career, before starting university. so let's step back a little bit. could you try to describe StarCraft to people that may never have played video games, especially the massively online variety? right. so StarCraft is a real-time strategy game, and the way to think about StarCraft, perhaps, if you understand a bit of chess, is that there's a board, which is called the map, where people play against each other. there are obviously many ways you can play, but the most interesting one is the one-versus-one setup, where you just play against someone else, or even the built-in AI, right, a system that can play the game reasonably well if you don't know how to play. and then on this board you have, again, pieces, like in chess, but these pieces are not there initially, like they are in chess. you actually need to gather resources and decide which pieces to build. so in a way, you're starting almost with no pieces. you start gathering resources. in StarCraft there are minerals and gas that you can gather, and then you must decide how much do you want to focus, for instance, on gathering more
resources, or starting to build units or pieces, and then once you have enough pieces, or maybe a good attack composition, then you go and attack the other side of the map. now, the other main difference with chess is that you don't see the other side of the map, so you're not seeing the moves of the enemy. it's what we call partially observable. so as a result, you must not only decide the trade-off of economy versus building your own units, but you also must decide whether you want to scout to gather information, but also, by scouting, you might be giving away some information that you might be hiding from the enemy. so there's a lot of complex decision-making, all in real time. also, unlike chess, this is not a turn-based game. you play basically all the time, continuously, and thus some skill in terms of speed and accuracy of clicking is also very important, and people that train for this really play this game at an amazing skill level. I've seen it many times, and if you can witness this live, it's really, really impressive. so in a way, it's kind of a chess where you don't see the other side of the board, you're building your own pieces, and you also need to gather resources, to basically get some money, to build other buildings, pieces, technology, and so on. from the perspective of the human player, the difference between that and chess, or maybe that and a turn-based strategy game like Heroes of Might and Magic, is that there's an anxiety, because you have to make these decisions really quickly, and if you are not actually aware of what decisions work, it's very stressful. everything you describe is actually quite stressful, difficult to balance, for an amateur human player. I don't know if it gets easier at the professional level, like if they're fully aware of what they have to do, but at the amateur level there's this anxiety: oh crap, I'm being attacked, oh crap, I have to build up resources, oh, I have to probably expand, and all this, all the time. the real-time
strategy aspect is really stressful, and computationally, I'm sure, difficult. we'll get into it, but for me, Battle.net... so StarCraft was released in 98, 20 years ago, which is hard to believe, and Blizzard's Battle.net, with Diablo, came out in 96, and to me, it might be a narrow perspective, but it changed online gaming, and perhaps society, forever. I may have way too narrow a viewpoint, but from your perspective, can you talk about the history of gaming over the past 20 years? how transformational, how important is this line of games? right. so I think I kind of was an active gamer whilst this was developing, the internet and online gaming. so for me, the way it came was, I played other games, strategy-related. I played a bit of Command & Conquer, and then I played Warcraft 2, which is from Blizzard, but at the time I didn't know, I didn't understand what Blizzard was or anything. Warcraft 2 was just a game, which was actually very similar to StarCraft in many ways. it's also a real-time strategy game, where there are orcs and humans, so there are only two races, and it was offline, right. so I remember a friend of mine came to school to say, oh, there's this new cool game called StarCraft, and I just said, oh, this sounds like just a copy of Warcraft 2, until I installed it. and at the time, I am from Spain, so we didn't have internet, like, very good internet, right. so for us, StarCraft became first kind of an offline experience, where you start to play these missions, right, you play against some sort of scripted things to develop the story of the characters in the game, and then later on I started playing against the built-in AI, and I thought it was impossible to defeat it. then eventually you defeat one, and you can actually play against seven built-in AIs at the same time, which also felt impossible, but actually it's not that hard to beat seven built-in AIs at once. so once we achieved that, we also discovered that we could play... as I said, internet
wasn't that great, but we could play over LAN, right, basically against each other if we were in the same place, because you could just connect machines with cables, right. so we started playing in LAN mode, again, you know, as a group of friends, and it was really much more entertaining than playing against the AIs. and later on, as the internet was starting to develop and become a bit faster and more reliable, that's when I started experiencing Battle.net, which is this amazing universe, not only because of the fact that you can play the game against anyone in the world, but because you can also get to know more people. you just get exposed to this vast variety... it's kind of a bit like when the chats came about, right. there was a chat system. you could play against people, but you could also chat with people, not only about StarCraft but about anything, and that became a way of life for kind of two years. and obviously then it kind of exploded, and it meant that I started to play more seriously, going to tournaments, and so on and so forth. do you have a sense, on a societal, sociological level, of that whole part of society that many of us are not aware of? and it's a huge part of society, which is gamers. I mean, every time I come across it on YouTube or streaming sites... I mean, there's a huge number of people who play games religiously. do you have a sense of those folks, especially now that you've returned to that realm a little bit, on the AI side? yeah. so in fact, even after StarCraft, I actually played World of Warcraft, which is mainly the main sort of online world presence where you get to interact with lots of people. so I played that for a little bit. to me it was a bit less stressful than StarCraft, because winning was kind of a given. you're just put in this world, and you can always complete missions. but I think it was actually the social aspect of, especially, StarCraft first, and then games like World of Warcraft, that really shaped me in very
interesting ways, because you get to experience... it's just people you wouldn't usually interact with, right. so even nowadays, I still have many Facebook friends from the era when I played online, and their ways of thinking, even politically... we just don't interact in the real world, but we were connected by, basically, fiber, and that way I actually got to understand a bit better that we live in a diverse world, and these were just connections that were made because, you know, I happened to go into a virtual city as a priest, and I met this warrior, and we became friends, and then we started playing together, right. so I think it's transformative, and more and more people are aware of it. I mean, it's becoming quite mainstream, but back in the day, as you were saying, in 2000, 2005 even, it was still a very strange thing to do, especially in Europe. I think there were exceptions, like Korea, for instance. it was amazing how everything happened so early there, in terms of cyber cafes. if you go to Seoul, it's a city where, back in the day, you could be a celebrity by playing StarCraft, but this was like 99, 2000, right, it's not like recently. so, yeah, it's quite interesting to look back, and, yeah, I think it's changing society, the same way, of course, that technology and social networks and so on are also transforming things. on a quick tangent, let me ask: you're also one of the most productive people in your particular chosen passion and path in life, and yet you also appreciate and enjoy video games. do you think it's possible to enjoy video games in moderation? someone told me that you could choose two out of three: when I was playing video games, you could choose having a girlfriend, playing video games, or studying, and I think for the most part it was relatively true. these things do take time. games like StarCraft, if you take the game
pretty seriously and you want to study it, then you obviously will dedicate more time to it, and I definitely took gaming, and obviously studying, very seriously. I loved learning, science, etc. so for me, especially when I started university, undergrad, I kind of stepped off StarCraft. I actually fully stopped playing, and then World of Warcraft was a bit more casual. you could just connect online, and I mean, it was fun, but, as I said, it was not as much of a time investment as StarCraft was for me. okay, so let's get into AlphaStar. what's the history behind the team? so DeepMind has been working on StarCraft, and released a bunch of cool open-source agents and so on, in the past few years, but AlphaStar is really the moment, the first time you beat a world-class player. so what are the parameters of the challenge, in the way that AlphaStar took it on, and how did you and David and the rest of the DeepMind team get into it, and consider that you could even beat the best in the world, or top players? I think it all started back in 2015... actually, I'm lying, I think it was 2014, when DeepMind was acquired by Google, and I at the time was at Google Brain, which was in California. in California we had this summit where we got together the two groups, so Google Brain and Google DeepMind got together, and we gave a series of talks, and given that they were doing deep reinforcement learning for games, I decided to bring up part of my past, which I had developed at Berkeley, this thing which we called the Berkeley Overmind, which is really just a StarCraft 1 bot, right. so I talked about that, and I remember Demis just came to me and said, well, maybe not now, it's perhaps a bit too early, but you should just come to DeepMind and do this again with deep reinforcement learning, right. and at the time it sounded very science fiction, for several reasons, but then in 2016, when I actually moved to London and joined DeepMind, transferring from Brain, it became apparent that, because of the AlphaGo moment, and
Blizzard kind of reaching out to us to say, wait, do you want the next challenge, and also me being full-time at DeepMind, all of these came together, and then I went to Irvine in California, to the Blizzard headquarters, to just chat with them and try to explain how it would all work, before you do anything. and the approach has always been about the learning perspective, right. so in Berkeley, we did a lot of rule-based conditioning: if you have more than three units, then go attack, and if the other has more units than me, I retreat, and so on and so forth. and of course the point of deep reinforcement learning, deep learning, machine learning in general, is that all this should be learned behavior. so that kind of was the DNA of the project since its inception in 2016, when we didn't even have an environment to work with. and that's how it all started, really. so if you go back to a conversation with Demis, or even in your own head, how far away did you think you were? because we're talking about Atari games, we're talking about Go, which, if you're honest about it, is really far away from StarCraft. well, now that you've beaten it, maybe you could say it's close, but StarCraft seems way harder than Go, philosophically and mathematically speaking. so how far away did you think you were? did you think that in 2018, 2019, you could be doing as well as you have? yeah, when I kind of thought about, okay, I'm going to dedicate a lot of my time and focus on this, and obviously I do a lot of different research in deep learning, so spending time on it, I mean, I really had to think there's going to be something good happening out of this. so, really, I thought, well, this sounds impossible, and it probably is impossible to do the full thing, like the full game, where you play one versus one and it's only a neural network playing, and so on. so it really felt like... I just didn't even think it was
possible. but on the other hand, I could see some stepping stones towards that goal. clearly, you could define subproblems in StarCraft, and sort of dissect it a bit, and say, okay, here is a part of the game, here's another part. and also, obviously... this was really also critical to me, the fact that we could access human replays, right. Blizzard was very kind, and in fact they open-sourced these for the whole community, where you can just go and... it's not every single StarCraft game ever played, but it's a lot of them. you can just go and download, and every day you can just query a dataset and say, well, give me all the games that were played today. and given my kind of experience with language and sequences and supervised learning, I thought, well, that's definitely going to be very helpful, and something quite unique now, because never before did we have such a large dataset of replays of people playing a game at this scale, of such a complex video game, right. so that, to me, was a precious resource, and as soon as I knew that Blizzard was able to give these to the community, I started to feel positive about something non-trivial happening. but I also thought the full thing, like really no rules, no single line of code that tries to say, well, if you see this unit, build the detector... not having any of these specializations seemed really, really difficult to me. I do also like that Blizzard was teasing, or even trolling you, sort of, almost pulling you into this really difficult challenge. what's the interest from the perspective of Blizzard, except just curiosity? yeah, I think Blizzard has really understood, and really brought forward, this competitiveness of eSports in games. StarCraft really kind of sparked something that was almost never seen before, especially, as I was saying, back in Korea. so they probably just thought, well, this is such a pure one-versus-one setup that it would be great to see
if something that can play Atari, and Go, and then, later on, chess, could even tackle this kind of complex real-time strategy game, right. so for them, they wanted to see, first, obviously, whether it was possible, whether the game they created was, in a way, solvable to some extent. and I think, on the other hand, they also are a pretty modern company that innovates a lot, so they're starting to understand AI, how to bring AI into games. it's not just AI for games, but also games for AI. right, I mean, both ways I think can work. we at DeepMind obviously did use games for AI, right, to drive AI progress, but Blizzard, and many other companies, might actually start to understand and do the opposite. so I think that is also something they can get out of this, and we have definitely brainstormed a lot about this, right. but one of the interesting things to me about StarCraft and Diablo, and these games that Blizzard created, is the task of balancing classes, for example, sort of making the game fair from the starting point, and then letting skill determine the outcome. I mean, can you first comment: there are three races, Zerg, Protoss, and Terran. I don't know if I've ever said that out loud. is that how you pronounce it, Terran? yeah, yeah. I don't think I've ever personally interacted with anybody about StarCraft, it's funny. so they seem to be pretty balanced. I wonder if the AI, the work that you're doing with AlphaStar, would help balance them even further. is that something you think about? is that something that Blizzard is thinking about? right. so balancing, when you add a new unit or a new spell type, is obviously possible, given that you can always train or retrain, at scale, some agent that might start using it in unintended ways. but I think actually, if you understand how StarCraft has kind of co-evolved with players, in a way, I think it's actually very cool, the strategies that people came up with, right. so I think
we've seen it over and over in StarCraft that Blizzard comes up with maybe a new unit, and then some players get creative and do something unintended, or something that Blizzard's designers just simply didn't test or think about, and after that becomes kind of mainstream in the community, Blizzard patches the game, and then they maybe weaken that strategy or make it actually more interesting but a bit more balanced. so this kind of continual dialogue between players and Blizzard is what has defined actually most of their games, not just StarCraft; in World of Warcraft they would do that too, there are several classes and it would not be good that everyone plays absolutely the same race or class and so on, right. so I think they do care about balancing of course, and they do a fair amount of testing, but it's also beautiful to see how players get creative anyway. and whether AI can be more creative at this point, I don't think so, right. I mean, sometimes something amazing happens; like I remember back in the day you had these dropships that could drop Reavers, and it was just not thought about that you could drop this unit that has what's called splash damage and basically eliminate all the enemy's workers at once; no one thought that you could actually do that kind of damage really early game, and then of course things change in the game, but I don't know, I think it's quite an amazing exploration process from both sides, players and Blizzard alike. well it's almost like reinforcement learning exploration, but the scale of humans that play Blizzard games is almost the scale of a large DeepMind RL experiment. I mean, if you look at the numbers, you're talking about, I don't know how many games, but probably hundreds of thousands of games a month. yeah, so it's almost the same as running RL agents
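to put a rough number on why that scale of play matters, here is a hedged back-of-the-envelope sketch (all quantities are made-up illustrative numbers, not AlphaStar's real action space): a uniformly random policy has to stumble onto a specific sequence of useful actions before it can ever learn from a win, and the odds of that shrink exponentially with the length of the sequence.

```python
# Toy estimate of how hard blind exploration is in a game with many actions.
# Illustrative numbers only; StarCraft's real action space is far larger.

def p_random_prefix(actions_per_step: int, prefix_len: int) -> float:
    """Chance that one episode opens with one specific k-step action
    sequence when actions are chosen uniformly at random."""
    return (1.0 / actions_per_step) ** prefix_len

def expected_episodes(actions_per_step: int, prefix_len: int) -> float:
    """Expected number of episodes before that opening occurs once
    (geometric distribution)."""
    return 1.0 / p_random_prefix(actions_per_step, prefix_len)

if __name__ == "__main__":
    # even a tiny toy game: 10 actions per step, a 12-step "build order"
    print(expected_episodes(10, 12))  # on the order of 1e12 episodes
```

so even hundreds of thousands of games a month barely scratches a 12-step opening under blind random play, which is one way to see why bootstrapping from human replays is so valuable.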
what aspect of the problem of StarCraft do you think is the hardest? is it, like you said, the imperfect information, is it the fact that you have to do long-term planning, is it the real-time aspect, that you have to do stuff really quickly, is it the large action space, that you can do so many possible things, or is it, you know, in the game-theoretic sense, that you don't know the Nash equilibrium, at least you don't know what the optimal strategy is because there are way too many options? is there something that stands out as just the hardest, the most annoying thing? so when we looked at the problem and started to define, like, the parameters of it, right, what are the observations, what are the actions, it became very apparent that the very first barrier one would hit in StarCraft would be the action space being so large, and not being able to search like you could in chess or Go, even though their search spaces are vast. the main problem that we identified was that of exploration, right. so without any sort of human knowledge or human prior, if you think about StarCraft and you know how deep reinforcement learning algorithms work, which is essentially by issuing random actions and hoping that they will get some wins sometimes so they can learn; if you think of the action space in StarCraft, almost anything you can do in the early game is bad, because any action involves taking workers, which are mining minerals for free, that's something the game does automatically, it sends them to mine, and you would immediately just take them out of mining and send them around. so just thinking, how is it gonna be possible to get to understand these concepts. but even more, like expanding, right, there are these buildings you can place in other locations on the map to gather more resources, but the location of the building is important, and you have to select a worker, send it walking to that location, build the building
wait for the building to be built, and then put extra workers there so they start mining. that just feels impossible if you just randomly click, to produce that desirable state that you could then hope to learn from, because eventually that may yield an extra win, right. so for me the exploration problem, due to the action space and the fact that there are not really turns, or rather there are so many turns, because the game essentially ticks 22 times per second, that's how they discretize time, obviously you always have to discretize time, there's no such thing as continuous real time, but it's really a lot of time steps at which things could go wrong, that definitely felt a priori like the hardest. you mentioned many good ones; I think partial observability, and the fact that there is no perfect strategy because of the partial observability, those are very interesting problems we start seeing more and more now as we solve the previous ones, but the core problem to me was exploration, and solving it has been basically the focus and how we saw the first breakthroughs. so exploration is, you know, multi-level, hierarchical: at 22 times a second, exploration has a very different meaning than it does in terms of, should I gather resources early or should I wait, and so on. so how do you solve the long-term aspect? let's talk about the internals of AlphaStar. first of all, how do you represent the state of the game as an input, how do you then do the long-term sequence modeling, how do you build a policy, and what's the architecture like? so AlphaStar has obviously several components, but everything passes through what we call the policy, which is a neural network, and that's kind of the beauty of it. I could just now give you a neural network and some weights, and if you fed it the right observations and you understood the actions the same way we do, you would have basically the agent playing the game, there's absolutely nothing else
needed other than those weights that were trained. now, the first step is observing the game, and we've experimented with a few alternatives. the one that we currently use mixes both spatial inputs, sort of images that you would process from the game, that is the zoomed-out version of the map, and also a zoomed-in version, the camera or the screen as we call it, but also we give the agent the list of units that it sees, more as a set of objects that it can operate on. that is not necessarily required, and we have versions of the agent that play well without this set view, which is a bit unlike how humans perceive the game, but it certainly helps a lot, because it's a very natural way to encode the game, just looking at all the units that are there; they have properties like health, position, type of unit, whether it's my unit or the enemy's, and that sort of is the summary of the state of the game. now, that list of units or set of units that you see all the time, well, that's pretty close to the way humans see it. again, why do you say it's not, is it the exactness of it? yeah, the exactness of it is perhaps not the problem. I guess maybe the problem, if you look at how humans actually play the game, is that they play with a mouse and a keyboard and a screen, and they don't see sort of a structured object with all the units, what they see is what they see on the screen, right. yes, and remember there's a plot that you showed with the camera-based interface where you do exactly that, right, move the camera around, and that seems to converge to similar performance. yeah, we're kind of experimenting with what's necessary or not. but on using the set, actually, if you look at research in computer vision, where it makes a lot of sense to treat images as two-dimensional arrays, there is actually a very nice paper from Facebook, I forgot the authors, but I think it's part of
Kaiming He's group, and what they do is they take an image, which is this two-dimensional signal, and they actually take it pixel by pixel and scramble the image as if it was just a list of pixels, and crucially they encode the position of the pixels with the x, y coordinates, and this uses a kind of new architecture, which incidentally we also use in StarCraft, called the transformer, from a very popular paper from last year which yielded very nice results in machine translation. and if you actually believe that an image is really a set of pixels, as long as you encode x and y it's okay, then you could argue that the list of units that we see is precisely that, because we have each unit as a kind of pixel, if you will, along with its x, y coordinates. so in that perspective, without knowing it, we used the same architecture that was shown to work very well on Pascal and ImageNet and so on. so the interesting thing here is, putting it that way, it starts to move towards the way you usually work with language, and especially with your expertise and work in language, it seems like there are echoes of a lot of the way you would work with natural language in the way you've approached AlphaStar. does that help with the long-term sequence modeling somehow? exactly. so now that we understand what an observation for a given time step is, we need to move on to say, well, there's going to be a sequence of such observations, and the agent will need to act given all that it's seen, not only the current time step. why? because there is partial observability; we must remember whether we saw a worker going somewhere, for instance, because then there might be an expansion at the top right of the map. so given that, what you must then think about is the problem of, given all the observations, you have to predict the next action, and not only given all the observations, but given all the observations and given all the actions you've taken
predict the next action. and that sounds exactly like machine translation, and that's exactly how I saw the problem, especially when you are given supervised data, or replays from humans, because the problem is exactly the same: you're translating essentially a prefix of observations and actions onto what's going to happen next, which is exactly how you would train a model to translate or to generate language as well, right. you have a certain prefix, and you must remember everything that came in the past, because otherwise you might start producing incoherent text; and the same architectures, we're using LSTMs and transformers, operate across time to integrate all that's happened in the past. those architectures that work so well in translation or language modeling are exactly the same as what the agent is using to issue actions in the game. and moreover, the way we train for imitation, which is step one of AlphaStar, is: take all the human experience and try to imitate it, much like you try to imitate translators that translated many pairs of sentences from French to English, say. that sort of principle applies exactly the same; it's almost the same code, except that instead of words you have slightly more complicated objects, which are the observations, and the actions are also a bit more complicated than a word is. is there a self-play component too, once you run out of imitation? right, so indeed you can bootstrap from human replays, but then the agents you get are actually not as good as the humans you imitated. so how do we imitate? well, we take humans from 3,000 MMR and higher; MMR is just a metric of human skill, and 3,000 MMR might be like the 50th percentile, so it's just kind of an average human. so maybe a quick pause, what's MMR, a ranking scale? the matchmaking rating, yeah, for players. remember there's like a master and a grandmaster level, and 3,000 is pretty bad, I think it's kind of gold level, it just
sounds really good relative to chess. oh yeah, the best in the world are at 7,000 MMR, so 3,000, it's a bit like Elo indeed, right. so 3,500 just allows us to not filter out a lot of the data, and we like to have a lot of data in deep learning, as you probably know, so we take these replays at 3,500 and above, but then we do a very interesting trick, which is we tell the neural network what level it is imitating. so we say, this replay whose next actions you're gonna try to predict is a 4,000 MMR replay, this one is a 6,000 MMR replay, and what's cool about this is that we can then take this policy that has been trained from humans and ask it to play like a 3,000 MMR player, by setting a bit, saying, okay, play like a 3,000 MMR player, or play like a 6,000 MMR player, and you actually see how the policy behaves differently. it builds a worse economy if it plays like a gold-level player, it does fewer actions per minute, which is the number of clicks or actions that you issue in a whole minute, and it's very interesting to see that it imitates the skill level quite well. but if we ask it to play like a 6,000 MMR player, and we tested of course these policies to see how well they do, they actually beat all the built-in AIs that are put in the game, but they're nowhere near 6,000 MMR players; they might be maybe around gold level, platinum perhaps. so there's still a lot of work to be done for the policy to truly understand what it means to win. so far we've only asked it, okay, here is the screen and what happened in the game until this point, what would the next action be, what would, you know, a pro have done: you're gonna click here or here or there. and the point is, experiencing wins and losses is very important to then start to refine, otherwise the policy can get lost, can just go off-policy, as we call it. that's so
interesting, that you can at least hope eventually to be able to control a policy approximately, to be at some MMR level, that's so interesting, especially given that you have ground truth for a lot of these cases, right. can I ask you a personal question, what's your MMR? well, I haven't played StarCraft 2, so I am unranked, which is the kind of lowest league, okay. I used to play StarCraft, the first one. but you haven't seriously played. so the best player we have at DeepMind is about 5,000 MMR, which is high masters, not at grandmaster level; grandmaster level would be the top 200 players in a certain region, like Europe or America or Asia. but for me it would be hard to say; I am very bad at the game. I actually played AlphaStar a bit too late, and it beat me. I remember the whole team was like, oh, you should play, and I was like, it looks like it's not so good yet, and then I got busy and waited an extra week, and I played, and it beat me very badly. how was that? I've heard that's not an amazing feeling. it's amazing, yeah. I mean, obviously I tried my best, and I tried to also impress my colleagues, because I actually played the first game, so I'm still pretty good at micromanagement; the problem is I just don't understand StarCraft 2. I understand StarCraft, and when I played StarCraft I was probably consistently, for a couple of years, top 32 in Europe, so I was decent, but at the time we didn't have this kind of MMR system as well established, so it would be hard to know what it was back then. so what's the difference in interface between AlphaStar and StarCraft, and a human player and StarCraft; are there any significant differences in the way they both see the game? I would say, in the way they see the game, there are a few things that are just very hard to simulate. the main one, perhaps, which is obvious in hindsight, is what's called cloaked units, which are invisible units. so in StarCraft you can make some units invisible, and you need to have a particular kind of unit to detect
them, so these units are invisible, and if you cannot detect them you cannot target them, so they would just, you know, destroy your buildings or kill your workers. but despite the fact that you cannot target the unit, there's a shimmer that as a human you observe; I mean, you need to train a little bit, you need to pay attention, but you would see this kind of space-time-like distortion and you would know, okay, there's something there. yeah, it's like a wave thing, a kind of distortion. right, the Blizzard term is shimmer. shimmer, and this shimmer, professional players can actually see it immediately, they understand it very well, but it's still something that requires a certain amount of attention, and it's a bit annoying to deal with. whereas for AlphaStar, in terms of vision, it's very hard for us to simulate, sort of, you know, are you looking at this pixel on the screen or not, so the only thing we can do is signal that there is a unit that's invisible over there, so AlphaStar would know that immediately. obviously it still obeys the rules, you cannot attack the unit, you must have a detector and so on, but it's one of the main things where there just isn't a very proper way; I mean, you could imagine you don't have high precision, you don't know exactly where it is, or sometimes you see it, sometimes you don't, but it's just really complicated to get it so that everyone would agree, oh, that's the best way to simulate this. right, it seems like a perception problem. it is a perception problem. so if you ask what's the difference between how humans perceive the game, I would say they wouldn't be able to tell a shimmer immediately as it appears on the screen, whereas AlphaStar in principle sees it very sharply; it sees that the bit turned from zero to one, meaning there's now a unit there, although you don't know what the unit is, you
know that you cannot attack it and so on. got it. so from a vision standpoint, that is probably the most obvious one. then there are things humans cannot do perfectly, even professionals, which is they might miss a detail or they might not have seen a unit, and obviously, as a computer, if there's a corner of the screen that turns green because a unit enters the field of view, that can go into the memory of the agent, the LSTM, and persist there for however long is relevant, right. and in terms of actions, it seems like the rate of actions from AlphaStar is comparable to, if not slower than, professional players, but it's more precise as well, right? so that's really probably the one that is causing us more issues, for a couple of reasons. the first one is, StarCraft has been an AI environment for quite a few years, in fact I was participating in the very first competition back in 2010, and there's really not been a very clear set of rules for what the actions per minute, the rate of actions that you can issue, should be, and as a result these agents or bots that people built, in an almost very cool way, do like 20,000 or 40,000 actions per minute. now, to put this in perspective, a very good professional human might do 300 to 800 actions per minute. they might not be as precise, and that's why the range is a bit tricky to identify exactly; I mean, 300 precise actions per minute is probably realistic, 800 is probably not, but you see humans doing a lot of actions because they warm up and they select things and spam and so on, just so that when they need it they have the accuracy. so we came into this without a standard way to say, well, how do we measure whether an agent is acting at human level or not. on the other hand, we had a huge advantage, which is that because we do imitation learning, agents turn out to act like humans in terms of rate of actions, even the precisions and
imprecisions of actions. in the supervised policy you could see all this, you could see how agents like to spam click to move here; if you played especially Diablo, you would know what I mean, you just kind of spam, move here, move here, move here, you're doing literally maybe five actions in two seconds, but these actions are not very meaningful, one would have sufficed. so on the one hand we start from this imitation policy that is in the ballpark of the actions per minute of humans, because it acts statistically trying to imitate humans, and we see this very nicely in the curves that we showed in the blog post, these actions per minute, and the distribution looks very human-like. but then of course self-play kicks in, and that's the part we haven't talked about too much yet, but of course the agent must play against itself to improve, and then there's almost no guarantee that these actions will not become more precise, or even that the rate of actions is not going to increase over time. so what we did, and this is probably the first attempt that we thought was reasonable, is we looked at the distribution of actions for humans over certain windows of time. just to give perspective, because I guess I mentioned it, some of these agents that are programmatic, let's call them, do 40,000 actions per minute; professionals, as I said, do 300 to 800. so we looked at a distribution over professional gamers, and we took reasonably high actions per minute, and we identified certain cutoffs after which, even if the agent wanted to act, those actions would be dropped. but the problem is this cutoff is probably set a bit too high, and what ends up happening, even though when we asked the professionals and the gamers, by and large they felt it was playing human-like, is that there are some agents that developed maybe slightly too high APMs, which is actions per minute, combined with a precision, which made people sort of start discussing
a very interesting issue, which is, should we have limited these, or should we just let it loose and see what cool things it can come up with? so this is in itself an extremely interesting question, but the same way that modeling the shimmer would be difficult, modeling absolutely all the details about muscles and precision and tiredness of humans would be quite difficult, right. so we're here kind of innovating in this sense of, okay, what could maybe be the next iteration of putting in more rules that make the agents more human-like in terms of restrictions. yeah, putting in more constraints. that's really interesting, that's really innovative. so one of the constraints you put on yourself, or at least focused on, is the Protoss race, as far as I understand. can you tell me about the different races, so Protoss, Terran and Zerg, how do they compare, how do they interact, why did you choose Protoss, and the dynamics of the game seen from a strategic perspective? so in StarCraft there are three races indeed; in the demonstration we saw only the Protoss race, so maybe let's start with that one. Protoss is kind of the most technologically advanced race, it has units that are expensive but powerful, so in general you want to conserve your units as you go attack, and you want to utilize the tactical advantages of very fancy spells and so on and so forth. and at the same time, people say they're a bit easier to play, perhaps, but that I actually didn't know; I mean, I now talk a lot to the players that we work with, TLO and Mana, and they said, oh yeah, people think Protoss is actually one of the easiest races. so perhaps the easiest, but that doesn't mean, well, obviously professional players excel at all three races, and there's never a race that dominates for a very long time anyway. so if you look at the top 100 in
the world, is there one race that dominates that list? it would be hard to know, because it depends on the region; I think it's pretty equal in terms of distribution, and Blizzard wants it to be equal, right, they wouldn't want one race, like Protoss, to not be represented at the top, so definitely they try for it to be balanced. so then maybe the opposite race of Protoss is Zerg. Zerg is a race where you just kind of expand and take over as many resources as you can, and they have a very high capacity to regenerate their units, so if you have an army it's not that valuable, in the sense that losing the whole army is not a big deal as Zerg, because you can then rebuild it, and given that you generally accumulate a huge bank of resources, Zerg typically will play by applying a lot of pressure, maybe losing their whole army but then rebuilding it quickly. although of course every race, I mean, they're pretty diverse; there are some units in Zerg that are technologically advanced and do some very interesting spells, and there are some units in Protoss that are less valuable, and you could lose a lot of them and rebuild them and it wouldn't be a big deal. all right, so maybe I'm missing out, maybe I'm gonna say some dumb stuff, but just a summary of strategy: first there's collecting a lot of resources, right, that's one option; the other one is expanding, so building other bases; then the other is obviously building units and attacking with those units; and then I don't know what else there is, maybe the different timing of attacks, like attack early, attack late. what are the different strategies that emerged that you've learned about? I've read that a bunch of people are super happy that AlphaStar apparently discovered that it's really good to, what is it, oversaturate the mineral line. yeah, the mineral line, yeah. for greedy amateur
players like myself, that's always been a good strategy, you just build up a lot of money and it just feels good, you just accumulate and accumulate. so thank you for discovering that, and validating all of us. but are there other interesting strategies that you discovered, unique to this game? yeah, so looking at it, I'm not a StarCraft 2 player, but of course StarCraft and StarCraft 2 and real-time strategy games in general are very similar. I would point perhaps at the openings of the game; they're very important, and generally I would say there are two kinds of openings. one is a standard opening, which is generally how players find sort of a balance between risk and economy, building some units early on so that they can defend, so they're not too exposed basically, but also expanding quite quickly. that would be a standard opening, and within a standard opening, what you generally choose is which technology you are aiming towards, so there's a bit of rock-paper-scissors: you could go for spaceships, or you could go for invisible units, or you could go for, I don't know, massive units that are strong against certain kinds of units but weak against others. so standard openings themselves have some choices, rock-paper-scissors style. of course, if you scout and you're good at guessing what the opponent is doing, then you can play that as an advantage, because if I know you're gonna play rock, I'm gonna play paper, obviously. so you can imagine that normal standard games in StarCraft look like a continuous rock-paper-scissors game, where you guess what the distribution of rock, paper and scissors is from the enemy, and react accordingly to try to beat it, or, you know, put the paper out before he changes his mind from rock to scissors, at which point you would be in a weak position. sorry to pause on that, I didn't realize this element; I know it's true in poker, I looked at Libratus, so you're also
estimating, trying to guess, better and better, the distribution of what the opponent is likely to be doing. yeah, I mean, as a player you definitely want to have a belief state over what's happening on the other side of the map, and when your belief state becomes inaccurate, when you start having serious doubts about whether he's gonna play something that you must know about, that's when you scout, you want to go gather information. is improving the accuracy of the belief, improving the belief state, part of the loss that you try to optimize, or is it just a side effect? it's implicit, but you could explicitly model it, and it would probably be quite good at predicting what's on the other side of the map; so far it's all implicit, there's no additional reward for predicting the enemy. so there are these standard openings, and then there's what people call cheese, which is very interesting, and AlphaStar sometimes really likes these kinds of cheeses. what they are is kind of an all-in strategy: you're gonna do something sneaky, you're gonna hide your own buildings close to the enemy base, or you're gonna go for hiding your technological buildings so that you can make invisible units and the enemy just cannot react to detect them, and thus loses the game. there are quite a few of these cheeses and variants of them, and that is where the belief state becomes even more important, because if I scout your base and I see no buildings at all, any human player knows something's up: they might think, well, you're hiding something close to my base, should I suddenly build a lot of units to defend, should I actually block my ramp with workers so that you cannot come and destroy my base. so there's all this happening, and defending against cheeses is extremely important, and in the AlphaStar league many agents actually developed some cheesy strategies, and in the games we saw against TLO and Mana, two out of the ten agents were actually doing these
kinds of cheesy strategies. and then there's a variant of the cheese strategy which is called all-in. an all-in strategy is perhaps not as drastic as, oh, I'm gonna build cannons in your base and then bring all my workers and try to just disrupt your base, and game over, or GG as we say in StarCraft. there are these very cool things that you can align precisely at a certain time mark; so for instance, you can produce exactly a ten-unit composition that is precisely five of this type and five of this other type, and align the upgrades so that at four minutes and a half, say, you have these ten units and the upgrade just finished, and at that point that army is really scary, and unless the enemy really knows what's going on, if you push, you might have an advantage, because maybe the enemy is doing something more standard, it expanded too much, it developed too much economy, and it traded off badly against having defenses, and the enemy will lose. but it's called all-in because if you don't win, then you're gonna lose. so you see players that do these kinds of strategies, and if they don't succeed, the game is not over, I mean, they still have a base and they're still gathering minerals, but they will just GG out of the game, because they know, well, game over, I gambled and I failed. so if we start entering the game-theoretic aspects of the game, it's really rich, and that's why it also makes it quite entertaining to watch; even if I don't play, I still enjoy watching the game. the agents are trying to do this mostly implicitly, but one element that we improved in self-play is creating the AlphaStar league, and the AlphaStar league is not pure self-play, it's trying to create different personalities of agents, so that some of them will become cheesy agents, some of them might become very economical, very greedy, like getting all the resources; early on they're gonna be weak, but later on they're gonna be very strong. and by creating this
population of agent personalities, which sometimes just happens naturally, you can see kind of an evolution of agents, where given the previous generation, they train against all of them, and then they generate kind of the perfect counter to that distribution. but these agents, you must have them in the population, because if you don't have them, you're not covered against these things, right; you want to create all sorts of opponents that you will find in the wild, so you can be exposed to these cheeses, early aggression, late aggression, more expansions, dropping units in your base from the side, all these things. and pure self-play was getting a bit stuck finding some subset of these, but not all of them, so the AlphaStar league is a way to have an ensemble of agents that are all playing in a league, much like people play on Battle.net, right: you play against someone who does a new cool strategy, and you immediately go, oh my god, I want to try it, I want to play again. and this to me was another critical part of the problem, which was, can we create a Battle.net for agents? and that's kind of what the AlphaStar league is. really fascinating, and they stick to their different strategies, wow, that's really interesting. but that said, you were fortunate enough, or just skilled enough, to win 5-0. so how hard is it to win, I mean, that's not the goal, I guess, I don't know what the goal is, the goal should be to win a majority, not 5-0, but how hard is it in general to win all matchups in 1v1? so that's a very interesting question, because once you see AlphaStar, superficially you might think, well, okay, it won, you know, ten to one, right, it lost the one game that it played with the camera interface, and you might think, well, that's done, it's superhuman at the game. and that's not really the claim we can make; actually, the claim is we beat a
professional gamer for the first time. StarCraft has really been a thing that's been going on for a few years, but a moment like this had not occurred before. But are our agents impossible to beat? Absolutely not, right? That's kind of the difference: the agents play at Grandmaster level, they definitely understand the game enough to play extremely well, but are they unbeatable? Do they play perfectly? No. And actually in StarCraft, because of these sneaky strategies, it's always possible that you might take a huge risk and sometimes get wins out of it. So I think that as a domain it still has a lot of opportunities, not only because, of course, we want to learn with less experience; I mean, if I learned to play Protoss, I could pick up Terran and learn it much quicker than AlphaStar can, right? So there are obvious interesting research challenges there as well. But even as far as raw performance goes, really the claim here can be that we are at pro level, or at high Grandmaster level. Obviously the players also did not know what to expect, right? Their prior distribution was a bit off, because they played this kind of new, alien brain, as they like to say, and that's what makes it exciting for them. But also, I think if you look at the games closely, you see there were weaknesses at some points: maybe AlphaStar did not scout, and if invisible units had been sent against it at certain points, it wouldn't have known, and that would have been bad. So there's still quite a lot of work to do, but it's really a very exciting moment for us, to be seeing, wow, a single neural net on a GPU is actually playing against these guys, who are amazing. I mean, you have to see them play live; they're really, really amazing players. Yeah, I'm sure there must be a guy in Poland somewhere right now training his butt off to make sure that this never happens again with AlphaStar.
So that's really exciting, in terms of AlphaStar having some holes to exploit. Yeah, it's just great, and then you build on top of each other. And it feels like StarCraft, and like Go: even if you win, it's still not there; there are so many different dimensions in which you can explore, so that's really, really interesting. Do you think there's a ceiling to AlphaStar? You've said that it hasn't reached it. You know, this is a big... wait, let me actually just pause for a second: how did it feel to come to this point, to beat a top professional player? Like, that night, I mean, you know, Olympic athletes have their gold medal, right? This is your gold medal, in a sense. Sure, you're cited a lot, you've published a lot of prestigious papers, whatever, but this is like a win. How did it feel? I mean, for me it was unbelievable, because first, the win itself was so exciting. So, looking back to those last days of 2018, which is really when the games were played, I'm sure I'll look back at that moment and say, oh my god, I want to be in a project like that. It's like I already feel the nostalgia of: yeah, that was huge, in terms of the energy and the team effort that went into it. And so, in that sense, as soon as it happened, I already knew I was kind of losing it a little bit, so it's almost sad that it happened. But on the other hand, it also verifies the approach. And to me, there are also so many challenges and interesting aspects of intelligence that, even though we can train a neural network to play at the level of the best humans, there are still so many challenges. So for me it was also like, well, this is really an amazing achievement, but I was already thinking about next steps. I mean, as I said, these agents play Protoss vs.
Protoss, but they should be able to play a different race much quicker, right? That would be an amazing achievement. Some people call this meta reinforcement learning, meta-learning, and so on, right? So there are so many possibilities after that moment, but the moment itself, it really felt great. We had this bet; I'm kind of a pessimist in general, so I sent an email to the team and said, okay, let's place our bets: what's going to be the result? And I really thought we would lose, like, 5-0, right? We had done some calibration against a 5000 MMR player, and TLO was much stronger than that player, even if he played Protoss, which is his off-race. But yeah, I was not imagining we would win, so for me that was just kind of a test run or something. And then I was really surprised, and, unbelievably, we went to this bar to celebrate, and Dave tells me, well, why don't we invite someone who is a thousand MMR stronger in Protoss, like an actual Protoss player? That turned out to be MaNa, right? And, you know, we had some drinks and I said, sure, why not. But then I thought, well, that's really going to be impossible to beat, because a thousand MMR gap really means, like, a 99% probability that MaNa would beat TLO at Protoss vs.
Protoss, right? So we did that, and to me the second game was much more important, even though a lot of uncertainty had kind of disappeared after we beat TLO. I mean, he is a professional player, so that was kind of settled; that's really a very nice achievement. But MaNa really was at the top, and you could see he played much better, but our agents had gotten much better too. And after the first game I said, if we take a single game, at least we can say we won a game; even if we don't win the series, for me that was a huge relief. I remember hugging Demis, and, I mean, this moment for me will resonate forever, as a researcher and as a person; it's really a great accomplishment. And it was great also to be there with the team in the room, I don't know if you saw it. I mean, from my perspective, the other interesting thing is, just like watching Kasparov, watching MaNa was also interesting, because he was kind of at a loss for words. I mean, whenever you lose, and I've done a lot of sports, you sometimes make excuses, you look for reasons, right? And he couldn't really come up with a reason. Yeah, I mean, with TLO's off-race Protoss you could say it felt awkward, but here, yeah, he was just beaten. And it was beautiful to look at a human being being superseded by an AI system; I mean, it's a beautiful moment for researchers. So yeah, for sure, it was probably the highlight of my career so far, because of its uniqueness and coolness. And, I don't know, obviously, as you said, you can look at paper citations and so on, but this really is a testament to the whole machine learning approach, and to using games to advance technology. Everything came together at that moment; that's really the summary. Also, on the other side, it's a popularization of AI too, because, just like traveling to the moon
and so on. I mean, there's a very large community of people who don't really, in any way, get to interact with this, and that's very important. Writing papers helps our peer researchers understand what we're doing, but I think AI is becoming mature enough that we must try to explain what it is, and perhaps games are an obvious way, because these games have always had AI built in. So maybe everyone has experienced an AI playing a video game, even if they don't know it, because there's always some scripted element, and some people might even call that AI already, right? So what are other applications of the approaches underlying AlphaStar that you see happening? There are a lot of echoes of, as you said, the transformer, of language modeling and so on. Have you already started thinking about where the breakthroughs in AlphaStar get expanded to other applications? Right, so I've thought about a few things for the next months, the next years. The main thing I'm thinking about, actually, is what's next, as a kind of grand challenge. For me, we've seen Atari, and then the sort of three-dimensional worlds; we've also seen pretty good performance from these capture-the-flag agents that some people at DeepMind and elsewhere are working on; and we've also seen some amazing results on, for instance, Dota 2, which is also a very complicated game. So for me the main thing I'm thinking about is what's next in terms of challenge. As a researcher, I see a tension between research and then the applications, or areas, or domains where you apply it. On the one hand, an application like StarCraft is very hard, and thanks to it we developed some techniques, some new research; now we can look elsewhere, at other applications where we can apply this. And the obvious ones, absolutely, you can think of feeding back to the community we took from, which was mostly sequence modeling or natural language
processing. So we've developed and extended things from the transformer, and we used pointer networks, we combined LSTMs and transformers in interesting ways. So perhaps that's the kind of lowest-hanging fruit: feeding back to different fields of machine learning that are not playing video games. Let me go old-school and jump to Mr. Alan Turing. Yeah. So the Turing test, you know, the natural language test, the conversational test: what's your thought of it as a test for intelligence? Do you think it is a grand challenge that's worthy of undertaking? And maybe, if it is, would you reformulate it or phrase it somehow differently? Right, so I really love the Turing test, because I also like sequences and language understanding, and in fact some of the early work we did in machine translation, we tried to apply to a kind of neural chatbot, which obviously would never pass the Turing test because it was very limited. But it is a very fascinating idea, that you could really have an AI that would be indistinguishable from humans in terms of conversing with it, right? So the test itself seems very nice, and it's kind of well defined, actually; for passing it or not, there are quite a few rules that feel pretty simple, and, I mean, I think they have these competitions every year. Yes, the Loebner Prize. But I don't know if you've seen the kind of bots that emerge from that competition; they're not quite what you would expect. So it feels like there are weaknesses in the way Turing formulated it; the definition of a genuine, rich, fulfilling human conversation needs to be something else. The Alexa Prize, which I'm not as familiar with, has tried to define that more, I think, by saying you have to keep a conversation going for 30 minutes, something like that, so basically forcing the agent not to just fool, but to have an
engaging conversation, kind of thing. I mean, have you thought about this problem richly? And if you have, in general, how far away are we? You've worked a lot on language understanding, language generation, but the full dialogue, the conversation, you know, just sitting at the bar, having a couple of beers for an hour, that kind of conversation, have you thought about it? Yeah, so I think you touched here on the critical point, which is feasibility, right? There's a great essay by Hamming which describes the grand challenges of physics, and he argues that, well, okay, for instance, teleportation or time travel are great grand challenges of physics, but there's no line of attack; we really don't know how to make any progress. So that's why most physicists don't work on these in their PhDs and as part of their careers. I see the full Turing test as a bit like that: still too early. Even with the current trend of deep learning language models, we've seen some amazing examples, I think GPT-2 being the most recent one, which is very impressive, but to fully solve passing it, fooling a human into thinking that there's a human on the other side, I think we're quite far. So as a result, I don't see myself, and I probably would not recommend, people doing a PhD on solving the Turing test, because it just feels it's kind of too early, or too hard of a problem. Yeah, but that said, you said the exact same thing about StarCraft a few years ago. That's true, so, touché. You'll probably also be the person who passes the Turing test in three years. I mean, yeah, so we have this on record, this is nice. It's true that progress sometimes is a bit unpredictable. Even six months ago, I would not have predicted the level that we see these agents deliver, at Grandmaster level. But I
have worked on language enough, and basically my concern is not that a breakthrough couldn't happen that would bring us to solving, or passing, the Turing test; it's that I just think the purely statistical approach to it is not going to cut it. So we need a breakthrough, which is great for the community, but given that, I think there's quite a bit more uncertainty. Whereas for StarCraft, I knew what the steps would be to get us there: I think it was clear that the imitation learning part and then this Battle.net for agents were going to be key, and it turned out that this was the case; a little more was needed, but not much more. For the Turing test, I just don't know what the plan or execution plan would look like, so that's why working on it myself as a grand challenge is hard. But there are quite a few related sub-challenges where you could say, well, what if you create a great assistant? Google already has the Google Assistant: can we make it better, and can we make it fully neural, and so on? There I start to believe maybe we're reaching a point where we should attempt these challenges. I like this conversation so much, because it echoes very much the StarCraft conversation: let's break it down into small pieces, solve those, and you end up solving the whole game. Great. But that said, you're behind some of the biggest pieces of work in deep learning in the last several years. So, you mentioned some limits: what do you think are the current limits of deep learning, and how do we overcome those limits? So, if I had to use a single word to define the main challenge in deep learning, it's a challenge that has probably been the challenge for many years, and it's that of generalization. What that means is that all we're doing is fitting functions to data, and when the data we see is not from the same distribution, or even when it sometimes is very
close to the distribution, but, because of the way we train with limited samples, we get to this stage where we just don't see as much generalization as we would like. And I think adversarial examples are a clear example of this. If you study the machine learning literature, the reason why SVMs became very popular was because they had some guarantees about generalization. But on unseen data, or out-of-distribution data, or even within distribution, where you take an image and add a bit of noise, these models fail. So I really don't see a lot of progress on generalization, in the strong sense of the word. For our neural networks, you can always find designed examples that will make their outputs arbitrary, which is not good, because we humans would never be fooled by these kinds of images, or by manipulations of the image. And if you look at the mathematics, you kind of understand: this is a bunch of matrices multiplied together; there's probably numerics and instability such that you can just find corner cases. So I think that's really the underlying topic many times, even at the grand stage of testing generalization. I mean, passing the Turing test: should it be in English, or should it be in any language, right? As a human, if you're asked something in a different language, you could actually go and do some research and try to translate it and so on; should the Turing test include that, right? It's really a difficult problem, and very fascinating, and very mysterious, actually. Yeah, absolutely. But do you think, if you were to try to solve it, can you not grow the size of the data intelligently, in such a way that the distribution of your training set does include the entirety of the testing set? Is that one path? The other path is a totally new methodology, right? It's not statistical. So, a path that has worked well,
and worked well in StarCraft, in machine translation, and in language, is scaling up the data and the model. And that's maybe the single formula that still delivers today in deep learning, right? Data scale and model scale really do more and more of the things where we thought, oh, there's no way it can generalize to this, or there's no way it can generalize to that. But I don't think it fundamentally resolves this. And, for instance, I really like a style or approach that would not only have neural networks, but would have programs, or some discrete decision-making. The best example, I think, for understanding this, and I also worked a bit on it: we can learn an algorithm with a neural network, right? You give it many examples, and it's going to sort the input numbers, or something like that. But really strong generalization is: you ask me to create an algorithm that sorts numbers, and instead of creating a neural net, which will be fragile, because it's going to go out of range at some point when you give it numbers that are too large or too small and whatnot, you just create a piece of code that sorts the numbers, and then you can prove that it will generalize to absolutely all the possible inputs you could give. So I think the problem comes with some exciting prospects. I mean, scale is a bit more boring, but it really works; and then programs and discrete abstractions are a bit less developed, but clearly, I think, they're quite exciting in terms of the future of the field. Do you draw any insight or wisdom from the '80s, from expert systems and symbolic systems of computing? Do you ever go back to that kind of reasoning, that kind of logic? Do you think that might make a comeback? Will you have to dust off those books? Yeah, I actually love adding more inductive biases. To me, the problem really is
what are you trying to solve. If what you're trying to solve is so important that you should try to solve it no matter what, then absolutely use rules, use domain knowledge, and then use a bit of the magic of machine learning to empower it, to make the system the best system that will detect cancer, or, you know, detect weather patterns, right? In terms of StarCraft, it also was a very big challenge, so I was definitely happy; if we had had to cut a corner here and there, it could have been interesting to do. And in fact, in StarCraft, we did start thinking about expert systems, because, you know, people actually build StarCraft bots by thinking about those principles: state machines and rule-based systems. And then you could think of combining a rule-based system that also has neural networks incorporated, to make it generalize a bit better. So absolutely, we should definitely go back to those ideas, and anything that makes the problem simpler, as long as your problem is important, that's okay; that's research driven by a very important problem. On the other hand, if you want to really focus on the limits of reinforcement learning, then of course you must try not to look at imitation data, or at rules of the domain that would help a lot, or even feature engineering, right? So there's this tension, and depending on what you do, I think both ways are definitely fine, and I would never refuse to do one or the other, as long as what you're doing is important and needs to be solved. Right. So there's a bunch of different ideas that you've developed that I really enjoy. One is translating, from image captioning: translating an image into text. Just a beautiful idea, I think, that resonates throughout your work, actually. So, the underlying nature of reality being language, somehow. Yes. So what's the connection between images and text, or rather
the visual world and the world of language, in your view? Right, so a piece of research that's been central, I would say even extending into StarCraft, is this idea of sequence-to-sequence learning. What we really meant by that is that you can now input anything to a neural network as the input X, and the neural network will learn a function f that takes X as an input and produces any output Y. And these X's and Y's don't need to be static, like features, like fixed vectors or anything like that; they can really be sequences, and now, beyond that, data structures, right? So that paradigm was tested in a very interesting way when we moved from translating French to English to translating an image to its caption. But the beauty of it, and that's actually how it happened: I changed a line of code in this thing that was doing machine translation, and I came the next day and saw it producing captions, and it seemed like, oh my god, this is really working, and the principle is the same. So I don't see text, vision, speech, waveforms as fundamentally different, as long as you basically learn a function that will vectorize them. After we vectorize, we can then use transformers, LSTMs, whatever the flavor of the month of the model is, and then, as long as we have enough supervised data, this formula will work, and will keep working, I believe, to some extent, modulo these generalization issues that I mentioned before. So the task is to vectorize, to form a representation that's meaningful. And your intuition, now, having worked with all this media, is that once you're able to form that representation, you can basically take anything, any sequence. To go back to StarCraft: is there a limit on the length? We didn't really touch on the long-term aspect. How did you overcome the really long-term aspect of things?
Here, is there some trick? So, the main trick. In StarCraft, if you look at absolutely every frame, you might think it's quite a long game: we would have to multiply 22 frames per second, times 60 seconds per minute, times maybe at least 10 minutes per game on average, so there are quite a few frames. But the trick really was to only observe, in fact, when you act, which might be seen as a limitation, but is also a computational advantage. Then what the neural network decides is what the gap is going to be until the next action. And if you look at most StarCraft games that we have in the dataset that Blizzard provided, it turns out that most games, I mean, they're still long sequences, but they're maybe a thousand to 1,500 actions, which, if you start looking at large LSTMs and transformers, is not that difficult, especially if you have supervised learning. If you had to do it with reinforcement learning, the credit assignment problem, what is it in this game that made you win, would be really difficult; thankfully, because of imitation learning, we didn't have to deal with this directly. Although we did try it, and what happens is you just take all your workers and attack with them. That's kind of obvious in retrospect, because you start trying random actions, and one of the actions will be a worker that goes to the enemy base, and because it's self-play, the opponent doesn't know how to defend, because it basically knows almost nothing, and eventually what you develop is this "take all your workers and attack," because the credit assignment issue in RL is really, really hard. I do believe we could do better, and that's maybe a research challenge for the future. But yeah, even in StarCraft, the sequences are maybe a thousand, which I believe is within the realm of what transformers can do. Yeah, I guess the difference between StarCraft and Go is that in Go and chess, stuff starts happening right away.
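The back-of-the-envelope numbers mentioned above can be checked in a few lines. This is just a sketch: the frame rate and game length are the rough figures from the conversation, not exact engine constants.

```python
# Rough sanity check of the sequence lengths discussed above.
# All figures are the approximate ones from the conversation.

frames_per_second = 22   # approximate StarCraft II frame rate mentioned
seconds_per_minute = 60
minutes_per_game = 10    # "at least 10 minutes per game on average"

# Sequence length if the agent observed every single frame:
frames_per_game = frames_per_second * seconds_per_minute * minutes_per_game

# Sequence length when observing only when acting (upper end of the
# ~1,000-1,500 actions per game seen in the replay dataset):
actions_per_game = 1500

print(frames_per_game)                     # 13200
print(frames_per_game // actions_per_game) # 8
```

So observing only when acting shrinks a roughly 13,000-step sequence by close to an order of magnitude, into the range that large LSTMs and transformers handle comfortably, which is the computational advantage being described.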
Right, so there's not... yeah, it's possible, not easy, but possible, through self-play to develop reasonable strategies quickly, as opposed to StarCraft. Meaning, in Go there are only 400 actions, but one action is what people would call the God action: the action that would be best if you had expanded the whole search tree, if you did minimax or whatever algorithm you would run if you had the computational capacity. But in StarCraft, 400 is minuscule: with 400 actions you couldn't even click on the pixels around a unit, right? So I think the problem there, in terms of action space size, is way harder, and that search is impossible. So there are quite a few challenges indeed that make this a step up in terms of machine learning. For humans, maybe, playing StarCraft seems more intuitive, because it looks real, I mean, you know, the graphics, and everything moves smoothly, whereas, I don't know how it comes across, Go is a game that I would really need to study; it feels quite complicated. But for machines it's maybe kind of the reverse. Yes. Which shows you the gap, actually, between deep learning and however the heck our brains work. So, you've developed a lot of really interesting ideas. It's interesting to just ask: what's your process of developing new ideas? Do you like brainstorming with others? Do you like thinking alone? You know, Ian Goodfellow said he came up with GANs after a few beers; he thinks beers are essential for coming up with new ideas. We had beers and decided to play another game of StarCraft a week later, so it's really similar to that story, actually. I explained this at a DeepMind retreat, and I said, this is the same as the GAN story: I mean, we were in a bar, and we decided, let's play again next week, and that's what happened. I feel like we're giving the wrong message to young undergrads. Yeah. But in general, yeah, do you like brainstorming, do you like thinking alone, working stuff
out? So, I think throughout the years things changed. Initially I was very fortunate to be with great minds like Geoff Hinton, Jeff Dean, Ilya Sutskever; I was really fortunate to join Brain at a very good time. So at that point, for ideas, I was just kind of brainstorming with my colleagues, and I learned a lot. And to keep learning is actually something you should never stop doing, right? Learning implies reading papers, and also discussing ideas with others; at some point it's very hard not to communicate, whether that's reading a paper from someone or actually discussing, right? So definitely that communication aspect needs to be there, whether it's written or oral. Nowadays I'm also trying to be a bit more strategic about what research to do. I was describing a little bit this tension between research for the sake of research, and, on the other hand, applications that can drive the research. And honestly, the formula that has worked best for me is to just find a hard problem and then try to see how research fits into it, how it doesn't fit into it, and then you must innovate. So I think machine translation drove sequence-to-sequence; then learning combinatorial algorithms led to pointer networks; StarCraft led to really scaling up imitation learning and the AlphaStar League. So that's been a formula that I personally like. But the other way is also valid, and I've seen it succeed a lot of times, where you just want to investigate, say, model-based RL as a research topic, and then you must start to think: well, how are you going to test these ideas? You need a minimal environment to try things, you need to read a lot of papers, and so on. And that's also very fun to do, and something I've also done quite a few times, both at Brain, at DeepMind, and obviously as a PhD student. So, besides the ideas and discussions, I think it's important also because you start sort
of guiding not only your own goals, but other people's goals, toward the next breakthrough. So you must really understand this feasibility, as we were discussing before: whether a domain is ready to be tackled or not. You don't want to be too early, and you obviously don't want to be too late. So it's really interesting, this strategic component of research, which, as a grad student, I just had no idea about: I just read papers and discussed ideas. I think this has been maybe the major change, and I recommend people kind of feed-forward to what success would look like and try to backtrack, rather than just looking at, oh, this looks cool, this looks cool, and then doing a bit of random work, which sometimes makes you stumble upon some interesting things, but in general it's also good to plan a bit. Yeah, I like it, especially your approach of taking a really hard problem, stepping right in, and then being super skeptical about it. I mean, there's a balance of both, right? There's a naive optimism and a critical sort of skepticism, and it's good to balance them, which is why it's good to have a team of people that balance that; you don't do that on your own. You have mentors who have seen more, whom you obviously want to chat with and discuss whether it's the right time. I mean, Demis came in 2014 and he said, maybe in a bit we'll do StarCraft; maybe he knew, and I'm just following his lead, which is great, because he's brilliant, right? So these things are obviously quite important: you want to be surrounded by people who are diverse, who have their own knowledge; that's also important. I mean, I've learned a lot from people who have an idea that I might not think is good, but when I give them the space to try it, I've been proven wrong many, many times as well. So that's great; I think your colleagues are more important than yourself. I think so. Sure, now
let's real quick talk about another impossible problem: AGI. What do you think it takes to build a system that's human-level intelligence? We talked a little bit about the Turing test, about StarCraft; all of these have echoes of general intelligence. But if you think about something that you would sit back and say, wow, this is really something that resembles human-level intelligence, what do you think it takes to build that? So I find that AGI, oftentimes, is maybe not very well defined. So what I'm trying to come up with, for myself, is what a result would look like that would make you start to believe that you have agents or neural nets that no longer sort of overfit to a single task, but actually learn the skill of learning, so to speak. And that actually is a field I'm fascinated by, which is learning to learn, or meta-learning, which is about no longer learning about a single domain. So, you can think about it this way: the learning algorithm itself is general. The same formula we applied for AlphaStar, for StarCraft, we can now apply to almost any video game, or to many other problems and domains. But the algorithm is what's generalizing; the neural network, the weights, those weights are useless even for playing another race, right? I trained a network to play very well at Protoss vs.
Protoss I need to throw away those weights if I want to play Terran vs Terran I would need to retrain a network from scratch with the same algorithm that's beautiful but the network itself will not be useful so I think if I see an approach that can observe or start solving new problems without the need to kind of restart the process I think that to me would be a nice way to define some form of AGI again I don't know about the grand visions of AGI I mean whether the Turing test comes before AGI I don't know I think concretely I would like to see clearly that meta learning happened meaning there is an architecture or network that as it sees a new problem or new data it solves it and to make it kind of a benchmark it should solve it at the same speed that we solve new problems when I define a new object and you have to recognize it or when you start playing a new game you've played Atari games all the time but now you play a new Atari game well you're gonna be pretty quickly pretty good at the game so what's the domain and what's the exact benchmark is a bit difficult I think as a community we might need to do some work to define it but I think this first step I could see happen relatively soon but then the whole what AGI means and so on I am a bit more confused about what I think people mean different things there's an emotional psychological level like even the Turing test passing the Turing test is something where we just pass judgment on human beings what it means to be you know a dog an AGI system yeah like what level what does it mean right yeah what does it mean but I like the generalization and maybe as a community we would converge towards a group of domains that are sufficiently far away that it would be really damn impressive if we're able to generalize perhaps not as close as Protoss and Zerg but like Wikipedia would be a really good step and then like going from
StarCraft to Wikipedia yeah and back yeah that kind of thing and that feels also quite hard and far but I think as long as you put the benchmark out as we discovered for instance with ImageNet then tremendous progress can be had so I think maybe there's a lack of a benchmark but I'm sure we'll find one and yeah the community will then work towards that and then beyond what AGI might mean or would imply I really am hopeful to see basically machine learning or AI just scaling up and helping you know people that might not have the resources to hire an assistant or that might not even know what the weather is like you know so I think in terms of the impact the positive impact of AI that's maybe what we should also not lose focus on right the research community building AGI I mean that's a really nice goal and I think the way that DeepMind puts it is solve intelligence and then use it to solve everything else right so I think we should parallelize yeah we shouldn't forget about all the positive things that are actually coming out of it already and are going to be coming out right but let me ask relative to the popular perception do you have any worry about the existential threat of artificial intelligence in the near or far future that some people have I think in the near future I'm skeptical so I hope I'm not wrong but I'm not concerned but I appreciate ongoing efforts and even like whole research fields on AI safety emerging and in conferences and so on I think that's great in the long term I really hope we just can simply have the benefits outweigh the potential dangers I am hopeful for that but also we must remain vigilant to kind of monitor and assess whether the trade-offs are there and we have you know enough lead time to prevent or to redirect our efforts if need be right so I'm quite optimistic about the technology and definitely more fearful of other threats in terms of planetary
level at this point but obviously that's the one I kind of have more like power over so clearly I do start thinking more and more about this and it's kind of growing on me actually to start reading more about AI safety which is a field that so far I have not really contributed to but maybe there's something to be done there as well I think it's really important you know I talk about this issue with folks but it's important to ask you and to have it in your head because you're at the leading edge of actually what people are excited about in AI I mean the work with AlphaStar is arguably at the very cutting edge of the kind of thing that people are afraid of and so you speaking to the fact that we're actually quite far away from the kind of thing that people might be afraid of but it's still something worthwhile to think about and it's also good that you're not as worried and you're also open to it yeah I mean there's two aspects I mean me not being worried but obviously we should prepare for it right for things that could go wrong misuse of the technologies as with any technology right so I think there's always trade-offs and as a society we've kind of solved these to some extent in the past so I'm hoping that by having the researchers and the whole community brainstorm and come up with interesting solutions to the new things that will happen in the future we can still also push the research to the avenue that I think is kind of the greatest avenue which is to understand intelligence right how are we doing what we're doing and you know obviously from a scientific standpoint that is kind of the drive my personal driver of all the time that I spend doing what I'm doing where do you see deep learning as a field heading what do you think the next big breakthrough might be so I think deep learning as I discussed a little before deep learning has to be combined with some form of
discretization program synthesis I think that kind of as a research topic in itself is an interesting topic to expand and start doing more research on and then as for what deep learning will enable in the future I don't think that's gonna be what's gonna happen this year but also this idea of starting not to throw away all the weights this idea of learning to learn and really having these agents not having to restart their weights and you can have an agent that is kind of solving or classifying images on ImageNet but also generating speech if you ask it to generate some speech and it should really be kind of almost the same network it might not be a neural network it might be a neural network with an optimization algorithm attached to it but I think this idea of generalization to new tasks is something for which we first must define good benchmarks but then I think that's gonna be exciting and I'm not sure how close we are but I think if you have a very limited domain I think we can start making some progress much like how we made a lot of progress in computer vision we should start thinking I really like a talk that Léon Bottou gave at ICML a few years ago which is this train test paradigm should be broken we should stop thinking about a training set and a test set as these closed you know things that are untouchable I think we should go beyond these and in meta learning we call these the meta training set and the meta test set which is really thinking about if I know about ImageNet why would that network not work on MNIST which is a much simpler problem but right now it really doesn't you know yeah and it just feels wrong right so I think on the application or the benchmark side we probably will see quite a bit more interest and progress and hopefully people defining new and exciting challenges really do you have any hope or
interest in knowledge graphs within this context so just kind of totally yeah constructing graphs so going back to graphs yeah well okay neural networks and graphs but I mean a different kind of knowledge graph sort of like semantic graphs where there's concepts yeah so I think the idea of graphs is so I've been quite interested in sequences first and then more interesting or different data structures like graphs and I've studied graph neural networks in the last three years or so I found these models just very interesting from like a deep learning standpoint but then what do we want why do we want these models and why would we use them what's the application what's kind of the killer application of graphs right and perhaps if we could extract a knowledge graph from Wikipedia automatically right that would be interesting because then these graphs have this very interesting structure that also is a bit more compatible with this idea of programs and deep learning kind of working together jumping between neighborhoods and so on you could imagine defining some primitives to go around graphs right so I really like the idea of a knowledge graph and in fact when we started or you know as part of the research we did for StarCraft I thought wouldn't it be cool to give the graph of you know all the prerequisites like all these buildings that depend on each other and units that have prerequisites of being built by that and so this is information that the network can learn and extract but it would have been great to see or to think of really StarCraft as a giant graph that even as the game evolves kind of takes branches and so on and we tried we did a bit of research on this nothing too relevant but I really like the idea and it has elements which is something you also worked with in terms of visualizing neural networks elements of being human interpretable being able to generate knowledge
representations that are human interpretable that maybe human experts can then tweak or at least understand so there's a lot of interesting aspects there and for me personally I'm just a huge fan of Wikipedia and it's a shame that our neural networks aren't taking advantage of all the structured knowledge that's on the web what's next for you what's next for DeepMind what are you excited about what's next for AlphaStar yeah so I think the obvious next steps would be to apply AlphaStar to other races I mean that sort of shows that the algorithm works because we wouldn't want to have created by mistake something in the architecture that happens to work for Protoss but not for other races right so as verification I think that's an obvious next step that we are working on and then I would like to see agents and players specialize on different skill sets that allow them to be very good I think we've seen AlphaStar understanding very well when to take battles and when not to do that also very good at micromanagement and moving the units around and so on and also very good at producing non-stop and trading off economy with building units but I have not perhaps seen as much as I would like this idea of the poker idea that you mentioned right I'm not sure StarCraft or AlphaStar rather has developed a very deep understanding of what the opponent is doing and reacting to that and sort of trying to trick the player to do something else you know so this kind of reasoning I would like to see more of so I think purely from a research standpoint there's perhaps also quite a few new things to be done there in the domain of StarCraft yeah in the domain of games I've seen some interesting work even in auctions manipulating other players so forming a belief state and just messing with people yeah theory of mind yeah yeah yeah theory of mind and StarCraft are kind of really made for each other yeah so that
would be very exciting to see those techniques applied to StarCraft or perhaps StarCraft driving new techniques right as I said there's always this tension between the two well Oriol thank you so much for talking today awesome it was great to be here thank you
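As an aside, the meta-learning benchmark Oriol describes in this conversation, where a learner is judged by how fast it adapts to tasks held out from training, can be sketched in a few lines; the task names and the dummy learner below are invented placeholders for illustration, not anything from AlphaStar:

```python
# Meta-learning evaluation sketch: tasks, not examples, are split into
# meta-train and meta-test sets; the score is how well the learner does
# on an unseen task after only a handful of adaptation examples ("shots").

meta_train_tasks = ["protoss_vs_protoss", "terran_vs_terran"]  # seen in training
meta_test_tasks = ["zerg_vs_zerg"]                             # held out entirely

def evaluate_meta_learner(adapt, tasks, shots=5):
    """Return the learner's score on each unseen task after `shots` examples."""
    return {task: adapt(task, shots) for task in tasks}

def dummy_adapt(task, shots):
    # stand-in learner: accuracy rises with each adaptation example (illustrative)
    return min(1.0, 0.5 + 0.1 * shots)

scores = evaluate_meta_learner(dummy_adapt, meta_test_tasks)
print(scores)
```

The point of the benchmark is that the held-out task never appears in `meta_train_tasks`, mirroring the idea that the weights should not need retraining from scratch for a new race.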
Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast #19
the following is a conversation with Ian Goodfellow he's the author of the popular textbook on deep learning simply titled deep learning he coined the term generative adversarial networks otherwise known as GANs and with his 2014 paper is responsible for launching the incredible growth of research and innovation in this subfield of deep learning he got his BS and MS at Stanford his PhD at University of Montreal with Yoshua Bengio and Aaron Courville he held several research positions including at OpenAI Google Brain and now at Apple as the director of machine learning this recording happened while Ian was still at Google Brain but we don't talk about anything specific to Google or any other organization this conversation is part of the artificial intelligence podcast if you enjoy it subscribe on YouTube iTunes or simply connect with me on Twitter at lex friedman spelled f-r-i-d and now here's my conversation with Ian Goodfellow you open your popular deep learning book with a Russian doll type diagram that shows deep learning is a subset of representation learning which in turn is a subset of machine learning and finally a subset of AI so this kind of implies that there may be limits to deep learning in the context of AI so what do you think are the current limits of deep learning and are those limits something that we can overcome with time yeah I think one of the biggest limitations of deep learning is that right now it requires really a lot of data especially labeled data there are some unsupervised and semi-supervised learning algorithms that can reduce the amount of labeled data you need but they still require a lot of unlabeled data reinforcement learning algorithms don't need labels but they need really a lot of experiences as human beings we don't learn to play pong by failing at pong two million times so just getting the generalization ability better is one of the most important bottlenecks in the capability of the technology today and then I guess I'd
also say deep learning is like a component of a bigger system so far nobody is really proposing to have only what you'd call deep learning as the entire ingredient of intelligence you use deep learning as sub modules of other systems like AlphaGo has a deep learning model that estimates the value function most reinforcement learning algorithms have a deep learning module that estimates which action to take next but you might have other components so you're basically using deep learning as a function estimator do you think it's possible you said nobody is thinking about this so far but do you think neural networks could be made to reason in the way symbolic systems did in the 80s and 90s to create more like programs as opposed to functions yeah I think we already see that a little bit I already kind of think of neural nets as a kind of program I think of deep learning as basically learning programs that have more than one step so if you draw a flowchart or if you draw a TensorFlow graph describing your machine learning model I think of the depth of that graph as describing the number of steps that run in sequence and then the width of that graph as the number of steps that run in parallel now it's been long enough that we've had deep learning working that it's a little bit silly to even discuss shallow learning anymore but back when I first got involved in AI when we used machine learning we were usually learning things like support vector machines you could have a lot of input features to the model and you could multiply each feature by a different weight but all those multiplications were done in parallel to each other there wasn't a lot done in series I think what we got with deep learning was really the ability to have steps of a program that run in sequence and I think that we've actually started to see that what's important with deep learning is more the fact that we have a multi-step program rather than the fact that we've learned a representation if you look at
things like ResNets for example they take one particular kind of representation and they update it several times back when deep learning first really took off in the academic world in 2006 when Geoff Hinton showed that you could train deep belief networks everybody who was interested in the idea thought of it as each layer learns a different level of abstraction that the first layer trained on images learns something like edges and the second layer learns corners and eventually you get these kind of grandmother cell units that recognize specific objects today I think most people think of it more as a computer program where as you add more layers you can do more updates before you output your final number but I don't think anybody believes that layer 150 of the ResNet is a grandmother cell and you know layer 100 is contours or something like that okay so you're not thinking of it as a singular representation that keeps building you think of it as a program sort of almost like a state the representation is a state of understanding and yeah I think of it as a program that makes several updates and arrives at better and better understandings but it's not replacing the representation at each step it's refining it and in some sense that's a little bit like reasoning it's not reasoning in the form of deduction but it's reasoning in the form of taking a thought and refining it and refining it carefully until it's good enough to use do you think and I hope you don't mind we'll jump philosophical every once in a while do you think of you know human cognition or even consciousness as simply a result of this kind of sequential representation learning do you think that can emerge cognition yes I think so consciousness it's really hard to even define what we mean by that I guess consciousness is often defined as things like having self-awareness and that's relatively easy to turn into something actionable for a computer scientist
to reason about people also define consciousness in terms of having qualitative states of experience like qualia and there's all these philosophical problems like could you imagine a zombie who does all the same information processing as a human but doesn't really have the qualitative experiences that we have that sort of thing I have no idea how to formalize or turn into a scientific question I don't know how you could run an experiment to tell whether a person is a zombie or not and similarly I don't know how you could run an experiment to tell whether an advanced AI system had become conscious in the sense of qualia or not but in the more practical sense like almost like self-attention you think consciousness and cognition can in an impressive way emerge from current types of architectures though yes or if you think of consciousness in terms of self-awareness and just making plans based on the fact that the agent itself exists in the world reinforcement learning algorithms are already more or less forced to model the agent's effect on the environment so that more limited version of consciousness is already something that we get limited versions of with reinforcement learning algorithms if they're trained well but you say limited so the big question really is how you jump from limited to human level yeah right and whether it's possible you know even just building common-sense reasoning seems to be exceptionally difficult so okay if we scale things up if we get much better unsupervised learning if we get better at labeling if we get bigger datasets and more compute do you think we'll start to see really impressive things that go from limited to you know something echoes of human level cognition I think so yeah I'm optimistic about what can happen just with more computation and more data I do think it'll be important to get the right kind of data today most of the machine learning systems we train are mostly trained on one type of data for each
model but the human brain we get all of our different senses and we have many different experiences like you know riding a bike driving a car talking to people reading I think when you get that kind of integrated data set working with a machine learning model that can actually close the loop and interact we may find that algorithms not so different from what we have today learn really interesting things when you scale them up a lot on a large amount of multimodal data so multimodal is really interesting but within like you're working on adversarial examples so selecting within one mode of data the difficult cases which are most useful to learn from oh yeah like could you get a whole lot of mileage out of designing a model that's resistant to adversarial examples or something like that right yeah question but my thinking on that has evolved a lot over the last few years when I first started to really invest in studying adversarial examples I was thinking of it mostly as adversarial examples reveal a big problem with machine learning and we would like to close the gap between how machine learning models respond to adversarial examples and how humans respond after studying the problem more I still think that adversarial examples are important I think of them now more as a security liability than as an issue that necessarily shows there's something uniquely wrong with machine learning as opposed to humans also do you see them as a tool to improve the performance of the system not on the security side but literally just accuracy I do see them as a kind of tool on that side but maybe not quite as much as I used to think we've started to find that there's a trade-off between accuracy on adversarial examples and accuracy on clean examples back in 2014 when I did the first adversarially trained classifier that showed resistance to some kinds of adversarial examples it also got better at the clean
data on MNIST and that's something we've replicated several times on MNIST that when we train against weak adversarial examples MNIST classifiers get more accurate so far that hasn't really held up on other data sets and hasn't held up when we train against stronger adversaries it seems like when you confront a really strong adversary you tend to have to give something up interesting this is such a compelling idea because it feels like that's how us humans learn yeah the difficult cases we try to think of what would we screw up and then we make sure we fix that yeah it's also in a lot of branches of engineering you do a worst case analysis and make sure that your system will work in the worst case and then that guarantees that it'll work in all of the messy average cases that happen when you go out into a really randomized world you know with driving with autonomous vehicles there seems to be a desire to just look for think adversarially try to figure out how to mess up the system and if you can be robust to all those difficult cases then that's a hand-wavy empirical way to show that your system is yeah yes today most adversarial example research isn't really focused on a particular use case but there are a lot of different use cases where you'd like to make sure that the adversary can't interfere with the operation of your system like in finance if you have an algorithm making trades for you people go to a lot of effort to obfuscate their algorithm that's both to protect their IP because you don't want to research and develop a profitable trading algorithm and then have somebody else capture the gains but it's at least partly because you don't want people to make adversarial examples that fool your algorithm into making bad trades or I guess one area that's been popular in the academic literature is speech recognition if you use speech recognition to hear an audio waveform and then turn that into a command that a phone executes for
you you don't want a malicious adversary to be able to produce audio that gets interpreted as malicious commands especially if a human in the room doesn't realize that something like that is happening in speech recognition has there been much success in being able to create adversarial examples that fool the system yeah actually I guess the first work that I'm aware of is a paper called hidden voice commands that came out in 2016 I believe and they were able to show that they could make sounds that are not understandable by a human but are recognized as the target phrase that the attacker wants the phone to recognize it as since then things have gotten a little bit better on the attacker side and worse on the defender side it's become possible to make sounds that sound like normal speech but are actually interpreted as a different sentence than the human hears the level of perceptibility of the adversarial perturbation is still kind of high when you listen to the recording it sounds like there's some noise in the background just like rustling sounds but those rustling sounds are actually the adversarial perturbation that makes the phone hear a completely different sentence yeah that's so fascinating Peter Norvig mentioned that you're writing the deep learning chapter for the fourth edition of the artificial intelligence a modern approach book so how do you even begin summarizing the field of deep learning in a chapter well in my case I waited like a year before I actually wrote anything even having written a full length textbook before it's still pretty intimidating to try to start writing just one chapter that covers everything one thing that helped me make that plan was actually the experience of having written the full book before and then watching how the field changed after the book came out I realized there's a lot of topics that were maybe extraneous in the first book and just seeing what stood the test of a few years of being published and
what seems a little bit less important to have included now helped me pare down the topics I wanted to cover for the book it's also really nice now that the field has kind of stabilized to the point where some core ideas from the 1980s are still used today when I first started studying machine learning almost everything from the 1980s had been rejected and now some of it has come back so that stuff that's really stood the test of time is what I focused on putting into the book there's also I guess two different philosophies about how you might write a book one philosophy is you try to write a reference that covers everything and the other philosophy is you try to provide a high level summary that gives people the language to understand a field and tells them what the most important concepts are the first deep learning book that I wrote with Yoshua and Aaron was somewhere between the two philosophies in that it's trying to be both a reference and an introductory guide writing this chapter for Russell and Norvig's book I was able to focus more on just a concise introduction of the key concepts and the language you need to read about them more and in a lot of cases I actually just wrote paragraphs that said here's a rapidly evolving area that you should pay attention to it's pointless to try to tell you what the latest and best version of a you know learning-to-learn model is right now I can point you to a paper that's recent right now but there isn't a whole lot of reason to delve into exactly what's going on with the latest learning-to-learn approach or the latest module produced by a learning-to-learn algorithm you should know that learning to learn is a thing and that it may very well be the source of the latest and greatest convolutional net or recurrent net module that you would want to use in your latest project but there isn't a lot of point in trying to summarize exactly which architecture and which learning approach got to which level of performance so
you maybe focus more on the basics of the methodology so from back propagation to feed-forward to recurrent networks convolutional networks that kind of thing yeah yeah so if I were to ask you I remember I took an algorithms and data structures course and I remember the professor asked what is an algorithm and yelled at everybody in a good way that nobody was answering it correctly everybody knew what an algorithm was it was a graduate course everybody knew what an algorithm was but they weren't able to answer it well let me ask you in that same spirit what is deep learning I would say deep learning is any kind of machine learning that involves learning parameters of more than one consecutive step so I mean shallow learning is things where you learn a lot of operations that happen in parallel you might have a system that makes multiple steps like you might have hand designed feature extractors but really only one step is learned deep learning is anything where you have multiple operations in sequence and that includes the things that are really popular today like convolutional networks and recurrent networks but it also includes some of the things that have died out like Boltzmann machines where we weren't using back propagation today I hear a lot of people define deep learning as gradient descent applied to these differentiable functions and I think that's a legitimate usage of the term it's just different from the way that I use the term myself so what's an example of deep learning that is not gradient descent on differentiable functions in your view I mean not specifically perhaps but more even looking into the future what's your thought about that space of approaches yeah so I tend to think of machine learning algorithms as decomposed into really three different pieces there's the model which can be something like a neural net or a Boltzmann machine or a recurrent model and that basically just describes how do you take data and how do you take parameters and you know what
function do you use to make a prediction given the data and the parameters another piece of the learning algorithm is the optimization algorithm or not every algorithm can be really described in terms of optimization but what's the algorithm for updating the parameters or updating whatever the state of the network is and then the last part is the data set like how do you actually represent the world as it comes into your machine learning system so I think of deep learning as telling us something about what the model looks like and basically to qualify as deep I say that it just has to have multiple layers that can be multiple steps in a feed-forward differentiable computation that can be multiple layers in a graphical model there's a lot of ways that you could satisfy me that something has multiple steps that are each parameterized separately I think of gradient descent as being all about that other piece the how do you actually update the parameters piece so you can imagine having a deep model like a convolutional net and training it with something like evolution or a genetic algorithm and I would say that still qualifies as deep learning and then in terms of models that aren't necessarily differentiable I guess Boltzmann machines are probably the main example of something where you can't really take a derivative and use that for the learning process but you can still argue that the model has many steps of processing that it applies when you run inference in the model so it's the steps of processing that's key so Geoff Hinton suggests that we need to throw away back propagation and start all over what do you think about that what could an alternative direction of training neural networks look like I don't know that back propagation is going to go away entirely most of the time when we decide that a machine learning algorithm isn't on the critical path to research for improving AI the algorithm doesn't die it just becomes used for some
specialized set of things a lot of algorithms like logistic regression don't seem that exciting to AI researchers who are working on things like speech recognition or autonomous cars today but there's still a lot of use for logistic regression in things like analyzing really noisy data in medicine and finance or making really rapid predictions in really time-limited contexts so I think back propagation and gradient descent are around to stay but they may not end up being everything that we need to get to real human level or superhuman AI you know back propagation has been around for a few decades so are you optimistic about us as a community being able to discover something better yeah I am I think we likely will find something that works better you could imagine things like having stacks of models where some of the lower level models predict parameters of the higher level models and so at the top level you're not learning in terms of literally calculating gradients but just predicting how different values will perform you can kind of see that already in some areas like Bayesian optimization where you have a Gaussian process that predicts how well different parameter values will perform we already use those kinds of algorithms for things like hyperparameter optimization and in general we know a lot of things other than backprop that work really well for specific problems the main thing we haven't found is a way of taking one of these other non-backprop-based algorithms and having it really advance the state of the art on an AI level problem right but I wouldn't be surprised if eventually we find that some of these algorithms even the ones that already exist not even necessarily a new one we might find some way of customizing one of these algorithms to do something really interesting at the level of cognition or the level of I think one system that we really don't have working quite right yet is
like short-term memory we have things like LSTMs they're called long short-term memory they still don't do quite what a human does with short-term memory like gradient descent to learn a specific fact has to do multiple steps on that fact like if I tell you the meeting today is at 3 p.m. I don't need to say over and over again it's at 3 p.m. it's at 3 p.m. it's at 3 p.m. it's at 3 p.m. for you to do a gradient step on each one you just hear it once and you remember it there's been some work on things like self-attention and attention-like mechanisms like the neural Turing machine that can write to memory cells and update themselves with facts like that right away but I don't think we've really nailed it yet and that's one area where I'd imagine that new optimization algorithms or different ways of applying existing optimization algorithms could give us a way of just lightning-fast updating the state of a machine learning system to contain a specific fact like that without needing to have it presented over and over and over again so some of the success of symbolic systems in the 80s is they were able to assemble these kinds of facts better but there's a lot of expert input required and it's very limited in that sense do you ever look back to that as something that we'll have to return to eventually sort of dust off the book from the shelf and think about how we build knowledge representation knowledge bases where we'll have to use graph search right and like first-order logic and entailment and things like that that kind of thing yeah exactly in my particular line of work which has mostly been machine learning security and also generative modeling I haven't usually found myself moving in that direction for generative models I could see a little bit of it could be useful if you had something like a differentiable knowledge base or some other kind of knowledge base where it's possible for some of our fuzzier machine learning algorithms to interact with
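the contrast being drawn here can be made concrete in a small sketch comparing repeated gradient steps against a single attention-weighted memory write loosely in the spirit of a neural Turing machine write head the numbers and the simplified write rule with no separate erase and add vectors are illustrative assumptions

```python
import numpy as np

fact = 3.0   # "the meeting today is at 3 p.m."

# Gradient descent on a squared error nudges a weight toward the fact
# a little at a time, so the fact has to be presented many times.
w, lr, steps = 0.0, 0.1, 0
while abs(w - fact) > 1e-3:
    grad = 2.0 * (w - fact)   # d/dw of (w - fact)^2
    w -= lr * grad
    steps += 1             # dozens of presentations of the same fact

# A memory slot written through an attention weighting (the erase/add
# mechanics of a real neural Turing machine are simplified away here)
# stores the fact after a single presentation.
memory = np.zeros(4)
write_weights = np.array([0.0, 1.0, 0.0, 0.0])   # attend to slot 1
memory = (1 - write_weights) * memory + write_weights * fact
```

with this learning rate the gradient loop needs several dozen repetitions of the same fact while the write operation stores it in one shot which is the gap being described
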
the knowledge base a neural network is kind of like that it's a differentiable knowledge base of sorts yeah but if we had a really easy way of giving feedback to machine learning models that would clearly help a lot with generative models and so you could imagine one way of getting there would be get a lot better at natural language processing but another way of getting there would be take some kind of knowledge base and figure out a way for it to actually interact with a neural network being able to have a chat with a neural network yes so like one thing in generative models we see a lot today is you'll get things like faces that are not symmetrical like people that have two eyes that are different colors and I mean there are people with eyes that are different colors in real life but not nearly as many of them as you tend to see in the machine learning generated data so if you had either a knowledge base that could contain the fact people's faces are generally approximately symmetric and eye color is especially likely to be the same on both sides being able to just inject that hint into the machine learning model without it having to discover that itself after studying a lot of data would be a really useful feature I could see a lot of ways of getting there without bringing back some of the 1980s technology but I also see some ways that you could imagine extending the 1980s technology to play nice with neural nets and have it help get there awesome so you talked about the story of you coming up with the idea of GANs at a bar with some friends you were arguing that this you know GANs generative adversarial networks would work and the others didn't think so then you went home at midnight coded it up and it worked so if I was a friend of yours at the bar I would also have doubts it's a really nice idea but I'm very skeptical that it would work what was the basis of their skepticism what was the basis of your intuition why it should work I don't want to
be someone who goes around promoting alcohol for science in this case I do actually think that drinking helped a little bit mm-hmm when your inhibitions are lowered you're more willing to try out things that you wouldn't try out otherwise so I have noticed it in general that I'm less prone to shooting down some of my own ideas when I have had a little bit to drink I think if I had had that idea at lunchtime I probably would have thought it's hard enough I mean you can't train a second neural net in the inner loop of the outer neural net that was basically my friends' reaction that trying to train two neural nets at the same time would be too hard so it was more about the training process and so my skepticism would be you know I'm sure you could train it but the thing it would converge to would not be able to generate anything reasonable any kind of reasonable realism yeah so part of what all of us were thinking about when we had this conversation was deep Boltzmann machines which a lot of us in the lab including me were big fans of at the time they involved two separate processes running at the same time one of them is called the positive phase where you load data into the model and tell the model to make the data more likely the other is called the negative phase where you draw samples from the model and tell the model to make those samples less likely in a deep Boltzmann machine it's not trivial to generate a sample you have to actually run an iterative process that gets better and better samples coming closer and closer to the distribution the model represents so during the training process you're always running these two systems at the same time one that's updating the parameters of the model and another one that's trying to generate samples from the model and they worked really well on things like MNIST but a lot of us in the lab including me had tried to get deep Boltzmann machines to scale past MNIST to things like generating color photos and we just couldn't get the two processes to stay synchronized so when I had the idea for GANs a lot of people thought that the discriminator would have more or less the same problem as the negative phase in the Boltzmann machine that trying to train the discriminator in the inner loop you just couldn't get it to keep up with the generator in the outer loop and that would prevent it from converging to anything useful yeah I share that intuition but it turns out to not be the case a lot of the time with machine learning algorithms it's really hard to predict ahead of time how well they'll actually perform you have to just run the experiment and see what happens and I would say I still today don't have like one factor I can put my finger on and say this is why GANs worked for photo generation and deep Boltzmann machines don't there are a lot of theory papers showing that under some theoretical settings the GAN algorithm does actually converge but those settings are restricted enough that they don't necessarily explain the whole picture in terms of all the results that we see in practice so taking a step back can you in the same way as we talked about deep learning can you tell me what generative adversarial networks are yeah so generative adversarial networks are a particular kind of generative model a generative model is a machine learning model that can train on some set of data so you have a collection of photos of cats and you want to generate more photos of cats or you want to estimate a probability distribution over cats so you can ask how likely it is that some new image is a photo of a cat GANs are one way of doing this some generative models are good at creating new data other generative models are good at estimating that density function and telling you how likely particular pieces of data are to come from the same distribution as the training data GANs are more focused on generating samples rather than
estimating the density function there are some kinds of GANs like Flow-GAN that can do both but mostly GANs are about generating samples generating new photos of cats that look realistic and they do that completely from scratch it's analogous to human imagination when a GAN creates a new image of a cat it's using a neural network to produce a cat that has not existed before it isn't doing something like compositing photos together you're not literally taking the eye off of one cat and the ear off of another cat it's more of this digestive process where the neural net trains on a lot of data and comes up with some representation of the probability distribution and generates entirely new cats there are a lot of different ways of building a generative model what's specific to GANs is that we have a two-player game in the game theoretic sense and as the players in this game compete one of them becomes able to generate realistic data the first player is called the generator it produces output data such as images for example and at the start of the learning process it'll just produce completely random images the other player is called the discriminator the discriminator takes images as input and guesses whether they're real or fake you train it both on real data so photos that come from your training set actual photos of cats and you train it to say that those are real you also train it on images that come from the generator network and you train it to say that those are fake as the two players compete in this game the discriminator tries to become better at recognizing whether images are real or fake and the generator becomes better at fooling the discriminator into thinking that its outputs are real and you can analyze this through the language of game theory and find that there's a Nash equilibrium where the generator has captured the correct probability distribution so in the cat example it makes perfectly realistic cat photos and the
discriminator is unable to do better than random guessing because all the samples coming from both the data and the generator look equally likely to have come from either source so do you ever sit back and does it just blow your mind that this thing works that it's able to estimate the density function well enough to generate realistic images do you ever sit back and think how does this even work this is quite incredible especially where GANs have gone in terms of realism yeah and not just to flatter my own work but generative models all of them have this property that if they really did what we asked them to do they would do nothing but memorize the training data right some models are based on maximizing the likelihood and the way that you obtain the maximum likelihood for a specific training set is you assign all of your probability mass to the training examples and nowhere else for GANs the game is played using a training set so the way that you become unbeatable in the game is you literally memorize training examples one of my former interns wrote a paper his name is Vaishnavh Nagarajan and he showed that it's actually hard for the generator to memorize the training data hard in a statistical learning theory sense that you can actually create reasons for why it would require quite a lot of learning steps and a lot of observations of different latent variables before you could memorize the training data that still doesn't really explain why when you produce samples that are new why do you get compelling images rather than you know just garbage that's different from the training set and I don't think we really have a good answer for that especially if you think about how many possible images are out there and how few images the generative model sees during training it seems just unreasonable that generative models create new images as well as they do especially considering that we're basically training them
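the equilibrium claim that a perfect generator leaves the discriminator at random guessing can be checked numerically with the closed form for the optimal discriminator from the original GAN paper D*(x) = p_data(x) / (p_data(x) + p_generator(x)) the two Gaussians below are arbitrary stand-ins for the data and model densities

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

xs = np.linspace(-5.0, 5.0, 201)
p_data = gaussian_pdf(xs, 1.0, 1.0)

# Against a generator far from the data, the optimal discriminator
# D*(x) = p_data(x) / (p_data(x) + p_gen(x)) is confidently right.
p_gen_bad = gaussian_pdf(xs, -2.0, 1.0)
d_bad = p_data / (p_data + p_gen_bad)

# Against a generator that matches the data exactly, the same formula
# collapses to 1/2 everywhere: random guessing, the Nash equilibrium.
p_gen_perfect = gaussian_pdf(xs, 1.0, 1.0)
d_perfect = p_data / (p_data + p_gen_perfect)
```

when the generator density equals the data density every point is equally likely to have come from either source so the best the discriminator can do is output one half everywhere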
to memorize rather than generalize I think part of the answer is there's a paper called deep image prior where they show that you can take a convolutional net and you don't even need to learn the parameters of it at all you just use the model architecture and it's already useful for things like inpainting images I think that shows us that the convolutional network architecture captures something really important about the structure of images and we don't need to actually use learning to capture all the information coming out of the convolutional net that would imply that it would be much harder to make generative models in other domains so far we're able to make reasonable speech models and things like that but to be honest we haven't actually explored a whole lot of different data sets all that much we don't for example see a lot of deep learning models of like biology data sets where you have lots of microarrays measuring the amount of different enzymes and things like that so we may find that some of the progress that we've seen for images and speech turns out to really rely heavily on the model architecture and we were able to do what we did for vision by trying to reverse-engineer the human visual system and maybe it'll turn out that we can't just use that same trick for arbitrary kinds of data all right so there's aspects of the human vision system the hardware of it that without learning without cognition just makes it really effective at detecting the patterns we see in the visual world yeah that's really interesting in a big quick overview in your view what types of GANs are there and what other generative models besides GANs are there yeah so it's maybe a little bit easier to start with what kinds of generative models are there other than GANs so most generative models are likelihood based where to train them you have a model that tells you how much probability it assigns to a particular example and
you just maximize the probability assigned to all the training examples it turns out that it's hard to design a model that can create really complicated images or really complicated audio waveforms and still have it be possible to estimate the likelihood function from a computational point of view most interesting models that you would just write down intuitively it turns out that it's almost impossible to calculate the amount of probability they assign to a particular point so there's a few different schools of generative models in the likelihood family one approach is to very carefully design the model so that it is computationally tractable to measure the density it assigns to a particular point so there are things like autoregressive models like PixelCNN those basically break down the probability distribution into a product over every single feature so for an image you estimate the probability of each pixel given all of the pixels that came before it there are tricks where if you want to measure the density function you can actually calculate the density for all these pixels more or less in parallel generating the image still tends to require you to go one pixel at a time and that can be very slow but there are again tricks for doing this in a hierarchical pattern where you can keep the runtime under control and the quality of the images it generates putting runtime aside is pretty good they're reasonable yeah I would say a lot of the best results are from GANs these days but it can be hard to tell how much of that is based on who's studying which type of algorithm if that makes sense the amount of effort invested in it yeah or like the kind of expertise so a lot of people who've traditionally been excited about graphics or art and things like that have gotten interested in GANs and to some extent it's hard to tell are GANs doing better because they have a lot of graphics and art experts behind them or are GANs doing better because they're more
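the chain-rule factorization behind autoregressive models like PixelCNN can be sketched with a toy binary image the logistic conditionals below stand in for what would be a masked convnet in a real PixelCNN and the weights are random just to make the example concrete

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 8   # a tiny 8-"pixel" binary image

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Stand-in model: p(x_i = 1 | x_<i) is a logistic function of earlier
# pixels. A real PixelCNN computes this conditional with a masked convnet.
weights = rng.normal(0.0, 1.0, (n_pixels, n_pixels))

def conditional(x, i):
    return sigmoid(weights[i, :i] @ x[:i] - 0.1)

def log_likelihood(x):
    # Density evaluation: every conditional depends only on observed
    # pixels, so these terms could all be computed in parallel.
    ll = 0.0
    for i in range(n_pixels):
        p = conditional(x, i)
        ll += np.log(p if x[i] == 1 else 1.0 - p)
    return ll

def sample():
    # Sampling is inherently sequential: pixel i needs pixels 0..i-1.
    x = np.zeros(n_pixels)
    for i in range(n_pixels):
        x[i] = float(rng.random() < conditional(x, i))
    return x
```

because this is a valid chain-rule factorization the probabilities of all 2^8 configurations sum to one which is exactly what makes tractable density evaluation possible for this family while sampling stays one pixel at a time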
computationally efficient or are GANs doing better because they prioritize the realism of samples over the accuracy of the density function I think all of those are potentially valid explanations and it's hard to tell so can you give a brief history of GANs from the 2014 paper yeah so a few highlights in the first paper we just showed that GANs basically work if you look back at the samples we had now they looked terrible on the CIFAR-10 dataset you can't even recognize objects in them your paper used CIFAR-10 we used MNIST which is little handwritten digits we used the Toronto face database which is small grayscale photos of faces we did have recognizable faces my colleague Bing Xu put together the first GAN face model for that paper we also had the CIFAR-10 dataset which is things like very small 32 by 32 pixels of cars and cats and dogs for that we didn't get recognizable objects but all the deep learning people back then were really used to looking at these failed samples and kind of reading them like tea leaves right and people who were used to reading the tea leaves recognized that our tea leaves at least looked different right maybe not necessarily better but there was something unusual about them and that got a lot of us excited one of the next really big steps was LAPGAN by Emily Denton and Soumith Chintala at Facebook AI research where they actually got really good high-resolution photos working with GANs for the first time they had a complicated system where they generated the image starting at low res and then scaling up to high res but they were able to get it to work and then in 2015 I believe later that same year Alec Radford and Soumith Chintala and Luke Metz published the DCGAN paper which stands for deep convolutional GAN it's kind of a non-unique name because these days basically all GANs and even some before that were deep and convolutional but they just kind of picked a name for a really great recipe where they were able
to actually generate realistic images of faces and things like that using only one model instead of a multi-step process that was sort of like the beginning of the Cambrian explosion of GANs like you know once you got animals that had a backbone you suddenly got lots of different versions of you know like fish and four-legged animals and things like that so DCGAN became kind of the backbone for many different models that came out used as a baseline even still yeah yeah and so from there I would say some interesting things we've seen are there's a lot you can say about how just the quality of standard image generation GANs has increased but what's also maybe more interesting on an intellectual level is how the things you can use GANs for has also changed one thing is that you can use them to learn classifiers without having to have class labels for every example in your training set so that's called semi-supervised learning my colleague at OpenAI Tim Salimans who's at Brain now wrote a paper called improved techniques for training GANs I'm a co-author on this paper but I can't claim any credit for this particular part one thing he showed in the paper is that you can take the GAN discriminator and use it as a classifier that actually tells you you know this image is a cat this image is a dog this image is a car this image is a truck and so on not just to say whether the image is real or fake but if it is real to say specifically what kind of object it is and he found that you can train these classifiers with far fewer labeled examples than traditional classifiers so if you're supervised based on not just your discrimination ability but your ability to classify you're going to converge much faster to being effective at being a discriminator yeah so for example for the MNIST dataset you want to look at an image of a handwritten digit and say whether it's a 0 a 1 or a 2 and so on to get down to less
than 1% error required around 60,000 examples until maybe about 2014 or so in 2016 with this semi-supervised GAN project Tim was able to get below 1% error using only a hundred labeled examples so that was about a 600x decrease in the amount of labels that he needed he's still using more images than that but he doesn't need to have each of them labeled as you know this one's a 1 this one's a 2 this one's a 0 and so on then for GANs to be able to generate recognizable objects so objects for a particular class you still need labeled data because you need to know what it means to be a particular class cat dog how do you think we can move away from that yeah some researchers at Brain Zurich actually just released a really great paper on semi-supervised GANs where their goal isn't to classify it's to make recognizable objects despite not having a lot of labeled data they were working off of DeepMind's BigGAN project and they showed that they can match the performance of BigGAN using only 10% I believe of the labels BigGAN was trained on the ImageNet dataset which is about 1.2 million images and had all of them labeled this latest project from Brain Zurich shows that they're able to get away with only having about 10% of the images labeled and they do that essentially using a clustering algorithm where the discriminator learns to assign the objects to groups and then this understanding that objects can be grouped into you know similar types helps it to form more realistic ideas of what should be appearing in the image because it knows that every image it creates has to come from one of these archetypal groups rather than just being some arbitrary image if you train a GAN with no class labels you tend to get things that look sort of like grass or water or brick or dirt but without necessarily a lot going on in them and I think that's partly because if you look at a large ImageNet image the object doesn't necessarily occupy the
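the discriminator-as-classifier idea from improved techniques for training GANs uses K+1 output classes K real categories plus one fake class in this sketch the logits are random stand-ins for a real discriminator's outputs so it only shows how the supervised and unsupervised loss terms are wired together not a full training run

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10   # real classes; index K is the extra "generated/fake" class

def softmax(logits):
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

# Random stand-ins for the discriminator's outputs on a batch of four
# real images and four generator samples.
real_logits = rng.normal(0, 1, (4, K + 1))
fake_logits = rng.normal(0, 1, (4, K + 1))
real_probs, fake_probs = softmax(real_logits), softmax(fake_logits)

# "Real" probability is the total mass on the K real classes.
p_real = real_probs[:, :K].sum(axis=1)

# Supervised term: cross-entropy on the few labelled real examples.
labels = np.array([3, 7, 0, 1])
supervised_loss = -np.mean(np.log(real_probs[np.arange(4), labels]))

# Unsupervised terms: unlabelled real images should be "not fake",
# generator samples should land in the fake class.
unsup_real_loss = -np.mean(np.log(p_real))
unsup_fake_loss = -np.mean(np.log(fake_probs[:, K]))
loss = supervised_loss + unsup_real_loss + unsup_fake_loss
```

the unsupervised terms use every image labelled or not which is how the approach stretches a hundred labels so far the class structure is learned mostly from the real-versus-fake signal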
whole image and so you learn to create realistic sets of pixels but you don't necessarily learn that the object is the star of the show and you want it to be in every image you make yeah I've heard you talk about the horse zebra CycleGAN mapping and how it turns out and it's thought-provoking that horses are usually on grass and zebras are usually on drier terrain so when you're doing that kind of generation you're going to end up generating greener horses or whatever so those are connected together it's not just yeah you're not able to segment yeah it's generating them together so are there other types of games you come across in your mind that neural networks can play with each other to be able to solve problems yeah the one that I spend most of my time on is in security you can model most interactions as a game where there's attackers trying to break your system and you or the defender trying to build a resilient system there's also domain adversarial learning which is an approach to domain adaptation that looks really a lot like GANs the authors had the idea before the GAN paper came out their paper came out a little bit later and you know they're very nice and cited the GAN paper but I know that they actually had the idea before it came out domain adaptation is when you want to train a machine learning model in one setting called a domain and then deploy it in another domain later and you would like it to perform well in the new domain even though the new domain is different from how it was trained so for example you might want to train on a really clean image data set like ImageNet but then deploy on users' phones where the user is taking you know pictures in the dark or pictures while moving quickly and just pictures that aren't really centered or composed all that well when you take a normal machine learning model it often degrades really badly when you move to the new domain because it looks so different
from what the model was trained on domain adaptation algorithms try to smooth out that gap and the domain adversarial approach is based on training a feature extractor where the features have the same statistics regardless of which domain you extracted them on so in the domain adversarial game you have one player that's a feature extractor and another player that's a domain recognizer the domain recognizer wants to look at the output of the feature extractor and guess which of the two domains the features came from so it's a lot like the real versus fake discriminator in GANs and then the feature extractor you can think of as loosely analogous to the generator in GANs except what it's trying to do here is both fool the domain recognizer into not knowing which domain the data came from and also extract features that are good for classification so at the end of the day you can in the cases where it works out you can actually get features that work about the same in both domains sometimes this has a drawback where in order to make things work the same in both domains it just gets worse at the first one but there are a lot of cases where it actually works out well on both do you think GANs could be useful in the context of data augmentation yeah one thing you could hope for with GANs is you could imagine I've got a limited training set and I'd like to make more training data to train something else like a classifier you could train a GAN on the training set and then create more data and then maybe the classifier would perform better on the test set after training on that bigger GAN-generated data set so that's the simplest version of something you might hope would work I've never heard of that particular approach working but I think there's some closely related things that I think could work in the future and some that actually already have worked so if you think a little bit about what we'd be hoping for if we use the GAN to make more
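the two-player setup just described can be sketched in one dimension here the domains differ only in a nuisance feature and the extractor update ascends the recognizer's loss the gradient reversal trick from the domain adversarial training literature the toy data learning rates and weights are all illustrative assumptions

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Each input has a useful dimension and a nuisance dimension whose
# statistics differ between domains (think lighting on phone photos).
n = 512
src = np.stack([rng.normal(0, 1, n), np.zeros(n)], axis=1)       # domain 0
tgt = np.stack([rng.normal(0, 1, n), np.full(n, 3.0)], axis=1)   # domain 1
y_src = src[:, 0]           # task labels exist only in the source domain

v = np.array([0.5, 0.5])    # feature extractor weights (one feature)
u, c = 0.5, 0.0             # domain recognizer (logistic regression)
lr_rec, lr_feat, lam = 0.1, 0.02, 0.5

for _ in range(400):
    f_s, f_t = src @ v, tgt @ v
    d_s, d_t = sigmoid(u * f_s + c), sigmoid(u * f_t + c)

    # Extractor gradients: descend the task loss on labelled source data...
    task_grad = 2 * np.mean((f_s - y_src)[:, None] * src, axis=0)
    # ...and *ascend* the recognizer's loss: the gradient reversal.
    dom_grad = np.mean((d_s * u)[:, None] * src, axis=0) + \
               np.mean(((d_t - 1) * u)[:, None] * tgt, axis=0)

    # Recognizer descends its cross-entropy (source = 0, target = 1).
    u -= lr_rec * (np.mean(d_s * f_s) + np.mean((d_t - 1) * f_t))
    c -= lr_rec * (np.mean(d_s) + np.mean(d_t - 1))

    v = v - lr_feat * task_grad + lr_feat * lam * dom_grad
```

under this pressure the extractor keeps the task-relevant weight while shrinking the weight on the domain-revealing nuisance feature which is the sense in which the features end up with the same statistics in both domains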
training data we're hoping that the GAN will generalize to new examples better than the classifier would have generalized if it was trained on the same data set and I don't know of any reason to believe that the GAN would generalize better than the classifier would but what we might hope for is that the GAN could generalize differently from a specific classifier so one thing I think is worth trying that I haven't personally tried but someone could try is what if you trained a whole lot of different generative models on the same training set create samples from all of them and then train a classifier on that because each of the generative models might generalize in a slightly different way they might capture many different axes of variation that one individual model wouldn't and then the classifier can capture all of those ideas by training on all of their data so it'd be a little bit like making an ensemble of classifiers an ensemble of GANs yeah in a way I think that could generalize better the other thing that GANs are really good for is not necessarily generating new data that's exactly like what you already have but generating new data that has different properties from the data you already had one thing that you can do is you can create differentially private data so suppose that you have something like medical records and you don't want to train a classifier on the medical records and then publish the classifier because someone might be able to reverse-engineer some of the medical records you trained on there's a paper from Casey Greene's lab that shows how you can train a GAN using differential privacy and then the samples from the GAN still have the same differential privacy guarantees as the parameters of the GAN so you can make fake patient data for other researchers to use and they can do almost anything they want with that data because it doesn't come from real people and the differential privacy mechanism gives you clear guarantees on how much the
original people's data has been protected that's really interesting actually I haven't heard you talk about that before in terms of fairness I've seen from your AAAI talk how can adversarial machine learning help models be more fair with respect to sensitive variables yeah there was a paper from Amos Storkey's lab about how to learn machine learning models that are incapable of using specific variables so say for example you wanted to make predictions that are not affected by gender it isn't enough to just leave gender out of the input to the model you can often infer gender from a lot of other characteristics like say that you have the person's name but you're not told their gender well if their name is Ian they're kind of obviously a man so what you'd like to do is make a machine learning model that can still take in a lot of different attributes and make a really accurate informed prediction but be confident that it isn't reverse-engineering gender or another sensitive variable internally you can do that using something very similar to the domain adversarial approach where you have one player that's a feature extractor and another player that's a feature analyzer and you want to make sure that the feature analyzer is not able to guess the value of the sensitive variable that you're trying to keep private right yeah I love this approach with the features you're not able to infer those sensitive variables it's quite brilliant and simple actually another way I think that GANs in particular could be used for fairness would be to make something like a CycleGAN where you can take data from one domain and convert it into another we've seen CycleGAN turning horses into zebras we've seen other unsupervised GANs made by Ming-Yu Liu doing things like turning day photos into night photos I think for fairness you could imagine taking records for people in one group and transforming them into
analogous people in another group and testing to see if they're treated equitably across those two groups there's a lot of things that would be hard to get right to make sure that the conversion process itself is fair and I don't think it's anywhere near something that we could actually use yet but if you could design that conversion process very carefully it might give you a way of doing audits where you say what if we took people from this group converted them into equivalent people in another group does the system actually treat them how it ought to that's also really interesting you know in the popular press and in general in our imagination you think well GANs are able to generate data and you start to think about deepfakes or being able to sort of maliciously generate data that fakes the identity of other people is this something of a concern to you is this something if you look 10 20 years into the future is that something that pops up in your work in the work of the community that's working on generative models I'm a lot less concerned about 20 years from now than the next few years I think there will be a kind of bumpy cultural transition as people encounter this idea that there can be very realistic videos and audio that aren't real I think 20 years from now people will mostly understand that you shouldn't believe something is real just because you saw a video of it people will expect to see that it's been cryptographically signed or have some other mechanism to make them believe the content is real there's already people working on this like there's a startup called Truepic that provides a lot of mechanisms for authenticating that an image is real they're maybe not quite up to having a state actor try to evade their verification techniques but it's something people are already working on and I think we'll get right eventually so you think authentication will eventually win out so being able to authenticate that this is
real and this is not? Yeah, as opposed to GANs just getting better and better, or generative models being able to get better and better, to where the nature of what is real... I don't think we'll ever be able to look at the pixels of a photo and tell you for sure that it's real or not real, and I think it would actually be somewhat dangerous to rely on that approach too much. If you make a really good fake detector, and then someone's able to fool your fake detector, and your fake detector says this image is not fake, then it's even more credible than if you'd never made a fake detector in the first place. What I do think we'll get to is systems that we can kind of use behind the scenes to make estimates of what's going on, and maybe not use them in court for a definitive analysis. I also think we will likely get better authentication systems, where, imagine that every phone cryptographically signs everything that comes out of it. You wouldn't be able to conclusively tell that an image was real, but you would be able to tell that somebody who knew the appropriate private key for this phone was actually able to sign this image and upload it to this server at this timestamp. So you could imagine maybe you make phones that have the private keys hardware-embedded in them. If, like, a state security agency really wants to infiltrate the company, they could probably, you know, plant a private key of their choice, or break open the chip and learn the private key, or something like that, but it would make it a lot harder for an adversary with fewer resources to fake things. Okay, so you mentioned the beer and the bar and the new ideas. You were able to come up with this new idea pretty quickly, and implement it pretty quickly. Do you think there are still many such groundbreaking ideas in deep learning that could be developed so quickly? Yeah, I do think that there are a lot of ideas that can be developed really quickly. GANs were probably a little bit of an
outlier on the whole, like, one-hour timescale. But just in terms of, like, low-resource ideas, where you do something really different on the algorithm scale and get a big payback, I think it's not as likely that you'll see that in terms of things like core machine learning technologies, like a better classifier or a better reinforcement learning algorithm or a better generative model. If I had the GAN idea today, it would be a lot harder to prove that it was useful than it was back in 2014, because I would need to get it running on something like ImageNet or CelebA at high resolution. You know, those take a while to train; you couldn't train it in an hour and know that it was something really new and exciting. Back in 2014, training on MNIST was enough, but there are other areas of machine learning where I think a new idea could actually be developed really quickly with low resources. What's your intuition about what areas of machine learning are ripe for this? Yeah, so I think fairness and interpretability are areas where we just really don't have any idea how anything should be done yet. Like, for interpretability, I don't think we even have the right definitions, and even just defining a really useful concept, where you don't even need to run any experiments, could have a huge impact on the field. We've seen that, for example, in differential privacy, that Cynthia Dwork and her collaborators made this technical definition of privacy, where before a lot of things were really mushy, and then with that definition you could actually design randomized algorithms for accessing databases and guarantee that they preserved individual people's privacy in, like, a mathematical, quantitative sense. Right now we all talk a lot about how interpretable different machine learning algorithms are, but it's really just people's opinion, and everybody probably has a different idea of what interpretability means in their head. If we could define some concept related to interpretability that's
actually measurable, that would be a huge leap forward, even without a new algorithm that increases that quantity. And also, once we had the definition of differential privacy, it was fast to get the algorithms that guaranteed it, so you could imagine, once we have definitions of good concepts in interpretability, we might be able to provide the algorithms that have the interpretability guarantees quickly. What do you think it takes to build a system with human-level intelligence, as we quickly venture into the philosophical? So, artificial general intelligence, what do you think? I think that it definitely takes better environments than we currently have for training agents, in that we want them to have a really wide diversity of experiences. I also think it's going to take really a lot of computation; it's hard to imagine exactly how much. So you're optimistic about simulation, simulating a variety of environments, as the path forward? I think it's a necessary ingredient, yeah. I don't think that we're going to get to artificial general intelligence by training on fixed datasets, or by thinking really hard about the problem. I think that the agent really needs to interact and have a variety of experiences within the same lifespan. And today we have many different models that can each do one thing, and we tend to train them on one dataset or one RL environment. Sometimes there are actually papers about getting one set of parameters to perform well in many different RL environments, but we don't really have anything like an agent that goes seamlessly from one type of experience to another and really integrates all the different things that it does over the course of its life. When we do see multi-environment agents, they tend to be in similar environments, like all of them are playing, like, an action-based video game. We don't really have an agent that goes from, you know, playing a video game to, like, reading The Wall Street
Journal, to predicting how effective a molecule will be as a drug, or something like that. What do you think is a good test for intelligence, in your view? There have been a lot of benchmarks, starting with Alan Turing, a natural conversation being a good benchmark for intelligence. What would make you, Ian Goodfellow, sit back and be really damn impressed, if a system was able to accomplish it? Something that doesn't take a lot of glue from human engineers. So, imagine that instead of having to go to the CIFAR website and download CIFAR-10, and then write a Python script to parse it and all that, you could just point an agent at the CIFAR-10 problem, and it downloads and extracts the data, and trains a model, and starts giving you predictions. I feel like something that doesn't need to have every step of the pipeline assembled for it definitely understands what it's doing. Is AutoML moving in that direction, or are you thinking way bigger? AutoML has mostly been moving toward, once we've built all the glue, can the machine learning system design the architecture really well. So I'm saying, like, if something knows how to pre-process the data so that it successfully accomplishes the task, then it would be very hard to argue that it doesn't truly understand the task in some fundamental sense. And I don't necessarily know that that's, like, the philosophical definition of intelligence, but that's something that would be really cool to build, that would be really useful, and would impress me, and would convince me that we've made a step forward in real AI. So you give it, like, the URL for Wikipedia, and then next day expect it to be able to solve CIFAR-10? Or, like, you type in a paragraph explaining what you want it to do, and it figures out what web searches it should run, and downloads all the necessary ingredients. So you have a very clear, calm way of speaking, no ums, easy to edit. I've seen comments that both you and I have been identified as potentially being
robots. If you had to prove to the world that you are indeed human, how would you do it? I can understand thinking that I'm a robot. It's the flip side of the Turing test, I think. Yeah, yeah, the prove-you're-human test. Is there something that's truly unique, in your mind? I suppose it goes back to just natural language again, just being able to talk. So, proving that I'm not a robot with today's technology, yeah, that's pretty straightforward. Like, my conversation today hasn't veered off into, you know, talking about the stock market or something, because of my training data. But I think, more generally, trying to prove that something is real from the content alone is incredibly hard. That's one of the main things I've gotten out of my GAN research: you can simulate almost anything, and so you have to really step back to a separate channel to prove that something is real. So, like, I guess I should have had myself stamped on a blockchain when I was born or something, but I didn't do that, so according to my own research methodology there's just no way to know at this point. So, last question: what problem stands out for you that you're really excited about challenging in the near future? I think resistance to adversarial examples, figuring out how to make machine learning secure against an adversary who wants to interfere with it and control it, is one of the most important things researchers today could solve. In all domains? Image, language, driving? I guess I'm most concerned about domains we haven't really encountered yet. Like, imagine 20 years from now, when we're using advanced AIs to do things we haven't even thought of yet. Like, if you asked people what the important problems in security of phones were in, like, 2002, I don't think we would have anticipated that we're using them for, you know, nearly as many things as we're using them for today. I think it's going to be like that with AI, that you can kind of try to
speculate about where it's going, but really the business opportunities that end up taking off would be hard to predict ahead of time. What you can predict ahead of time is that, for almost anything you can do with machine learning, you would like to make sure that people can't get it to do what they want, rather than what you want, just by showing it a funny QR code or a funny input pattern. And you think that the set of methodologies to do that can be bigger than any one domain? I think so, yeah. Like, one methodology, not a specific methodology but, like, a category of solutions that I'm excited about today, is making dynamic models that change every time they make a prediction. So right now we tend to train models, and then after they're trained we freeze them, and we just use the same rule to classify everything that comes in from then on. That's really a sitting duck from a security point of view. If you always output the same answer for the same input, then people can just run inputs through until they find a mistake that benefits them, and then they use the same mistake over and over and over again. I think having a model that updates its predictions, so that it's harder to predict what you're going to get, will make it harder for an adversary to really take control of the system and make it do what they want it to do. Yeah, models that maintain a bit of a sense of mystery about them, because they always keep changing. Yeah. Thanks so much for talking today, it was awesome. Thank you for coming in, it's great to see you.
Elon Musk: Tesla Autopilot | Lex Fridman Podcast #18
- The following is a conversation with Elon Musk. He's the CEO of Tesla, SpaceX, Neuralink, and a co-founder of several other companies. This conversation is part of the Artificial Intelligence Podcast. This series includes leading researchers in academia and industry, including CEOs and CTOs of automotive, robotics, AI and technology companies. This conversation happened after the release of the paper from our group at MIT on driver functional vigilance during use of Tesla's Autopilot. The Tesla team reached out to me offering a podcast conversation with Mr. Musk. I accepted with full control of questions I could ask and the choice of what is released publicly. I ended up editing out nothing of substance. I've never spoken with Elon before this conversation, publicly or privately. Neither he nor his companies have any influence on my opinion, nor on the rigor and integrity of the scientific method that I practice in my position at MIT. Tesla has never financially supported my research and I've never owned a Tesla vehicle, and I've never owned Tesla stock. This podcast is not a scientific paper, it is a conversation. I respect Elon as I do all other leaders and engineers I've spoken with. We agree on some things and disagree on others. My goal, as always with these conversations, is to understand the way the guest sees the world. One particular point of disagreement in this conversation was the extent to which camera-based driver monitoring will improve outcomes and for how long it will remain relevant for AI-assisted driving. As someone who works on and is fascinated by human-centered artificial intelligence, I believe that, if implemented and integrated effectively, camera-based driver monitoring is likely to be of benefit in both the short term and the long term. In contrast, Elon and Tesla's focus is on the improvement of Autopilot such that its statistical safety benefits override any concern for human behavior and psychology. 
Elon and I may not agree on everything, but I deeply respect the engineering and innovation behind the efforts that he leads. My goal here is to catalyze a rigorous, nuanced and objective discussion in industry and academia on AI-assisted driving, one that ultimately makes for a safer and better world. And now, here's my conversation with Elon Musk. What was the vision, the dream, of Autopilot in the beginning? The big picture system level when it was first conceived and started being installed in 2014, the hardware in the cars? What was the vision, the dream? - I wouldn't characterize it as a vision or dream, it's simply that there are obviously two massive revolutions in the automobile industry. One is the transition to electrification, and then the other is autonomy. And it became obvious to me that, in the future, any car that does not have autonomy would be about as useful as a horse. Which is not to say that there's no use, it's just rare, and somewhat idiosyncratic, if somebody has a horse at this point. It's just obvious that cars will drive themselves completely, it's just a question of time. And if we did not participate in the autonomy revolution, then our cars would not be useful to people, relative to cars that are autonomous. I mean, an autonomous car is arguably worth five to 10 times more than a car which is not autonomous. - In the long term. - Depends what you mean by long term but, let's say at least for the next five years, perhaps 10 years. - So there are a lot of very interesting design choices with Autopilot early on. First is showing on the instrument cluster, or in the Model 3 and the center stack display, what the combined sensor suite sees. What was the thinking behind that choice? Was there a debate, what was the process? - The whole point of the display is to provide a health check on the vehicle's perception of reality. 
So the vehicle's taking in information from a bunch of sensors, primarily cameras, but also radar and ultrasonics, GPS and so forth. And then, that information is then rendered into vector space with a bunch of objects, with properties like lane lines and traffic lights and other cars. And then, in vector space, that is re-rendered onto a display so you can confirm whether the car knows what's going on or not, by looking out the window. - Right, I think that's an extremely powerful thing for people to get an understanding, sort of become one with the system and understand what the system is capable of. Now, have you considered showing more? So if we look at the computer vision, like road segmentation, lane detection, vehicle detection, object detection, underlying the system, there is, at the edges, some uncertainty. Have you considered revealing the parts of the uncertainty in the system, the sort of-- - Probabilities associated with, say, image recognition or something like that? - Yeah, so right now, it shows the vehicles in the vicinity, a very clean crisp image, and people do confirm that there's a car in front of me and the system sees there's a car in front of me, but to help people build an intuition of what computer vision is, by showing some of the uncertainty. - Well, in my car I always look at this with the debug view. And there's two debug views. One is augmented vision, which I'm sure you've seen, where basically we draw boxes and labels around objects that are recognized. And then there's what we call the visualizer, which is basically a vector space representation, summing up the input from all sensors. That does not show any pictures; it basically shows the car's view of the world in vector space. But I think this is very difficult for normal people to understand; they would not know what they're looking at.
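The sensors-to-vector-space-to-display pipeline described here can be sketched in a few lines of Python. The class and field names below are hypothetical stand-ins for illustration only, not Tesla's actual data structures:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SceneObject:
    """One entry in the vector-space scene, e.g. a car or a lane line."""
    kind: str                             # "car", "lane_line", "traffic_light", ...
    position: Tuple[float, float]         # (x, y) in meters, vehicle frame
    velocity: Tuple[float, float] = (0.0, 0.0)

@dataclass
class VectorSpaceScene:
    """Fused output of cameras, radar, and ultrasonics: objects, not pixels."""
    objects: List[SceneObject] = field(default_factory=list)

    def render_for_display(self) -> List[str]:
        # Re-render the vector space into something a person can
        # sanity-check against what they see out the window.
        return [f"{o.kind} at x={o.position[0]:.1f}m, y={o.position[1]:.1f}m"
                for o in self.objects]

scene = VectorSpaceScene([
    SceneObject("car", (12.0, 0.0), velocity=(-1.5, 0.0)),
    SceneObject("lane_line", (0.0, 1.8)),
])
print(scene.render_for_display())
```

The point of the split is that perception and display are decoupled: the same object list can drive a consumer visualization or a developer debug view.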
- So it's almost an HMI challenge. The current things that are being displayed are optimized for the general public's understanding of what the system's capable of. - If you have no idea how computer vision works or anything, you can still look at the screen and see if the car knows what's going on. And then if you're a development engineer, or if you have the development build like I do, then you can see all the debug information. But this would just be like total gibberish to most people. - What's your view on how to best distribute effort? So there's three, I would say, technical aspects of Autopilot that are really important. So it's the underlying algorithms, like the neural network architecture, there's the data that it's trained on, and then there's the hardware development and maybe others. So, look, algorithm, data, hardware. You only have so much money, only have so much time. What do you think is the most important thing to allocate resources to? Or do you see it as pretty evenly distributed between those three? - We automatically get vast amounts of data because all of our cars have eight external facing cameras, and radar, and usually 12 ultrasonic sensors, GPS obviously, and IMU. And we've got about 400,000 cars on the road that have that level of data. Actually, I think you keep quite close track of it actually. - Yes. - Yeah, so we're approaching half a million cars on the road that have the full sensor suite. I'm not sure how many other cars on the road have this sensor suite, but I'd be surprised if it's more than 5,000, which means that we have 99% of all the data. - So there's this huge inflow of data. - Absolutely, a massive inflow of data. And then it's taken us about three years, but now we've finally developed our full self-driving computer, which can process an order of magnitude more than the NVIDIA system that we currently have in the cars, and to use it, you unplug the NVIDIA computer and plug the Tesla computer in and that's it.
In fact, we still are exploring the boundaries of its capabilities. We're able to run the cameras at full frame-rate, full resolution, not even crop the images, and it's still got headroom even on one of the systems. The full self-driving computer is really two computers, two systems on a chip, that are fully redundant. So you could put a bolt through basically any part of that system and it still works. - The redundancy, are they perfect copies of each other or-- - Yeah. - Oh, so it's purely for redundancy as opposed to an arguing machine kind of architecture where they're both making decisions, this is purely for redundancy. - Think of it more like it's a twin-engine commercial aircraft. The system will operate best if both systems are operating, but it's capable of operating safely on one. So, as it is right now, we can just run, we haven't even hit the edge of performance so there's no need to actually distribute functionality across both SOCs. We can actually just run a full duplicate on each one. - So you haven't really explored or hit the limit of the system. - [Elon] No not yet, the limit, no. - So the magic of deep learning is that it gets better with data. You said there's a huge inflow of data, but the thing about driving, - Yeah. - the really valuable data to learn from is the edge cases. I've heard you talk somewhere about Autopilot disengagements being an important moment of time to use. Are there other edge cases, or perhaps can you speak to those edge cases, what aspects of them might be valuable, or if you have other ideas, how to discover more and more and more edge cases in driving? - Well there's a lot of things that are learnt. There are certainly edge cases where, say somebody's on Autopilot and they take over, and then that's a trigger that goes out to our system and says, okay, did they take over for convenience, or did they take over because the Autopilot wasn't working properly?
There's also, let's say we're trying to figure out, what is the optimal spline for traversing an intersection. Then the ones where there are no interventions are the right ones. So then you say, okay, when it looks like this, do the following. And then you get the optimal spline for navigating a complex intersection. - So there's kind of the common case: you're trying to capture a huge amount of samples of a particular intersection when things went right, and then there's the edge case where, as you said, not for convenience, but something didn't go exactly right. - So if somebody started manual control from Autopilot. And really, the way to look at this is to view all input as error. If the user had to do input, there's something; all input is error. - That's a powerful line to think of it that way 'cause it may very well be error, but if you wanna exit the highway, or if it's a navigation decision that Autopilot's not currently designed to do, then the driver takes over, how do you know the difference? - Yeah, that's gonna change with Navigate on Autopilot, which we've just released, and without stalk confirm. Assuming control in order to do a lane change, or exit a freeway, or doing a highway interchange, the vast majority of that will go away with the release that just went out. - Yeah, so that, I don't think people quite understand how big of a step that is. - Yeah, they don't. If you drive the car then you do. - So you still have to keep your hands on the steering wheel currently when it does the automatic lane change. There's these big leaps through the development of Autopilot, through its history, and what stands out to you as the big leaps? I would say this one, Navigate on Autopilot without having to confirm, is a huge leap. - It is a huge leap. - What are the-- It also automatically overtakes slow cars. So it's both navigation and seeking the fastest lane.
So it'll overtake slow cars and exit the freeway and take highway interchanges, and then we have traffic light recognition, which we introduced initially as a warning. I mean, on the development version that I'm driving, the car fully stops and goes at traffic lights. - So those are the steps, right? You've just mentioned some things that are an inkling of a step towards full autonomy. What would you say are the biggest technological roadblocks to full self-driving? - Actually, the full self-driving computer that we just, the Tesla, what we call, FSD computer that's now in production, so if you order any Model S or X, or any Model 3 that has the full self-driving package, you'll get the FSD computer. That's important to have enough base computation. Then refining the neural net and the control software. All of that can just be provided as an over-the-air update. The thing that's really profound, and what I'll be emphasizing at the investor day that we're having focused on autonomy, is that the car currently being produced, with the hardware currently being produced, is capable of full self-driving. - But capable is an interesting word because-- - [Elon] The hardware is. - Yeah, the hardware. - And as we refine the software, the capabilities will increase dramatically, and then the reliability will increase dramatically, and then it will receive regulatory approval. So essentially, buying a car today is an investment in the future. I think the most profound thing is that if you buy a Tesla today, I believe you're buying an appreciating asset, not a depreciating asset. - So that's a really important statement there because if hardware is capable enough, that's the hard thing to upgrade usually. - Yes, exactly. - Then the rest is a software problem-- - Yes, software has no marginal cost really. - But, what's your intuition on the software side?
How hard are the remaining steps to get it to where the experience, not just the safety, but the full experience, is something that people would enjoy? - I think people enjoy it very much on highways. It's a total game changer for quality of life, for using Tesla Autopilot on the highways. So it's really just extending that functionality to city streets, adding in the traffic light recognition, navigating complex intersections, and then being able to navigate complicated parking lots so the car can exit a parking space and come and find you, even if it's in a complete maze of a parking lot. And, then it can just drop you off and find a parking spot, by itself. - Yeah, in terms of enjoyability, and something that people would actually find a lotta use from, the parking lot, it's a source of annoyance when you have to do it manually, so there's a lot of benefit to be gained from automation there. So, let me start injecting the human into this discussion a little bit. So let's talk about full autonomy, if you look at the current level four vehicles being tested on roads, like Waymo and so on, they're only technically autonomous, they're really level two systems with just a different design philosophy, because there's always a safety driver in almost all cases, and they're monitoring the system. - Right. - Do you see Tesla's full self-driving as still, for a time to come, requiring supervision of the human being? So its capabilities are powerful enough to drive, but nevertheless require a human to still be supervising, just like a safety driver is in other fully autonomous vehicles? - I think it will require detecting hands on wheel for at least six months or something like that from here. Really it's a question of, from a regulatory standpoint, how much safer than a person does Autopilot need to be, for it to be okay to not monitor the car.
And this is a debate that one can have, and then, but you need a large amount of data, so that you can prove, with high confidence, statistically speaking, that the car is dramatically safer than a person. And that adding in the person monitoring does not materially affect the safety. So it might need to be 200 or 300% safer than a person. - And how do you prove that? - Incidents per mile. - Incidents per mile. - Yeah. - So crashes and fatalities-- - Yeah, fatalities would be a factor, but there are just not enough fatalities to be statistically significant, at scale. But there are enough crashes, there are far more crashes then there are fatalities. So you can assess what is the probability of a crash. Then there's another step which is probability of injury. And probability of permanent injury, the probability of death. And all of those need to be much better than a person, by at least, perhaps, 200%. - And you think there's the ability to have a healthy discourse with the regulatory bodies on this topic? - I mean, there's no question that regulators paid a disproportionate amount of attention to that which generates press, this is just an objective fact. And it also generates a lot of press. So, in the United States there's, I think, almost 40,000 automotive deaths per year. But if there are four in Tesla, they will probably receive a thousand times more press than anyone else. - So the psychology of that is actually fascinating, I don't think we'll have enough time to talk about that, but I have to talk to you about the human side of things. So, myself and our team at MIT recently released a paper on functional vigilance of drivers while using Autopilot. This is work we've been doing since Autopilot was first released publicly, over three years ago, collecting video of driver faces and driver body. So I saw that you tweeted a quote from the abstract, so I can at least guess that you've glanced at it. - Yeah, I read it. - Can I talk you through what we found? 
- Sure. - Okay, it appears, in the data that we've collected, that drivers are maintaining functional vigilance such that, we're looking at 18,000 disengagements from Autopilot, 18,900, and annotating were they able to take over control in a timely manner. So they were there, present, looking at the road to take over control, okay. So this goes against what many would predict from the body of literature on vigilance with automation. Now the question is, do you think these results hold across the broader population? So, ours is just a small subset. One of the criticisms is that there's a small minority of drivers that may be highly responsible, where their vigilance decrement would increase with Autopilot use. - I think this is all really gonna be swept, I mean, the system's improving so much, so fast, that this is gonna be a moot point very soon. Where vigilance is, if something's many times safer than a person, then adding a person, the effect on safety is limited. And, in fact, it could be negative. - That's really interesting, so the fact that a human may, some percent of the population may exhibit a vigilance decrement, will not affect overall statistics, numbers on safety? - No, in fact, I think it will become, very, very quickly, maybe even towards the end of this year, but I would say, I'd be shocked if it's not next year at the latest, that having a human intervene will decrease safety. Decrease, like imagine if you're in an elevator. Now it used to be that there were elevator operators. And you couldn't go on an elevator by yourself and work the lever to move between floors. And now nobody wants an elevator operator, because the automated elevator that stops at the floors is much safer than the elevator operator. And in fact it would be quite dangerous to have someone with a lever that can move the elevator between floors.
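Musk's earlier "incidents per mile" criterion, proving with high statistical confidence that the car is dramatically safer than a person, can be sketched as a simple Poisson rate comparison. The crash counts and mileage below are invented purely for illustration, not real Tesla or industry statistics:

```python
import math

def crash_rate_ci(crashes: int, miles: float, z: float = 1.96):
    """Approximate 95% confidence interval for crashes per mile,
    treating crash counts as Poisson-distributed (normal approximation
    on the log scale, reasonable for large counts)."""
    rate = crashes / miles
    se_log = 1.0 / math.sqrt(crashes)
    return rate * math.exp(-z * se_log), rate, rate * math.exp(z * se_log)

# Invented illustrative counts, NOT real statistics.
ap_lo, ap_rate, ap_hi = crash_rate_ci(crashes=400, miles=1e9)   # "assisted"
hu_lo, hu_rate, hu_hi = crash_rate_ci(crashes=2000, miles=1e9)  # "manual"

# Being "200 or 300% safer" would mean the manual rate is 3-4x the
# assisted rate; the claim is only convincing if the intervals for the
# two rates are well separated.
print(hu_rate / ap_rate)  # rate ratio: 5.0 in this toy example
print(ap_hi < hu_lo)      # True: intervals do not overlap
```

This is also why crashes, rather than fatalities, are the workable metric: with larger counts, the confidence intervals shrink and separation can be shown sooner.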
- So, that's a really powerful statement, and a really interesting one, but I also have to ask from a user experience and from a safety perspective, one of the passions for me algorithmically is camera-based detection of just sensing the human, but detecting what the driver's looking at, cognitive load, body pose, on the computer vision side that's a fascinating problem. And there's many in industry who believe you have to have camera-based driver monitoring. Do you think there could be benefit gained from driver monitoring? - If you have a system that's at or below a human level of reliability, then driver monitoring makes sense. But if your system is dramatically better, more reliable than a human, then driver monitoring does not help much. And, like I said, if you're in an elevator, do you really want someone with a big lever, some random person operating the elevator between floors? I wouldn't trust that. I would rather have the buttons. - Okay, you're optimistic about the pace of improvement of the system, from what you've seen with the full self-driving car computer. - The rate of improvement is exponential. - So, one of the other very interesting design choices early on that connects to this, is the operational design domain of Autopilot. So, where Autopilot is able to be turned on. So contrast another vehicle system that we were studying is the Cadillac Super Cruise system that's, in terms of ODD, very constrained to particular kinds of highways, well mapped, tested, but it's much narrower than the ODD of Tesla vehicles. - It's like ADD (both laugh). - Yeah, that's good, that's a good line. What was the design decision in that different philosophy of thinking, where there's pros and cons. What we see with a wide ODD is Tesla drivers are able to explore more the limitations of the system, at least early on, and they understand, together with the instrument cluster display, they start to understand what are the capabilities, so that's a benefit. 
The con is you're letting drivers use it basically anywhere-- - Anywhere that it can detect lanes with confidence. - Lanes, was there a philosophy, design decisions that were challenging, that were being made there? Or from the very beginning was that done on purpose, with intent? - Frankly it's pretty crazy letting people drive a two-ton death machine manually. That's crazy, like, in the future will people be like, I can't believe anyone was just allowed to drive one of these two-ton death machines, and they just drive wherever they wanted. Just like elevators, you could just move that elevator with that lever wherever you wanted, can stop it halfway between floors if you want. It's pretty crazy, so, it's gonna seem like a mad thing in the future that people were driving cars. - So I have a bunch of questions about the human psychology, about behavior and so on-- - That's moot, it's totally moot. - Because you have faith in the AI system, not faith but, both on the hardware side and the deep learning approach of learning from data, will make it just far safer than humans. - Yeah, exactly. - Recently there were a few hackers who tricked Autopilot to act in unexpected ways with these adversarial examples. So we all know that neural network systems are very sensitive to minor disturbances, these adversarial examples, on input. Do you think it's possible to defend against something like this, for the industry? - Sure (both laugh), yeah. - Can you elaborate on the confidence behind that answer? - A neural net is just basically a bunch of matrix math. But you have to be very sophisticated, somebody who really understands neural nets, and basically reverse-engineer how the matrix is being built, and then create a little thing that just exactly causes the matrix math to be slightly off. But it's very easy to block that by having what would basically be negative recognition. It's like, if the system sees something that looks like a matrix hack, exclude it.
It's such an easy thing to do. - So learn both on the valid data and the invalid data, so basically learn on the adversarial examples to be able to exclude them. - Yeah, you basically wanna both know what is a car and what is definitely not a car. And you train for, this is a car, and this is definitely not a car. Those are two different things. People have no idea of neural nets, really. They probably think neural nets involve a fishing net or something (Lex laughs). - So, as you know, taking a step beyond just Tesla and Autopilot, current deep learning approaches still seem, in some ways, to be far from general intelligence systems. Do you think the current approaches will take us to general intelligence, or do totally new ideas need to be invented? - I think we're missing a few key ideas for artificial general intelligence. But it's gonna be upon us very quickly, and then we'll need to figure out what shall we do, if we even have that choice. It's amazing how people can't differentiate between, say, the narrow AI that allows a car to figure out what a lane line is, and navigate streets, versus general intelligence. These are just very different things. Like your toaster and your computer are both machines, but one's much more sophisticated than the other. - You're confident that with Tesla you can create the world's best toaster-- - The world's best toaster, yes. The world's best self-driving... yes, to me right now this seems game, set and match. I mean, I don't want us to be complacent or over-confident, but that is just literally how it appears right now. I could be wrong, but it appears to be the case that Tesla is vastly ahead of everyone. - Do you think we will ever create an AI system that we can love, and loves us back in a deep meaningful way, like in the movie Her? - I think AI will be capable of convincing you to fall in love with it very well. - And that's different than us humans?
- You know, we start getting into a metaphysical question of, do emotions and thoughts exist in a different realm than the physical? And maybe they do, maybe they don't, I don't know. But from a physics standpoint, I tend to think of things, you know, like physics was my main sort of training, and from a physics standpoint, essentially, if it loves you in a way that you can't tell whether it's real or not, it is real. - That's a physics view of love. - Yeah (laughs), if you cannot prove that it does not, if there's no test that you can apply that would allow you to tell the difference, then there is no difference. - Right, and it's similar to seeing our world as a simulation. There may not be a test to tell the difference between what the real world - Yes. - and the simulation, and therefore, from a physics perspective, it might as well be the same thing. - Yes, and there may be ways to test whether it's a simulation, there might be, I'm not saying there aren't. But you could certainly imagine that a simulation could correct, that once an entity in the simulation found a way to detect the simulation, it could either pause the simulation, start a new simulation, or do one of many other things that then correct for that error. - So when, maybe you, or somebody else creates an AGI system, and you get to ask her one question, what would that question be? - What's outside the simulation? - Elon, thank you so much for talking today, it's a pleasure. - All right, thank you.
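The adversarial-example exchange above, craft a perturbation that "causes the matrix math to be slightly off", then block anything that "looks like a matrix hack", can be sketched in miniature. This is a toy illustration, not Tesla's system: a one-dimensional logistic "classifier", an FGSM-style input perturbation, and a margin-based reject rule; the data, epsilon, and threshold are all invented for the example.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy 1-D "images": class 0 clusters around -2, class 1 around +2.
X = [-2.5, -2.0, -1.5, 1.5, 2.0, 2.5]
Y = [0, 0, 0, 1, 1, 1]

# Train a tiny logistic-regression "net" with full-batch gradient descent.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(500):
    gw = sum((sigmoid(w * x + b) - y) * x for x, y in zip(X, Y))
    gb = sum((sigmoid(w * x + b) - y) for x, y in zip(X, Y))
    w -= lr * gw / len(X)
    b -= lr * gb / len(X)

def predict(x):
    return int(sigmoid(w * x + b) >= 0.5)

# FGSM-style attack: nudge the input in the direction that increases the loss.
x0, y0 = 1.5, 1
grad_x = (sigmoid(w * x0 + b) - y0) * w   # d(loss)/d(input)
eps = 1.7
x_adv = x0 + eps * (1.0 if grad_x > 0 else -1.0)

# Defense in the spirit of "if it looks like a matrix hack, exclude it":
# reject any input whose margin |w*x + b| falls below every clean example's.
min_clean_margin = min(abs(w * x + b) for x in X)
def suspicious(x):
    return abs(w * x + b) < 0.9 * min_clean_margin

print(predict(x0), predict(x_adv), suspicious(x0), suspicious(x_adv))
# prints: 1 0 False True
```

In this sketch the perturbed input flips the prediction, but its margin is far smaller than any clean example's, so the reject rule catches it; real defenses work at much higher dimension and typically also retrain on the perturbed examples, as described in the conversation.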
Greg Brockman: OpenAI and AGI | Lex Fridman Podcast #17
the following is a conversation with Greg Brockman. He's the co-founder and CTO of OpenAI, a world-class research organization developing ideas in AI with the goal of eventually creating a safe and friendly artificial general intelligence, one that benefits and empowers humanity. OpenAI is not only a source of publications, algorithms, tools, and datasets; their mission is a catalyst for an important public discourse about our future with both narrow and general intelligence systems. This conversation is part of the Artificial Intelligence podcast at MIT and beyond. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Friedman, spelled F-R-I-D. And now, here's my conversation with Greg Brockman. So in high school and right after, you wrote a draft of a chemistry textbook, I saw, that covers everything from the basic structure of the atom to quantum mechanics. So it's clear you have an intuition and a passion for both the physical world, with chemistry and now robotics, and the digital world, with AI, deep learning, reinforcement learning, and so on. Do you see the physical world and the digital world as different, and what do you think is the gap? A lot of it actually boils down to iteration speed. I think that a lot of what really motivates me is building things. Think about mathematics, for example, where you think really hard about a problem, you understand it, you write it down in this very obscure form that we call a proof, but then it's in humanity's library. It's there forever. This is some truth that we've discovered. Maybe only five people in your field will ever read it, but somehow you've moved humanity forward. So I actually used to really think that I was going to be a mathematician. And then I actually started writing this chemistry textbook, and one of my friends told me, you'll never publish it because you don't have a PhD. So instead I decided to build a website and try to promote my
ideas that way. And then I discovered programming, and in programming, you think hard about a problem, you understand it, you write it down in a very obscure form that we call a program, but then once again it's in humanity's library, and anyone can get the benefit from it, and the scalability is massive. And so I think that the thing that really appeals to me about the digital world is that you can have this insane leverage. A single individual with an idea is able to affect the entire planet, and that's something I think is really hard to do if you're moving around physical atoms. But you said mathematics. So if you look at the thing over here, our mind, do you ultimately see it as just math, as just information processing? Or is there some other magic, as you've seen through biology and chemistry and so on? I think it's really interesting to think about humans as just information processing systems, and it seems like it's actually a pretty good way of describing a lot of how the world works, or a lot of what we're capable of. If you just look at technological innovations over time, in some ways the most transformative innovation that we've had has been the computer, and in some ways the internet. What is the internet? The internet is not about these physical cables, it's about the fact that I am suddenly able to instantly communicate with any other human on the planet. I'm able to retrieve any piece of knowledge that in some ways the human race has ever had. Those are these insane transformations. Do you see our society as a whole, the collective, as another extension of the intelligence of the human being? So if you look at the human being as an information processing system, you mentioned the internet, the networking, do you see us all together as a civilization as a kind of intelligence system? Yeah, I think this is actually a really interesting
perspective to take and to think about. You sort of have this collective intelligence of all of society. The economy itself is this superhuman machine that is optimizing something, and in some ways a company has a will of its own, right? You have all these individuals who are all pursuing their own individual goals and thinking really hard, and thinking about the right things to do, but somehow the company does something that is this emergent thing, and that is a really useful abstraction. And so I think that in some ways we think of ourselves as the most intelligent things on the planet and the most powerful things on the planet, but there are things that are bigger than us, these systems that we all contribute to. And so I think it's actually interesting to think about, if you've read Isaac Asimov's Foundation, there's this concept of psychohistory in there, which is effectively this, that if you have trillions or quadrillions of beings, then maybe you could actually predict what that huge macro being will do, almost independent of what the individuals want. I actually have a second angle on this that I think is interesting, which is thinking about technological determinism. One thing that I actually think a lot about with OpenAI is that we're kind of coming onto this insanely transformational technology of general intelligence that will happen at some point, and there's a question of how can you take actions that will actually steer it to go better rather than worse. And I think one question you need to ask is, as a scientist, as an inventor, as a creator, what impact can you have in general? You look at things like the telephone, invented by two people on the same day. What does that mean? What does that mean about the shape of innovation? And I think that what's going on is everyone's building on the shoulders of the same giants, and so you can kind
of, you can't really hope to create something no one else ever would. If Einstein wasn't born, someone else would have come up with relativity. He changed the timeline a bit, right? Maybe it would have taken another 20 years, but it wouldn't be that, fundamentally, humanity would never discover these fundamental truths. So there's some kind of invisible momentum that some people, like Einstein, or OpenAI, are plugging into, that anybody else can also plug into, and ultimately that wave takes us in a certain direction? That's right, that's right. And this kind of seems to play out in a bunch of different ways, that there's some exponential that is being ridden, and that the exponential itself, which one it is, changes. Think about Moore's law. An entire industry set its clock to it for 50 years. How can that be? How is that possible? And yet somehow it happened. And so I think you can't hope to ever invent something that no one else will. Maybe you can change the timeline a little bit, but if you really want to make a difference, I think that the thing that you really have to do, the only real degree of freedom you have, is to set the initial conditions under which a technology is born. And so you think about the internet, right? There were lots of other competitors trying to build similar things, and the internet won, and the initial conditions were that it was created by this group that really valued people being able to, you know, anyone being able to plug in, this very academic mindset of being open and connected. And I think that the internet for the next 40 years really played out that way. Maybe today things are starting to shift in a different direction, but I think that those initial conditions were really important to determine the next 40 years' worth of progress. That's really beautifully put. So another example of that, I think about, you know, I recently looked at Wikipedia, the formation of Wikipedia, and
I wonder what the internet would be like if Wikipedia had ads. There's an interesting argument for why they chose not to put advertisements on Wikipedia. I think Wikipedia is one of the greatest resources we have on the internet. It's extremely surprising how well it works and how well it was able to aggregate all this kind of good information. And essentially the creator of Wikipedia, I don't know, there's probably some debates there, but set the initial conditions, and now it carried itself forward. That's really interesting. So the way you're thinking about AGI, or artificial intelligence, is you're focused on setting the initial conditions for the progress. That's right. That's powerful. Okay, so look into the future. If you create an AGI system, like one that can ace the Turing test, natural language, what do you think would be the interactions you would have with it? What do you think are the questions you would ask? Like, what would be the first question you would ask it, her, him? That's right. I think that at that point, if you've really built a powerful system that is capable of shaping the future of humanity, the first question that you really should ask is, how do we make sure that this plays out well? And so that's actually the first question that I would ask a powerful AGI system. So you wouldn't ask your colleague, you wouldn't ask, like, Ilya, you would ask the AGI system? Oh, we've already had the conversation with Ilya, right? And everyone here. And so you want as many perspectives and as much wisdom as you can for answering this question. So I don't think you necessarily defer to whatever your powerful system tells you, but you use it as one input to try to figure out what to do. But I guess fundamentally, what it really comes down to is, if you built something really powerful, you think about, for example, the creation of, shortly after the creation of nuclear weapons, the most important question in the world
was, what's the world order going to be like? How do we set ourselves up in a place where we're going to be able to survive as a species? With AGI, I think the question is slightly different. There is a question of how do we make sure that we don't get the negative effects, but there's also the positive side. You imagine, what will AGI be like? What will it be capable of? And I think that one of the core reasons that an AGI can be powerful and transformative is actually due to technological development. If you have something that's as capable as a human and that's much more scalable, then you absolutely want that thing to go read the whole scientific literature and think about how to create cures for all the diseases. You want it to think about how to go and build technologies to help us create material abundance, and to figure out societal problems that we have trouble with, like how we're supposed to clean up the environment. And maybe you want this to go and invent a bunch of little robots that will go out and be biodegradable and turn ocean debris into harmless molecules. And I think that that positive side is something that I think people miss sometimes when thinking about what an AGI will be like. And so I think that if you have a system that's capable of all of that, you absolutely want its advice about, how do I make sure that we're using your capabilities in a positive way for humanity? So what do you think about that psychology that looks at all the different possible trajectories of an AGI system, many of which, perhaps the majority of which, are positive, and nevertheless focuses on the negative trajectories? I mean, you get to interact with folks, you get to think about this, maybe within yourself as well. You look at Sam Harris and so on. It seems to be, sorry to put it this way, but almost more fun to think about the negative possibilities. Whatever that's deep in our psychology, what do you think
about that, and how do we deal with it? Because we want AI to help us. So I think there are kind of two problems entailed in that question. The first is more the question of, how can you even picture what a world with a new technology will be like? Imagine we're in 1950, and I'm trying to describe Uber to someone. Apps and the internet. Yeah, I mean, that's going to be extremely complicated, but it's imaginable. It's imaginable, right? But now imagine being in 1950 and predicting Uber, right? And you need to describe the internet, you need to describe GPS, you need to describe the fact that everyone's going to have this phone in their pocket. And so I think that the first truth is that it is hard to picture how a transformative technology will play out in the world. We've seen that before with technologies that are far less transformative than AGI will be. And so I think that one piece is that it's just even hard to imagine and to really put yourself in a world where you can predict what that positive vision would be like. And I think the second thing is that it is, I think, always easier to support the negative side than the positive side. It's always easier to destroy than create. And, you know, less in a physical sense and more just in an intellectual sense, right? Because I think that with creating something, you need to just get a bunch of things right, and to destroy, you just need to get one thing wrong. And so I think that what that means is that a lot of people's thinking dead-ends as soon as they see the negative story. But that being said, I actually have some hope. I think that the positive vision is something that we can talk about. I think that just simply saying this fact of, yeah, there are positives, there are negatives, everyone likes to dwell on the negative, people have to respond well to that message and say, huh, you're right, there's
a part of this that we're not talking about, not thinking about. And that's actually something that I think has really been a key part of how we think about AGI at OpenAI. You can kind of look at it as, like, okay, OpenAI talks about the fact that there are risks, and yet they're trying to build this system. How do you square those two facts? So do you share the intuition that some people have, I mean, from Sam Harris to even Elon Musk himself, that it's tricky as you develop AGI to keep it from slipping into the existential threats, into the negative? What's your intuition about how hard it is to keep AI development on the positive track? What's your intuition there? To answer that question, you can really look at how we structure OpenAI. So we really have three main arms. We have capabilities, which is actually doing the technical work and pushing forward what these systems can do. There's safety, which is working on technical mechanisms to ensure that the systems we build are aligned with human values. And then there's policy, which is making sure that we have governance mechanisms, answering that question of, well, whose values? And so I think that the technical safety one is the one that people kind of talk about the most. Think about all of the dystopic AI movies; a lot of that is about not having good technical safety in place. And what we've been finding is that I think a lot of people look at the technical safety problem and think it's just intractable. This question of, what do humans want? How am I supposed to write that down? Can I even write down what I want? No way. And then they stop there. But the thing is, we've already built systems that are able to learn things that humans can't specify. Even the rules for how to recognize if there's a cat or a dog in an image, it turns out it's intractable to write that down, and yet we're able to learn it. And what we're seeing with
systems we build at OpenAI, and they're still at an early proof-of-concept stage, is that you are able to learn human preferences, you're able to learn what humans want from data. And so that's kind of the core focus for our technical safety team, and I think that we've actually had some pretty encouraging updates in terms of what we've been able to make work. So you have an intuition and a hope that from data, you know, looking at the value alignment problem from data, we can build systems that align with the collective better angels of our nature, so aligned with the ethics and the morals of human beings? To even say this in a different way, I mean, think about how we align humans, right? Think about how a human baby can grow up to be an evil person or a great person, and a lot of that is from learning from data. You have some feedback as a child is growing up; they get to see positive examples. And so I think that, just like that, the only example we have of a general intelligence that is able to learn from data, to align with human values, and to learn values, I think we shouldn't be surprised if the same sorts of techniques end up being how we solve value alignment for AGIs. So let's go even higher. I don't know if you've read the book Sapiens, but there's an idea that, you know, us human beings as a collective kind of developed together the ideas that we hold. There's no, in that context, objective truth. We just kind of all agree to certain ideas and hold them as a collective. If you have a sense that there is, in the world, good and evil, do you have a sense that, to the first approximation, there are some things that are good, and that you could teach systems to behave, to be good? So I think that this actually blends into our third team, which is the policy team, and this is the aspect I think people really talk about way less than they should,
because imagine that we built super-powerful systems, that we've managed to figure out all the mechanisms for these things to do whatever the operator wants. The most important question becomes, who's the operator, what do they want, and how is that going to affect everyone else? And I think that this question of what is good, what are those values, I mean, I think you don't even have to go to those very grand existential places to start to realize how hard this problem is. You just look at different countries and cultures across the world, and there's a very different conception of how the world works and what kinds of ways society wants to operate. And so I think that the really core question is actually very concrete, and I think it's not a question that we have ready answers to: how do you have a world where all the different countries that we have, United States, China, Russia, and the hundreds of other countries out there, are able to continue to not just operate in the way that they see fit, but where the world that emerges, where you have these very powerful systems operating alongside humans, ends up being something that empowers humans more, that makes human existence a more meaningful thing, and that people are happier and wealthier and able to live more fulfilling lives. It's not an obvious thing how to design that world once you have that very powerful system. So if we take a little step back, we're having a fascinating conversation, and OpenAI is, in many ways, a tech leader in the world, and yet we're thinking about these big existential questions, which is fascinating and really important. I think you're a leader in that space, and it's a really important space, of just thinking how AI affects society in a big-picture view. So Oscar Wilde said, we're all in the gutter, but some of us are looking at the stars, and I think OpenAI has a charter that looks to the stars, I would
say, to create intelligence, to create general intelligence, make it beneficial, safe, and collaborative. So can you tell me how that came about, how a mission like that, and the path to creating a mission like that, when OpenAI was founded? Yeah. So I think that in some ways it really boils down to taking a look at the landscape. So if you think about the history of AI, basically for the past 60 or 70 years, people have thought about this goal of, what could happen if you could automate human intellectual labor? Imagine you could build a computer system that could do that. What becomes possible? We have lots of sci-fi that tells stories of various dystopias, and increasingly you have movies like Her that tell you a little bit about maybe more of a utopic vision. You think about the impacts that we've seen from being able to have bicycles for our minds and computers, and I think that the impact of computers and the internet has just far outstripped what anyone really could have predicted. And so I think that it's very clear that if you can build an AGI, it will be the most transformative technology that humans will ever create. And so what it boils down to then is a question of, well, is there a path? Is there hope? Is there a way to build such a system? And I think that for 60 or 70 years, people got excited, and they ended up not being able to deliver on the hopes that people had pinned on them. And I think that then, after two winters of AI development, people kind of almost stopped daring to dream. Really talking about AGI or thinking about AGI became almost this taboo in the community. But I actually think that people took the wrong lesson from AI history. If you look back, starting in 1959 is when the perceptron was released, and this is basically one of the earliest neural networks. It was released to what was perceived as this massive overhype. So
in the New York Times in 1959, you have this article saying that the perceptron will one day recognize people, call out their names, instantly translate speech between languages. And people at the time looked at this and said, this is junk, your system can't do any of that, and basically spent ten years trying to discredit the whole perceptron direction, and succeeded. And all the funding dried up, and people kind of went in other directions. In the '80s there was a resurgence, and I'd always heard that the resurgence in the '80s was due to the invention of backpropagation and these algorithms that got people excited, but actually the causality was due to people building larger computers. You can find these articles from the '80s saying that the democratization of computing power suddenly meant that you could run these larger neural networks, and then people started to do all these amazing things. The backpropagation algorithm was invented, and the neural nets people were running were these tiny little, like, 20-neuron neural nets. What are you supposed to learn with 20 neurons? And so of course they weren't able to get great results. And it really wasn't until 2012 that this approach, that's almost the most simple, natural approach that people had come up with in the '50s, in some ways even in the '40s before there were computers, with the McCulloch-Pitts neuron, suddenly this became the best way of solving problems. And I think there are three core properties that deep learning has that are very worth paying attention to. The first is generality. We have a very small number of deep learning tools, SGD, deep neural net, maybe some RL, and it solves this huge variety of problems: speech recognition, machine translation, game playing, all these problems, small set of tools. So there's the generality. There's a second piece, which is the competence. You want to solve any of those problems? Throw
out forty years' worth of computer vision research, replace it with a deep neural net, and it kind of works better. And there's a third piece, which is the scalability. The one thing that has been shown time and time again is that if you have a larger neural network and throw more compute, more data at it, it will work better. Those three properties together feel like essential parts of building a general intelligence. Now, it doesn't just mean that if we scale up what we have, we will have an AGI. There are clearly missing pieces, there are missing ideas. We need to have answers for reasoning. But I think that the core here is that, for the first time, it feels like we have a paradigm that gives us hope that general intelligence can be achievable. And as soon as you believe that, everything else comes into focus. If you imagine that you may be able to, and, you know, the timeline I think remains uncertain, but I think that certainly within our lifetimes, and possibly within a much shorter period of time than people would expect, if you can really build the most transformative technology that will ever exist, you stop thinking about yourself so much, and you start thinking about, how do you have a world where this goes well? And you need to think about the practicalities of how do you build an organization and get together a bunch of people and resources, and make sure that people feel motivated and ready to do it. But I think that then you start thinking about, well, what if we succeed? And how do we make sure that when we succeed, the world is actually the place that we want ourselves to exist in, almost in the Rawlsian veil of ignorance sense of the word. And so that's kind of the broader landscape. OpenAI was really formed in 2015 with that high-level picture that AGI might be possible sooner than people think, and that we need to try to do our best to make sure it's going to go well. And then we spent the next couple
years really trying to figure out, what does that mean? How do we do it? And I think that typically with a company, you start out very small, you and a co-founder, and you build a product, you get some users, you get product-market fit. Then at some point you raise some money, you hire people, you scale, and then down the road, the big companies realize you exist and try to kill you. And for OpenAI, it was basically everything in exactly that order. Let me just pause for a second. You said a lot of things, and let me just admire the daring aspect of what OpenAI stands for, which is daring to dream. I mean, you said it, it's pretty powerful. You caught me off guard, because I think that's very true. The step of just daring to dream about the possibilities of creating intelligence in a positive, in a safe way, but just even creating intelligence, is a much-needed, refreshing catalyst for the AI community. So that's the starting point. Okay, so then the formation of OpenAI was... I would just say that when we were starting OpenAI, kind of the first question that we had is, is it too late to start a lab with a bunch of the best people possible? That was an actual question. That was the core question of, there was this dinner in July of 2015, and that was really what we spent the whole time talking about. Because you think about kind of where AI was, it had transitioned from being an academic pursuit to an industrial pursuit, and so a lot of the best people were in these big research labs, and we wanted to start our own one that, no matter how much resources we could accumulate, would pale in comparison to the big tech companies. And we knew that. And there's a question of, are we going to be actually able to get this thing off the ground? You need critical mass. You can't just do you and a co-founder build a product, right? You really need to have a group of, you
know five to ten people and we kind of concluded it wasn't obviously impossible so it seemed worth trying well you're also dreamers so who knows right that's right okay so speaking of that competing with with the the big players let's talk about some of the some of the tricky things as you think through this process of growing of seeing how you can develop these systems a task at scale that competes so you recently recently formed open ILP a new cap profit company that now carries the name open it so open has now this official company the original non profit company still exists and carries the opening I nonprofit name so can you explain what this company is what the purpose of us creation is and how did you arrive at the decision yep to create it openly I the whole entity and opening I LP as a vehicle is trying to accomplish the mission of ensuring that artificial general intelligence benefits everyone and the main way that we're trying to do that is by actually trying to build general intelligence ourselves and make sure the benefits are distributed to the world that's the primary way we're also fine if someone else does this all right it doesn't have to be us if someone else is going to build an AGI and make sure that the benefits don't get locked up in one company or you know one one want with one set of people like we're actually fine with that and so those ideas are baked into our Charter which is kind of the the foundational document that are describes kind of our values and how we operate but it's also really baked into the structure of open at LP and so the way that we've set up opening ILP is that in the case where we succeed right if we actually build what we're trying to build then investors are able to get a return and but that return is something that is capped and so if you think of AGI in terms of data the value that you could really create you're talking about the most transformative technology ever created it's going to create orders of magnitude 
more value than any existing company, and all of that value will be owned by the world, legally titled to the nonprofit, to fulfill that mission. And so that's the structure. So the mission is a powerful one, and it's one that I think most people would agree with. It's how we would hope AI progresses. And so how do you tie yourself to that mission? How do you make sure you do not deviate from that mission, that other incentives that are profit-driven don't interfere with the mission? So this was actually a really core question for us for the past couple of years, because I'd say that the way our history went was that for the first year we were getting off the ground. We had this high-level picture, but we didn't know exactly how we wanted to accomplish it. And really, two years ago is when we first started realizing that in order to build AGI, we're just going to need to raise way more money than we can as a nonprofit. I mean, you're talking many billions of dollars. And so the first question is: how are you supposed to do that and stay true to this mission? And we looked at every legal structure out there and concluded none of them were quite right for what we wanted to do. And I guess it shouldn't be too surprising that if you're going to do something like create crazy, unprecedented technology, you're going to have to come up with some crazy, unprecedented structure to do it in. And a lot of our conversations were with people at OpenAI, right, the people who really joined because they believe so much in this mission, thinking about how do we actually raise the resources to do it and also stay true to what we stand for. And the place you've got to start is to really align on what it is that we stand for, right? What are those values? What's really important to us? And so I'd say that we spent about a year really compiling the OpenAI Charter. And if you even look at the first line item in there, it says that, look, we expect we're going to have to marshal huge amounts of resources, but we're going to make sure that we minimize conflicts of interest with the mission. And aligning on all of those pieces was the most important step towards figuring out how do we structure a company that can actually raise the resources to do what we need to do. I imagine the decision to create OpenAI LP was a really difficult one, and there was a lot of discussion, as you mentioned, for a year, and there were different ideas, perhaps detractors within OpenAI, sort of different paths that you could have taken. What were those concerns? What were the different paths considered? What was that process of making that decision like? Yep. So if you look actually at the OpenAI Charter, there are almost two paths embedded within it. There is: we are primarily trying to build AGI ourselves, but we're also okay if someone else does it. And this is a weird thing for a company. It's really interesting, actually. There is an element of competition, that you do want to be the one that does it, but at the same time you're okay if somebody else does, and we'll talk about that a little bit, that trade-off. That's really interesting. And I think this was the core tension as we were designing OpenAI LP, and really the OpenAI strategy: how do you make sure both that you have a shot at being a primary actor, which really requires building an organization, raising massive resources, and really having the will to go and execute on some really, really hard vision, right, you need to really sign up for a long period to go and take on a lot of pain and a lot of risk, and to do that, normally you just import the startup mindset, right, you think about, okay, how do we execute, you give this very competitive angle. But you also have the second angle of saying that, well, the true mission isn't for OpenAI to build AGI; the true mission is for AGI to go
well for humanity. And so how do you take all of those first actions and make sure you don't close the door on outcomes that would actually be positive and fulfill the mission? And so I think it's a very delicate balance, right? I think that going a hundred percent in one direction or the other is clearly not the correct answer. And so I think that even in terms of just how we talk about OpenAI and think about it, one thing that's always in the back of my mind is to make sure that we're not just saying OpenAI's goal is to build AGI, right, that it's actually much broader than that. First of all, it's not just AGI, it's safe AGI, that's very important. But secondly, our goal isn't to be the ones to build it; our goal is to make sure it goes well for the world. And so I think that figuring out how do you balance all of those, and to get people to really come to the table and compile a single document that encompasses all of that, wasn't trivial. So part of the challenge here is that your mission is, I would say, beautiful, empowering, and a beacon of hope for people in the research community and just people thinking about AI. So your decisions are scrutinized more than, I think, a regular profit-driven company's. Do you feel the burden of this in the creation of the Charter and just in the way you operate? Yes. So why do you lean into the burden by creating such a charter? Why not keep it quiet? I mean, it just boils down to the mission, right? I'm here, and everyone else is here, because we think this is the most important mission. Right, dare to dream. All right, so do you think you can be good for the world, or create an AGI system that's good, when you're a for-profit company? From my perspective, I don't understand why profit interferes with positive impact on society. I don't understand why Google, that makes most of its money from ads, can't also do good for the world, or other companies, Facebook, anything. I don't understand why those have to interfere. Profit isn't the thing, in my view, that affects the impact of a company. What affects the impact of the company is the charter, is the culture, is the people inside, and profit is the thing that just fuels those people. So what are your views there? Yeah, so I think that's a really good question, and there are some real, long-standing debates in human society that are wrapped up in it. The way that I think about it is: just think about what are the most impactful nonprofits in the world, what are the most impactful for-profits in the world. It's much easier to list the for-profits. That's right. And I think that there's some real truth here, that the system that we've set up, the system for how today's world is organized, is one that really allows for huge impact, and part of that is that for-profits are self-sustaining and able to build on their own momentum. And I think that's a really powerful thing. It's something that, when it turns out that we haven't set the guardrails correctly, causes problems. Think about logging companies that go and deforest the rainforest. That's really bad; we don't want that. And it's actually really interesting to me, this question of how do you get positive benefits out of a for-profit company. It's actually very similar to how do you get positive benefits out of an AGI, right? You have this very powerful system, it's more powerful than any human, and it's kind of autonomous in some ways, superhuman on a lot of axes, and somehow you have to set the guardrails to get good to happen. But when you do, the benefits are massive. And so I think that when I think about nonprofit vs.
for-profit, I think it's just that not enough happens in nonprofits. They're very pure, but it's just hard to do things there. In for-profits, in some ways, too much happens, but if shaped in the right way, it can actually be very positive. And so with OpenAI LP, we're picking a road in between. Now, the thing I think is really important to recognize is that the way we think about OpenAI LP is that in the world where AGI actually happens, right, in a world where we are successful, we build the most transformative technology ever, the amount of value we're going to create will be astronomical. And so then, in that case, the cap that we have will be a small fraction of the value we create, and the amount of value that goes back to investors and employees looks pretty similar to what would happen in a pretty successful startup. And that's really the case that we're optimizing for, that we're thinking about: in the success case, making sure that the value we create doesn't get locked up. And I expect that in other for-profit companies it's possible to do something like that; I think it's not obvious how to do it, right? And I think that as a for-profit company you have a lot of fiduciary duty to your shareholders, and there are certain decisions you just cannot make. In our structure, we've set it up so that we have a fiduciary duty to the Charter, so that we always get to make the decision that is right for the Charter, even if it comes at the expense of our own stakeholders. And so I think that when I think about what's really important, it's not really about nonprofit vs. for-profit. It's really a question of: if you build AGI, and humanity is now in this new age, who benefits? Whose lives are better? And I think that what's really important is to have an answer that is: everyone. Yeah, which is one of the core aspects of the Charter. So one concern people have, not just with OpenAI but with Google, Facebook, Amazon, anybody really that's creating impact at that scale, is: how do we avoid, as your Charter says, avoid enabling the use of AI or AGI to unduly concentrate power? Why would a company like OpenAI not keep all the power of an AGI system to itself? The Charter? The Charter. So how does the Charter actualize itself day to day? So I think that, first, to zoom out: the way that we structured the company is so that the power for, sort of, dictating the actions that OpenAI takes ultimately rests with the board, right, the board of the nonprofit. And the board is set up in certain ways, with certain restrictions that you can read about in the OpenAI LP blog post, but effectively the board is the governing body for OpenAI LP, and the board has a duty to fulfill the mission of the nonprofit. And so that's how we tie, how we thread, all these things together. Now there's a question of, so, day to day, how do people, the individuals who in some ways are the most empowered ones? I mean, the board sort of gets to call the shots at the high level, but the people who are actually executing are the employees, the people here on a day-to-day basis who have the keys to the technical kingdom. And there I think that the answer looks a lot like, well, how do any company's values get actualized? I think that a lot of that comes down to needing people who are here because they really believe in that mission and they believe in the Charter, and they are willing to take actions that maybe are worse for them but are better for the Charter. And that's
something that's really baked into the culture. And honestly, I think that's one of the things that we really have to work to preserve as time goes on, and that's a really important part of how we think about hiring people and bringing people into OpenAI. So there are people here who could speak up and say, like, hold on a second, this is totally against what we stand for, culturally? Yeah, yeah, for sure. I mean, I think that's a pretty important part of how we operate, and even again with designing the Charter and designing OpenAI LP in the first place, there has been a lot of conversation with employees here, and a lot of times where employees said, wait a second, this seems like it's going in the wrong direction, and let's talk about it. And here's actually one thing I think is very unique about us as a small company: if you're at a massive tech giant, it's a little bit hard for an individual employee to go and talk to the CEO and say, I think that we're doing this wrong. And you look at companies like Google that have had some collective action from employees to make ethical change around things like Project Maven, and so maybe there are mechanisms at other companies that work, but here it's super easy for anyone to pull me aside, to pull Sam aside, to pull Ilya aside, and people do it all the time. One of the interesting things in the Charter is this idea, and it'd be great if you could try to describe or untangle it: switching from competition to collaboration in late-stage AGI development. It's really interesting, this dance between competition and collaboration. How do you think about that? Yeah. Assuming you can actually do the technical side of AGI development, I think there are going to be two key problems with figuring out how do you actually deploy it, make it go well. The first one of these is the run-up to building the first AGI. You look at how self-driving cars are being developed, and it's a competitive race, and the thing that always happens in a competitive race is that you have huge amounts of pressure to get rid of safety. And so that's one thing we're very concerned about, right, that multiple teams are figuring out: we can actually get there, but, you know, if we take the slower path that is more guaranteed to be safe, we will lose, and so we're going to take the fast path. And so the more that we can both ourselves not generate that competitive race, where we say: if the race is being run and someone else is further ahead than we are, we're not going to try to leapfrog, we're going to actually work with them, right, we will help them succeed. As long as what they're trying to do is to fulfill our mission, then we're good; we don't have to build AGI ourselves. And I think that's a really important commitment from us, but it can't just be unilateral, right? I think it's really important that other players who are serious about building AGI make similar commitments. Again, to the extent that everyone believes that AGI should be something to benefit everyone, then it actually really shouldn't matter which company builds it, and we should all be concerned about the case where we just race so hard to get there that something goes wrong. So what role do you think government, our favorite entity, has in setting policy and rules about this domain, from research to development, to early-stage, to late-stage AI and AGI development? So I think that, first of all, it's really important that government's in there in some way, shape, or form. At the end of the day, we're talking about building technology that will shape how the world operates, and there needs to be government as part of that answer. And so that's why we've done a number of different congressional testimonies, we
interact with a number of different lawmakers, and right now a lot of our message to them is that it's not the time for regulation, it is the time for measurement, right? Our main policy recommendation is that people, and the government does this all the time with bodies like NIST, spend time trying to figure out just where the technology is, how fast it's moving, and can really become literate and up to speed with respect to what to expect. So I think that today the answer really is about measurement, and I think there will be a time and place where that will change, and I think it's a little bit hard to predict exactly what that trajectory should look like. So there will be a point where regulation, federal, in the United States, the government steps in and helps be the, I don't want to say the adult in the room, to make sure that there are strict rules, maybe conservative rules, that nobody can cross? Well, I think there are maybe two angles to it. So today, with narrow AI applications, I think there are already existing bodies that are responsible and should be responsible for regulation. You think about, for example, with self-driving cars, you want, you know, the National Highway Traffic Safety Administration... Exactly, to be very good at that. That makes sense, right? Basically, what we're saying is that we're going to have these technological systems that are going to be performing applications that humans already do. Great, we already have ways of thinking about standards and safety for those. So I think actually empowering those regulators today is also pretty important. And then I think for AGI, there's going to be a point where we'll have better answers, and I think that maybe a similar approach of first measurement, and then start thinking about what the rules should be. I think it's really important that we don't prematurely squash progress. I think it's very easy to kind of smother the budding field, and I think that's something to really avoid. But I don't think the right way of doing it is to say let's just try to blaze ahead and not involve all these other stakeholders. So you recently released a paper on GPT-2, language modeling, but did not release the full model because you had concerns about the possible negative effects of the availability of such a model. Outside of just that, the decision is super interesting because of the discussion it creates at a societal level, the discourse. So it's fascinating in that aspect, but let's talk about the specifics here first. What are some negative effects that you envisioned, and of course, what are some of the positive effects? Yeah, so again, I think to zoom out, the way that we thought about GPT-2 is that with language modeling, we are clearly on a trajectory right now where we scale up our models and we get qualitatively better performance, right? GPT-2 itself was actually just a scale-up of a model that we'd released the previous June, right? We just ran it at a much larger scale, and we got these results where it was suddenly starting to write coherent prose, which was not something we'd seen previously. And what are we doing now? Well, we're going to scale up GPT-2 by 10x, by 100x, by 1000x, and we don't know what we're going to get. And so it's very clear that the model that we released last June, I think it's kind of like a good academic toy. It's not something that we think can really have negative applications, or to the extent that it can, the positive of people being able to play with it far, far outweighs the possible harms. You fast forward to not GPT-2 but GPT-20, and you think about what that's going to be like, and I think that the capabilities are going to be substantive. And so there needs to be a point in between the two where you say: this is something where we are drawing the line, and that we need to
start thinking about the safety aspects. And I think for GPT-2, we could have gone either way, and in fact, when we had conversations internally, we had a bunch of pros and cons, and it wasn't clear which one outweighed the other. And I think that when we announced that, hey, we decided not to release this model, then there was a bunch of conversation where various people said it's so obvious that you should have just released it, and other people said it's so obvious you should not have released it. And I think that almost definitionally means that holding it back was the correct decision, right? If it's not obvious whether something is beneficial or not, you should probably default to caution. And so I think that the overall landscape for how we think about it is that this decision could have gone either way; there are great arguments in both directions. But for future models down the road, and possibly sooner than you'd expect, because scaling these things up doesn't have to take that long, those ones you're definitely not going to want to release into the wild. And so I think that we almost view this as a test case, to see: how do you have a society, or how do you have a system, that goes from having no concept of responsible disclosure, where the mere idea of not releasing something for safety reasons is unfamiliar, to a world where you say, okay, we have a powerful model, let's at least think about it, let's go through some process? And you think about the security community: it took them a long time to design responsible disclosure, right? You think about this question of, well, I have a security exploit, I send it to the company, and the company either tries to prosecute me or just ignores it. What do I do, right? And the alternative of, oh, I just always publish my exploits, that doesn't seem good either, right? And so it really took a long time, and it was bigger than any individual, right? It was really about building the whole community that believed that, okay, we'll have this process where you send it to the company, and if they don't act in a certain time, then you can go public, and you're not a bad person, you've done the right thing. And I think that in AI, part of the response to GPT-2 just proves that we don't have any concept of this. So that's the high-level picture, and I think this was a really important move to make. We could have maybe delayed it for GPT-3, but I'm really glad we did it for GPT-2. And so now you look at GPT-2 itself, and you think about the substance of, okay, what are the potential negative applications? So you have this model that's been trained on the internet, which is also going to be a bunch of very biased data, a bunch of very offensive content, and you can ask it to generate content for you on basically any topic, right? You just give it a prompt and it'll just start writing, and it writes content like you see on the internet, even down to saying "advertisement" in the middle of some of its generations. And you think about the possibilities for generating fake news or abusive content, and it's interesting seeing what people have done with the smaller version of GPT-2 that we released. People have done things like: take my own Facebook message history and generate more Facebook messages like me; and people generating fake politician content. There's a bunch of things there where you at least have to think: is this going to be good for the world? There's the flip side, which is that I think there are a lot of awesome applications that we really want to see, like creative applications. If you have sci-fi authors that can work with this tool and come up with cool ideas, that seems awesome, if we can write better sci-fi through the use of these tools. And
we've actually had a bunch of people write in to us asking, hey, can we use it for a variety of different creative applications? So the positives are actually pretty easy to imagine. The usual NLP applications are really interesting, but let's go there. It's kind of interesting to think about a world, look at Twitter, where there's not just fake news, but smarter and smarter bots being able to spread, in an interesting, complex, networked way, information that just floods out us regular human beings with our original thoughts. So what are your views of this world with GPT-20? How do we think about it? Again, it's like one of those things about in the '50s trying to describe the internet or the smartphone. What do you think about that world, the nature of information? One possibility is that we'll always try to design systems that identify robot versus human, and we'll do so successfully, and so we will authenticate that we're still human. And the other world is that we just accept the fact that we're swimming in a sea of fake news and just learn to swim there. Well, have you ever seen the popular meme of a robot with a physical arm and pen clicking the "I'm not a robot" button? Yeah. I think the truth is that really trying to distinguish between robot and human is a losing battle. Ultimately? You think it's a losing battle? I think it's a losing battle, ultimately, right, in terms of the content, in terms of the actions that you can take. I mean, think about how CAPTCHAs have gone, right? CAPTCHAs used to be very nice and simple: you have this image, all of our OCR is terrible, you put a couple of artifacts in it, humans are going to be able to tell what it is, and an AI system wouldn't be able to. Today, I can barely do CAPTCHAs. Yeah. And I think that this is just kind of where we're going. I think CAPTCHAs were a moment-in-time thing, and as AI systems become more powerful, there being human capabilities that can be measured in a very easy, automated way that the AIs will not be capable of, I think that's just an increasingly hard technical battle. But it's not that all hope is lost, right? Think about how we already authenticate ourselves, right? We have systems, we have Social Security numbers if you're in the U.S., you have ways of identifying individual people, and having real-world identity tied to digital identity seems like a step towards authenticating the source of content rather than the content itself. Now, there are problems with that: how can you have privacy and anonymity in a world where the only way you can trust content is by looking at where it comes from? And so I think that building out good reputation networks may be one possible solution. But yeah, I think this question is not an obvious one, and I think that, maybe sooner than we think, we'll be in a world where, you know, today I often will read a tweet and be like, I feel like a real human wrote this, or, I don't feel like this is genuine; I feel like I kind of judge the content a little bit. And I think in the future it just won't be the case. You look at, for example, the FCC comments on net neutrality: it came out later that millions of those were auto-generated, and researchers were able to use various statistical techniques to determine that. What do you do in a world where those statistical techniques don't work, where it's just impossible to tell the difference between humans and AIs, and in fact the most persuasive arguments are written by AI? All that stuff, it's not sci-fi anymore. You can have GPT-2 making a great argument for why recycling is bad for the world, and you've got to read that and be like, huh, you're right. Yeah, that's quite interesting. I
mean, ultimately it boils down to the physical world being the last frontier of proving, so, like you said, basically networks of people, humans vouching for humans, in the physical world, and somehow the authentication ends there. I mean, if I had to ask you, you're way too eloquent for a human, so if I had to ask you to authenticate, like, prove, how do I know you're not a robot, and how do you know I'm not a robot? I think that, so far, in the space of this conversation we just had, the physical movements we did, that's the biggest gap between us and AI systems, the physical interaction. So maybe that's the last frontier. Well, here's another question: why is solving this problem important, right? Like, what aspects are really important to us? I think that probably where we'll end up is we'll hone in on what we really want out of knowing if we're talking to a human. And I think that, again, this comes down to identity. And so I think that the internet of the future, I expect to be one that will have lots of agents out there that will interact with you, but I think the question of whether this is a real flesh-and-blood human, or an automated system, may be less important. Let's actually go there. GPT-2 is impressive, and let's look at GPT-20. Why is it so bad that all my friends are GPT-20? Why is it so important, on the internet, do you think, to interact with only human beings? Why can't we live in a world where ideas can come from models trained on human data? Yeah, I think this is actually a really interesting question. This comes back to: how do you even picture a world with some new technology? And one thing I think is important is, I guess, honesty. I think that if you have, almost in a Turing-test-style sense of technology, AIs that are pretending to be humans and deceiving you, I think that feels like a bad thing, right? I think that it's really important that we feel like we're in control of our environment, that we understand who we're interacting with, and whether it's an AI or a human, that's not something we're being deceived about. But I think the flip side, of can I have as meaningful an interaction with an AI as I can with a human, well, I actually think here you can turn to sci-fi, and Her, I think, is a great example of asking this very question. One thing I really love about Her is it really starts out almost by asking how meaningful are human virtual relationships, right, and then you have a human who has a relationship with an AI, and you really start to be drawn into that, and all of your emotional buttons get triggered in the same way as if there was a real human on the other side of that phone. And so I think this is one way of thinking about it: I think that we can have meaningful interactions, and if there's a funny joke, in some sense it doesn't really matter if it was written by a human or an AI. But what you don't want, where I think we should really draw hard lines, is deception. And I think that as long as we're in a world where, you know, why do we build AI systems at all? The reason we want to build them is to enhance human lives, to make humans be able to do more things, to have humans feel more fulfilled. And if we can build AI systems that do that, sign me up. So the process of language modeling, how far do you think it can take us? Let's look at the movie Her. Do you think a dialogue, a natural language conversation, as formulated by the Turing test, for example, do you think that process could be achieved through this kind of unsupervised language modeling? So I think the Turing test, in its real form, isn't just about language, right? It's really about reasoning, too. To really pass the Turing test, I should be able to teach calculus to whoever's on the other side and have
it really understand calculus and be able to you know go and solve new calculus problems and so I think that to really solve the Turing test we need more than what we're seeing with language models we need some way of plugging in reasoning now how different will that be from what we already do that's an open question right might be that we need some sequence of totally radical new ideas or it might be that we just need to kind of shape our existing systems in a slightly different way but I think that in terms of how far language modeling will go it's already gone way further than many people would have expected right and I think there's a lot of really interesting angles to poke in terms of how much does GPT-2 understand the physical world like you know you read a little bit about fire underwater in GPT-2 so it's like okay maybe it doesn't quite understand what these things are but at the same time I think that you also see various things like smoke coming from flame and you know a bunch of these things that GPT-2 has no body it has no physical experience it's just statically read data and I think that if the answer is like we don't know yet then these are questions that we're starting to be able to actually ask of physical systems of real systems that exist and that's very exciting do you think what's your intuition do you think if you just scale language modeling like significantly scale it that reasoning can emerge from the same exact mechanisms I think it's unlikely that if we just scale GPT-2 that we'll have reasoning in the full-fledged way I think that there's like you know the type signature is a little bit wrong right that like there's something we do that we call thinking right where we spend a lot of compute like a variable amount of compute to get to better answers right I think a little bit harder and I get a better answer and that kind of type signature isn't quite encoded in GPT-2
right GPT-2 is kind of like it's spent a long time like its evolutionary history baking in all this information getting very very good at this predictive process and then at runtime I just kind of do one forward pass and am able to generate stuff and so you know there might be small tweaks to what we do in order to get the type signature right for example well you know it's not really one forward pass right you generate symbol by symbol and so maybe you generate like a whole sequence of thoughts and you only keep like the last bit or something right but I think that at the very least I would expect you have to make changes like that yeah just exactly like you said thinking is the process of generating thought by thought in the same kind of way like you said keeping the last bit the thing that we converge towards you know and I think there's another piece which is interesting which is this out of distribution generalization right that like thinking somehow lets us do that right that we experience a thing and yet somehow we just kind of keep refining our mental model of it this is again something that feels tied to whatever reasoning is and maybe it's a small tweak to what we do maybe it's many ideas and will take many decades yeah so the assumption there with generalization out of distribution is that it's possible to create new ideas you know it's possible that nobody's ever creating new ideas and then with scaling GPT-2 to GPT-20 you would essentially generalize to all the possible thoughts that humanity is ever gonna have I think just to play devil's advocate how many new story ideas have we come up with since Shakespeare right yeah exactly it's just all different forms of love and drama and so on okay not sure if you read The Bitter Lesson a recent blog post by Rich Sutton I have he basically says something that echoes some of the ideas that you've been talking about which is he says the biggest lesson
that can be read from so many years of AI research is that general methods that leverage computation are ultimately going to win out do you agree with this so basically at OpenAI and in general about the ideas you are exploring about coming up with methods whether it's GPT-2 modeling or whether it's OpenAI Five playing dota that a general method is better than a more fine-tuned expert-tuned method yeah so I think that well one thing that I think was really interesting about the reaction to that blog post was that a lot of people have read this as saying that compute is all that matters and it's a very threatening idea right and I don't think it's a true idea either right it's very clear that we have algorithmic ideas that have been very important for making progress and to really build AGI you want to push as far as you can on the computational scale and you want to push as far as you can on human ingenuity and so I think you need both but I think the way that you phrase the question is actually very good right that it's really about what kind of ideas should we be striving for and absolutely if you can find a scalable idea you pour more compute into it you pour more data into it it gets better like that's the real Holy Grail and so I think that the answer to the question I think is yes that that's really how we think about it that part of why we're excited about the power of deep learning the potential for building an AGI is because we look at the systems that exist the most successful AI systems and we realize that if you scale those up they're gonna work better and I think that that scalability is something that really gives us hope for being able to build transformative systems so I'll tell you this is partially an emotional you know a response that people often have is compute is so important for state-of-the-art performance you know individual developers maybe a 13 year old sitting somewhere in Kansas or something like
that you know they're sitting there they might not even have a GPU or may have a single GPU a 1080 or something like that and there's this feeling like well how can I possibly compete or contribute to this world of AI if scale is so important so if you can comment on that and in general do you think we need to also in the future focus on democratizing compute resources more or as much as we democratize the algorithms well so the way that I think about it is that there's this space of possible progress right there's a space of ideas and sort of systems that will work that will move us forward and there's a portion of that space and to some extent an increasingly significant portion of that space that does just require massive compute resources and for that bit I think that the answer is kind of clear and that part of why we have the structure that we do is because we think it's really important to be pushing the scale and to be you know building these large clusters and systems but there's another portion of the space that isn't about the large scale compute that is these ideas and again I think that for these ideas to really be impactful and really shine they should be ideas that if you scaled them up would work way better than they do at small scale but you can discover them without massive computational resources and if you look at the history of recent developments you think about things like the GAN or the VAE these are ones that I think you could come up with without having massive computational resources and you know in practice people did come up with them without having massive computational resources right I just talked to Ian Goodfellow but the thing is the initial GAN produced pretty terrible results right and it was only because they were smart enough to know that it's quite surprising that it can generate anything at all that they knew it was promising and do you see a world or is that too optimistic and dreamer
like to imagine that the compute resources are something that's owned by governments and provided as a utility actually to some extent this question reminds me of a blog post from one of my former professors at Harvard this guy Matt Welsh who was a systems professor I remember sitting in his tenure talk right and you know he had literally just gotten tenure he went to Google for the summer and then decided he wasn't going back to academia right and he kind of in his blog post makes this point that look as a systems researcher I come up with these cool systems ideas right and I build a little proof of concept and the best thing I can hope for is that the people at Google or Yahoo which was around at the time will implement it and like actually make it work at scale right that's like the dream for me right I build the little thing and they build the big thing that's actually working and for him he said I'm done with that I want to be the person who's actually doing this building and deploying and I think that there's a similar dichotomy here right I think that there are people who really actually find value and I think it is a valuable thing to do to be the person who produces those ideas right who builds the proof of concept and yeah you don't get to generate the coolest possible GAN images but you invented the GAN right and so there's a real trade-off there and I think that's a very personal choice but I think there's value in both sides do you think in creating AGI or some of these new models we would see echoes of the brilliance even at the prototype level so you would be able to develop those ideas without scale the initial seeds you know I always like to look at examples that exist right look at real precedent and so take a look at the June 2018 model that we released that we scaled up to turn into GPT-2 and you can see that at small scale it set some records right this was you know the original GPT
we actually had some cool generations that weren't nearly as amazing and really stunning as the GPT-2 ones but it was promising it was interesting and so I think it is the case that with a lot of these ideas you do see promise at small scale but there is an asterisk here a very big asterisk which is sometimes we see behaviors that emerge that are qualitatively different from anything we saw at small scale and that the original inventor of whatever algorithm looks at it and says I didn't think it could do that this is what we saw in dota right so PPO was created by John Schulman who's a researcher here and with dota we basically just ran PPO at massive massive scale and there were some tweaks in order to make it work but fundamentally it's PPO at the core and we were able to get this long-term planning these behaviors to really play out on a time scale that we just thought was not possible and John looked at that and was like I didn't think it could do that that's what happens when you're at three orders of magnitude more scale can't contest that yeah but it still has the same flavors of you know at least echoes of the expected brilliance although I suspect with GPT as it's scaled more and more you might get surprising things so yeah you're right it's interesting it's difficult to see how far an idea will go when it's scaled it's an open question we've also at that point with dota and PPO like I mean here's a very concrete one right actually one thing that's very surprising about dota that I think people don't really pay that much attention to is the degree of generalization out of distribution that happens right that you have this AI that's trained against other bots for the entirety of its existence sorry to take a step back can you talk through you know the story of dota the story of leading up to OpenAI Five and past that and what was the process of self play it's a
lot of training yeah so with dota it's a complex video game and we started trying to solve dota because we felt like this was a step towards the real world relative to other games like chess or go right those are very clean board games where you just kind of have this board and very discrete moves dota starts to be much more continuous time so you have this huge variety of different actions you have a 45 minute game with all these different units and it's got a lot of messiness to it that really hasn't been captured by previous games and famously all of the hard-coded bots for dota were terrible right it was just impossible to write anything good for it because it's so complex and so this seemed like a really good place to push what's the state of the art in reinforcement learning and so we started by focusing on the one versus one version of the game and we were able to solve that we were able to beat the world champions and the learning you know the skill curve was this crazy exponential right we were constantly just scaling up we were fixing bugs and you look at the skill curve and it was really a very very smooth one it's actually really interesting to see how that like human iteration loop yielded very steady exponential progress one side note first of all it's an exceptionally popular video game the effect of that is that there's a lot of incredible human experts at that video game so the benchmark you're trying to reach is very high and the other question is can you talk about the approach that was used initially and throughout training these agents to play this game yep so the approach that we used is self play and so you have two agents they don't know anything they battle each other they discover something a little bit good and now they both know it and they just get better and better and better without bound and that's a really powerful idea right that we then went from the one
versus one version of the game and scaled up to five versus five right so you think about kind of like with basketball where you have this team sport and you need to do all this coordination and we were able to push the same idea the same self play to really get to the professional level at the full five versus five version of the game and the thing I think is really interesting here is that these agents in some ways they're almost like an insect like intelligence right where they have a lot in common with how an insect is trained right an insect kind of lives in this environment for a very long time or you know the ancestors of this insect have been around for a long time and had a lot of experience that gets baked into this agent and you know it's not really smart in the sense of a human right it's not able to go and learn calculus but it's able to navigate its environment extremely well and it's able to handle unexpected things in the environment it's never seen before pretty well and we see the same sort of thing with our dota bots right within this game they're able to play against humans which is something that never existed in their evolutionary environment totally different playstyles from humans versus the bots and yet they're able to handle it extremely well and that's something I think was very surprising to us something that doesn't really emerge from what we've seen with PPO at smaller scale right the kind of scale we're running this stuff at was you know like a hundred thousand CPU cores running with like hundreds of GPUs it was probably about you know something like hundreds of years of experience going into this bot every single real day and so that scale is massive and we start to see very different kinds of behaviors out of the algorithms that we all know and love so you mentioned beating the world expert 1v1 and then you weren't able to win
5v5 this year against the best in the world so what's the comeback story first of all talk through that exceptionally exciting event and what do the following months and this year look like yeah so well one thing that's interesting is that you know we lose all the time because the dota team at OpenAI we play the bot against better players than our system all the time or at least we used to right like you know the first time we lost publicly was when we went up on stage at The International and we played against some of the best teams in the world and we ended up losing both games but we gave them a run for their money right both games were kind of 30 minutes 25 minutes and they went back and forth back and forth and so I think that really shows that we're at the professional level and kind of looking at those games we think that the coin could have gone a different direction and we could have had some wins and so that was actually very encouraging for us and you know it's interesting because The International was at a fixed time right so we knew exactly what day we were going to be playing and we pushed as far as we could as fast as we could two weeks later we had a bot that had an 80% win rate versus the one that played at TI so the march of progress you know you should think of that as a snapshot rather than as an end state and so in fact we'll be announcing our finals pretty soon I actually think that we'll announce our final match prior to this podcast being released so there we will be playing against the world champions and you know for us it's really less about like the way that we think about what's upcoming is that it's the final milestone the final competitive milestone for the project right that our goal in all of this isn't really about beating humans at dota our goal is to push the state of the art in reinforcement learning and
we've done that right and we've actually learned a lot from our system and we have you know I think a lot of exciting next steps that we want to take and so you know as kind of a final showcase of what we built we're going to do this match but for us it's not really about the success or failure to see you know do we have the coin flip go in our direction or against us where do you see the field of deep learning heading in the next few years where do you see the work in reinforcement learning perhaps heading and more specifically with OpenAI all the exciting projects that you're working on what does 2019 hold for you massive scale scale I will put an asterisk on that and just say you know I think that it's about ideas plus scale you need both so that's a really good point so the question in terms of ideas you have a lot of projects that are exploring different areas of intelligence and the question is when you think of scale do you think about growing the scale of those individual projects or do you think about adding new projects and also if you are thinking about adding new projects or if you look at the past what's the process of coming up with new projects and new ideas so we really have a life cycle of projects here so we start with a few people just working on a small scale idea and language is actually a very good example of this it was really you know one person here who was pushing on language for a long time and then you get signs of life right and so this is like let's say you know with the original GPT we had something that was interesting and we said okay it's time to scale this right it's time to put more people on it put more computational resources behind it and then we just kind of keep pushing and keep pushing and the end state is something that looks like dota or robotics where you have a large team of you know 10 or 15 people that are running things at very large scale and you're able to really have material
engineering and you know sort of machine learning science coming together to make systems that work and get material results that just would have been impossible otherwise so we do that whole lifecycle we've done it a number of times you know typically end to end it's probably two years or so to do it you know the organization's been around for three years so maybe we'll find that we also have longer life cycle projects but you know we will work up to those so one team that we're actually just starting Ilya and I are kicking off a new team called the reasoning team and this is to really try to tackle how do you get neural networks to reason and we think that this will be a long-term project and one we're very excited about in terms of reasoning super exciting topic what kind of benchmarks what kind of tests of reasoning do you envision what would if you sat back with whatever drink you would be impressed that this system is able to do something what would that look like theorem proving theorem proving so some kind of logic and especially mathematical logic I think so right I think that there's kind of other problems that are dual to theorem proving in particular you know you think about programming you think about even like security analysis of code these all kind of capture the same sorts of core reasoning and being able to do some amount of out of distribution generalization it would be quite exciting if OpenAI's reasoning team was able to prove that P equals NP that would be very nice it would be very very exciting especially if it turns out that P equals NP that'll be interesting too it would be ironic and humorous you know so what problem stands out to you as the most exciting and challenging and impactful to work on for us as a community in general and for OpenAI this year you mentioned reasoning I think that's a heck of a problem yeah so I think reasoning is an important
one I think it's gonna be hard to get good results in 2019 you know again just like we think about the life cycle it takes time I think for 2019 language modeling seems to be kind of on that ramp right it's at the point that we have a technique that works we want to scale 100x 1000x see what happens awesome do you think we're living in a simulation I think it's hard to have a real opinion about it you know it's actually interesting I separate out things that I think can you know yield materially different predictions about the world from ones that are just kind of fun to speculate about and I kind of view simulation as more like is there a flying teapot between Mars and Jupiter like maybe but it's a little bit hard to know what that would mean for my life so there is something actionable so some of the best work OpenAI has done is in the field of reinforcement learning and some of the success of reinforcement learning comes from being able to simulate the problem you're trying to solve so do you have a hope for the future of reinforcement learning and for the future of simulation like whether we're talking about autonomous vehicles or any kind of system do you see that scaling so we'll be able to simulate systems and hence be able to create a simulator that echoes our real world and proving once and for all even though you're denying it that we're living in a simulation right so you know kind of for the core there like can we use simulation for self-driving cars take a look at our robotic system Dactyl right that was trained in simulation using the dota system in fact and it transfers to a physical robot and I think everyone looks at our dota system and says okay it's just a game how are you ever going to escape to the real world and the answer is well we did it with a physical robot that nobody could program and so I think the answer is simulation goes a lot further than you think if you
apply the right techniques to it now there's a question of you know are the beings in that simulation gonna wake up and have consciousness I think that one seems a lot harder to again reason about I think that you know you really should think about like where exactly does human consciousness come from and our own self-awareness and you know is it just that like once you have a complicated enough neural net do you have to worry about the agents feeling pain and I think there's like interesting speculation to do there but you know again I think it's a little bit hard to know for sure well let me just keep with the speculation do you think to create intelligence general intelligence you need one consciousness and two a body do you think any of those elements are needed or is intelligence something that's orthogonal to those I'll stick to the kind of non grand answer first right so the non grand answer is just to look at you know what are we already making work you look at GPT-2 a lot of people would have said that to even get these kinds of results you need real world experience you need a body you need grounding how are you supposed to reason about any of these things how are you supposed to like even kind of know about smoke and fire and those things if you've never experienced them and GPT-2 shows you can actually go way further than that kind of reasoning would predict so I think that in terms of do we need consciousness do we need a body it seems the answer is probably not right that we can probably just continue to push kind of the systems we have they already feel general they're not as competent or as general or able to learn as quickly as an AGI would but you know they're at least like kind of proto AGI in some way now let's move to the grand answer which is you know if our neural nets were conscious already would we ever know how can we tell right yeah here's where the
speculation starts to become you know at least interesting or fun and maybe a little bit disturbing depending on where you take it but it certainly seems that when we think about animals that there's some continuum of consciousness you know my cat I think is conscious in some way right you know not as conscious as a human and you could imagine that you could build a little consciousness meter right you point it at a cat it gives you a little reading you point it at a human it gives you a much bigger reading what would happen if you pointed one of those at a dota neural net and if you're training this massive simulation do the neural nets feel pain you know it becomes pretty hard to know that the answer is no and it becomes pretty hard to really think about what that would mean if the answer were yes and it's very possible you know for example you could imagine that maybe the reason that we humans have consciousness is because it's a convenient computational shortcut right if you think about it if you have a being that wants to avoid pain which seems pretty important to survive in this environment and wants to you know eat food then maybe the best way of doing that is to have a being that's conscious right that you know in order to succeed in the environment you need to have those properties and how are you supposed to implement them and maybe consciousness is a way of doing that if that's true then actually maybe we should expect that really competent reinforcement learning agents will also have consciousness but you know it's a big if and I think there are a lot of other arguments you can make in other directions I think that's a really interesting idea that even GPT-2 has some degree of consciousness that's something that's actually not as crazy to think about it's useful to think about as we think about what it means to create the intelligence of a dog the intelligence of a cat and the intelligence of a human so last question do you think we will
ever fall in love like in the movie Her with an artificial intelligence system or an artificial intelligence system falling in love with a human I hope so if there's any better way to end it it's on love so Greg thanks so much for talking today thank you for having me
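As a side note on the self-play idea Brockman describes (two agents that start knowing nothing, battle each other, discover something a little bit good, and both improve), the loop structure can be sketched in toy form. Everything below, including the rock-paper-scissors game and the simple weight-nudging update, is a hypothetical illustration of the self-play loop, not OpenAI Five's actual algorithm (which ran PPO at massive scale):

```python
import random

class Agent:
    """A tiny agent that keeps a preference weight for each move."""
    def __init__(self, moves):
        self.weights = {m: 1.0 for m in moves}

    def act(self):
        # Sample a move in proportion to its learned weight.
        moves, w = zip(*self.weights.items())
        return random.choices(moves, weights=w)[0]

    def reinforce(self, move, reward, lr=0.1):
        # Nudge the chosen move's weight by the reward, keeping it positive.
        self.weights[move] = max(0.01, self.weights[move] + lr * reward)

def beats(a, b):
    """Rock-paper-scissors outcome: +1 if a beats b, -1 if a loses, 0 on a tie."""
    wins = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}
    if a == b:
        return 0
    return 1 if (a, b) in wins else -1

def self_play(episodes=1000, seed=0):
    """Two agents play each other repeatedly; both learn from every game."""
    random.seed(seed)
    moves = ["rock", "paper", "scissors"]
    p1, p2 = Agent(moves), Agent(moves)
    for _ in range(episodes):
        m1, m2 = p1.act(), p2.act()
        r = beats(m1, m2)
        # Zero-sum: one agent's win is the other's loss.
        p1.reinforce(m1, r)
        p2.reinforce(m2, -r)
    return p1, p2
```

The key property the loop illustrates is that neither agent needs an external teacher: each agent's opponent improves alongside it, so the difficulty of the task scales automatically.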
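Brockman's remark that a model might "generate a whole sequence of thoughts and only keep the last bit" can likewise be sketched as a decoding wrapper. Here `next_token`, the `ANSWER:` marker, and the `<eos>` token are hypothetical stand-ins for a real model's forward pass and output format, not any actual GPT-2 API:

```python
def think_then_answer(next_token, prompt, max_tokens=50, marker="ANSWER:"):
    """Generate token by token; the intermediate 'thoughts' stay in context,
    but only the text after `marker` is returned to the caller."""
    text = prompt
    for _ in range(max_tokens):
        tok = next_token(text)  # one forward pass per token (variable compute)
        text += tok
        if tok == "<eos>":
            break
    if marker in text:
        return text.split(marker, 1)[1].replace("<eos>", "").strip()
    return ""

def make_toy_model(script):
    """A scripted stand-in for a language model, used only for illustration."""
    it = iter(script)
    return lambda context: next(it)
```

For example, a toy model that "thinks" through an addition in steps before emitting its answer:

```python
toy = make_toy_model(["2+3=5 ", "5+4=9 ", "ANSWER:", " 9", "<eos>"])
think_then_answer(toy, "2+3+4=")  # -> "9"
```

The point of the wrapper is the type signature: harder inputs can consume more forward passes before the marker appears, which is the "variable amount of compute" that a single forward pass does not provide.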
Eric Weinstein: Revolutionary Ideas in Science, Math, and Society | Lex Fridman Podcast #16
- The following is a conversation with Eric Weinstein. He's a mathematician, economist, physicist and the managing director of Thiel Capital. He coined the term and you could say, is the founder of the Intellectual Dark Web, which is a loosely assembled group of public intellectuals that includes Sam Harris, Jordan Peterson, Steven Pinker, Joe Rogan, Michael Shermer and a few others. This conversation is part of the artificial intelligence podcast at MIT and Beyond. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D-M-A-N. And now, here's my conversation with Eric Weinstein. - Are you nervous about this? - Scared shitless. - Okay, (speaking foreign language). - You mentioned Kung Fu Panda as one of your favorite movies. It has the usual profound master student dynamic going on, so who has been a teacher that significantly influenced the direction of your thinking and life's work? So, if you're the Kung Fu Panda, who was your Shifu? - Oh, well that's interesting, because I didn't see Shifu as being the teacher. - Who was the teacher? - Oogway, Master Oogway, the turtle. - Oh, the turtle, right. - They only meet twice in the entire film and the first conversation sort of doesn't count. So, the magic of the film, in fact, its point, is that the teaching that really matters is transferred during a single conversation and it's very brief. And so, who played that role in my life? I would say either my grandfather, Harry Rubin and his wife Sophie Rubin, my grandmother, or Tom Lehrer. - Tom Lehrer? - Yeah. - In which way? - If you give a child Tom Lehrer records, what you do is you destroy their ability to be taken over by later malware, and it's so irreverent, so witty, so clever, so obscene, that it destroys the ability to lead a normal life for many people.
So if I meet somebody who's unusually shifted from any kind of neurotypical presentation, I'll often ask them, are you a Tom Lehrer fan, and the odds that they will respond are quite high. - Now, Tom Lehrer is Poisoning Pigeons in the Park, Tom Lehrer? - That's very interesting. There are a small number of Tom Lehrer songs that broke into the general population. Poisoning Pigeons in the Park, the Element Song and perhaps the Vatican Rag. So, when you meet somebody who knows those songs, but doesn't know-- - Oh, you're judging me right now, aren't you? - Harshly. - Okay. - No, but you're Russian, so. - Yes. - Undoubtedly you know Nikolai Ivanovich Lobachevsky. That song. - Yes, yeah, yup. - So that was a song about plagiarism that was in fact plagiarized, which most people don't know, from Danny Kaye. Where Danny Kaye did a song called Stanislavsky of the Moscow Arts. And, so Tom Lehrer did this brilliant job of plagiarizing a song and making it about plagiarism, and then making it about this mathematician, who worked in non-Euclidean geometry. That was like giving heroin to a child. It was extremely addictive and eventually led me to a lot of different places, one of which may have been a PhD in mathematics. - And he was also at least a lecturer in mathematics, I believe at Harvard, something like that? - Yeah, I just had dinner with him, in fact. When my son turned 13, we didn't tell him, but his bar mitzvah present was dinner with his hero, Tom Lehrer, and Tom Lehrer was 88 years old, sharp as a tack, irreverent and funny as hell and just, you know, there are very few people in this world that you have to meet while they're still here and that was definitely one for our family. - So that wit is a reflection of intelligence in some kind of deep way. Like where that would be a good test of intelligence, whether you're a Tom Lehrer fan. So, what do you think that is about wit? About that kind of humor, ability to see the absurdity in existence?
Do you think that's connected to intelligence or are we just two Jews on a mic that appreciate that kind of humor? - No, I think that it's absolutely connected to intelligence. You can see it, there's a place where Tom Lehrer decides that he's going to lampoon Gilbert of Gilbert and Sullivan and he's going to outdo Gilbert with clever, meaningless wordplay and he has, I forget the, well let's see. He's doing Clementine as if Gilbert and Sullivan wrote it and he says that I missed her, depressed her young sister named Esther. This mister to pester she'd tried. A pestering sister's a festering blister you're best to resist her, say I! The sister persisted, the mister resisted, I kissed her, all loyalty slipped. When she said I could have her her sister's cadaver must surely have turned in its crypt. That's so dense. It's so insane. - Yeah. - That's clearly intelligence because it's hard to construct something like that. If I look at my favorite Tom Lehrer lyric, you know, there's a perfectly absurd one which is, once all the Germans were warlike and mean, but that couldn't happen again. We taught them a lesson in 1918 and they've hardly bothered us since then. - Right. - That is a different kind of intelligence. You know, you're taking something that is so horrific and you're sort of making it palatable and funny, and demonstrating also just your humanity. I mean, I think the thing that came through as Tom Lehrer wrote all of these terrible, horrible lines, was just what a sensitive and beautiful soul he was, who was channeling pain through humor and through grace. - I've seen throughout Europe, throughout Russia, the same kind of humor emerged from the generation of World War II. It seemed like that humor is required to somehow deal with the pain and the suffering that that war created. - Well, you do need the environment to create the broad Slavic soul.
I don't think that many Americans really appreciate Russian humor, how you had to joke during the time of, let's say, Article 58 under Stalin. You had to be very, very careful. You know, the concept of a Russian satirical magazine, like Krokodil, doesn't make sense. So you have this cross-cultural problem that there are certain areas of human experience that it would be better to know nothing about and quite unfortunately, Eastern Europe knows a great deal about them, which makes the songs of Vladimir Vysotsky so potent. You know, the prose of Pushkin, whatever it is. You have to appreciate the depth of the Eastern European experience and I would think that perhaps Americans knew something like this around the time of the Civil War or maybe under slavery and Jim Crow or even the harsh tyranny of the coal and steel employers during the labor wars. But in general I would say, it's hard for us to understand and imagine the collective culture, unless we have the system of selective pressures, that for example, Russians were subjected to. - Yeah, so if there's one good thing that comes outta war, it's literature, art and humor and music. - Oh, I don't think so. I think almost everything is good about war except for death and destruction. - Right. Without the death it would bring the romance of it, the whole thing is nice but-- - Well, this is why we're always caught up in war. We have this very ambiguous relationship to it, is that it makes life real and pressing and meaningful, and at an unacceptable price, and the price has never been higher. - So to jump into AI a little bit, in one of the conversations you had or one of the videos, you described that one of the things AI systems can't do and biological systems can is self replicate in the physical world. - [Eric] Oh, no, no. - In the physical world. - Well, yes.
The physical robots can't self replicate, but there's a very tricky point, which is that the only thing that we've been able to create that's really complex that has an analog of our reproductive system is software. - But, nevertheless, software replicates itself, if we're speaking strictly, though the replication is in this kinda digital space. So let me, just to begin, let me ask a question. Do you see a protective barrier or a gap between the physical world and the digital world? - Let's not call it digital. Let's call it the logical world versus the physical world. - Why logical? - Well, because even though we had, let's say, Einstein's brain preserved, it was meaningless to us as a physical object because we couldn't do anything with what was stored in it at a logical level. And so, the idea that something may be stored logically and that it may be stored physically, are not necessarily, we don't always benefit from synonymizing. I'm not suggesting that there isn't a material basis to the logical world, but that it does warrant identification with a separate layer that need not invoke logic gates and zeros and ones. - And, so connecting those two worlds, the logical world and the physical world, or maybe just connecting to the logical world inside our brain, Einstein's brain, you mention the idea of outtelligence. - [Eric] Artificial outtelligence. - Artificial outtelligence. - Yes, this is the only essay that John Brockman ever invited me to write that he refused to publish in Edge. (Lex chuckling) - Why? - Well, maybe it wasn't well written, but I don't know. - The idea is quite compelling. It's quite unique and new, at least from my standpoint. Maybe you can explain it? - Sure. What I was thinking about is why it is that we're waiting to be terrified by artificial general intelligence, when in fact, artificial life is terrifying in and of itself and it's already here. So, in order to have a system of selective pressures, you need three distinct elements.
You need variation within a population, you need heritability, and you need differential success. So, what's really unique, and I've made this point, I think, elsewhere, about software, is that if you think about what humans know how to build that's impressive. So, I always take a car and I say does it have an analog of each of the physiological systems? Does it have a skeletal structure? That's its frame. Does it have a neurological structure? It has an onboard computer. It has a digestive system. The one thing it doesn't have is a reproductive system. But if you can call spawn on a process, effectively you do have a reproductive system. And that means that you can create something with variation, heritability and differential success. Now, the next step in the chain of thinking was where do we see inanimate non-intelligent life outwitting intelligent life? And I have two favorite systems and I try to stay on them so that we don't get distracted. One of which is the Ophrys orchid subspecies or subclade, I don't know what to call it. - Is that a type of flower? - Yeah, it's a type of flower that mimics the female of a pollinator species in order to dupe the males into engaging in what is called pseudocopulation, with the fake female, which is usually represented by the lowest petal, and there's also a pheromone component to fool the males into thinking they have a mating opportunity. But the flower doesn't have to give up energy in the form of nectar as a lure, because it's tricking the males. The other system is a particular species of mussel, Lampsilis, in the clear streams of Missouri, and it fools bass into biting a fleshy lip that contains its young, and when the bass see this fleshy lip, which looks exactly like a species of fish that the bass like to eat, the young explode and clamp onto the gills and parasitize the bass and also use the bass to redistribute them as they eventually release.
Both of these systems, you have a highly intelligent dupe being fooled by a lower life form. And what is sculpting these convincing lures? It's the intelligence of previously duped targets for these strategies. So when the target is smart enough to avoid the strategy, those weaker mimics fall off, they have terminal lines, and only the better ones survive. So it's an arms race between the target species that is being parasitized, getting smarter, and this other less intelligent or non-intelligent object getting as if smarter. And so, what you see is that artificial general intelligence is not needed to parasitize us. It's simply sufficient for us to outwit ourselves, so you could have a program, let's say, one of these Nigerian scams that writes letters and uses whoever sends it Bitcoin to figure out which aspects of the program should be kept, which should be varied and thrown away, and you don't need it to be in any way intelligent in order to have a real nightmare scenario of being parasitized by something that has no idea what it's doing. - So you phrased a few concepts really eloquently, so let me try to, there's a few directions this goes. So one, first of all, in the way we write software today, it's not common that we allow it to self modify. - But we do have that ability now. - We have the ability. - It's just not common. - It just isn't common. So your thought is that that is a serious worry if there becomes-- - But self modifying code is available now. - So there's different types of self modification, right? There's personalization, you know, your email app, your Gmail, is self modifying to you after you login or whatever, you can think of it that way.
But ultimately it's central, all the information is centralized, but you're thinking of ideas where you're, this is a unique entity operating under selective pressures and it changes-- - Well, you just, if you think about the fact that our immune systems don't know what's coming at them next, but they have a small set of spanning components, and if it's a sufficiently expressive system, in that any shape or binding region can be approximated with the Lego that is present, then you can have confidence that you don't need to know what's coming at you, because the combinatorics are sufficient to reach any configuration needed. - So that's a beautiful, well, terrifying thing to worry about, because it's so within our reach. - Whenever I suggest these things I do always have a concern as to whether or not I will bring them into being by talking about them. - So there's this thing from OpenAI, I'm talking next week to the founder of OpenAI, this idea that their text generation, the new stuff they have for generating text, is, they didn't wanna release it because they're worried about the-- - I'm delighted to hear that, but they're going to end up releasing it. - Yes, that's the thing, is I think talking about it, well, at least from my end, I'm more a proponent of technology, of further innovation, preventing the detrimental effects of innovation. - Well, we're sort of tumbling down a hill at accelerating speed. So, whether or not we're proponents or-- - It doesn't really matter. - It may not matter. - But I. - Well, may not. - Well, I do feel that there are people who have held things back and, you know, died poorer than they might have otherwise been. We don't even know their names. I don't think that we should discount the idea that having the smartest people showing off how smart they are by what they've developed may be a terminal process.
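The mechanism described above, variation within a population, heritability, and differential success, can be sketched as a toy selection loop. This is an illustrative sketch only, not anything from the conversation: the target string, the fitness measure, and all parameters are invented for the example, standing in for whatever real-world feedback (say, which scam letters earn a response) would supply differential success.

```python
import random

random.seed(0)

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "send bitcoin"  # hypothetical stand-in for whatever earns a response


def fitness(candidate):
    # Differential success: variants closer to what "works" score higher.
    return sum(a == b for a, b in zip(candidate, TARGET))


def mutate(parent, rate=0.1):
    # Heritability with variation: children copy the parent, with random edits.
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else ch
        for ch in parent
    )


def evolve(pop_size=100, generations=200):
    # Start from random strings: the initial variation in the population.
    population = [
        "".join(random.choice(ALPHABET) for _ in TARGET)
        for _ in range(pop_size)
    ]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 5]  # weaker mimics fall off
        # Keep the survivors and refill the population with mutated copies.
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=fitness)


best = evolve()
print(best, fitness(best))
```

No component of this loop understands what it is doing, yet the population converges on an effective lure, which is the point being made: no general intelligence is required for the dynamic to work.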
I'm very mindful in particular of a beautiful letter that Edward Teller of all people wrote to Leo Szilard, where Szilard was trying to figure out how to control the use of atomic weaponry at the end of World War II, and Teller, rather strangely, because many of us view him as a monster, showed some very advanced moral thinking, talking about the slim chance we have for survival and that the only hope is to make war unthinkable. I do think that not enough of us feel in our gut what it is we are playing with when we are working on technical problems, and I would recommend to anyone who hasn't seen it a movie called The Bridge on the River Kwai, about, I believe, captured British POWs who, just in a desire to build a bridge well, end up over-collaborating with their Japanese captors. - Well, now you're making me question the unrestricted open discussion of ideas in AI. - I'm not saying I know the answer. I'm just saying that I could make a decent case for either our need to talk about this and to become technologically focused on containing it, or our need to stop talking about this and hope that the relatively small number of highly adept individuals who are looking at these problems is small enough that the problem stays contained. - Well, the way ideas, the way innovation happens, what new ideas develop, Newton with calculus. Whether, if he was silent, the idea would emerge elsewhere. Well, in the case of Newton, of course, but you know, in the case of AI, how small is this set of individuals out of which such ideas would arise? Is it in question-- - Well, the ideas of the researchers we know and those that we don't know, who may live in countries that don't wish us to know what level they're currently at and are very disciplined in keeping these things to themselves.
Of course, I will point out that there is a religious school in Kerala that developed something very close to the calculus, certainly in terms of infinite series, in, I guess, religious prayer and rhyme and prose. So, you know, it's not that Newton had any ability to hold that back and I don't really believe that we have an ability to hold back. I do think that we could change the proportion of the time we spend worrying about the effects of what if we are successful, rather than simply trying to succeed and hope that we'll be able to contain things later. - Beautifully put. So, on the idea of outtelligence, what form, treading cautiously 'cause we've agreed as we tumbled down the hill. What form-- - Can't stop ourselves, can we? - We cannot. What form do you see it taking? So one example, Facebook, Google, do want to, I don't know a better word, they want to influence users to behave a certain way. And so that's one kind of example of outtelligence, is systems perhaps modifying the behavior of these intelligent human beings in order to sell more product of different kinds. But do you see other examples of this actually emerging? - Just take any parasitic system, you know? Make sure that there's some way in which there's differential success, heritability and variation, and those are the magic ingredients. And if you really wanted to build a nightmare machine, make sure that the system that expresses the variability has a spanning set, so that it can learn to arbitrary levels by making it sufficiently expressive. That's your nightmare. - So, it's your nightmare, but it could also be, it's a really powerful mechanism by which to create, well, powerful systems. So, are you more worried about the negative direction that might go versus the positive? So, you said parasitic, but that doesn't necessarily need to be what the system converges towards.
It could be, what is it, symbiotic-- - Well, parasitism, the dividing line between parasitism and symbiosis is not so clear. - [Lex] That's what they tell me about marriage. I'm still single, so I don't know. - Well, yeah, I do. We could go into that too, but um. (Lex laughing) No, I think we have to appreciate, you know, are you infected by your own mitochondria? - Right. Yeah. - Right, so in marriage you fear the loss of independence, but even though the American therapeutic community may be very concerned about co-dependence, what's to say that co-dependence isn't what's necessary to have a stable relationship in which to raise children who are maximally K-selected and require incredible amounts of care, because you have to wait 13 years before there's any reproductive payout and most of us don't want our 13 year olds having kids. That's a very tricky situation to analyze. I would say that predators and parasites drive much of our evolution and I don't know whether to be angry at them or thank them. - Well, ultimately, I mean, nobody knows the meaning of life or what even happiness is, but there are some metrics-- - Oh, they didn't tell you? - They didn't, they didn't. That's why all the poetry and books are bought. You know, there are some metrics under which you can kinda measure how good it is that these AI systems are roaming about. So, you're more nervous about software than you are optimistic about ideas of self-replicating larceny? - I don't think we've really felt where we are. You know, occasionally we get a wake up. 9/11 was so anomalous compared to everything else we've experienced on American soil, that it came to us as a complete shock that that was even a possibility. What it really was, was a highly creative and determined R&D team deep in the bowels of Afghanistan showing us that we had certain exploits that we were open to, that nobody had chosen to express.
I can think of several of these things that I don't talk about publicly, that just seem to have to do with how relatively unimaginative those who wish to cause havoc and destruction have been up until now. The great mystery of our time, of this particular little era, is how remarkably stable we've been since 1945 when we demonstrated the ability to use nuclear weapons in anger. And, we don't know why things like that haven't happened since then. We've had several close calls, we've had mistakes, we've had brinkmanship and what's now happened is that we've settled into a sense that oh, it'll always be nothing. It's been so long since something was at that level of danger, that we've got a wrong idea in our head and that's why when I went on the Ben Shapiro Show, I talked about the need to resume above ground testing of nuclear devices because we have people whose developmental experience suggests that when, let's say Donald Trump and North Korea engage on Twitter, oh it's nothing, it's just posturing, everybody's just in it for money, there's a sense that people are in a video game mode, which has been the right call since 1945. We've been mostly in video game mode. It's amazing. - So you're worried about a generation which has not seen any existential-- - We've lived under it. See, you're younger. I don't know if, and again, you came from Moscow. - [Lex] From, yeah. - There was a TV show called The Day After that had a huge effect on a generation growing up in the US, and it talked about what life would be like after a nuclear exchange. We have not gone through an embodied experience collectively where we've thought about this, and I think it's one of the most irresponsible things that the elders among us have done, which is to provide this beautiful garden in which the thorns are cut off of the rosebushes and all of the edges are rounded and sanded, and so people have developed this totally unreal idea which is everything is going to be just fine. 
And do I think that my leading concern is AGI or my leading concern is thermonuclear exchange or gene drives or any one of these things? I don't know. But I know that our time here in this very long experiment is finite, because the toys that we've built are so impressive and the wisdom to accompany them has not materialized. And I think we actually got a wisdom uptick since 1945. We had a lot of dangerous, skilled players on the world stage who nevertheless, no matter how bad they were, managed to not embroil us in something that we couldn't come back from. - The Cold War. - Yeah, and the distance from the Cold War, you know, I'm very mindful of, there was a Russian tradition, actually, of on your wedding day going to visit a memorial to those who gave their lives. Can you imagine this? Where on the happiest day of your life, you go and you pay homage to the people who fought and died in the Battle of Stalingrad? I'm not a huge fan of communism, I gotta say, but there were a couple of things that the Russians did that were really positive in the Soviet era, and I think trying to let people know how serious life actually is; the Russian model of seriousness is better than the American model. - And maybe, like you mentioned, there was a small echo of that after 9/11, but-- - We wouldn't let it form. We talk about 9/11, but it's 9/12 that really moved the needle. When we were all just there and nobody wanted to speak. We witnessed something super serious and we didn't want to run to our computers and blast out our deep thoughts and our feelings. And it was profound because we woke up, briefly, and I talk about the gated institutional narrative that sort of programs our lives, I've seen it break three times in my life. One of which was the election of Donald Trump, another time was the fall of Lehman Brothers, when everybody who knew that Bear Stearns wasn't that important knew that Lehman Brothers meant AIG was next, and the other one was 9/11.
And so, if I'm 53 years old and I only remember three times that the global narrative was really interrupted, that tells you how much we've been on top of developing events, you know? We had the Murrah Federal Building explosion, but it didn't cause the narrative to break, it wasn't profound enough. Around 9/12, we started to wake up out of our slumber, and the powers that be did not want a coming together. You know, the admonition was go shopping. - The powers that be, so what is that force? As opposed to blaming individuals-- - We don't know. - So whatever that-- - Whatever that force is. - In silence. - There's a component of it that's emergent and there's a component of it that's deliberate. So, give yourself a portfolio with two components. Some amount of it is emergent, but some amount of it is also an understanding that if people come together, they become an incredible force. And what you're seeing right now, I think, is there are forces that are trying to come together and there are forces that are trying to push things apart, and you know, one of them is the globalist narrative versus the national narrative, where, to the globalist perspective, nations are bad things in essence. They're temporary, they're nationalistic, they're jingoistic, it's all negative. To people more in the national idiom, they're saying, look, this is where I pay my taxes, this is where I do my army service, this is where I have a vote, this is where I have a passport. Who the hell are you to tell me that, because you've moved into some place that you can make money globally, you've chosen to abandon other people to whom you have a special and elevated duty.
And I think that these competing narratives have been pushing towards the global perspective from the elite, and a larger and larger number of disenfranchised people are saying, hey, I actually live in a place and I have laws and I speak a language, I have a culture, and who are you to tell me that because you can profit in some faraway land, my obligations to my fellow countrymen are so much diminished. - So these tensions between nations and so on, ultimately, do you see being proud of your country and so on, which potentially creates the kind of things that led to wars, as ultimately human nature, and good for us as wake-up calls of different kinds? - Well, I think that these are tensions. And my point isn't, I mean, nationalism run amuck is a nightmare, and internationalism run amuck is a nightmare, and the problem is we're trying to push these pendulums to some place where they're somewhat balanced, where we have a higher duty of care to those who share our laws and our citizenship, but we don't forget our duties of care to the global system. I would think this is elementary, but the problem that we're facing concerns the ability for some to profit by abandoning their obligations to others within their system, and that's what we've had for decades. - You mention nuclear weapons. I was hoping to get answers from you, since one of the many things you've done is economics, maybe you can understand human behavior of why the heck we haven't blown each other up yet. But okay, so we'll get-- - I don't know the answer. - Yeah. It's really important to say that we really don't know-- - [Eric] A mild uptick in wisdom. - A mild uptick in wisdom. Steven Pinker, who I've talked with, has a lot of really good ideas about why, but he-- - I don't trust his optimism.
(Lex chuckling) - Listen, I'm Russian, so I never trust a guy who's that optimistic-- - No, no, no, it's just that you're talking about a guy who's looking at a system in which more and more of the kinetic energy, like war, has been turned into potential energy, like unused nuclear weapons. - Wow, beautifully put. - And you know, now I'm looking at that system and I'm saying, okay, well, if you don't have a potential energy term, then everything's just getting better and better. - Yeah, yeah, wow, that's beautifully put. Only a physicist could, okay. - [Eric] I'm not a physicist. - Well, is that a dirty word? - [Eric] No, no, I wish I were a physicist. - Me too, my dad's a physicist. I'm trying to live up to that, probably for the rest of my life. He's probably gonna listen to this too, so. - Hey dad. - Yeah, (chuckling). So, your friend, Sam Harris, worries a lot about the existential threat of AI. Not in the way that you've described, but in the more. - Well, he hangs out with Elon. I don't know Elon. - So, are you worried about that kind of, you know, about the, about either robotics systems or traditionally defined AI systems essentially becoming super intelligent, much more intelligent than human beings and getting-- - Well, they already are, and they're not. - When seen as a collective, you mean? - I can mean all sorts of things, but certainly, many of the things that we thought were peculiar to general intelligence do not require general intelligence. So that's been one of the big awakenings, that you can write a pretty convincing sports story from stats alone, without needing to have watched the game. So, you know, is it possible to write lively prose about politics? Yeah, no, not yet. So, we're sort of all over the map. One of the things about chess, there's a question I once asked on Quora that didn't get a lot of response, which was, what is the greatest brilliancy ever produced by a computer in a chess game?
Which was different than the question of what is the greatest game ever played. So if you think about brilliancies, that's what really animates many of us to think of chess as an art form: those moves and combinations that just show such flair, panache and soul. Computers weren't really great at that. They were great positional monsters. And recently we've started seeing brilliancies. - [Lex] Yeah, a few grandmasters have identified, with AlphaZero, things that were quite brilliant. - Yeah, so that's an example of something. We don't think that's AGI, but in a very restricted set of rules like chess, you're starting to see poetry of a high order. And so I don't like the idea that we're waiting for AGI. AGI is sort of slowly infiltrating our lives, in the same way that I don't think a worm, you know, C. elegans, should be treated as non-conscious just because it only has 300 neurons. Maybe it just has a very low level of consciousness, because we don't understand what these things mean as they scale up. So, am I worried about this general phenomenon? Sure, but I think that one of the things that's happening is that a lot of us are fretting about this in part because of human needs. We've always been worried about the Golem, right? - [Lex] Well, the Golem is the artificially created-- - Life, you know? - [Lex] It's like a Frankenstein type of character-- - Yeah, sure, it's a Jewish version. Frankenberg, Franken-- - Yeah, that makes sense. - Sorry, so the, but we've always been worried about creating something like this and it's getting closer and closer, and there are ways in which we have to realize that the whole thing that we've experienced, the context of our lives, is almost certainly coming to an end. And I don't mean to suggest that we won't survive, I don't know. And I don't mean to suggest that it's coming tomorrow.
It could be 300, 500 years, but there's no plan that I'm aware of, if we have three rocks that we could possibly inhabit that are sensible within current technological dreams: the Earth, the Moon and Mars, and we have a very competitive civilization that is still forced into violence to sort out disputes that cannot be arbitrated. It is not clear to me that we have a long term future until we get to the next stage, which is to figure out whether or not the Einsteinian speed limit can be broken, and that requires our source code. - Our source code, the stuff in our brains to figure out? What do you mean by our source code? - The source code of the context. Whatever it is that produces the quarks, the electrons, the neutrinos. - Oh, our source code, I got it, so this is-- - You're talking about the stuff that's written in a higher level language. - Yeah, yeah, that's right. You're talking about the low level, the bits or even lower-- - Right, that's what is currently keeping us here. We can't even imagine, you know, we have harebrained schemes for staying within the Einsteinian speed limit. You know, maybe if we could just drug ourselves and go into a suspended state or we could have multiple generations of that. I think all that stuff is pretty silly. But, I think it's also pretty silly to imagine that our wisdom is going to increase to the point that we can have the toys we have and we're not going to use them for 500 years. - Speaking of Einstein, I had a profound breakthrough when I realized you're just one letter away from the guy. - Yeah, but I'm also one letter away from Feinstein. - Well, you get to pick. Okay, so, unified theory. You know, you've worked, you enjoy the beauty of geometry. Well, I don't actually know if you enjoy it. You certainly are quite good at it-- - I tremble before it. - Tremble before it. If you're religious, that is one of the-- - I don't have to be religious. It's just so beautiful, you will tremble anyway.
- I just read Einstein's biography, and one of the things you've done is try to explore a unified theory, talking about a 14 dimensional observerse that has the 4D spacetime continuum embedded in it. I'm just curious how you think, philosophically, at a high level, about something more than four dimensions. How do you try to, what does it make you feel, talking in the mathematical world about dimensions that are greater than the ones we can perceive? Is there something that you take away that's more than just the math? - Well, first of all, stick out your tongue at me. Okay, now. (Lex chuckling) On the front of that tongue. - Yeah? - There was a sweet receptor. And next to that were salt receptors on two different sides. A little bit farther back there were sour receptors, and you wouldn't show me the back of your tongue where your bitter receptor was. - [Lex] I show the good side always. - Okay, but that was four dimensions of taste receptors. But you also had pain receptors on that tongue and probably heat receptors on that tongue. So let's assume that you have one of each. That would be six dimensions. So when you eat something, you eat a slice of pizza and it's got some hot pepper on it, maybe some jalapeno. You're having a six dimensional experience, dude. - Do you think we over emphasize the value of time as one of the dimensions, or space? Well, we certainly over emphasize the value of time 'cause we want things to start and end, or we really don't like things to end, but they seem to. - Well, what if you flipped one of the spatial dimensions into being a temporal dimension? And you and I were to meet in New York City and say, well, where and when should we meet? And I say, how about I'll meet you on 36th and Lexington at 2:00 in the afternoon and 11 o'clock in the morning? That would be very confusing. - Well, it's so convenient for us to think about time, you mean?
- We happen to be in a delicious situation in which we have three dimensions of space and one of time, and they're woven together in this sort of strange fabric where we can trade off a little space for a little time. But we still only have one dimension that is picked out relative to the other three. It's very much Gladys Knight and the Pips. - So, which one developed for whom? Did we develop for these dimensions? Or did the dimensions, or were they always there and it doesn't-- - Well, do you imagine that there isn't a place where there are four temporal dimensions? Or two of space and two of time? Or three of time and one of space? And then would time not be playing the role of space? Why do you imagine that the sector that you're in is all that there is? - I certainly do not, but I can't imagine otherwise. I mean, I haven't done ayahuasca or any of those drugs. I hope to one day, but-- - Instead of doing ayahuasca, you could just head over to Building Two. - That's where the mathematicians are? - [Eric] Yeah, that's where they hang. - [Lex] Just to look at some geometry? - Well, just ask about pseudo-Riemannian geometry, that's what you're interested in. (Lex chuckling) - [Lex] Okay. - Or you can talk to a shaman and end up in Peru. - And spend some extra money for that trip-- - Yeah, but you won't be able to do any calculations, if that's how you choose to go about it. - Well, a different kind of calculation-- - So to speak. - Yeah. One of my favorite people, Edward Frenkel, Berkeley professor, author of Love and Math, great title for a book, said that you were quite a remarkable intellect to come up with such beautiful, original ideas in terms of unified theory and so on. But you were working outside academia. So, one question is, in developing ideas that are truly original, truly interesting, what's the difference between inside academia and outside academia when it comes to developing such ideas? - Oh, it's a terrible choice, a terrible choice.
So, if you do it inside of academics, you are forced to constantly... show great loyalty to the consensus, and you distinguish yourself with small, almost microscopic heresies to make your reputation, in general. And you have very competent people and brilliant people who are working together, who form very deep social networks and have a very high level of behavior, at least within mathematics and at least technically within physics, theoretical physics. When you go outside, you meet lunatics and crazy people, madmen, and these are people who do not usually subscribe to the consensus position and almost always lose their way. And the key question is, will progress likely come from someone who has miraculously managed to stay within the system and is able to take on a larger amount of heresy that is sort of unthinkable? In which case, that will be fascinating. Or is it more likely that somebody will maintain a level of discipline from outside of academics and be able to make use of the freedom that comes from not having to constantly affirm your loyalty to the consensus of your field? - So you've characterized, in ways, that academia in this particular sense is declining. You posted the plot: the older population of the faculty is getting larger, the younger is getting smaller, and so on. So, which direction of the two are you more hopeful about? - Well, the Baby Boomers can't hang on forever. - Which is first of all in general true, and second of all in academia-- - But that's really what this time is about-- - Is the Baby Boomers' control. - Is we didn't, we're used to, like, financial bubbles that last a few years in length and then pop. - Yes. - The Baby Boomer bubble is this really long-lived thing, and all of the ideology, all of the behavior patterns, the norms. You know, for example, string theory is an almost entirely Baby Boomer phenomenon. It was something that Baby Boomers were able to do because it required a very high level of mathematical ability.
- You don't think of string theory as an original idea? - Oh, I mean, it was original to Veneziano, who probably is older than the Baby Boomers, and there are people who are younger than the Baby Boomers who are still doing string theory. And I'm not saying that nothing discovered within the large string-theoretic complex is wrong. Quite the contrary. A lot of brilliant mathematics and a lot of the structure of physics was elucidated by string theorists. What do I think of the deliverable nature of this product that will not ship called string theory? I think that it is largely an affirmative action program for highly mathematically and geometrically talented Baby Boomer physicists so that they can say that they're working on something within the constraints of what they will say is quantum gravity. Now there are other schemes. You know, there's like asymptotic safety. There are other things that you could imagine doing. I don't think much of any of the major programs, but to have inflicted this level of loyalty through a shibboleth, well, surely you don't question X. Well, I question almost everything in the string program, and that's why I got out of physics. When you called me a physicist, it was a great honor, but the reason I didn't become a physicist wasn't that I fell in love with mathematics. It's that I said, wow, in 1984, 1983, I saw the field going mad, and I saw that mathematics, which has all sorts of problems, was not going insane. And so instead of studying things within physics, I thought it was much safer to study the same objects within mathematics. And there's a huge price to pay for that. You lose physical intuition. But the point is that it wasn't a North Korean reeducation camp, either. - Are you hopeful about cracking open the Einstein unified theory in a way that really unites everything together with quantum theory and so on? - I mean, I'm trying to play this role myself.
To do it to the extent of handing it over to the more responsible, more professional, more competent community. So, I think that they're wrong about a great number of their belief structures, but I do believe, I mean, I have a really profound love-hate relationship with this group of people. - On the physics side? - Oh yeah. - 'Cause the mathematicians actually seem to be much more open-minded and-- - Well, they are and they aren't. They're open-minded about anything that looks like great math. - Right. - Right, they'll study something that isn't very important physics, but if it's beautiful mathematics, then they'll have, they have great intuition about these things. As good as the mathematicians are, and I might even, intellectually, at some horsepower level, give them the edge, the theoretical physics community is, bar none, the most profound intellectual community that we have ever created. It is the number one; there is nobody in second place, as far as I'm concerned. Like, in their spare time, they invented molecular biology. - What was the origin of molecular biology? You're saying physicists-- - Well, somebody like Francis Crick. A lot of the early molecular biologists-- - Were physicists? - Yeah, I mean, you know, Schrödinger wrote What is Life?, and that was highly inspirational. I mean, you have to appreciate that there is no community like the basic research community in theoretical physics. And it's not something, I'm highly critical of these guys. I think that they have just wasted decades of time with their religious devotion to their misconceptualization of where the problems were in physics. But this has been the greatest intellectual collapse ever witnessed within academics. - You see it as a collapse or just a lull? - Oh, I'm terrified that we're about to lose the vitality. We can't afford to pay these people. We can't afford to give them an accelerator just to play with in case they find something at the next energy level.
These people created our economy. They gave us the Rad Lab and radar. They gave us two atomic devices to end World War II. They created the semiconductor and the transistor to power our economy through Moore's law. As a positive externality of particle accelerators, they created the World Wide Web, and we have the insolence to say, why should we fund you with our taxpayer dollars? No, the question is, are you enjoying your physics dollars? Right, these guys signed the world's worst licensing agreement. - Right. - And if they had simply charged for every time you used a transistor or a URL, or enjoyed the peace that they have provided during this period of time through the terrible weapons that they developed, or your communications devices. All of the things that power our economy, I really think, came out of physics, even to the extent that chemistry came out of physics and molecular biology came out of physics. So, first of all, you have to know that I'm very critical of this community. Second of all, it is our most important community. We have neglected it, we've abused it, we don't take it seriously, we don't even care to get them to rehab after a couple of generations of failure. Right, no one, I mean, I think the youngest person to have really contributed to the standard model at a theoretical level was born in 1951, right? Frank Wilczek. And almost nothing has happened in theoretical physics after 1973, '74 that sent somebody to Stockholm for a theoretical development that predicted experiment. So, we have to understand that we are doing this to ourselves. Now, with that said, these guys have behaved abysmally, in my opinion, because they haven't owned up to where they actually are, what problems they're really facing, how definite they can actually be.
They haven't shared some of their most brilliant discoveries, which are desperately needed in other fields, like gauge theory, which at least the mathematicians can share, which is an upgrade of the differential calculus of Newton and Leibniz, and they haven't shared the importance of renormalization theory, even though this should be standard operating procedure for people across the sciences dealing with different layers and different levels of phenomena, so-- - And by shared you mean communicated in such a way that it disseminates throughout the different sciences? - These guys are sitting, both theoretical physicists and mathematicians are sitting on top of a giant stockpile of intellectual gold, right? They have so many things that have not been manifested anywhere. I was just on Twitter, I think I mentioned the Hoberman switch pitch, which shows the self-duality of the tetrahedron realized as a linkage mechanism. Now this is like a triviality, and it makes an amazing toy that's, you know, built a market. - Yeah. - Hopefully a fortune for Chuck Hoberman. Well, you have no idea how much great stuff these priests have in their monastery. - So, it's truly a love-and-hate relationship for you? It sounds like it's more on the love side-- - [Eric] This building that we're in right here. - Yes. - Is the building in which I really put together the conspiracy between the National Academy of Sciences and the National Science Foundation, through the Government-University-Industry Research Roundtable, to destroy the bargaining power of American academics using foreign labor. On microfiche in the basement. - Post docs and so on? - Oh yeah, that was done here in this building. Isn't that weird? - I'm truly speaking with a revolutionary and a radical-- - No, no, no, no, no, no, no, no. At an intellectual level, I am absolutely garden variety. I'm just straight down the middle. The system that we are in, this university is functionally insane. - [Lex] Yeah.
- Harvard is functionally insane, and we don't understand that when we get these things wrong. The financial crisis made this very clear. There was a long period where every grownup, everybody with a tie who spoke in baritone tones with the right degree at the end of their name. - Yeah. - Were talking about how we had banished volatility, how we were in the Great Moderation. Okay, they were all crazy. And who was right? It was like Nassim Taleb. - Right. - Nouriel Roubini. Now, what happens is that they claimed the market went crazy. But the market didn't go crazy. The market had been crazy, and what happened is that it suddenly went sane. Well, that's where we are with academics. Academics right now is mad as a hatter, and it's absolutely evident. I can show you graph after graph. I can show you the internal discussions. I can show you the conspiracies. Harvard's dealing with one right now over its admissions policies for people of color who happen to come from Asia. All of this madness is necessary to keep the game going. What we're talking about, just while we're on the topic of revolutionaries, is the danger of an outbreak of sanity. - Yeah, you're the guy pointing out the elephant in the room here and-- - The elephant has no clothes. - Is that how that goes? I was gonna talk a little bit to Joe Rogan about this, but ran out of time. I think, just listening to you, you can probably speak really eloquently to academia on the differences between the fields. So, do you think there's a difference between science, engineering and the humanities in academia, in terms of the radical ideas they're willing to tolerate? From my perspective, I thought computer science and maybe engineering are more tolerant of radical ideas, but that's perhaps innocent of me, 'cause, you know, all the battles going on now are a little bit more on the humanities side, in gender studies and so on.
- Have you seen the American Mathematical Society's publication of an essay called Get Out the Way? - I have not, what's the-- - The idea is that white men who hold positions within universities in mathematics should vacate their positions so that young black women can take over, or something like this. - That's in terms of diversity, which I also wanted to ask you about, but in terms of diversity of strictly ideas-- - Oh, sure. - Do you think, 'cause you're basically saying physics as a community has become a little bit intolerant, to some degree, to new radical ideas, or at least you said that-- - But it's changed a little bit recently. Which is that even string theory is now admitting, okay, this doesn't look very promising in the short term. Right, so the question is, what compiles, if you wanna take the computer science metaphor? What will get you into a journal? Will you spend your life trying to push some paper into a journal, or will it be accepted easily? What do we know about the characteristics of the submitter and what gets taken up and what does not? All of these fields are experiencing pressure because no field is performing so brilliantly well that it's revolutionizing our way of speaking and thinking in the ways to which we have become accustomed. - But don't you think, even in theoretical physics, a lot of times, even with theories like string theory, you could speak to this, it does eventually lead to, what are the ways that this theory would be testable? - Yeah, ultimately, although look, there's this thing about Popper and the scientific method that's a cancer and a disease in the minds of very smart people. That's not really how most of the stuff gets worked out, it's how it gets checked. - Right, so-- - And there is a dialog between theory and experiment. But everybody should read Paul Dirac's 1963 Scientific American article where he, you know, it's very interesting.
He talks about it as if it were about the Schrödinger equation and Schrödinger's failure to advance his own work because of his failure to account for some phenomena. The key point is that if your theory is a slight bit off, it won't agree with experiment, but it doesn't mean that the theory is actually wrong. But Dirac could as easily have been talking about his own equation, in which he predicted that the electron should have an anti-particle. And since the only positively charged particle that was known at the time was the proton, Heisenberg pointed out, well, shouldn't your anti-particle, the proton, have the same mass as the electron, and doesn't that invalidate your theory? So, I think that Dirac was actually being, potentially, quite sneaky in talking about the fact that he had been pushed off of his own theory, to some extent, by Heisenberg. But look, we fetishize the scientific method and Popper and falsification because it protects us from crazy ideas entering the field. So, you know, it's a question of balancing type one and type two error, and we were pretty maxed out in one direction. - The opposite of that. Let me say what comforts me: sort of, in biology or engineering, at the end of the day, does the thing work? - Yeah. - You can test the crazies away. - Well, see, now you're saying that some ideas are truly crazy and some are actually correct. - Well, there's pre-correct, currently crazy. - Yeah. - Right? And so you don't wanna get rid of everybody who's pre-correct and currently crazy. The problem is that we don't have standards, in general, for trying to determine who has to be put to the sword in terms of their career and who has to be protected as some sort of giant time suck, pain in the ass who may change everything. - Do you think that's possible? Creating a mechanism that's selective-- - Well, you're not gonna like the answer, but here it comes. - [Lex] Oh, boy. - It has to do with very human elements.
We're trying to do this at the level of, like, rules and fairness. It's not gonna work, 'cause the only thing that really understands this... have you ever read The Double Helix? - It's a book? - Oh, you have to read this book-- - Oh, boy. - Not only did Jim Watson half-discover the three-dimensional structure of DNA, he was also one hell of a writer, before he became an ass. - Yes, like he is. - No, he's tried to destroy his own reputation-- - I knew about the ass, I didn't know about the good writer. - Jim Watson is one of the most important people now living, and as I've said before, Jim Watson is too important a legacy to be left to Jim Watson. That book tells you more about what actually moves the dial. I mean, there's another story about him which I don't agree with, which is that he stole everything from Rosalind Franklin. I mean, the problems that he had with Rosalind Franklin are real, but we should actually honor that tension in our history by delving into it, rather than having a simple solution. Jim Watson talks about Francis Crick being a pain in the ass that everybody secretly knew was super brilliant. And there's an encounter between Chargaff, who came up with the equimolar relations between the nucleotides, who should've gotten the structure of DNA, and Watson and Crick, and, you know, he talks about missing a shiver in the heartbeat of biology, and this stuff is so gorgeous, it just makes you tremble even thinking about it. Look, we know very often who is to be feared, and we need to fund the people that we fear. The people who are wasting our time need to be excluded from the conversation. You see, and you know, maybe we'll make some errors in both directions, but we have known our own people. We know the pains in the asses that might work out, and we know the people who are really just blowhards who really have very little to contribute most of the time. It's not 100%, but you're not gonna get there with rules. - Right, it's using some kind of instinct.
I mean, to be honest, I'm gonna make you roll your eyes for a second, but the first time I heard that there was a large community of people who believe the Earth is flat, it actually made me pause and ask myself the question-- - Why would there be such a community? - Yeah, is it possible the Earth is flat? So I had to, like, wait a minute. I mean, then you go through a thinking process that I think is really healthy. It ultimately ends up being a geometry thing, I think. It's an interesting thought experiment at the very least. - Well, see, I do a different version of it. I say, why is this community stable? - Yeah, that's a good way to analyze it. - It's interesting that whatever we've done has not erased the community. So, you know, they're taking a long-shot bet that won't pan out, you know? Maybe we just haven't thought enough about the rationality of the square root of two and somebody brilliant will figure it out. Maybe we will eventually land one day on the surface of Jupiter and explore it. Right, these are crazy things that will never happen. - So, much of social media operates by AI algorithms, we talked about this a little bit, recommending the content you see. So, on this idea of radical thought, how much should AI show you things you disagree with, on Twitter and so on? In the Twitterverse, in the-- - I hate this question. - Yeah? - Yeah. - 'Cause you don't know the answer? - No, no, no, no. Look, they've pushed out this cognitive Lego to us that will just lead to madness. Is it good to be challenged with things that you disagree with? The answer is, no. It's good to be challenged with interesting things with which you currently disagree, but that might be true. I don't really care about whether or not I disagree with something or don't disagree; I need to know why that particular disagreeable thing is being pushed out. Is it because it's likely to be true? Is it because, is there some reason?
Because I can write a computer generator to come up with an infinite number of disagreeable statements that nobody needs to look at. So, please, before you push things at me that are disagreeable, tell me why. - There is an aspect in which that question is quite dumb, especially because it's being used almost generically by these different networks to say, well, we're trying to work this out. But, you know, basically, how much, do you see the value of seeing things you don't like? Not things you disagree with, because it's very difficult to know exactly what you articulated, which is the stuff that's important for you to consider that you disagree with. That's really hard to figure out. The bottom line is the stuff you don't like. If you're a Hillary Clinton supporter, it might not make you feel good to see anything about Donald Trump. That's the only thing algorithms can really optimize for currently. They really can't-- - No, they can do better. - You think so? - No, we're engaged in some moronic back and forth where I have no idea why people who are capable of building Google, Facebook, Twitter are having us in these incredibly low-level discussions. Do they not know any smart people? Do they not have the phone numbers of people who can elevate these discussions? - They do, but this-- - Please, no, no, no. - They're optimizing for a different thing and they are pushing those people out of those rooms. - No, they're optimizing for things we can't see, and yes, profit is there. Nobody's questioning that. But they're also optimizing for things like political control, or the fact that they're doing business in Pakistan, and so they don't wanna talk about all the things that they're bending to in Pakistan. So, we're involved in a fake discussion. - You think so? You think these conversations at that depth are happening inside Google? You don't think they have some basic metrics of user engagement?
We know you're having a fake conversation. I do not wish to be part of your fake conversation. You know how to cool these units. You know high availability like nobody's business. My Gmail never goes down, almost. - So you think, just because they can do incredible work on the software side with infrastructure, they can also deal with some of these difficult questions about human behavior, human understanding? You're not, (chuckling). - I mean, I've seen the developers' screens that people take shots of inside of Google. - [Lex] Yeah. - And I've heard stories inside of Facebook and Apple. We're not, we're engaged, they're engaging us in the wrong conversations. We are not at this low level. Here's one of my favorite questions. - Yeah. - Why is every piece of hardware that I purchase in tech space equipped as a listening device? Where's my physical shutter to cover my lens? We had this in the 1970s. There were cameras that had lens caps, you know? How much would it cost to have a security model? Pay five extra bucks. Why is my indicator light software-controlled? Why, when my camera is on, do I not see that the light is on by making it something that cannot be bypassed? Why have you set up all of my devices, at some difficulty to yourselves, as listening devices, and we don't even talk about this? This thing is total fucking bullshit. - Well, I hope so-- - Wait, wait, wait. - These discussions are happening about privacy, 'cause they're more difficult than you give 'em credit for-- - It's not just privacy. - Yeah? - It's about social control. We're talking about social control. Why do I not have controls over my own levers? Just have a really cute UI where I can switch, I can dial things, or I can at least see what the algorithms are. - You think that there are some deliberate choices being made here-- - There's emergence and there is intention. There are two dimensions. The vector does not collapse onto either axis.
But the idea that anybody who suggests that intention is completely absent is a child. - That's really beautifully put and, like many things you've said, is gonna make me-- - Can I turn this around slightly, though? - Yeah. - I sit down with you and you say that you're obsessed with my feed. - Uh huh. - I don't even know what my feed is. What are you seeing that I'm not? - I was obsessively looking through your feed on Twitter 'cause it was really enjoyable, because there's the Tom Lehrer element, there's the humor in it. - By the way, that feed is ericrweinstein on Twitter. - That's great. - @ericrweinstein. - Yeah. - No, but seriously, why? - Why did I find it enjoyable, or what was I seeing? - What are you looking for? Why are we doing this? What is this podcast about? I know you've got all these interesting people. I'm just some guy who is sort of a podcast guest. - Sort of a podcast guest, you're not even wearing a tie. I mean, - I'm not even wearing a tie. - It's not even a serious interview. I was searching for meaning, for happiness, for a dopamine rush, so short term and long term. - And how are you finding your way to me? What is, I don't honestly know what I'm doing to reach you. - You're representing ideas which feel like common sense to me, and not many people are speaking about them. So it's kinda like the Intellectual Dark Web folks, right? These folks, from Sam Harris to Jordan Peterson to yourself, are saying things where it's like, you're saying, look, there's an elephant and he's not wearing any clothes, and I say, yeah, yeah, let's have more of that conversation. That's how I'm finding you. - I'm desperate to try to change the conversation we're having. I'm very worried. We've got an election in 2020. I don't think we can afford four more years of a misinterpreted message, which is what Donald Trump was, and I don't want the destruction of our institutions. They all seem hellbent on destroying themselves.
So, I'm trying to save theoretical physics, trying to save the New York Times, trying to save our various processes, and I think it feels delusional to me that this is falling to a tiny group of people who are willing to speak out without getting so freaked out that everything they say will be misinterpreted and that their lives will be ruined through the process. I mean, I think we're in an absolutely bananas period of time, and I don't believe it should fall to such a tiny number of shoulders to shoulder this weight. - So, I have to ask you, on the capitalism side, you mentioned that technology is killing capitalism, or has effects that are, well, not unintended, but not what economists would predict or speak of capitalism creating. I just wanna talk to you, in general, about the effect of artificial intelligence or technology automation taking away jobs and these kinds of things, and what you think is the way to alleviate that. Whether it's the Andrew Yang presidential candidacy with universal basic income, UBI. What are your thoughts there? How do we fight off the negative effects of technology that-- - All right, you're a software guy, right? - Yep. - A human being is a worker, is an old idea. - Yes. - A human being has a worker is a different object, right? - Yes. - So if you think about object-oriented programming as a paradigm, a human being has a worker and a human being has a soul. We're talking about the fact that, for a period of time, the worker that a human being has was in a position to feed the soul that a human being has. However, we have two separate claims on the value in society. One is as a worker and the other is as a soul, and the soul needs sustenance, it needs dignity, it needs meaning, it needs purpose. As long as your means of support is not highly repetitive, I think you have a while to go before you need to start worrying.
But if what you do is highly repetitive and it's not terribly generative, you are in the crosshairs of for loops and while loops, and that's what computers excel at: repetitive behavior. And when I say repetitive, I may mean things that have never happened before, through combinatorial possibilities, but as long as it has a looped characteristic to it, you're in trouble. We are seeing a massive push towards socialism because capitalists are slow to address the fact that a worker may not be able to make claims. A relatively undistinguished median member of our society still has needs to reproduce, needs for dignity, and when capitalism abandons the median individual, or the bottom tenth, or whatever it's going to do, it's flirting with revolution, and what concerns me is that the capitalists aren't sufficiently capitalistic to understand this. You really want to court authoritarian control in our society because you can't see that people may not be able to defend themselves in the marketplace, because the marginal product of their labor is too low to feed their dignity as a soul? So, my great concern is that our free society has to do with the fact that we are self-organized. I remember looking down from my office in Manhattan when Lehman Brothers collapsed and thinking, who's gonna tell all these people that they need to show up at work when they don't have a financial system to incentivize them to show up at work? So, my complaint is, first of all, not with the socialists, but with the capitalists, which is, you guys are being idiots. You're courting revolution by continuing to harp on the same old ideas of, well, try harder, bootstrap yourself. Yeah, to an extent that works, to an extent. But we are clearly headed to a place where there's nothing that ties together our need to contribute and our need to consume, and that may not be provided by capitalism, because it may have been a temporary phenomenon. So, check out my article on anthropic capitalism and the new gimmick economy.
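[Editor's note: Weinstein's "is-a versus has-a" distinction above is the standard composition-over-inheritance idea from object-oriented programming. A minimal, purely illustrative Python sketch of the metaphor, with hypothetical class and attribute names chosen by the editor, not taken from the conversation:]

```python
# Illustrative sketch of the metaphor: a human being is NOT a worker
# (no inheritance); a human being HAS a worker and HAS a soul
# (composition), and only the worker is exposed to automation.

class Worker:
    def __init__(self, task: str, repetitive: bool):
        self.task = task
        self.repetitive = repetitive

    def automatable(self) -> bool:
        # Loop-like labor is "in the crosshairs of for loops and while loops."
        return self.repetitive


class Soul:
    def __init__(self):
        # The soul's separate claim on value in society.
        self.needs = ["sustenance", "dignity", "meaning", "purpose"]


class HumanBeing:
    """Composition, not inheritance: has a worker and has a soul."""
    def __init__(self, task: str, repetitive: bool):
        self.worker = Worker(task, repetitive)
        self.soul = Soul()

    def soul_fed_by_work(self) -> bool:
        # For a period of time, the worker could feed the soul;
        # that holds only while the labor resists automation.
        return not self.worker.automatable()


clerk = HumanBeing("data entry", repetitive=True)
print(clerk.soul_fed_by_work())  # False: repetitive work can't sustain the claim
```

The design point is exactly the one being made: modeling "human is a worker" as inheritance collapses the two claims into one object, while composition keeps the soul's needs intact even when the worker's labor loses its market value.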
I think people are late getting the wake-up call, and we would be doing a better job saving capitalism from itself, because I don't want this done under authoritarian control. And the more we insist that everybody who's not thriving in our society during their reproductive years, in order to have a family, is failing at a personal level. I mean, what a disgusting thing that we're saying. What a horrible message. Who the hell have we become, that we've so bought into the Chicago model that we can't see the humanity that we're destroying in that process? And I hate the thought of communism, I really do. My family has flirted with it in decades past. It's a wrong, bad idea, but we are going to need to figure out how to make sure that those souls are nourished and respected, and capitalism better have an answer. And I'm betting on capitalism, but I gotta tell ya, I'm pretty disappointed with my team. - So you're still on the capitalism team, just, there's a theme here-- - [Eric] Well, radical capitalism. - Right, hyper capitalism, yeah. - Look, I think hyper capitalism is gonna have to be coupled to hyper socialism. You need to allow the most productive people to create wonders, and you gotta stop bogging them down with all of these extra nice requirements. You know, nice is dead. Good has a future. Nice doesn't have a future, because nice ends up with gulags. - Damn, that's a good line. Okay, last question. You tweeted today a simple, quite insightful equation, saying, "Imagine that for every unit f of fame, you picked up s stalkers and h haters." So, I imagine s and h are dependent on your path to fame, perhaps a little bit-- - Well, it's not that simple. I mean, people always take these things literally when you have like 280 characters to explain yourself. - So you mean that's not a mathematical-- - No, there's no law. - Oh, okay, all right. - I just, I put the word imagine because I still have a mathematician's desire for precision. - Yes. - Imagine that this were true.
- But there was a beautiful way to imagine that there is a law that has those variables in it-- - [Eric] Yeah, yeah. - And you've become quite famous these days, so how do you yourself optimize that equation with the peculiar kind of fame that you've gathered along the way? - I wanna be kinder. I wanna be kinder to myself, I wanna be kinder to others, I wanna be able to have heart and compassion, and these things are really important. And I have a pretty spectrumy kind of approach to analysis. I'm quite literal. I can go full Rain Man on you at any given moment. No, I can, I can. It's facultative autism, if you like, and people are gonna get angry because they want autism to be respected, but. When you see me coding or you see me doing mathematics, you know, I speak with speech apnea, (stutters), be right down for dinner, you know? - [Lex] Yeah. - We have to try to integrate ourselves, and those tensions between, you know, it's sort of back to us as a worker and us as a soul. Many of us are optimizing one at the expense of the other. And I struggle with social media, and I struggle with people making threats against our families, and I struggle with just how much pain people are in. And if there's one message I would like to push out there: you're responsible, everybody, all of us, myself included, with struggling. Struggle mightily, because it's nobody else's job to do your struggle for you. Now with that said, if you're struggling and you're trying, and you're trying to figure out how to better yourself and where you've failed, where you've let down your family, your friends, your workers, all this kind of stuff, give yourself a break. You know, if it's not working out, I have a lifelong relationship with failure and success. There's been no period of my life where both haven't been present in one form or another. And, I do wish to say that a lot of the times people think this is glamorous. I'm about to go, you know, do a show with Sam Harris. - Yeah.
- People are gonna listen in on two guys having a conversation on stage. It's completely crazy, and I'm always trying to figure out how to make sure that those people get maximum value, and that's why I'm doing this podcast, you know. Just give yourself a break. You owe us your struggle. You don't owe your family or your coworkers or your lovers or your family members success. As long as you're in there and you're picking yourself up, recognize that this new situation with the economy that doesn't have the juice to sustain our institutions has caused the people who've risen to the top of those institutions to get quite brutal and cruel. Everybody is lying at the moment. Nobody is really a truth teller. Try to keep your humanity about you. Try to recognize that if you're failing, if things aren't where you want them to be, and you're struggling and you're trying to figure out what you're doing wrong, what you could do, it's not necessarily all your fault. We are in a global situation. I have not met the people who are honest, kind, good, successful. Nobody that I've met is checking all the boxes. Nobody's getting all 10s. So, I just think that's an important message that doesn't get pushed out enough. Either people wanna hold society responsible for their failures, which is not reasonable. You have to struggle, you have to try. Or they wanna say you're 100% responsible for your failures, which is total nonsense. - Beautifully put. Eric, thank you so much for talking today. - Thanks for having me, buddy.
Leslie Kaelbling: Reinforcement Learning, Planning, and Robotics | Lex Fridman Podcast #15
the following is a conversation with Leslie Kaelbling she's a roboticist and professor at MIT she's recognized for her work in reinforcement learning planning robot navigation and several other topics in AI she won the IJCAI Computers and Thought Award and was the editor-in-chief of the prestigious Journal of Machine Learning Research this conversation is part of the artificial intelligence podcast at MIT and beyond if you enjoy it subscribe on YouTube iTunes or simply connect with me on Twitter at Lex Fridman spelled F-R-I-D-M-A-N and now here's my conversation with Leslie Kaelbling what made me get excited about AI I can say that is I read Gödel Escher Bach when I was in high school that was pretty formative for me because it exposed uh the interestingness of primitives and combination and how you can make complex things out of simple parts and ideas of AI and what kinds of programs might generate intelligent behavior so you first fell in love with AI reasoning logic versus robots yeah the robots came because um my first job so I finished an undergraduate degree in philosophy at Stanford and was about to finish a masters in computer science and I got hired at SRI uh in their AI lab and they were building a robot it was a kind of a follow on to Shakey but all the Shakey people were not there anymore and so my job was to try to get this robot to do stuff and that's really kind of what got me interested in robots so maybe taking a small step back your bachelor's at Stanford was in philosophy you did a masters and PhD in computer science but the bachelor's in philosophy uh so what was that journey like what elements of philosophy do you think you bring to your work in computer science so it's surprisingly relevant so part of the reason that I didn't do a computer science undergraduate degree was that there wasn't one at Stanford at the time but there's a part of philosophy and in fact Stanford has a special sub major in something called now symbolic systems which is logic model theory
formal semantics of natural language and so that's actually a perfect preparation for work in AI and computer science that's kind of interesting so if you were interested in artificial intelligence what kind of majors were people even thinking about taking maybe neuroscience so besides philosophy what were you supposed to do if you were fascinated by the idea of creating intelligence there weren't enough people who did that for that even to be a conversation okay I mean I think probably philosophy I mean it's interesting in my class my graduating class of undergraduate philosophers probably maybe slightly less than half went on in computer science slightly less than half went on in law and like one or two went on in philosophy uh so it was a common kind of connection do you think AI researchers have a role to be part-time philosophers or should they stick to the solid science and engineering without sort of taking the philosophizing tangents I mean you work with robots you think about what it takes to create intelligent beings uh aren't you the perfect person to think about the big picture philosophy of it all the parts of philosophy that are closest to AI I think or at least the closest to AI that I think about are stuff like belief and knowledge and denotation and that kind of stuff and that's you know it's quite formal and it's like just one step away from the kinds of computer science work that we do kind of routinely I think that there are important questions still about what you can do with a machine and what you can't and so on although at least my personal view is that I'm completely a materialist and I don't think that there's any reason why we can't make a robot be behaviorally indistinguishable from a human and the question of whether it's indistinguishable internally whether it's a zombie or not in philosophy terms I actually don't know and I don't know if I care too much about that right but there is a
philosophical notion they're mathematical and philosophical because we don't know so much of how difficult it is how difficult is the perception problem how difficult is the planning problem how difficult is it to operate in this world successfully because our robots are not currently as successful as human beings in many tasks the question about the gap between current robots and human beings borders a little bit on philosophy uh you know the expanse of knowledge that's required to operate in this world the ability to form common sense knowledge the ability to reason about uncertainty much of the work you've been doing there's open questions there that uh I don't know require a certain big picture view to me that doesn't seem like a philosophical gap at all to me it's just a big technical gap there's a huge technical gap but I don't see any reason why it's more than a technical gap perfect so you mentioned AI you mentioned SRI and uh maybe can you describe to me when you first fell in love with robotics with robots were inspired uh so you mentioned uh Flakey or Shakey and what was the robot that first captured your imagination of what's possible right well so the first robot I worked with was Flakey Shakey was a robot that the SRI people had built but by the time I arrived it was sitting in a corner of somebody's office dripping hydraulic fluid into a pan uh but it's iconic and really everybody should read the Shakey tech report because it has so many good ideas in it I mean they invented A* search and symbolic planning and learning macro operators they had uh low-level kind of configuration space planning for their robot they had vision they had all this the basic ideas of a ton of things can you take a step back did Shakey have arms what was the job what were the goals Shakey was a mobile robot but it could push objects and so it would move things around which
it actuated with its base okay great um so it could and they had painted the baseboards black uh so it used vision to localize itself in a map it detected objects it could detect objects that were surprising to it uh it would plan and replan based on what it saw it reasoned about whether to look and take pictures I mean it really had the basics of so many of the things that we think about now um how did it represent the space around it so it had representations at a bunch of different levels of abstraction so it had I think a kind of an occupancy grid of some sort at the lowest level uh at the high level it was uh abstract symbolic kind of rooms and connectivity so where does Flakey come in yeah okay so I showed up at SRI and we were building a brand new robot as I said none of the people from the previous project were there or involved anymore so we were kind of starting from scratch and my advisor uh was Stan Rosenschein he ended up being my thesis adviser and he was motivated by this idea of situated computation or situated automata and the idea was that the tools of logical reasoning were important but possibly only for the engineers or designers to use in the analysis of a system but not necessarily to be manipulated in the head of the system itself right so I might use logic to prove a theorem about the behavior of my robot even if the robot's not using logic in its head to prove theorems right so that was kind of the distinction and so the idea was to kind of use those principles to make a robot do stuff but a lot of the basic things we had to kind of learn for ourselves because I had zero background in robotics I didn't know anything about control I didn't know anything about sensors so we reinvented a lot of wheels on the way to getting that robot to do stuff do you think that was an advantage or hindrance oh no I mean I'm big in favor of wheel reinvention actually I mean I think you learned a lot by
doing it yes uh it's important though to eventually have the pointers so that you can see what's really going on but I think you can appreciate much better the good solutions once you've messed around a little bit on your own and found a bad one yeah I think you mentioned reinventing reinforcement learning yeah and referring to uh rewards as pleasures pleasure yeah which I think is a nice name for it yeah it seems good to me it's more fun almost do you think you could tell the history of AI machine learning reinforcement learning and how you think about it from the 50s to now one thing is that it oscillates right so things become fashionable and then they go out and then something else becomes cool and that goes out and so on and so I think there's some interesting sociological process that actually drives a lot of what's going on early days was kind of cybernetics and control right and the idea of homeostasis right people made these robots that could I don't know try to plug into the wall when they needed power and then come loose and roll around and do stuff and then I think over time the thought well that was inspiring but people said no no no we want to get maybe closer to what feels like real intelligence or human intelligence and then maybe the expert systems people tried to do that but maybe a little too superficially right so oh we get this surface understanding of what intelligence is like because I understand how a steel mill works and I can try to explain it to you and you can write it down in logic and then we can make a computer infer that and then that didn't work out but what's interesting I think is when a thing starts to not be working very well it's not only that we change methods we change problems right so it's not like we have better ways of doing the problem the expert systems people were trying to do we have no ways of trying to do that problem oh yeah no I think or maybe a few but we
kind of give up on that problem and we switch to a different problem and we work that for a while and we make progress as a broad community and there's a lot of people who would argue you don't give up on the problem it's just you uh decrease the number of people who work on it you almost kind of put it on a shelf and say we'll come back to this 20 years later yeah I think that's right or you might decide that it's malformed like you might say it's wrong to just try to make something that does superficial symbolic reasoning behave like a doctor you can't do that until you've had the sensory motor experience of being a doctor or something right so there's arguments that say that that problem was not well formed or it could be that it is well formed but we just weren't approaching it well so you mentioned that your favorite part of logic and symbolic systems is that they give short names for large sets so there is some use to symbolic reasoning so looking at expert systems and symbolic computing what do you think are the roadblocks that were hit in the 80s and 90s ah okay so right so the fact that I'm not a fan of expert systems doesn't mean that I'm not a fan of some kinds of symbolic reasoning right so let's see roadblocks well the main roadblock I think was the idea that humans could articulate their knowledge effectively into you know some kind of logical statements so it's not just the cost the effort but just the capability of doing it right because we're all experts in vision right but we totally don't have introspective access into how we do that right and it's true that I mean I think the idea was well of course even people then would know of course I wouldn't ask you to please write down the rules that you use for recognizing a water bottle that's crazy and everyone understood that but we might ask you to please write down the rules you use for deciding I don't know what tie to put on
or how to set up a microphone or something like that but even those things I think what they found I'm not sure about this but I think what they found was that the so-called experts could give sort of post hoc explanations for how and why they did things but they weren't necessarily very good and then they depended on maybe some kinds of perceptual things which again they couldn't really define very well so I think fundamentally the underlying problem with that was the assumption that people could articulate how and why they make their decisions right so it's almost encoding the knowledge uh converting from expert to something that a machine could understand and reason with no no no not even just encoding but getting it out of you right not writing it I mean yes it's hard also to write it down for the computer yeah but I don't think that people can produce it you can tell me a story about why you do stuff but I'm not so sure that's the why great so there are still on the hierarchical planning side places where symbolic reasoning is very useful so um as you've talked about so where's the gap yeah okay good so saying that humans can't provide a description of their reasoning processes that's okay fine but that doesn't mean that it's not good to do reasoning of various styles inside a computer those are just two orthogonal points so then the question is uh what kind of reasoning should you do inside a computer right uh and the answer is I think you need to do all different kinds of reasoning inside a computer depending on what kinds of problems you face I guess the question is what kind of things can you uh encode symbolically so you can reason about them I think the idea about and even symbolic I don't even like that terminology because I don't know what it means technically and formally I do believe in abstractions so abstractions are critical right
you cannot reason at completely fine grain about everything in your life right you can't make a plan at the level of images and torques for getting a PhD right so you have to reduce the size of the state space and you have to reduce the horizon if you're going to reason about getting a PhD or even buying the ingredients to make dinner and so how can you reduce the spaces and the horizon of the reasoning you have to do and the answer is abstraction spatial abstraction temporal abstraction I think abstraction along the lines of goals is also interesting or well goals is maybe more of a decomposition thing so I think that's where these kinds of if you want to call them symbolic or discrete models come in you talk about a room of your house instead of your pose you talk about uh you know doing something during the afternoon instead of at 2:54 and you do that because it makes your reasoning problem easier and also because you don't have enough information to reason in high fidelity about the pose of your elbow at 2:35 this afternoon anyway right when you're trying to get a PhD or when you're doing anything really oh yeah okay uh except at that moment you do have to reason about the pose of your elbow maybe but then maybe you do that in some continuous joint space kind of model so again my biggest point about all of this is that dogma is not the thing right it shouldn't be that I'm in favor of or against symbolic reasoning and you're in favor of or against neural networks it should be that computer science just tells us what the right answer to all these questions is if we were smart enough to figure it out well yeah when you try to actually solve the problem with computers the right answer comes out but you mentioned abstractions I mean neural networks form abstractions or uh rather there's automated ways to form abstractions and there's expert
driven ways to form abstractions uh expert human driven ways and humans just seem to be way better at forming abstractions currently in certain problems so when you're referring to 2:45 p.m. versus the afternoon how do we construct that taxonomy is there any room for automated construction of such abstractions oh I think eventually yeah I mean I think when we get to be better machine learning engineers we'll build algorithms that build awesome abstractions that are useful in the kind of way that you're describing yeah so let's then step from the abstraction discussion and let's talk about uh POMDPs partially observable Markov decision processes so uncertainty so first what are Markov decision processes and maybe how much of our world can be modeled as MDPs how much when you wake up in the morning and make breakfast do you think of yourself as an MDP so how do you think about MDPs and how they relate to our world well so there's a stance question right so a stance is a position that I take with respect to a problem so I as a researcher or a person who designs systems can decide to make a model of the world around me in some terms right so I take this messy world and I say I'm going to treat it as if it were a problem of this formal kind and then I can apply solution concepts or algorithms or whatever to solve that formal thing right so of course the world is not anything it's not an MDP or a POMDP I don't know what it is but I can model aspects of it in some way or some other way and when I model some aspect of it in a certain way that gives me some set of algorithms I can use you can model the world in all kinds of ways some are more accepting of uncertainty more easily modeling uncertainty of the world some really force the world to be deterministic and so certainly MDPs uh model the uncertainty of the world yes they model some uncertainty they model not present state uncertainty but they model
uncertainty in the way the future will unfold right yeah so what are Markov decision processes so a Markov decision process is a model it's a kind of a model that you could make that says I know completely the current state of my system and what it means to be a state is that I have all the information right now that will let me make predictions about the future as well as I can so that remembering anything about my history wouldn't make my predictions any better um but then it also says that I can take some actions that might change the state of the world and that I don't have a deterministic model of those changes I have a probabilistic model of how the world might change it's a useful model for some kinds of systems I mean it's certainly not a good model for most problems I think because for most problems you don't actually know the state for most problems it's partially observed so that's now a different problem class so okay that's where the POMDPs the partially observable Markov decision processes step in so how do they address the fact that you can't observe most uh you have incomplete information about most of the world around you right so now the idea is we still kind of postulate that there exists a state we think that there is some information about the world out there such that if we knew it we could make good predictions but we don't know the state and so then we have to think about how but we do get observations maybe I get images or I hear things or I feel things and those might be local or noisy and so therefore they don't tell me everything about what's going on and then I have to reason about given the history of actions I've taken and observations I've gotten what do I think is going on in the world and then given my own kind of uncertainty about what's going on in the world I can decide what actions to take and so how difficult is this problem of planning under uncertainty in your view
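As an aside, the MDP model described here, a fully known state plus a probabilistic model of how actions change it, can be solved by classic value iteration. Below is a minimal sketch; the tiny two-state "rooms" problem and the dictionary-based encoding are illustrative assumptions, not anything from the conversation.

```python
def value_iteration(states, actions, T, R, gamma=0.9, eps=1e-6):
    """Classic value iteration for a small discrete MDP.
    T[s][a] -> list of (prob, next_state); R[s][a] -> immediate reward.
    Returns the value function and the greedy policy."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman backup: best expected return over actions
            q = [R[s][a] + gamma * sum(p * V[s2] for p, s2 in T[s][a])
                 for a in actions]
            best = max(q)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    policy = {s: max(actions,
                     key=lambda a: R[s][a] + gamma * sum(p * V[s2] for p, s2 in T[s][a]))
              for s in states}
    return V, policy

# toy two-state MDP (made up for illustration): reward only for staying in room B
states = ["A", "B"]
actions = ["stay", "go"]
T = {"A": {"stay": [(1.0, "A")], "go": [(1.0, "B")]},
     "B": {"stay": [(1.0, "B")], "go": [(1.0, "A")]}}
R = {"A": {"stay": 0.0, "go": 0.0},
     "B": {"stay": 1.0, "go": 0.0}}
V, pi = value_iteration(states, actions, T, R)
# V["B"] converges to about 1/(1-0.9) = 10, and the policy goes to B and stays
```

The point is the "stance" Kaelbling describes: once you choose to model the mess as an MDP, this whole family of solution algorithms becomes available.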
in your long experience with modeling the world trying to deal with this uncertainty especially in real-world systems optimal planning for even discrete POMDPs can be undecidable depending on how you set it up and so lots of people say I don't use POMDPs because they are intractable and I think that that's a kind of a funny thing to say because the problem you have to solve is the problem you have to solve so if the problem you have to solve is intractable that's what makes us AI people right so we understand that the problem we're solving is completely wildly intractable and that we will never be able to solve it optimally at least right so later we can come back to an idea about bounded optimality or something but anyway we can't come up with optimal solutions to these problems so we have to make approximations approximations in modeling approximations in solution algorithms and so on and so I don't have a problem with saying yeah my problem actually is a POMDP in continuous space with continuous observations and it's so computationally complex I can't even think about its you know big O whatever but that doesn't prevent me it helps me it gives me some clarity to think about it that way and to then take steps to make approximation after approximation to get down to something that's computable in some reasonable time when you think about optimality you know the community broadly has shifted on that I think a little bit in how much they value the idea of uh optimality of chasing an optimal solution how have your views on chasing an optimal solution uh changed over the years and when you work with robots that's interesting I think we have a little bit of a methodological crisis actually from the theoretical side I mean I do think that theory is important and that right now we're not doing much of it so there's lots of empirical hacking around and training this and doing that and reporting numbers but is it good is it
bad we don't know it's very hard to say things and if you look at computer science theory for a while everyone talked about solving problems optimally or completely and then there were interesting relaxations right so people look at oh are there regret bounds or can I do some kind of um you know approximation can I prove that I can approximately solve this problem or that I get closer to the solution as I spend more time and so on what's interesting I think is that we don't have good approximate solution concepts for very difficult problems right I like to say that I'm interested in doing a very bad job of very big problems quote unquote right a very bad job of very big problems I like to do that but I wish I had I don't know some kind of a formal solution concept that I could use to say oh this algorithm actually gives me something like I know what I'm going to get I can do something other than just run it and see what I get that notion is still somewhere deeply compelling to you the notion that you can put a thing on the table and say you can expect that this algorithm will give me some good results I hope so I mean there's engineering and there's science I think that they're not exactly the same and I think right now we're making huge engineering leaps and bounds so the engineering is running way ahead of the science which is cool and often how it goes right so we're making things and nobody knows how and why they work roughly but we need to turn that into science I think there's some room for formalizing we need to know what the principles are why does this work why does that not work I mean for a while people built bridges by trying but now we can often predict whether it's going to work or not without building it can we do that for learning systems or for robots see your hope is from a
materialistic perspective that intelligence artificial intelligence systems robots are kind of just more fancy bridges belief space what's the difference between belief space and state space so you mentioned MDPs POMDPs you're reasoning uh about you sense the world there's a state uh what's this belief space idea yeah that sounds so good it sounds good so belief space is instead of thinking about what's the state of the world and trying to control that as a robot I think about what is the space of beliefs that I could have about the world if I think of a belief as a probability distribution over ways the world could be a belief state is a distribution and then my control problem if I'm reasoning about how to move through a world I'm uncertain about my control problem is actually the problem of controlling my beliefs so I think about taking actions not just for what effect they'll have on the world outside but for what effect they'll have on my own understanding of the world outside and so that might compel me to ask a question or look somewhere to gather information which may not really change the world state but it changes my own belief about the world that's a powerful way to empower the agent to reason about the world to explore the world uh what kind of problems does it allow you to solve to consider belief space versus just state space well any problem that requires deliberate information gathering right so in some problems like chess there's no uncertainty or maybe there's uncertainty about the opponent um but there's no uncertainty about the state and in some problems there's uncertainty but you gather information as you go right you might say oh I'm driving my autonomous car down the road and it doesn't know perfectly where it is but the lidars are all going all the time so I don't have to think about whether to gather information but if you're a human driving down the road you sometimes look over your shoulder to see what's going on
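The belief-space idea here, tracking a probability distribution over possible world states and updating it as you act and observe, is the discrete Bayes filter. A minimal sketch follows; the two-room localization toy and the noisy "beep" sensor are made up for illustration, not from the conversation.

```python
def belief_update(belief, action, obs, T, O):
    """One step of a discrete Bayes filter over a belief state.
    belief: {state: prob}; T[s][action]: {next_state: prob};
    O[s][obs]: likelihood of observing `obs` when in state s."""
    # predict: push the belief through the transition model
    predicted = {s: 0.0 for s in belief}
    for s, p in belief.items():
        for s2, pt in T[s][action].items():
            predicted[s2] += p * pt
    # correct: weight by the observation likelihood, then renormalize
    unnorm = {s: predicted[s] * O[s][obs] for s in predicted}
    total = sum(unnorm.values())
    return {s: v / total for s, v in unnorm.items()}

# toy localization (illustrative): robot is in one of two rooms, sensor beeps
# with probability 0.8 in the left room and 0.2 in the right room
T = {"left": {"stay": {"left": 1.0}}, "right": {"stay": {"right": 1.0}}}
O = {"left": {"beep": 0.8}, "right": {"beep": 0.2}}
b = belief_update({"left": 0.5, "right": 0.5}, "stay", "beep", T, O)
# belief shifts toward "left": 0.5*0.8 / (0.5*0.8 + 0.5*0.2) = 0.8
```

Controlling in belief space means choosing actions partly for how they reshape this distribution, which is exactly why deliberately looking somewhere can be worth an action even when it changes nothing in the world.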
behind you in the lane and you have to decide whether you should do that now and you have to trade off the fact that you're not seeing in front of you when you're looking behind you and how valuable is that information and so on and so to make choices about information gathering you have to reason in belief space also I mean also to just take into account your own uncertainty before trying to do things so you might say if I understand where I'm standing relative to the door jamb uh pretty accurately then it's okay for me to go through the door but if I'm really not sure where the door is then it might be better not to do that right now the degree of your uncertainty about the world is actually part of the thing you're trying to optimize in forming the plan right that's right so this idea of a long horizon of planning for a PhD or just even how to get out of the house or how to make breakfast you show this presentation of the WTF where's the fork uh of a robot looking at a sink and uh can you describe how we plan in this world this idea of hierarchical planning we've mentioned so yeah how can a robot hope to plan over such a long horizon where the goal is quite far away people since probably reasoning began have thought about hierarchical reasoning temporal hierarchy in particular well there's spatial hierarchy too but let's talk about temporal hierarchy so you might say oh I have this long uh execution I have to do but I can divide it into some segments abstractly right so maybe I have to get out of the house I have to get in the car I have to drive and so on and so you can plan if you can build abstractions so we started out by talking about abstractions and we're back to that now if you can build abstractions in your state space and sort of temporal abstractions then you can make plans at a high level and you can say I'm going to go to town and then I'll have to get gas and then I can go here and I can do this other
thing and you can reason about the dependencies and constraints among these actions again without thinking about the complete details what we do in our hierarchical planning work is then say all right I make a plan at a high level of abstraction I have to have some reason to think that it's feasible without working it out in complete detail and that's actually the interesting step I always like to talk about walking through an airport like you can plan to go to New York and arrive at the airport and then find yourself in an office building later you can't even tell me in advance what your plan is for walking through the airport partly because you're too lazy to think about it maybe but partly also because you just don't have the information you don't know what gate you're landing in or what people are going to be in front of you or anything so there's no point in planning in detail but you have to make a leap of faith that you can figure it out once you get there and it's really interesting to me how you arrive at that so you have learned over your lifetime to be able to make some kinds of predictions about how hard it is to achieve some kinds of subgoals and that's critical like you would never plan to fly somewhere if you didn't have a model of how hard it was to do some of the intermediate steps so one of the things we're thinking about now is how do you do this kind of very aggressive generalization uh to situations that you haven't been in and so on to predict how long it will take to walk through the Kuala Lumpur airport like you could give me an estimate and it wouldn't be crazy and you have to have an estimate of that in order to make plans that involve walking through the Kuala Lumpur airport even if you don't need to know it in detail so I'm really interested in these kinds of abstract models and how do we acquire them but once we have them we can use them to do hierarchical reasoning which I think is very important
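The two-level structure Kaelbling describes, planning over abstract subgoals with learned cost estimates and refining each leg only when needed, can be sketched roughly as below. Everything here is an illustrative assumption (the graph, the trip example, the uniform-cost abstract search); it is not her group's planner.

```python
import heapq

def hierarchical_plan(start, goal, abstract_graph, estimate, refine):
    """Plan abstractly using cost *estimates*, then refine each abstract step
    into concrete actions, the 'leap of faith' that each leg is feasible.
    abstract_graph[n]: neighboring abstract states; estimate(a, b): guessed
    cost of the a->b leg; refine(a, b): concrete action sequence for the leg."""
    # abstract level: uniform-cost search over subgoals, using estimates only
    frontier = [(0.0, start, [start])]
    done = set()
    abstract_plan = None
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            abstract_plan = path
            break
        if node in done:
            continue
        done.add(node)
        for nxt in abstract_graph.get(node, []):
            if nxt not in done:
                heapq.heappush(frontier, (cost + estimate(node, nxt), nxt, path + [nxt]))
    if abstract_plan is None:
        return None
    # refinement: only now expand each leg into concrete actions
    concrete = []
    for a, b in zip(abstract_plan, abstract_plan[1:]):
        concrete.extend(refine(a, b))
    return concrete

# toy trip (illustrative): plan home -> airport -> NYC, refine at the last minute
graph = {"home": ["airport"], "airport": ["NYC"], "NYC": []}
plan = hierarchical_plan("home", "NYC", graph,
                         estimate=lambda a, b: 1.0,
                         refine=lambda a, b: [f"walk {a} to {b}"])
# plan == ["walk home to airport", "walk airport to NYC"]
```

In a real system `refine` would be deferred until execution time, exactly because you don't know the gate or the crowd in advance; here it runs eagerly just to keep the sketch small.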
yeah there's this notion of goal regression and pre-image backchaining this idea of starting at the goal and just forming these big clouds of states I mean it's almost like saying once you show up to the airport you're like a few steps away from the goal so thinking of it this way is kind of interesting I don't know if you have further comments on that of starting at the goal yeah I mean it's interesting that Herb Simon back in the early days of AI talked a lot about means-ends reasoning and reasoning back from the goal there's a kind of an intuition that people have that the state space is big the number of actions you could take is really big so if you say here I sit and I want to search forward from where I am what are all the things I could do that's just overwhelming if you can reason at this other level and say here's what I'm hoping to achieve what could I do to make that true then somehow the branching is smaller now what's interesting is that in the AI planning community that hasn't worked out in the class of problems that they look at and the methods that they tend to use it hasn't turned out that it's better to go backward it's still kind of my intuition that it is but I can't prove that to you right now right I share your intuition at least for us mere humans speaking of which maybe now we take a little step into that philosophy circle when you think about human life you give those examples often how hard do you think it is to formulate human life as a planning problem or aspects of human life so when you look at robots you're often trying to think about object manipulation tasks about moving a thing when you take a slight step outside the room let the robot leave and go get lunch or maybe try to pursue more fuzzy goals how hard do you think is that
problem if you were to try to maybe put another way try to formulate human life as a planning problem well that would be a mistake I mean it's not all a planning problem right I think it's really really important that we understand that you have to put together pieces and parts that have different styles of reasoning and representation and learning I think it seems probably clear to anybody that it can't all be this or all be that brains aren't all like this or all like that right they have different pieces and parts and substructure and so on so I don't think that there's any good reason to think that there's going to be like one true algorithmic thing that's going to do the whole job so it's a bunch of pieces together designed to solve a bunch of specific problems or maybe styles of problems I mean there's probably some reasoning that needs to go on in image space I think again there's this model-based versus model-free idea right so in reinforcement learning people talk about oh should I learn I could learn a policy just straight up a way of behaving or it's popular to learn a value function that's some kind of weird intermediate ground or I could learn a transition model which tells me something about the dynamics of the world imagine that I learn a transition model and I couple it with a planner and I draw a box around that I have a policy again it's just stored a different way right but it's just as much of a policy as the other policy the way I see it is it's a time space tradeoff in computation right a more overt policy representation maybe takes more space but maybe I can compute quickly what action I should take on the other hand maybe a very compact model of the world dynamics plus a planner lets me compute what action to take too just more slowly I don't think there's an argument to be had it's just a question of what
form of computation is best for the various subproblems right so like learning to do algebra manipulations for some reason is probably going to want naturally a different representation than riding a unicycle right the time constraints on the unicycle are serious the state space is maybe smaller I don't know and there could be the more human sides of falling in love having a relationship that might be another style I have no idea how to model that yeah let's first solve the algebra and the object manipulation what do you think is harder perception or planning perception so what do you think is so hard about perception about understanding the world around you well I mean I think the big question is representation hugely the question is representation right so perception has made great strides lately right and we can classify images and we can play certain kinds of games and predict how to steer the car and all that sort of stuff I don't think we have a very good idea of what perception should deliver right so if you believe in modularity okay there's a very strong view which says we shouldn't build in any modularity we should make a gigantic neural network train it end to end to do the thing and that's the best way forward and it's hard to argue with that except on a sample complexity basis right so you might say oh well if I want to do end-to-end reinforcement learning on this giant neural network it's going to take a lot of data and a lot of broken robots and stuff so then the only answer is to say okay we have to build in some structure or some bias we know from theory of machine learning the only way to cut down the sample complexity is to somehow cut down the hypothesis space you can do that by building in bias there's all kinds of reason to think that nature built bias into humans
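To make the sample-complexity point concrete, here is a back-of-the-envelope comparison, with made-up layer sizes, of how much a built-in bias like the weight sharing in convolution shrinks the space of functions a learner has to search:

```python
# How a built-in bias shrinks the hypothesis space: compare the number
# of free parameters in a fully connected layer vs. a convolutional one
# on the same image, ignoring bias terms for simplicity.

def dense_params(height, width, out_units):
    """Every pixel connects to every output unit: no structural bias."""
    return height * width * out_units

def conv_params(kernel, out_channels):
    """One small kernel shared across all spatial positions."""
    return kernel * kernel * out_channels

full = dense_params(32, 32, 64)   # 32*32*64 = 65536 free weights
conv = conv_params(3, 64)         # 3*3*64   = 576 free weights
```

The convolutional layer commits in advance to translation-invariant local structure, and that commitment is exactly what cuts the parameter count, and with it the sample complexity, by two orders of magnitude in this toy case.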
convolution is a bias right it's a very strong bias and it's a very critical bias so my own view is that we should look for more things that are like convolution but that address other aspects of reasoning right so convolution helps us a lot with a certain kind of spatial reasoning that's quite close to the imaging I think there's other ideas like that maybe something about forward search maybe some notions of abstraction maybe the notion that objects exist actually I think that's pretty important and a lot of people won't give you that to start with right so almost like a convolution in the semantic object space of some kind some kind of ideas in there that's right and people are starting to do that graph convolutions are an idea that are related to relational representations and so I think I've come far afield from perception but I think the thing that's going to make perception take the next step is actually understanding better what it should produce right so what are we going to do with the output of it right it's fine when what we're going to do with the output is steer it's less clear when we're just trying to make one integrated intelligent agent what should the output of perception be we have no idea and how should that hook up to the other stuff we don't know right so I think the primary question is what kinds of structure can we build in that are like the moral equivalent of convolution that will make a really awesome superstructure that then learning can kind of progress on efficiently I agree very compelling description of actually where we stand with the perception problem you're teaching a course on embodied intelligence what do you think it takes to build a robot with human level intelligence I don't know if we knew we would do it if you were to I mean okay so do you think a robot needs to have self-awareness consciousness fear of mortality or is it simpler
than that or is consciousness a simple thing like do you think about these notions I don't think much about consciousness even most philosophers who care about it will give you that you could have robots that are zombies right that behave like humans but are not conscious and I at this moment would be happy enough with that so I'm not really worried one way or the other so then on the technical side you're not thinking of the use of self-awareness well okay but then what does self-awareness mean I mean that you need to have some part of the system that can observe other parts of the system and tell whether they're working well or not that seems critical so does that count as self-awareness or not well it depends on whether you think that there's somebody at home who can articulate whether they're self-aware but clearly if I have some piece of code that's counting how many times this procedure gets executed that's a kind of self-awareness right so there's a big spectrum it's clear you have to have some of it right you know we're quite far away on many dimensions but is there a direction of research that's most compelling to you for trying to achieve human level intelligence in our robots well to me I guess the thing that seems most compelling to me at the moment is this question of what to build in and what to learn I think we're missing a bunch of ideas and you know people don't you dare ask me how many years it's going to be till that happens because I won't even participate in the conversation because I think we're missing ideas and I don't know how long it's going to take to find them so I won't ask you how many years but maybe I'll ask you when you'll be sufficiently impressed that we've achieved it so what's a good test of intelligence do you like the Turing test the natural language in the robotic space is there something where you would
sit back and think oh that's pretty impressive as a test as a benchmark do you think about these kinds of problems no I resist I mean I think all the time that we spend arguing about those kinds of things could be better spent just making the robots work better so you don't value competition so I mean there's the nature of benchmarks and data sets or Turing test challenges where everybody kind of gets together and tries to build a better robot because they want to outcompete each other like the DARPA challenge with the autonomous vehicles do you see the value of that or can it get in the way I think it can get in the way I mean many people find it motivating and so that's good I find it antimotivating personally yeah but I think you get an interesting cycle where for a contest a bunch of smart people get super motivated and they hack their brains out and much of what gets done is just hacks but sometimes really cool ideas emerge and then that gives us something to chew on after that so it's not a thing for me but I don't regret that other people do it yeah it's like you said with everything else the mix is good so jumping topics a little bit you started the Journal of Machine Learning Research and served as its editor-in-chief how did the publication come about and what do you think about the current publishing model space in machine learning and artificial intelligence okay good so it came about because there was a journal called Machine Learning which still exists which was owned by Kluwer and I was on the editorial board and we used to have these meetings annually where we would complain to Kluwer that it was too expensive for the libraries and that people couldn't publish and we would really like to have some kind of relief on those fronts and they would always sympathize but not do anything so we just decided to make a new journal and there was the Journal of AI Research which
was on the same model which had been in existence for maybe five years or so and it was going along pretty well so we just made a new journal I mean I don't know I guess it was work but it wasn't that hard so basically probably 75% of the editorial board of Machine Learning resigned and we founded the new journal but it was more open yeah right so it's completely open it's open access actually I had a postdoc George Konidaris who wanted to call these journals free-for-all because it both has no page charges and no access restrictions and so there were people who were mad about the existence of this journal who thought it was a fraud or something it would be impossible they said to run a journal like this for a long time I didn't even have a bank account I paid for the lawyer to incorporate and the IP address and it just cost a couple hundred dollars a year to run it's a little bit more now but not that much more and it's because I think computer scientists are competent and autonomous in a way that many scientists in other fields aren't I mean at doing these kinds of things we already typeset our own papers we all have students and people who can hack a website together in an afternoon so the infrastructure for us was not a problem but for other people in other fields it's a harder thing to do yeah and this kind of open access journal is nevertheless one of the most prestigious journals so prestige it turns out can be achieved without any of that the paper is not required for prestige turns out yeah so on the review process side actually a long time ago I don't remember when I reviewed a paper where you were also a reviewer and I remember reading your review and being influenced by it it was really well written it influenced how I write future reviews you disagreed
with me actually and you made my review much better but nevertheless the review process you know has its flaws what do you think works well how can it be improved so actually when I started JMLR I wanted to do something completely different and I didn't because it felt like we needed a traditional journal of record and so we just made JMLR be almost like a normal journal except for the open access parts of it basically increasingly of course publication is not even a sensible word you can publish something by putting it in arXiv so I can publish everything tomorrow so making stuff public there's no barrier we still need curation and evaluation I don't have time to read all of arXiv and you could argue that kind of social thumbs-upping of articles suffices right you might say oh heck with this we don't need journals at all we'll put everything on arXiv and people will upvote and downvote the articles and then your CV will say oh man he got a lot of upvotes so that's good but I think there's still value in careful reading and commentary of things and it's hard to tell when people are upvoting and downvoting or arguing about your paper on Twitter and Reddit whether they know what they're talking about right so then I have the second order problem of trying to decide whose opinions I should value and such so I don't know if I had infinite time which I don't and I'm not going to do this because I really want to make robots work but if I felt inclined to do something more in the publication direction I would do this other thing which I thought about doing the first time which is to get together some set of people whose opinions I value and who are pretty articulate and I guess we would be public although we could be private I'm not sure and we would review papers we wouldn't publish them and you wouldn't submit them we would just find papers and we would write reviews and we would make those
reviews public and maybe you know so we're Leslie's friends who review papers and maybe eventually if our opinion was sufficiently valued like the opinion of JMLR is valued then you'd say on your CV that Leslie's friends gave my paper a five-star rating and that would be just as good as saying I got it accepted into this journal so I think we should have good public commentary and organize it in some way but I don't really know how to do it it's interesting times the way you describe it actually is really interesting I mean we do it for movies IMDB.com there are expert critics who come in and write reviews but there are also regular non-critic humans who write reviews and they're separated I like OpenReview the ICLR process I think is interesting it's a step in the right direction but it's still not as compelling as reviewing movies or video games I mean it might be silly at least from my perspective to say but it boils down to the user interface how fun and easy it is to actually perform the reviews how efficient how much you as a reviewer get street cred for being a good reviewer those human elements come into play no it's a big investment to do a good review of a paper and the flood of papers is out of control right so I don't know how many new movies there are in a year but that's probably going to be less than how many machine learning papers there are in a year now and I'm worried right so I'm like an old person so of course I'm going to say rar rar rar things are moving too fast I'm a stick in the mud so I can say that but my particular flavor of that is I think the horizon for researchers has gotten very short students want to publish a lot of papers and it's exciting and there's value in that and you get patted on the head for it and so on and some of
that is fine but I'm worried that we're driving out people who would spend two years thinking about something back in my day when we worked on our thesis we did not publish papers you did your thesis for years you picked a hard problem and then you worked and chewed on it and did stuff and wasted time for a long time and roughly when it was done you would write papers and so I don't know how to fix it and I don't think that everybody has to work in that mode but I think there's some problems that are hard enough that it's important to have a longer research horizon and I'm worried that we don't incentivize that at all at this point in this current structure yeah so what are your hopes and fears about the future of AI and continuing on this theme AI has gone through a few winters ups and downs do you see another winter of AI coming or are you more hopeful about making robots work as you said I think the cycles are inevitable but I think each time we get higher right I mean it's like climbing some kind of landscape with a noisy optimizer yeah so it's clear that the deep learning stuff has made deep and important improvements and so the high water mark is now higher there's no question but of course I think people are overselling and eventually investors I guess and other people look around and say well you're not quite delivering on this grand claim and that wild hypothesis so probably it's going to crash some amount and then it's okay I mean I can't imagine that there's like some awesome monotonic improvement from here to human level AI so you know I have to ask this question I probably anticipate the answers but do you have a worry short term or long term about the existential threats of AI and maybe short term less existential but more robots taking away jobs well actually let me talk a little bit about that I had an interesting
conversation with some military ethicists who wanted to talk to me about autonomous weapons and they were interesting smart well-educated guys who didn't know too much about AI or machine learning and the first question they asked me was has your robot ever done something you didn't expect and I burst out laughing because anybody who's ever worked on a robot knows that they don't do much and what I realized was that their model of how we program a robot was completely wrong their model of how we program a robot was like Lego Mindstorms like oh go forward a meter turn left take a picture do this do that and so if you have that model of programming then it's true it's kind of weird that your robot would do something that you didn't anticipate but the fact is and actually so now this is my new educational mission if I have to talk to non-experts I try to teach them the idea that we operate at least one or maybe many levels of abstraction above that and we say oh here's a hypothesis class maybe it's a space of plans or maybe it's a space of classifiers or whatever so there's some set of answers and an objective function and then we work on some optimization method that tries to optimize a solution in that class and we don't know what solution is going to come out right so I think it's important to communicate that so I mean of course probably people who are listening to this know that lesson but I think it's really critical to communicate that lesson and then lots of people are now talking about the value alignment problem so you want to be sure as robots or software systems get more competent that their objectives are aligned with your objectives or that our objectives are compatible in some way or we have a good way of mediating when they have different objectives and so I think it is important to start thinking in terms like you don't have to be freaked out by the robot apocalypse to accept
that it's important to think about objective functions and value alignment yes and everyone who's done optimization knows that you have to be careful what you wish for that sometimes you get the optimal solution and you realize man that objective was wrong so pragmatically in the shortish term it seems to me that those are really interesting and critical questions and the idea that we're going to go from being people who engineer algorithms to being people who engineer objective functions I think that's definitely going to happen and that's going to change our thinking and methodology and stuff you started in Stanford philosophy then switched to computer science and now we'll go back to philosophy maybe well designing objective functions I mean they're mixed together because as we also know as machine learning people right when you design in fact this is the lecture I gave in class today when you design an objective function you have to wear both hats there's the hat that says what do I want and then there's the hat that says ah but I know what my optimizer can do to some degree and I have to take that into account right so it's always a trade-off and we have to kind of be mindful of that the part about taking people's jobs I understand that that's important I don't understand sociology or economics or people very well so I don't know how to think about that yeah so there might be a sociological aspect there an economical aspect that's very difficult to think about okay I mean I think other people should be thinking about it but that's not my strength so what do you think is the most exciting area of research in the short term for the community and for yourself well so I mean there's the story I've been telling about how to engineer intelligent robots right so that's what we want to do well I mean some set of us want to do
this and the question is what's the most effective strategy and there's a bunch of different things you could do at the extremes right one super extreme is we do introspection and we write a program okay that has not worked out very well another extreme is we take a giant bunch of neural goo and we try to train it up to do something I don't think that's going to work either so the question is what's the middle ground and again this isn't a theological question or anything like that it's just what's the best way to make this work out and to me it's clear it's a combination of learning and not learning and what should that combination be and what's the stuff we build in so to me that's the most compelling question and when you say engineer robots you mean engineering systems that work in the real world that's the emphasis last question which robot is your favorite from science fiction so you can go with Star Wars R2-D2 or you can go with more modern maybe HAL so I don't think I have a favorite robot from science fiction this is back to you like to make robots work in the real world here not in fiction I mean I love the process and I care more about the process the engineering process yeah I mean I do research because it's fun not because I care about what we produce well that's a beautiful note actually to end on Leslie thank you so much for talking today sure it's been fun
Kyle Vogt: Cruise Automation | Lex Fridman Podcast #14
the following is a conversation with Kyle Vogt he's the president and the CTO of Cruise Automation leading an effort to solve one of the biggest robotics challenges of our time vehicle automation he's a co-founder of two successful companies Twitch and Cruise that have each sold for a billion dollars and he's a great example of the innovative spirit that flourishes in Silicon Valley and now is facing an interesting and exciting challenge of matching that spirit with the mass production and the safety centric culture of a major automaker like General Motors this conversation is part of the MIT artificial general intelligence series and the artificial intelligence podcast if you enjoy it please subscribe on YouTube iTunes or simply connect with me on Twitter at Lex Friedman spelled F-R-I-D and now here's my conversation with Kyle Vogt you grew up in Kansas right yeah and I just saw that picture you had you know I was a little bit worried about that yeah so in high school in Kansas City you joined the Shawnee Mission North High School robotics team yeah now that wasn't your high school that's right that was the only high school in the area that had a teacher who was willing to sponsor a FIRST Robotics team I was gonna troll you a little bit in your picture you're trying to look super cool and intense because you know this was BattleBots it's a serious business so we're standing there with a welded steel frame and looking tough so going back there what was the journey to robotics well I think I've been trying to figure this out for a while but I've always liked building things with Legos and when I was really really young I wanted the Legos that had motors and other things and then you know Lego Mindstorms came out and for the first time you could program Lego contraptions and I think things just sort of snowballed from that but I remember seeing the BattleBots TV show on Comedy Central and thinking that is the coolest thing in the world I want to be a
part of that and not knowing a whole lot about how to build these 200-pound fighting robots so I sort of obsessively pored over the internet forums where all the creators for BattleBots would sort of hang out and talk about and document their build progress and everything and I think I must have read like tens of thousands of forum posts from basically everything that was out there on what these people were doing and eventually sort of triangulated how to put some of these things together and ended up doing BattleBots when I was like 13 or 14 which is pretty awesome I'm not sure if the show is still running but in BattleBots there's not an artificial intelligence component it's remotely controlled and yeah it's almost like a mechanical engineering challenge yeah I think they're radio-controlled so and I think that they allowed some limited form of autonomy but in a two-minute match and the way these things ran you're really doing yourself a disservice by trying to automate it versus just doing the practical thing which is drive it yourself the entertainment aspect just going on YouTube some of them wield an axe I mean there's that fun so what drew you to that aspect it wasn't the mechanical engineering was it the dream to create like Frankenstein a sentient being was it just like the Legos you like tinkering with stuff I mean that was just building something I think the idea of this radio-controlled machine that can do various things if it has like a weapon or something was pretty interesting I agree it doesn't have the same appeal as autonomous robots which I sort of gravitated towards later on but it was definitely an engineering challenge because everything you did in that competition was pushing components to their limits so we would buy like these $40 DC motors
that came out of a winch like on the front of a pickup truck or something and we'd power the car with those and we'd run them at like double or triple their rated voltage so they immediately start overheating but for that two-minute match you can get a significant increase in the power output of those motors before they burn out and so you're doing the same thing for your battery packs and all the materials in the system and I think there was something intrinsically interesting about just seeing where things break and did you test to see where they break did you take it to the breaking point like how did you know two minutes or was it reckless let's just go with it and see we weren't very good at BattleBots we lost all of our matches in the first round like the ones I built both of them were these wedge-shaped robots because a wedge even though it's sort of boring to look at is extremely effective you drive towards another robot and the front edge of it gets under him and then they sort of flip over kind of like a door stopper and the first one had a pneumatic polished stainless steel spike on the front that would shoot out about eight inches the purpose of which is what pretty ineffective actually but it looks cool and was it helpful to lift no it was just to try to poke holes in the other robot and then the second time I did it which was the following I think maybe 18 months later we had a titanium axe with a hardened steel tip on it that was powered by a hydraulic cylinder which we were activating with liquid CO2 which had its own set of problems so great so that's kind of on the hardware side I mean at a certain point there must have been born a fascination on the software side so what was the first piece of code you've written what language was it was it Emacs Vim or a more respectable modern IDE do you remember any of this yeah well I remember I think maybe
when I was in third or fourth grade the elementary school I was at had a bunch of Apple II computers and we'd play games on those and I remember every once in a while something would crash or it wouldn't start up correctly and it would dump you out to what I later learned was like sort of a command prompt and my teacher would come over and type I actually remember this to this day for some reason like PR number six or PR pound six which is peripheral six which is the disk drive which would fire up the disk and load the program and I just remember thinking wow she's like a hacker like teach me these codes these error codes what I called them at the time but she had no interest in that so it wasn't until I think about fifth grade that I had a school where you could actually go on these Apple IIs and learn to program and so it's all in BASIC you know where every line is numbered and you have to like leave enough space between the numbers so that if you want to tweak your code you go back and if the first line was 10 and the second line is 20 now you have to go back and insert a 15 and if you need to add code in front of that an 11 or 12 and you hope you don't run out of line numbers and have to redo the whole thing there's go-to statements yeah GOTO and it's very basic maybe hence the name but a lot of fun and that was you know when you write your first program you see the magic of it it's like this world opens up with endless possibilities for the things you could build or accomplish with that computer so you got the bug then so even starting with BASIC and then what C++ throughout there was no computer programming in computer science classes in high school not where I went so I was self-taught but I did a lot of programming the thing that sort of pushed me in the path of eventually working on self-driving cars
is actually one of these really long trips driving from my house in Kansas to I think Las Vegas where we did the BattleBots competition and I had just gotten my learner's permit or early driver's permit and so I was driving this 10 hour stretch across western Kansas where you're just going straight on a highway and it is mind-numbingly boring and I remember thinking even then with my sort of mediocre programming background that this is something a computer can do right let's take a picture of the road let's find the yellow lane markers and steer the wheel and later I've come to realize this had been done since the 80s or the 70s or even earlier but I still wanted to do it and sort of immediately after that trip switched from BattleBots which are more radio-controlled machines to thinking about building autonomous vehicles of some scale starting off with really small electric ones and then progressing to what we're doing now so what was your view of artificial intelligence at that point what did you think so this is before the recent waves in artificial intelligence right the current wave with deep learning makes people believe that you can solve in a really rich deep way the computer vision perception problem but before the deep learning craze how would you even go about building a thing that perceives the world localizes itself in the world moves around the world like when you were younger what was your thinking about it well prior to deep neural networks or convolutional neural nets these modern techniques we have or at least the ones that are in use today it was all heuristic based and so like old-school image processing and I think extracting yellow lane markers out of an image of a road is one of the problems that lends itself reasonably well to those heuristic based methods you know like just do a
threshold on the color yellow and then try to fit some lines to that using a Hough transform or something and then go from there traffic light detection and stop sign detection red yellow green and I think you could I mean I was just trying to make a thing that would stay in between the lanes on a highway but if you wanted to do the full set of capabilities needed for a driverless car I think you could and we've done this at cruise in the very first days you can start off with a really simple human written heuristic just to get the scaffolding in place for your system traffic light detection probably a really simple color threshold and just get the system up and running before you migrate to a deep learning based technique or something else and back when I was doing this my first one ran on a Pentium II 233 megahertz computer and I think I wrote the first version in basic which is an interpreted language and extremely slow because that's the thing I knew at the time and so there was no chance at all there was no computational power to do any sort of reasonable deep nets like you have today so I don't know what kids these days are doing are kids these days at age 13 using neural networks in their garage I mean I also get emails all the time from like 11 and 12 year olds saying I'm trying to follow this tensorflow tutorial and I'm having this problem and the general approach in the deep learning community is one of extreme optimism as opposed to you mentioned like heuristics you can separate the autonomous driving problem into modules and try to solve it sort of rigorously or you can just do it end to end and most people just kind of love the idea that us humans do it end to end we just perceive and act we should be able to do the same kind of thing with a neural net and that kind of
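The yellow-lane heuristic described above, a color threshold followed by a line fit, can be sketched in a few lines of Python. This is a hypothetical illustration, not the original BASIC code, and a plain least-squares fit stands in here for the Hough transform he mentions:

```python
def is_yellow(pixel):
    # crude "yellow" threshold: strong red and green, weak blue
    # (thresholds are made-up illustrative values)
    r, g, b = pixel
    return r > 180 and g > 180 and b < 100

def fit_lane_line(image):
    """Collect pixels passing the yellow threshold, then least-squares
    fit a line x = m*y + b through them (x = column, y = row)."""
    pts = [(y, x)
           for y, row in enumerate(image)
           for x, px in enumerate(row)
           if is_yellow(px)]
    if len(pts) < 2:
        return None  # no lane marker found
    n = len(pts)
    sy = sum(y for y, _ in pts)
    sx = sum(x for _, x in pts)
    syy = sum(y * y for y, _ in pts)
    syx = sum(y * x for y, x in pts)
    denom = n * syy - sy * sy
    if denom == 0:
        return 0.0, sx / n  # all points on a single row
    m = (n * syx - sy * sx) / denom
    b = (sx - m * sy) / n
    return m, b

# toy 20x20 "road" image: black except a yellow stripe at column 8
image = [[(0, 0, 0)] * 20 for _ in range(20)]
for y in range(20):
    image[y][8] = (255, 255, 0)

m, b = fit_lane_line(image)
print(m, b)  # vertical stripe: slope 0.0, intercept 8.0
```

A real implementation would run on camera frames and use something more robust like an actual Hough transform, but the shape of the heuristic is the same.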
thinking you don't want to criticize that kind of thinking because eventually they will be right yeah and so it's exciting especially when they're younger to explore that is a really exciting approach but yeah it's changed the language the kind of stuff you get trained with and it's kind of exciting to see where these teenagers end up when they grow up yeah I can only imagine if your starting point is Python and tensorflow at age 13 where you end up after 10 or 15 years of that that's pretty cool because of github the tools for solving most of the major problems in artificial intelligence are within a few lines of code for most kids and that's incredible to think about also on the entrepreneurial side at that point was there any thought about entrepreneurship before you came to college sort of building this into a thing that impacts the world on a large scale yeah I've always wanted to start a company I think that's just a cool concept of creating something and exchanging it for value or creating value I guess so in high school I was trying to build like servo motor drivers little circuit boards and sell them online and other things like that and certainly knew at some point I wanted to do a startup but it wasn't really until college that I felt like I had the right combination of the environment the smart people around you and some free time and a lot of free time at MIT so you came to MIT as an undergrad in 2004 that's right and that's when the first DARPA Grand Challenge was happening yeah the timing of that is beautifully poetic so how did you get yourself involved in that originally there wasn't an official entry or faculty sponsored thing and so a bunch of undergrads myself included started meeting and got together and tried to haggle together some sponsorships we got a vehicle donated and a bunch of sensors and tried to put
something together and so our team was probably mostly freshmen and sophomores which was not really a fair fight against the postdoc and faculty-led teams from other schools but we got something up and running we had our vehicle driving by wire and very basic control and things but on the day of the pre-qualifying round the one and only steering motor that we had purchased the thing that we had retrofitted to turn the steering wheel on the truck died and so our vehicle was just dead in the water couldn't steer so we didn't make it very far on the hardware side so was there a software component how did your view of autonomous vehicles in terms of artificial intelligence evolve in this moment I mean like you said there have been autonomous vehicles since the 80s but really that was the birth of the modern wave the thing that captivated everyone's imagination that we can actually do this so how were you captivated in that way how did your view of autonomous vehicles change at that point I'd say at that point in time it was the curiosity as in is this really possible and I think that was generally the spirit and the purpose of that original DARPA Grand Challenge which was to just get a whole bunch of really brilliant people exploring the space and pushing the limits and I think to this day that DARPA challenge with its million dollar prize pool was probably one of the most effective uses of taxpayer money dollar for dollar that I've seen because that small sort of initiative that DARPA put out was in my view the catalyst or the tipping point for this whole next wave of autonomous vehicle development so that was pretty cool so let me jump around a little bit on that point they also did the urban challenge which was in a city but it was very artificial and there were no pedestrians and
there's very little human involvement except a few professional drivers and then there was the Robotics Challenge with humanoid robots right so in your role now looking at this you're trying to solve autonomous driving in one of the harder more difficult places San Francisco is there a role for DARPA to step in to also kind of help out to challenge with new ideas specifically pedestrians and so on all these kinds of interesting things well I haven't thought about it from that perspective is there anything DARPA could do today to further accelerate things I would say my instinct is that that's maybe not the highest and best use of their resources and time because kick starting and spinning up the flywheel is I think what they did in this case for very little money but today this has become commercially interesting to very large companies and the amount of money going into it and the amount of people going through your class and learning about these things and developing these skills is just orders of magnitude more than it was back then and so there's enough momentum and inertia and energy and investment dollars in this space right now that I think they can just say mission accomplished and move on to the next area of technology that needs help so then stepping back to MIT you left MIT during your junior year what was that decision like as I said I always wanted to start a company and this opportunity landed in my lap which was a couple guys from Yale were starting a new company and I googled them and found that they had started a company previously and sold it actually on eBay for about a quarter million bucks which was a pretty interesting story so I thought to myself these guys are rock star entrepreneurs they've done this before they must be driving around in
Ferraris because they sold their company and I thought I could learn a lot from them so I teamed up with those guys and went out to California during IAP which is MIT's month off on a one-way ticket and basically never went back we were having so much fun we felt like we were building something and creating something and it was going to be interesting and I was just all in and got completely hooked and that business was justin.tv which was originally a reality show about a guy named Justin which morphed into a live video streaming platform which then morphed into what is twitch today so that was quite an unexpected journey so no regrets no looking back it was just an obvious I mean one-way ticket if we just pause on that for a second how did you know these were the right guys that this was the right decision or did you just follow the heart kind of thing well I didn't know but just trying something for a month during IAP seems pretty low risk right right and then well maybe I'll take a semester off and MIT's pretty flexible about that you can always go back right and then after two or three cycles of that I eventually threw in the towel but I guess in that case I felt like I could always hit the undo button if I had to right but when you look at it in retrospect it seems like a brave decision that would be difficult for a lot of people to make it wasn't as popular I'd say the general flux of people out of MIT at the time was mostly into finance or consulting jobs in Boston or New York and very few people were going to California to start companies but today I'd say that's probably inverted which is just a sign of the times I guess yeah so there's a story about midnight of March 18 2007 when the TechCrunch article went out I guess and I was
there and it announced justin.tv a few hours earlier than it was supposed to and the site didn't work I don't know if any of this is true you can tell me that you and one of the folks at justin.tv Emmett Shear coded through the night can you take me through that experience so let me say a few nice things the article I read quoted Justin Kan saying that you were known for coding through problems and being a quote creative genius so on that night what was going through your head or maybe put another way what's your approach to solving these kinds of problems where the line between success and failure seems to be pretty thin that's a good question well first of all that's nice of Justin to say I think I would have been maybe twenty-one years old then and not very experienced at programming but as with everything in a start-up you're sort of racing against the clock and so our plan was the second we had this live streaming camera backpack up and running where Justin could wear it and no matter where he went in a city it would be streaming live video and this is even before the iPhone this was hard to do back then we would launch and so we thought we were there and the backpack was working and then we sent out all the emails to launch the company and do the press thing and then we weren't quite actually there and then we thought oh well they're not going to announce it until maybe 10 a.m. the next morning and it's I don't know it's 5 p.m.
now so how many hours do we have left what is that like 17 hours to go and that was that so was the problem obvious did you understand what could possibly be wrong like how complicated was the system at that point it was pretty messy so to get a live video feed that looked decent working from anywhere in San Francisco I put together this system where we had like three or four cell phone data modems and we'd take the video stream and sort of spray it across these three or four modems and then try to catch all the packets on the other side with unreliable cell phone networks pretty low level networking yeah and putting these sort of protocols on top of all that to reassemble and reorder the packets and have time buffers and error correction and all that kind of stuff and the night before every once in a while the image would go staticky and there would be this horrible screeching audio noise because the audio was also corrupted and this would happen like every five to ten minutes or so and it was really off-putting to the viewers how do you tackle that problem were you just freaking out behind a computer were there other folks working on this problem were you at a whiteboard a little of all of that only because there were four of us working on the company and only two people really wrote code and Emmett wrote the website and the chat system and I wrote the software for this video streaming device and video server and so it was my sole responsibility to figure that out yeah and I think it's those moments of setting deadlines trying to move quickly and intense pressure where sometimes people do their best and most interesting work and so even though that was a terrible moment I look back on it fondly because that's like you know
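The multi-modem scheme he describes, tag the video chunks with sequence numbers, spray them across several cell links, then reassemble and reorder on the receiving side, can be sketched roughly like this (hypothetical Python, not the actual justin.tv code, which also layered on time buffers and error correction):

```python
import random

def spray(chunks, n_links):
    """Sender side: tag each chunk with a sequence number and
    spread the packets round-robin across the modem links."""
    links = [[] for _ in range(n_links)]
    for seq, chunk in enumerate(chunks):
        links[seq % n_links].append((seq, chunk))
    return links

def reassemble(received):
    """Receiver side: packets arrive interleaved and out of order
    from the different links; buffer them by sequence number and
    emit the stream back in order."""
    buffer = dict(received)
    return [buffer[seq] for seq in sorted(buffer)]

chunks = ["frame-%d" % i for i in range(8)]
links = spray(chunks, 3)

# simulate out-of-order arrival by merging and shuffling all packets
arrivals = [pkt for link in links for pkt in link]
random.shuffle(arrivals)

print(reassemble(arrivals))  # original frame order restored
```

A production version would also have to drop or conceal packets that never arrive by a playback deadline, which is where the time buffers and error correction he mentions come in.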
that's one of those character defining moments I think so in October 2013 you founded cruise automation yeah so progressing forward another exceptionally successful company which was acquired by GM in 2016 for 1 billion dollars but in October 2013 what was on your mind what was the plan how does one seriously start to tackle one of the hardest most important and impactful robotics problems of our age after going through twitch twitch was and is today pretty successful but the result was entertainment mostly like the better the product was the more we would entertain people and then make money on ad revenues and other things and that was a good thing it felt good to entertain people but I figured what is really the point of becoming a really good engineer and developing these skills other than my own enjoyment and I realized I wanted something that scratched more of an existential itch like something that truly matters and so I basically made this list of requirements if I was going to do another company and the one thing I knew in the back of my head was that twitch took like eight years to become successful and so whatever I do I better be willing to commit at least ten years to something and when you think about things from that perspective you certainly raise the bar on what you choose to work on so for me the three things were it had to be something where the technology itself determines the success of the product like hard really juicy technology problems because that's what motivates me and then it had to have a direct and positive impact on society in some way so an example would be healthcare or self-driving cars because they save lives other things where there's a clear connection to somehow improving other people's lives and the last one is it had to be a big business because for the positive impact to matter it's got to be at a large
scale yeah and I was thinking about that for a while and I tried writing a gmail clone and looked at some other ideas and then a light bulb just sort of went off like self-driving cars that was the most fun I had ever had in college working on that and well what's the state of the technology it's been ten years maybe times have changed and maybe now is the time to make this work and I poked around and the only other thing out there really at the time was the Google self-driving car project and I thought surely there's a way to have an entrepreneurial mindset and sort of find the minimum viable product here and so I just took the plunge right then and there and said this is something I know I can commit ten years to it's probably the greatest applied AI problem of our generation right and if it works it's going to be both a huge business and therefore probably the most positive impact I can possibly have on the world so after that light bulb went off I went all in on cruise immediately and got to work did you have an idea how to solve this problem which aspect of the problem to solve like we just had Oliver from Voyage here slow-moving retirement communities urban driving highway driving did you have a vision of the city of the future where transportation is largely automated that kind of thing or was it more fuzzy and gray than that my analysis of the situation was that Google had been putting a lot of money into that project and a lot more resources and they still hadn't cracked the fully driverless car this is 2013 I guess so I thought what can I do to go from zero to significant scale so I can actually solve the real problem which is driverless cars and I thought here's the strategy we'll start by solving a really simple problem that creates value
for people so we eventually ended up deciding on automating highway driving which is relatively more straightforward as long as there's a backup driver there and the go-to-market will be we'll retrofit people's cars and just sell these products directly and the idea was we'll take all the revenue and profits from that and reinvest them in research for doing fully driverless cars and that was the plan the only thing that really changed along the way between then and now is we never really launched the first product we had enough interest from investors and enough of a signal that this was something we should be working on that after about a year of working on the highway autopilot we had it working at a prototype stage but we just completely abandoned that and said we're gonna go all in on driverless cars now is the time can't think of anything that's more exciting and if it works more impactful so we're just gonna go for it the idea of retrofit is kind of interesting it's how you achieve scale it's a really interesting idea is it something that's still in the back of your mind as a possibility not at all I've come full circle on that one having tried to build a retrofit product and I'll touch on some of the complexities of that and then also having been inside an OEM and seen how things work and how a vehicle is developed and validated when it comes to something that has safety critical implications like controlling the steering and the other control inputs on your car it's pretty hard to get there with a retrofit and even if you did it it creates a whole bunch of new complications around liability or how do you truly validate that or something in the base vehicle fails and causes your system to fail whose fault is it or the car's anti-lock brake system or other things kick in or the software is different in one version of the car you retrofit versus another
and you don't know because the manufacturer has updated it behind the scenes there's basically an infinite list of long-tail issues that can get you and if you're dealing with a safety critical product that's not really acceptable that's a really convincing summary of why it's really challenging but I didn't know that at the time so we tried it anyway but as a pitch at the time it's a really strong one yes that's how you achieve scale and that's how you beat the leader at the time Google the only one in the market the other big problem we ran into which is perhaps the biggest problem from a business model perspective is we started with an Audi S4 as the vehicle we retrofitted with this highway driving capability and we had kind of assumed that if we just knock out like three makes and models of vehicle that'll cover like eighty percent of the San Francisco market doesn't everyone there drive I don't know a BMW or a Honda Civic or one of these three cars and then we surveyed our users and found out that it's all over the place to get even a decent number of units sold we'd have to support like 20 or 50 different models and each one is a little butterfly that takes time and effort to maintain that retrofit integration and custom hardware and all this so it's a tough business so GM manufactures and sells over nine million cars a year and with cruise you're trying to do some of the most cutting-edge innovation in terms of applying AI so how do those fit together you've talked about it a little bit before but it's also just fascinating to me the difference the gap between Detroit and Silicon Valley let's say just to be sort of poetic about it I guess how do you close that gap how do you take GM into the future where a large part of the fleet would be autonomous perhaps I want to start by acknowledging that GM is made up of
tens of thousands of really brilliant motivated people who want to be a part of the future and so it's pretty fun to work with the attitude inside a car company like that is embracing this transformation and change rather than fearing it and I think that's a testament to the leadership at GM and that's flowed all the way through to everyone you talk to even the people in the plants working on these cars so that's really great and starting from that position makes things a lot easier so then when the people in San Francisco at cruise interact with the people at GM at least we have this common set of values which is that we really want this stuff to work because we think it's important and we think it's the future not to say those two cultures don't clash they absolutely do there are different sort of value systems like in a car company the thing that gets you promoted and so the reward system is following the processes delivering the program on time and on budget so any sort of risk-taking is discouraged in many ways because if a program is late or if you shut down the plant for a day you can count the millions of dollars burned pretty quickly whereas at most Silicon Valley companies and at cruise in the methodology we were employing especially around the time of the acquisition the reward structure is about trying to solve these complex problems in any way shape or form or coming up with crazy ideas where 90% of them won't work and so meshing that culture of sort of continuous improvement and experimentation with one where everything needs to be rigorously defined upfront so that you never slip a deadline or miss a budget was a pretty big challenge we're over three years in now after the acquisition and I'd say the investment we made in figuring out how to work together successfully and who should do what and how we bridge the gaps between
these very different systems and ways of doing engineering work is now one of our greatest assets because I think we have this really powerful thing but for a while both GM and cruise were very steep on the learning curve yes I'm sure it was very stressful it's really important work because that's how you revolutionize transportation really to revolutionize any system you look at the healthcare system or the legal system I have people like lawyers come up to me all the time saying everything they're working on can easily be automated and that's not a good feeling yeah it's not a good feeling but also there's no way to automate it because the entire infrastructure is older and it moves very slowly and so how do you close the gap of course lawyers don't want to be replaced with an app but you could replace a lot of aspects of the work and yet most of the data is still on paper and the same thing with automotive I mean it's fundamentally software so it's basically hiring software engineers it's thinking in a software world I mean I'm pretty sure nobody in Silicon Valley has ever hit a deadline that's probably true yeah and GM is probably the opposite yeah so that culture gap is really fascinating so you're optimistic about the future of that yeah I mean from what I've seen it's impressive and I think especially in Silicon Valley it's easy to write off building cars because people have been doing that for over a hundred years now in this country and so it seems like that's a solved problem but that doesn't mean it's an easy problem and I think it would be easy to overlook that and think we're Silicon Valley engineers we can solve any problem building a car has been done therefore it's not a real engineering challenge but after having seen just the sheer scale and
magnitude and industrialization that occurs inside of an automotive assembly plant that is a lot of work that I am very glad we don't have to reinvent to make self-driving cars work and so to have partners who have done that for a hundred years with these great processes and this huge infrastructure and supply base that we can tap into is just remarkable because the scope and surface area of the problem of deploying fleets of self-driving cars is so large that we're constantly looking for ways to do less so we can focus on the things that really matter and if we had to figure out how to build and assemble and test the cars themselves I mean we work closely with GM on that but if we had to develop all that capability in-house as well that would just make the problem really intractable I think so yeah just like your first entry the MIT DARPA challenge entry where the motor failed if somebody that knows what they're doing with motors had done it that would have been nice so you could focus on the software and not the hardware platform yeah right so from your perspective now there's so many ways that autonomous vehicles can impact society in the next year five years ten years what do you think is the biggest opportunity to make money in autonomous driving sort of make it a financially viable thing in the near-term what do you think would be the biggest impact there well for the things that drive the economics for fleets of self-driving cars there are sort of a handful of variables one is the cost to build the vehicle itself so the material cost what's the cost of all your sensors plus the cost of the vehicle and all the other components on it another one is the lifetime of the vehicle it's very different if your vehicle drives one hundred thousand miles and then falls apart versus two million and then if you have a fleet it's kind of like an
airplane or an airline where once you produce the vehicle you want it to be in operation as many hours a day as possible producing revenue and then the other piece of that is how are you generating revenue which I think is kind of what you're asking and I think the obvious thing today is the ride-sharing business because it's pretty clear that there's demand for that and there are existing markets you can tap into in larger urban areas that kind of thing yeah and I think there are some real benefits to having cars without drivers compared to the status quo for people who use ride share services today you get privacy consistency hopefully significantly improved safety all these benefits versus the current product but it's a crowded market and then other opportunities where you've seen a lot of activity really in the last six to twelve months are delivery whether that's parcels and packages food or groceries those are all I think opportunities that are pretty ripe once you have this core technology which is the fleet of autonomous vehicles there's all sorts of different business opportunities you can build on top of that but I think the important thing of course is that there's zero monetization opportunity until you actually have that fleet of very capable driverless cars that are as good or better than humans and that's where the entire industry is sort of in this holding pattern right now yeah until the industry achieves that baseline but you said reliability consistency it's kind of interesting I think I heard you say somewhere I'm not sure if that's what you meant but I can imagine a situation where you would get an autonomous vehicle and when you get into an uber or lyft you don't get to choose the driver in the sense that you don't get to choose the personality of the driving do you think there's
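The variables he lists, the cost to build the vehicle, its lifetime, and how it generates revenue in operation, can be dropped into a toy calculation to show why vehicle lifetime dominates fleet economics (all numbers here are hypothetical):

```python
def lifetime_profit(build_cost, lifetime_miles,
                    revenue_per_mile, operating_cost_per_mile):
    """Toy fleet-economics sketch: the lifetime profit of one fleet
    vehicle is its per-mile margin over all usable miles minus the
    upfront build cost. All inputs are hypothetical."""
    margin = revenue_per_mile - operating_cost_per_mile
    return lifetime_miles * margin - build_cost

# same hypothetical vehicle, two lifetimes: 100k miles vs 1M miles
short_lived = lifetime_profit(150_000, 100_000, 1.00, 0.50)
long_lived = lifetime_profit(150_000, 1_000_000, 1.00, 0.50)
print(short_lived, long_lived)  # -100000.0 vs 350000.0
```

Under these made-up numbers the short-lived vehicle never recovers its build cost, which is the airline-style utilization point he is making.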
room to define the personality of the car the way it drives you in terms of aggressiveness for example in terms of sort of pushing the boundary one of the biggest challenges in autonomous driving is the trade-off between safety and aggressiveness do you think there's any room for the human to take a role in that decision to accept the liability I guess no I'd say within reasonable bounds as in I think it'd be highly unlikely we'd expose any knob that would let you significantly increase safety risk I think that's just not something we'd be willing to do but I think driving style like are you gonna relax the comfort constraints slightly or things like that all of those things make sense and are plausible I see all those as nice optimizations once we get the core problem solved and these fleets out there but the other thing we've sort of observed is that you have this intuition that if you slam your foot on the gas right after the light turns green and aggressively accelerate you're gonna get there faster but the actual impact of doing that is pretty small you feel like you're getting there faster so the same would be true for AVs even if they don't slam the pedal to the floor when the light turns green they're gonna get you there within if it's a 15-minute trip within 30 seconds of what you would have done otherwise if you were driving really aggressively so I think there's this sort of self-deception that my aggressive driving style is getting me there faster well that's some of the stuff I study I'm fascinated by the psychology of that I don't think it matters that it doesn't get you there faster it's the emotional release driving being inside a car somebody said it's like the real world version of being a troll so you have this protection this mental protection you're able to sort of yell at the
world like release your anger whatever it is but so there's an element of that that I think autonomous vehicles would also have to you know give an outlet to people but it doesn't have to be through driving or honking or so on there might be other outlets but I think to just sort of even just put that aside the baseline is really you know that's the focus that's the thing you need to solve and then the fun human things can be solved after but so from the baseline of just solving autonomous driving and you're working in San Francisco one of the more difficult cities to operate in what is in your view currently the hardest aspect of autonomous driving is it negotiating with pedestrians is it edge cases of perception is it planning is it mechanical engineering is it data fleet stuff like what are your thoughts on the more challenging aspects there that's a good question I think before we go to that though I just wanted to say I like what you said about the psychology aspect of this because I think one observation I made is I think I read somewhere that maybe Americans on average spend you know over an hour a day on social media like staring at Facebook and so that's just you know 60 minutes of your life you're not getting back and it's probably not super productive and so that's 3,600 seconds right and that's time you know it's a lot of time you're giving up and if you compare that to people being on the road if another vehicle whether it's a human driver or autonomous vehicle delays them by even three seconds they're leaning on the horn you know even though that's you know one one-thousandth of the time they waste looking at Facebook every day so there's definitely some you know psychology aspects of this I think that are pretty interesting road rage in general and then the question of course is if everyone is in self-driving cars do they even notice these
three-second delays anymore because they're doing other things or reading or working or just talking to each other so it'll be interesting to see where that goes in a certain aspect people need to be distracted by something entertaining something useful inside the car so they don't pay attention to the external world and then it can take whatever psychology and bring it back to Twitter and then focus on that as opposed to sort of interacting sort of putting the emotion out there into the world so it's an interesting problem but baseline autonomy I guess you could say self-driving cars you know at scale will lower the collective blood pressure of society probably by a couple points yeah without all that road rage and stress so that's a good externality so back to your question about the technology and the I guess the biggest problems I have a hard time answering that question because you know we've been at this like specifically focusing on driverless cars and all the technology needed to enable that for a little over four and a half years now and even a year or two in I felt like we had completed the functionality needed to get someone from point A to point B as in if we need to do a left turn maneuver or if we need to drive around a you know a double parked vehicle into oncoming traffic or navigate through construction zones the scaffolding and the building blocks were there pretty early on and so the challenge is not any one scenario or situation for which you know we fail at 100% of those it's more you know we're benchmarking against a pretty good or pretty high standard which is human driving all things considered humans are excellent at handling the edge cases and unexpected scenarios whereas computers are the opposite and so beating that baseline set by humans is the challenge and so what we've been doing for quite some time now is basically it's this continuous improvement process where we find sort of the
the most you know uncomfortable things or the things that could lead to a safety issue or other things all these events and then we sort of categorize them and rework parts of our system to make incremental improvements and do that over and over and over again and we just see sort of the overall performance of the system you know actually increasing at a pretty steady clip but there's no one thing there's actually like thousands of little things and just like polishing functionality and making sure that it handles you know every possible permutation of a situation by either applying more deep learning systems or just by you know adding more test coverage or new scenarios that we develop against and just grinding on that we're sort of in the unsexy phase of development right now which is doing the real engineering work that it takes to go from prototype to production you're basically scaling the grinding so sort of taking seriously the process of covering all those edge cases both with human experts and machine learning methods to cover all those situations yeah and the exciting thing for me is I don't think that grinding ever stops right because there's a moment in time where you cross that threshold of human performance and become superhuman but there's no reason there's no first principles reason that AV capability will tap out anywhere near humans like there's no reason it couldn't be 20 times better whether that's you know just better driving or safer driving or more comfortable driving or even a thousand times better given enough time and we intend to basically chase that you know forever to build the best possible product better and better and better and always new edge cases come up and new experiences so and you want to automate that process as much as possible mhm so what do you think in general in society when do you think we may have hundreds of thousands of fully autonomous vehicles driving around so first of all
predictions nobody knows the future you're part of the leading people trying to define that future but even then you still don't know but if you think about hundreds of thousands so a significant fraction of vehicles in major cities are autonomous are you with Rodney Brooks who says 2050 and beyond or are you more with Elon Musk who says we should have had that two years ago well I mean I would have loved to have it two years ago but we're not there yet so I guess the way I would think about that is let's flip that question around so what would prevent us from reaching hundreds of thousands of vehicles and that's a good rephrasing yeah so I'd say it seems the consensus among the people developing self-driving cars today is to sort of start with some form of an easier environment whether it means you know lacking inclement weather or you know mostly sunny or whatever it is and then add capability for more complex situations over time and so if you're only able to deploy in areas that meet sort of your criteria or the current operating domain of the software you've developed that may put a cap on how many cities you could deploy in but then as those restrictions start to fall away like maybe you add you know capability to drive really well and safely in heavy rain or snow you know that probably opens up the market by two or three fold in terms of the cities you can expand into and so on and so the real question is you know I know today if we wanted to we could produce that many autonomous vehicles but we wouldn't be able to make use of all of them yet because we would sort of saturate the demand in the cities in which we would want to operate initially so if I were to guess what the timeline is for those things falling away and reaching hundreds of thousands of vehicles maybe a range is safer but I would say less than five years less than five years yeah and of course you're
working hard to make that happen so you started two companies that were eventually acquired each for a billion dollars so you're a pretty good person to ask what does it take to build a successful startup mmm-hmm I think there's sort of survivorship bias here a little bit but I can try to find some common threads for the things that worked for me which is you know in both of these companies I was really passionate about the core technology I actually like you know lay awake at night thinking about these problems and how to solve them and I think that's helpful because when you start a business there are like to this day these crazy ups and downs like one day you think the business is... you're just on top of the world and unstoppable and the next day you think okay this is all gonna end and you know it's just going south and it's gonna be over tomorrow and so I think like having a true passion that you can fall back on and knowing that you would be doing it even if you weren't getting paid for it helps you weather those tough times so that's one thing I think the other one is really good people so I've always been surrounded by really good co-founders that are logical thinkers are always pushing their limits and have very high levels of integrity so that's Dan Kan in my current company and actually his brother and a couple other guys for Justin.tv and Twitch and then I think the last thing is just uh I guess persistence or perseverance and that can apply to sticking to or having conviction around the original premise of your idea and sticking around to do all the you know the unsexy work to actually make it come to fruition including dealing with you know whatever it is that you're not passionate about whether that's finance or HR or operations as long as you are grinding away and working towards you know that North Star for your business whatever it is and
you don't give up and you're making progress every day it seems like eventually you'll end up in a good place and the only things that can slow you down are you know running out of money or I suppose your competitors destroying you but I think most of the time it's people giving up or somehow destroying things themselves rather than being beaten by their competition or running out of money yeah if you never quit eventually you'll arrive the more concise version of what I was trying to say yeah so you went through Y Combinator twice yeah what do you think as a quick question what do you think is the best way to raise funds in the early days or not just funds but just community develop your idea and so on can you do it solo or maybe with a co-founder like self-funded do you think Y Combinator is good is it good to do the VC route is there no right answer or was there from the Y Combinator experience something that you could take away that that was the right path to take there's no one-size-fits-all answer but if your ambition I think is to you know see how big you can make something or rapidly expand and capture a market or solve a problem or whatever it is then you know going the venture-backed route is probably a good approach so that capital doesn't become your primary constraint Y Combinator I love because it puts you in this sort of competitive environment where you're surrounded by you know the top maybe one percent of other really highly motivated you know peers who are in the same place and that environment I think just breeds success right if you're surrounded by really brilliant hard-working people you're gonna feel you know sort of compelled or inspired to try to emulate them and/or beat them and so even though I had done it once before and I felt like yeah I'm pretty self-motivated I thought like look this is gonna be a hard problem I can use all the help I can get so surrounding myself with other entrepreneurs is
gonna make me work a little bit harder or push a little harder then it's worth it and so that's why I did it you know for example a second time let's go philosophical existential if you'd go back and do something differently in your life starting in high school then MIT leaving MIT you could have gone the PhD route doing a startup Y Combinator with a start-up in California or maybe some aspects of fundraising is there something you regret something you'd not necessarily regret but if you could go back would do differently I think I've made a lot of mistakes like you know pretty much everything you can screw up I think I've screwed up at least once but you know I don't regret those things I think it's hard to look back on things even if they didn't go well and call it a regret because hopefully I took away some new knowledge or learning from that so I would say the closest I can come to it is there was a period in justin.tv I think after seven years where the company was going one direction which is towards Twitch and video gaming and I'm not a video gamer I don't really even use Twitch at all and I was still working on the core technology there but my heart was no longer in it because the business that we were creating was not something that I was personally passionate about it didn't meet your bar of existential impact yeah and I'd say I probably spent an extra year or two working on that and I would have just tried to do something different sooner because those were two years where I felt like you know from this philosophical or existential thing I just felt something was missing and so if I could look back now and tell myself I would have said exactly that like you're not getting any meaning out of your work personally right now you should find a way to change that and that's part of the pitch I use to basically everyone
who joins Cruise today it's like hey you've got that now by coming here well maybe you needed the two years of that existential dread to develop the feeling that ultimately was the fire that created Cruise so you never know could be good theory yeah so last question what does 2019 hold for Cruise after this I guess we're gonna go and I'll talk to your class but one of the big things is going from prototype to production for autonomous cars and what does that mean what does that look like in 2019 for us is the year that we try to cross over that threshold and reach you know superhuman level of performance to some degree with the software and have all the other thousands of little building blocks in place to launch you know our first commercial product so that's what's in store for us and we've got a lot of work to do we've got a lot of brilliant people working on it so it's all up to us now yeah from Charlie Miller and Chris Valasek the people I have crossed paths with you know it sounds like you have an amazing team and like I said it's I think one of the most important problems in artificial intelligence of the century it'll be one of the most defining it's super exciting that you work on it and the best of luck in 2019 I'm really excited to see what Cruise comes up with thank you thanks for having me today nice cool
Tomaso Poggio: Brains, Minds, and Machines | Lex Fridman Podcast #13
the following is a conversation with Tomaso Poggio he's a professor at MIT and is the director of the Center for Brains Minds and Machines cited over 100,000 times his work has had a profound impact on our understanding of the nature of intelligence in both biological and artificial neural networks he has been an advisor to many highly impactful researchers and entrepreneurs in AI including Demis Hassabis of DeepMind Amnon Shashua of Mobileye and Christof Koch of the Allen Institute for Brain Science this conversation is part of the MIT course on artificial general intelligence and the artificial intelligence podcast if you enjoy it subscribe on youtube itunes or simply connect with me on twitter at Lex Friedman spelled F-R-I-D and now here's my conversation with Tomaso Poggio you've mentioned that in your childhood you developed a fascination with physics especially the theory of relativity and that Einstein was also a childhood hero to you what aspect of Einstein's genius the nature of his genius do you think was essential for discovering the theory of relativity you know Einstein was a hero to me and I'm sure to many people because he was able to make of course a major major contribution to physics with simplifying a bit just a Gedanken experiment a thought experiment you know imagining communication with light between a stationary observer and somebody on a train and I thought you know the fact that just with the force of his thought of his thinking of his mind he could get to something so deep in terms of physical reality how time depends on space and speed it was something absolutely fascinating it was the power of intelligence the power of the mind do you think the ability to imagine to visualize as he did as a lot of great physicists do do you think that's in all of us human beings or is there something special to that one particular human being I think you know all of us can learn and have in principle similar breakthroughs there are lessons to be
learned from Einstein he was one of five PhD students at ETH the Eidgenössische Technische Hochschule in Zurich in physics and he was the worst of the five the only one who did not get an academic position when he graduated well finished his PhD and he went to work as everybody knows for the Patent Office and so it's not so much the work for the Patent Office but the fact that obviously he was smart but he was not the top student obviously he was the anti-conformist he was not thinking in the traditional way that probably his teachers and the other students were so there is a lot to be said about you know trying to do the opposite or something quite different from what other people are doing that's actually true for the stock market never buy if everybody's buying and also true for science yes so you've also mentioned staying on a theme of physics that you were excited at a young age by the mysteries of the universe that physics could uncover such as as you mentioned the possibility of time travel so the most out-of-the-box question I think I'll get to ask today do you think time travel is possible well it would be nice if it were possible right now you know in science you never say no but from your understanding of the nature of time yeah it's very likely that it's not possible to travel in time you may be able to travel forward in time if we can for instance freeze ourselves or you know go on some spacecraft traveling close to the speed of light but in terms of actively traveling for instance back in time I find it probably very unlikely so do you still hold the underlying dream of engineering intelligence that will build systems that are able to do such huge leaps like discovering the kind of mechanism that would be required to travel through time do you still hold that dream or are there echoes of it from your childhood yeah you know I think there are certain problems that probably cannot be solved depending on what you believe about the
physical reality like you know it may be totally impossible to create energy from nothing or to travel back in time but about making machines that can think as well as we do or better or more likely especially in the short and midterm help us think better which in a sense is happening already with the computers we have and it will happen more and more but that I certainly believe and I don't see in principle why computers at some point could not become more intelligent than we are although the word intelligence is a tricky one and one we should discuss what I mean with that intelligence consciousness yeah words like love all these are very you know they need to be disentangled so you've mentioned also that you believe the problem of intelligence is the greatest problem in science greater than the origin of life and the origin of the universe you've also in the talk I've listened to said that you're open to arguments against you so what do you think is the most captivating aspect of this problem of understanding the nature of intelligence why does it captivate you as it does well originally I think one of the motivations that I had as I guess a teenager when I was infatuated with the theory of relativity was really that I found that there was the problem of time and space and general relativity but there were so many other problems of the same level of difficulty and importance that even if I were Einstein it was difficult to hope to solve all of them so what about solving a problem whose solution allowed me to solve all the problems and this was what if we could find the key to an intelligence you know ten times better or faster than Einstein so that's sort of seeing artificial intelligence as a tool to expand our capabilities but is there just an inherent curiosity in you in just understanding what it is in here that makes it all work yes absolutely right so I started saying this was the motivation when I was a
teenager but you know soon after I think the problem of human intelligence became the real focus of my science and my research because I think it's for me the most interesting problem it's really asking who we are right it's asking not only a question about science but even about the very tool we are using to do science which is our brain how does our brain work where does it come from what are its limitations can we make it better and that in many ways is the ultimate question that underlies this whole effort of science so you've made significant contributions in both the science of intelligence and the engineering of it in a hypothetical way let me ask how far do you think we can get in creating intelligent systems without understanding the biological the understanding of how the human brain creates intelligence put another way do you think we can build a strong AI system without really getting at the core the functional understanding of the nature of the brain well this is a real difficult question you know we did solve problems like flying without really using too much our knowledge about how birds fly it was important I guess to know that you could have things heavier than air being able to fly like birds but beyond that probably we did not learn very much you know the Wright brothers did learn a lot from observation of birds in designing their aircraft but you know you can argue we did not use much of biology in that particular case now in the case of intelligence I think that it's a bit of a bet right now if you ask okay we all agree we'll get at some point maybe soon maybe later to a machine that is indistinguishable from my secretary say in terms of what I can ask the machine to do I think we'll get there and now the question is and you can ask people do you think we'll get there without any knowledge about you know the human brain or is the best way to get there to understand
better the human brain yeah okay this is I think an educated bet that different people with different backgrounds will decide in different ways the recent history of the progress in AI in the last say five years or ten years has been that the main breakthroughs the main recent breakthroughs really start from neuroscience I'll mention reinforcement learning as one it's one of the algorithms at the core of AlphaGo which is the system that beat the kind of official world champion of Go Lee Sedol two three years ago in Seoul that's one and that started with the work of Pavlov around a hundred years ago Marvin Minsky in the sixties and many other neuroscientists later on and deep learning started which is the core again of AlphaGo and systems like autonomous driving systems for cars like the systems that Mobileye which is a company started by one of my ex-postdocs Amnon Shashua did so that is at the core of those things and deep learning really the initial ideas in terms of the architecture of these layered hierarchical networks started with the work of Torsten Wiesel and David Hubel at Harvard up the river in the 60s so recent history suggests that neuroscience played a big role in these breakthroughs my personal bet is that there is a good chance they continue to play a big role maybe not in all the future breakthroughs but in some of them at least in inspiration so at least in inspiration absolutely yes so you studied both artificial and biological neural networks you said these mechanisms that underlie deep learning and reinforcement learning but there are nevertheless significant differences between biological and artificial neural networks as they stand now so between the two what do you find is the most interesting mysterious maybe even beautiful difference as it currently stands in our understanding I must confess that until recently I found the artificial networks too simplistic relative to real neural networks but you know recently I've started to
think that yes they are a very big simplification of what you find in the brain but on the other hand they are much closer in terms of the architecture to the brain than other models that we had that computer science used as models of thinking which were mathematical logic you know Lisp Prolog and those kinds of things yeah so in comparison to those they're much closer to the brain you have networks of neurons which is what the brain is about and the artificial neurons in the models are as I said caricatures of the biological neurons but they're still neurons single units communicating with other units something that is absent in you know the traditional computer-type models of mathematical reasoning and so on so what aspects would you like to see in artificial neural networks added over time as we try to figure out ways to improve them so one of the main differences and you know problems in terms of deep learning today and it's not only deep learning between deep learning and the brain is the need for deep learning techniques to have a lot of labeled examples you know for instance for ImageNet you have a training set which is 1 million images each one labeled by some human in terms of which object is there and it's clear that in biology a baby may be able to see millions of images in the first years of life but will not have millions of labels given to him or her by parents or caretakers so how do you solve that you know I think there is this interesting challenge that today deep learning and related techniques are all about big data big data meaning a lot of examples labeled by humans whereas in nature you have... so this big data is n going to infinity that's the best you know n meaning labeled data but I think the biological world is more n going to one a child can learn from a very small number of you know labeled examples like you tell a child this is a car you don't need to say like ImageNet you know this is a car this is a car this is
not a car this is not a car 1 million times so and of course with AlphaGo or at least the AlphaZero variants because the world of Go is so simplistic you can actually learn by yourself through self-play you can play against yourself and the real world I mean the visual system that you've studied extensively is a lot more complicated than the game of Go so on that comment about children which are fascinatingly good at learning new stuff how much of it do you think is hardware and how much of it is software you know that's a good deep question it's in a sense the old question of nature and nurture how much is in the genes and how much is in the experience of an individual obviously it's both that play a role and I believe that the way evolution works is to put prior information so to speak hardwired it's not really hardwired but that's essentially a hypothesis I think what's going on is that evolution as you know almost necessarily if you believe in Darwin is very opportunistic and think about our DNA and the DNA of Drosophila our DNA does not have many more genes than Drosophila the fly the fruit fly now we know that the fruit fly does not learn very much during its individual existence it looks like one of these machines that is really mostly not a hundred percent but you know 95 percent hardcoded by the genes but since we don't have many more genes than Drosophila evolution must have encoded in us a kind of general learning machinery and then had to give very weak priors like for instance let me give a specific example which is recent work by a member of our Center for Brains Minds and Machines we know because of work of other people in our group and other groups that there are cells in a part of our brain neurons that are tuned to faces they seem to be involved in face recognition now this face area seems to be present in young children and adults and one question is is it there from the beginning is it hardwired
by evolution or you know somehow is it learned very quickly so what's your... by the way a lot of the questions I'm asking the answer is we don't really know but as a person who has contributed some profound ideas in these fields you're a good person to guess at some of these so of course there's a caveat before a lot of the stuff we talk about but what is your hunch is the face the part of the brain that seems to be concentrated on face recognition are you born with that or is it just designed to learn that quickly like the face of the mother and my hunch my bias was the second one learned very quickly and it turns out that Marge Livingstone at Harvard has done some amazing experiments in which she raised baby monkeys depriving them of faces during the first weeks of life so they see technicians but the technicians have masks yes and so when they looked at the area in the brain of these monkeys where you usually find faces they found no face preference so my guess is that what evolution does in this case is there is an area which is plastic which is kind of predetermined to be imprinted very easily but the command from the genes is not the detailed circuitry for a face template it could be but this would require probably a lot of bits you'd have to specify a lot of connections of a lot of neurons instead the command from the genes is something like imprint memorize what you see most often in the first two weeks of life especially in connection with food and maybe nipples I don't know the source of food and so then that area is very plastic at first and less so later it'd be interesting if a variant of that experiment would show a different kind of pattern associated with food than a face pattern well it turns out there are indications that during that experiment what the monkeys saw quite often were the blue gloves of the technicians that were giving the baby monkeys the milk and some of the cells instead
of being face sensitive in that area are hand sensitive that's fascinating can you talk about what are the different parts of the brain in your view sort of loosely and how do they contribute to intelligence do you see the brain as a bunch of different modules that together combine in the human brain to create intelligence or is it all one mush of the same kind of fundamental architecture yeah that's you know that's an important question and there was a phase in neuroscience in the 1950s or so in which it was believed for a while that the brain was equipotential this was the term you could cut out a piece and nothing special happened apart from a little bit less performance there was a scientist Lashley who did a lot of experiments of this type with mice and rats and concluded that every part of the brain was essentially equivalent to any other one it turns out that's really not true there are very specific modules in the brain as you said and you know people may lose the ability to speak if they have a stroke in a certain region or may lose control of their legs in another region so they're very specific the brain is also quite flexible and redundant so often it can correct things and you know kind of take over functions from one part of the brain to the other but really there are specific modules so the answer that we know from this old work which was basically based on lesions either in animals or very often there was a mine of very interesting data coming from the war from different types of injuries that soldiers had in the brain and more recently functional MRI which allows you to check which parts of the brain are active when you are doing different tasks can as you know replace some of this you can see that certain parts of the brain are involved are active in say language yeah yeah that's right but sort of taking a step back to that part of the brain that discovers that specializes
in the face, and how that might be learned. What's your intuition? Is it possible that, sort of from a physicist's perspective, when you get lower and lower it's all the same stuff, and when you're born it's plastic and it quickly figures out this part is going to be about vision, this is going to be about language, this is about common sense reasoning? Do you have an intuition that that kind of learning is going on really quickly, or is it really kind of solidified in hardware? That's a great question. So there are parts of the brain, like the cerebellum or the hippocampus, that are quite different from each other; they clearly have different anatomy, different connectivity. Then there is the cortex, which is the most developed part of the brain in humans, and in the cortex you have different regions that are responsible for vision, for audition, for motor control, for language. Now, one of the big puzzles of this is that, in the cortex, the cortex is the cortex: it looks like it is the same in terms of hardware, in terms of types of neurons and connectivity, across these different modalities. So for the cortex, leaving aside these other parts of the brain like the spinal cord, hippocampus, cerebellum and so on, I think your question about hardware and software and learning is rather open, and I find it very interesting. It's easy to think that a computer architecture that is good for vision and one that is good for language would be, you know, so different, such different problem areas that you have to solve, but the underlying mechanism might be the same. That's really instructive for, maybe, artificial neural networks. So you've done a lot of great work in vision, in human vision, computer vision, and you mentioned the problem of human vision is really as difficult as the problem of general intelligence, and maybe that connects to the cortex discussion. Can you describe the human visual cortex and
how humans begin to understand the world through the raw sensory information? For folks not familiar, especially on the computer vision side, we don't often actually take a step back, except saying, with a sentence or two, that one is inspired by the other. What is it that we know about the human visual cortex that's interesting? So we know quite a bit; at the same time we don't know a lot. The bits we know, in a sense, we know a lot of the details, and many we don't know. We know a lot about the top-level questions, but we don't know the answers to some basic ones. Even in terms of general neuroscience, forgetting vision, you know, why do we sleep? It's such a basic question, and we really don't have an answer to that. Taking a step back on that, so sleep, for example, is fascinating, do you think that's a neuroscience question? Or, if we talk about abstractions, what do you think is an interesting way to study intelligence, or the most effective levels of abstraction, the chemical, the biological, electrophysical, mathematical, as you've done a lot of excellent work on that side, and psychological? Sort of, at which level of abstraction, do you think? Well, in terms of levels of abstraction, I think we need all of them. You know, it's like if you ask me what does it mean to understand a computer, right? That's much simpler. For a computer I could say, well, I understand how to use PowerPoint. That's my level of understanding a computer. It is reasonable, you know, it gives me some power to produce slides, beautiful slides. And now, at the other extreme, well, I know how the transistors work that are inside the computer, I can write the equations for transistors and diodes and circuits, logical circuits, and you can ask this guy, do you know how to operate PowerPoint? No idea. So do you think, if we discovered computers walking amongst us, full of these transistors that are also operating under Windows and have
PowerPoint, do you think, digging in a little bit more, how useful is it to understand the transistor in order to be able to understand PowerPoint and these higher levels of intelligence? I see, so I think in the case of computers, because they were made by engineers, by us, these different levels of understanding are rather separate on purpose. You know, there are separate modules, so that the engineer who designed the circuits for the chips does not need to know what is inside PowerPoint, and somebody can write the software translating from one level to the other. So in that case, I don't think understanding the transistor helps you understand PowerPoint, or very little. If you want to understand the computer, this question, I would say you have to understand it at different levels, if you really want to build one, right? But for the brain, I think these levels of understanding, so the algorithms, which kind of computation, you know, the equivalent of PowerPoint, and the circuits, you know, the transistors, I think they are much more intertwined with each other. There is not a neatly separate level of the software from the hardware, and so that's why I think in the case of the brain the problem is more difficult, more than for computers it requires the interaction, the collaboration, between different types of expertise. The brain is a big mess; you can't just disentangle a level. I think you can, but it is much more difficult, and it's not completely obvious, and as I said, I think it's, personally, the greatest problem in science. So yeah, I think it's fair that it's a difficult one. That said, you do talk about compositionality and why it might be useful, and when you discuss why these neural networks, in an artificial or biological sense, learn anything, you talk about compositionality. There's a sense that nature can be disentangled, or perhaps all aspects of our cognition could
be disentangled to some degree. So, first of all, how do you see compositionality, and why do you think it exists at all in nature? I spoke about, I used the term compositionality, when we looked at deep neural networks, multi-layers, and tried to understand when and why they are more powerful than more classical one-layer networks, like linear classifiers, kernel machines so-called. And what we found is that, in terms of approximating or learning or representing a function, a mapping from an input to an output, like from an image to the label of the image, if this function has a particular structure, then deep networks are much more powerful than shallow networks at approximating the underlying function. And the particular structure is a structure of compositionality: the function is made up of functions of functions, so that, when you are interpreting an image, classifying an image, you don't need to look at all pixels at once, but you can compute something from small groups of pixels, and then you can compute something on the outputs of this local computation, and so on. It is similar to what you do when you read a sentence: you don't need to read the first and the last letter, but you can read syllables, combine them into words, combine the words into sentences. So it is this kind of structure. So that's part of the discussion of why deep neural networks may be more effective than the shallow methods. And is your sense, for most things we can use neural networks for, those problems are going to be compositional in nature, like language, like vision? How far can we get in this kind of way? Right, so here it is almost philosophy. Well, you know, yeah, let's go there. So a friend of mine, Max Tegmark, who is a physicist at MIT, I've talked to him on this thing, yeah, and he disagrees with you, right? Yeah, but, you know, we agree on most things, but the conclusion is a bit different. His conclusion is that for images, for instance, the compositional structure of
this function that we have to learn to solve these problems comes from physics, comes from the fact that you have local interactions in physics, between atoms and other atoms, between particles of matter and other particles, between planets and other planets, between stars: it's all local. And that's true, but you could push this argument a bit further, not this argument, actually, you could argue that, you know, maybe that's part of the truth, but maybe what happens is kind of the opposite: our brain is wired up as a deep network, so it can learn, understand, solve problems that have this compositional structure, and it cannot solve problems that don't have this compositional structure. So the problems we are accustomed to, we think about, we test our algorithms on, have this compositional structure because our brain is made up that way. That's, in a sense, an evolutionary perspective: the ones that weren't dealing with the compositional nature of reality died off? Yes. It also could be, maybe, the reason why we have this local connectivity in the brain, like simple cells in cortex looking only at a small part of the image, each one of them, and other cells looking at a small number of these simple cells, and so on. The reason for this may be purely that it was difficult to grow longer-range connectivity. So suppose, you know, for biology it's possible to grow short-range connectivity but not longer-range, also because there is a limited number of long-range connections you can have. So you have this limitation from the biology, and this means you build something like a deep convolutional network, and this is great for solving certain classes of problems, the ones we find easy and important for our life, and yes, they were enough for us to survive. And you can start a successful business on solving those problems, right, Mobileye, driving is a compositional problem, right. So, on the learning task, I mean
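The compositional structure described here, a function made of functions of small local groups of inputs, can be sketched in a toy example (an editorial illustration; the layer and function names are invented for the sketch, not from the conversation):

```python
import numpy as np

def local_pairwise_layer(x):
    # Each output depends only on a small neighborhood (two adjacent inputs),
    # like a convolutional layer's local receptive field.
    return np.tanh(x[0::2] + x[1::2])

def compositional_function(x):
    # A "function of functions": stack local layers until one value remains,
    # so the global answer is built from a hierarchy of local computations.
    while x.size > 1:
        x = local_pairwise_layer(x)
    return float(x[0])

pixels = np.linspace(0.0, 1.0, 8)       # 8 toy "pixels"
value = compositional_function(pixels)  # computed in log2(8) = 3 local stages
```

A shallow approximator would have to treat all eight inputs at once; the deep version only ever combines neighbors, which is the structural property that, in this argument, lets deep networks match such functions efficiently.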
we don't know much about how the brain learns in terms of optimization, but the thing is, stochastic gradient descent is what artificial neural networks use, for the most part, to adjust the parameters in such a way that, based on the labeled data, the network is able to solve the problem. Yeah. So what's your intuition about why it works at all, how hard of a problem it is to optimize a neural network, an artificial neural network? Are there other alternatives? Just in general, what is your intuition behind this very simplistic algorithm that seems to do surprisingly well? Yes, yes. So I find in neuroscience that the architecture of cortex is really similar to the architecture of deep networks, so there is a nice correspondence there between the biology and this kind of local connectivity, hierarchical architecture. Stochastic gradient descent, as you said, is a very simple technique. It seems pretty unlikely that biology could do that, from what we know right now about cortex and neurons and synapses. So it's a big open question whether there are other optimization learning algorithms that can replace stochastic gradient descent, and my guess is yes, but nobody has found a real answer yet. I mean, people are still trying, and there are some interesting ideas. The fact that stochastic gradient descent is so successful, this has become clear, is not so mysterious, and the reason is an interesting fact, you know, a change, in a sense, in how people think about statistics. And it is the following: typically, when you had data and you had, say, a model with parameters, and you were trying to fit the model to the data, to fit the parameters, the kind of crowd-wisdom idea was that you should have at least, you know, twice the number of data points as the number of parameters, maybe ten times is better. Now, the way you train neural networks these days is that they have 10 or 100 times more parameters than data,
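Stochastic gradient descent itself, the "very simplistic algorithm" under discussion, fits in a few lines; a minimal sketch on a one-parameter model (an illustrative toy with made-up data, not anything from the conversation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled data: y = 3*x plus a little noise.
x = rng.uniform(-1.0, 1.0, size=200)
y = 3.0 * x + 0.01 * rng.normal(size=200)

w = 0.0    # the single parameter to adjust
lr = 0.1   # learning rate

for epoch in range(20):
    for i in rng.permutation(len(x)):          # "stochastic": one sample at a time
        grad = 2.0 * (w * x[i] - y[i]) * x[i]  # gradient of the squared error
        w -= lr * grad                         # step against the gradient

# w is now close to the true slope 3.0
```

Each step needs only the gradient on a single example, which is what makes the method cheap enough to run on millions of parameters, and what makes its biological implausibility, noted above, a separate question from its practical success.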
exactly the opposite. And it has been one of the puzzles about neural networks: how can you get something that really works, that learns and generalizes, when you have so much freedom? Right, somehow. Right, exactly. Do you think the stochastic nature, the randomness, is essential? So I think we have some initial understanding of why this happens, but one nice side effect of having this over-parameterization, more parameters than data, is that when you look for the minima of a loss function, like stochastic gradient descent is doing, you find, I made some calculations based on an old basic theorem of algebra called Bezout's theorem, which gives you an estimate of the number of solutions of a system of polynomial equations. Anyway, the bottom line is that there are probably more minima for a typical deep network than atoms in the universe, just to say there are lots, because of the over-parameterization. More global minima, zero minima, good minima, so it's not just local minima? Yeah, a lot of them. So you have a lot of solutions, so it's not so surprising that you can find them relatively easily, and this is because of the over-parameterization. The over-parameterization sprinkles the entire space with solutions that are pretty good. And so it's not so surprising, right? It's like, you know, if you have a system of linear equations and you have more unknowns than equations, then, we know, you have an infinite number of solutions, and the question of picking one is another story, but you have an infinite number of solutions, a lot of values of your unknowns that satisfy the equations. But it's possible that a lot of those solutions aren't very good. What's surprising... so that's a good question: why can you pick one that generalizes? Yeah, that's a separate question with separate answers. One theorem that people like to talk about, that kind of inspires imagination of the power of neural networks, is the universality, the universal approximation
theorem: you can approximate any computable function with just a finite number of neurons in a single hidden layer. Do you find this theorem surprising? Do you find it useful, interesting, inspiring? No, this one, you know, I never found very surprising. It was known since the '80s, since I entered the field, because it's basically the same as the Weierstrass theorem, which says that I can approximate any continuous function with a polynomial with a sufficient number of terms, monomials. So it's basically the same, and the proofs are very similar. So your intuition was, there was never any doubt that neural networks in theory could be very strong, could approximate nicely? Right. The interesting question is this: the theorem says you can approximate, fine, but when you ask how many neurons, for instance, or, in the case of polynomials, how many monomials, I need to get a good approximation, then it turns out that depends on the dimensionality of your function, how many variables you have, and it depends on the dimensionality of your function in a bad way. For instance, suppose you want an error which is no worse than 10% in your approximation, you come up with a net that approximates your function within 10%. Then it turns out that the number of units you need is on the order of 10 to the dimensionality d, to the number of variables. So if you have, you know, two variables, d is 2, you would have a hundred units, and okay. But if you have, say, 200 by 200 pixel images, now d is 40,000 or whatever, and you exceed the number of atoms in the universe pretty quickly: it is 10 to the 40,000. And so this is called the curse of dimensionality, a, you know, quite appropriate name. And the hope is, with the extra layers, you can escape the curse? What we proved is that if you have a deep-layer, hierarchical architecture with local connectivity, of the type of convolutional deep learning, and if you are dealing with a function that has this kind of hierarchical structure, then you avoid
completely the curse. You've spoken a lot about supervised deep learning. What are your thoughts, hopes, views on the challenges of unsupervised learning, with GANs, with generative adversarial networks? Do you see those as distinct, the power of GANs, is it distinct from supervised methods in neural networks, or are they really all in the same representation ballpark? GANs are one way to get an estimation of probability densities, which is a somewhat new way that people had not done before, but I don't know whether this will really play an important role in, you know, in intelligence. It's interesting, I'm less enthusiastic about it than many people in the field. I have the feeling that many people in the field are really impressed by the ability to produce realistic-looking images in this generative way, which explains the popularity of the method. But you're saying that, while that's exciting and cool to look at, it may not be the tool that's useful for... yeah. So, you described it kind of beautifully: current supervised methods require n going to infinity in terms of the number of labeled points, and we really have to figure out how to go to an n of one. Yeah. And you're thinking GANs might help, but they might not be the right... I don't think so, for that problem, which I really think is important. I think they may help, they certainly have applications, for instance in computer graphics. And, you know, I did work long ago which was a little bit similar, in terms of saying, okay, I have a network and I present images, so the input is images and the output is, for instance, the pose of the image, you know, a face, how much it is smiling, whether it is rotated 45 degrees or not. What about having a network that I train with the same data set, but now I invert input and output? Now the input is the pose or the expression, certain numbers, and the output is the image, and I train it. And we got pretty interesting results in terms of producing very realistic-looking images. It was, you know, a less
sophisticated mechanism, less sophisticated than GANs, but the output was pretty much of the same quality. So I think for computer graphics type applications, yeah, definitely, GANs can be quite useful, and not only for that, but also for, you know, helping, for instance, on this problem of unsupervised learning, of reducing the number of labeled examples. I think people, it's like they think they can get out more than they put in, you know, but there's no free lunch. Yeah, right. What do you think, what's your intuition, how can we slow the growth of n to infinity in supervised learning? So, for example, Mobileye has very successfully, I mean, essentially annotated large amounts of data to be able to drive a car. Now, one thought is, we're trying to teach machines, AI, so how can we become better teachers? Maybe that's one way. Yeah, you know, I like that, because, again, one caricature of the history of computer science you could say is: in the beginning, programmers were expensive; these days, labelers are cheap; and the future will be schools, like we have for kids. Yeah, currently the labeling methods are not selective about which examples we teach networks with. So, I think the focus of making one-shot networks that learn much faster is often on the architecture side, but how can we pick better examples with which to learn? Do you have intuitions about that? Well, that's part of the program. But the other thing is, you know, if we look at biology, a reasonable assumption, I think, in the same spirit that I said evolution is opportunistic, is that it has weak priors. The way I think the intelligence of a child, a baby, may develop is by bootstrapping weak priors from evolution. For instance, you can assume that most organisms, including human babies, have built in some basic machinery to detect motion and relative motion. And in fact, you know, we know all insects, from fruit flies to other
animals, they have this, even in the retina, in the very peripheral part. It's very conserved across species, something that evolution discovered early. It may be the reason why babies tend to look, in the first few days, at moving objects and not at non-moving objects. Now, moving objects means, okay, they are attracted by motion, but motion also means that motion gives automatic segmentation from the background: because of motion boundaries, you know, either the object is moving, or the eye of the baby is tracking the moving object and the background is moving. Right, so just purely the visual characteristics of the scene, that seems to be the most useful. Right, so it's like looking at an object without a background; it's ideal for learning the object, otherwise it's really difficult, because you have so much stuff. So suppose you do this in the beginning, the first weeks; then, after that, you can recognize the object, now it's imprinted, even in the background, even without motion. By the way, I just want to ask, on the object recognition problem: so there is being responsive to movement, and edge detection, essentially. What's the gap between being effective at visually recognizing stuff, detecting what it is, and understanding the scene? Is there a huge gap, in many layers, or is it close? No, I think there's a huge gap. I think present algorithms, with all the successes that we have, and the fact that there are a lot of very useful applications, I think we are in a golden age for applications of low-level vision and low-level speech recognition and so on, you know, Alexa, and there are many more things of a similar level to be done, including medical diagnosis and so on. But we are far from what we call understanding of a scene, of language, of actions, of people. Despite the claims, that is, I think, very far off. So, in popular culture and among many researchers, some of whom I've spoken with, like Stuart Russell, and, you know, Elon Musk, in and out of the
AI field, there's a concern about the existential threat of AI. Yeah. How do you think about this concern, and is it valuable to think about large-scale, long-term, unintended consequences of the intelligent systems we try to build? I always think it's better to worry first, you know, early rather than late, so some worry is good. I'm not against worry at all. Personally, I think that it will take a long time before there is real reason to be worried, but, as I said, I think it is good to put in place and think about possible safety measures. What I find a bit misleading are things that have been said by people I know, like Elon Musk, and what is Bostrom's first name, Nick Bostrom, right, and a couple of other people, that, for instance, AI is more dangerous than nuclear weapons, right. I think that's really misleading, because in terms of priority we should still be more worried about nuclear weapons, and, you know, what people are doing about it, than AI. And Demis Hassabis, he's spoken about this, as have you, saying that you think it'll be about a hundred years out before we have a general intelligence system that's on par with a human being. Do you have any updates for those predictions? Well, what did he say? I think he said, he's at 28? He said it, right? This was a couple of years ago; I have not asked him again, so I should. And your own prediction, what's your prediction about when you'll be truly surprised, and what's the confidence interval? You know, it's so difficult to predict the future, and even the present sometimes, it's pretty hard to predict. But I would be, as I said, more like Rod Brooks; I think he's at about 200 years. When we have this kind of AGI system, artificial general intelligence system, and you're sitting in a room with her, him, it, do you think
it will be explainable, understandable by us? Your intuition, again, we're in the realm of philosophy a little bit. Probably no, but it, again, depends what you really mean by understanding. So I think, you know, we don't understand how deep networks work; I think we're beginning to have a theory now. But in the case of deep networks, or even in the case of simpler kernel machines or linear classifiers, we really don't understand the individual units. But we understand, you know, what the computation and the limitations and the properties of it are. It's similar to many things. What does it mean to understand how a fusion bomb works? Many of us understand the basic principle, and some of us may understand deeper details. In that sense, understanding is, as a community, as a civilization: can we build another copy of it? Okay. And in that sense, do you think there will need to be some evolutionary component, where it runs away from our understanding, or do you think it could be engineered from the ground up, the same way you go from the transistor to PowerPoint? Right. So, many years ago, this was actually 40, 41 years ago, I wrote a paper with David Marr, who was one of the founding fathers of computer vision, of computational vision. I wrote a paper about levels of understanding, which is related to the question we discussed earlier about understanding PowerPoint, understanding transistors and so on. In that kind of framework, we had the level of the hardware and the top level of the algorithms. We did not have learning. Recently I updated it, adding levels, and one level I added to those three was learning. And you can imagine you could have a good understanding of how you construct a learning machine, like we do, but be unable to describe in detail what the learning machine will discover, right? Now, that would still be a powerful understanding, if I can build a learning machine, even if I don't understand in detail, every time,
what it learned, just like our children. If they start listening to a certain type of music, I don't know, Miley Cyrus or something, you don't understand why they came upon that particular preference, but you understand the learning process. That's very interesting, yeah. So, for learning systems to be part of our world, one of the challenging things that you've spoken about is learning ethics, learning morals. How hard do you think is the problem of, first of all, humans understanding our ethics? What is the origin, at the neural, low level, of ethics? What is it at a higher level? Is it something that's learnable, for machines, in your intuition? I think, yeah, ethics is learnable, very likely. I think it is one of these problems where understanding the neuroscience of ethics, you know, people discuss, there is the ethics of neuroscience, yes, you know, how a neuroscientist should or should not behave. You can think of a neurosurgeon and the ethics he or she has to follow. But I'm more interested in the neuroscience of... you blow my mind right now, the neuroscience of ethics is very meta. Yeah, and, you know, I think that would be important to understand, also for being able to design machines that are ethical machines, in our sense of ethics. And do you think there is something, there are patterns, tools, in neuroscience that can help us shed some light on ethics, or is it mostly at the psychology, sociology, much higher level? There is culture, but there is also, in the meantime, there is evidence, fMRI evidence, of specific areas of the brain that are involved in certain ethical judgments. And not only this: you can stimulate those areas with magnetic fields and change people's ethical decisions. Yeah, wow. So that's work by a colleague of mine, Rebecca Saxe, and there are other researchers doing similar work. And I think, you know, this is the beginning, but ideally at some point we'll have an understanding of
how this works, and of why, right, the big why question. Yeah, it must have some purpose. Yeah, obviously it has, you know, some social purpose, probably. If neuroscience holds the key to at least illuminate some aspects of ethics, that means it could be a learnable problem. Yeah, exactly. And, as we're getting into harder and harder questions, let's go to the hard problem of consciousness. Is this an important problem for us to think about and solve, on the engineering-of-intelligence side of your work, of our dream? You know, it's unclear. So, again, this is a deep problem, partly because it's very difficult to define consciousness. And there is a debate among neuroscientists, and philosophers of course, about whether consciousness is something that requires flesh and blood, so to speak, or whether we could have silicon devices that are conscious, up to statements like everything has some degree of consciousness, and some more than others; this is like Giulio Tononi. And, I actually just recently talked to Christof Koch. Okay, so, Christof was my first graduate student. Yeah, do you think it's important to illuminate aspects of consciousness in order to engineer intelligent systems? Do you think an intelligent system would ultimately have consciousness? Are the two interlinked? You know, most of the people working in artificial intelligence, I think, would answer: we don't strictly need consciousness to have an intelligent system. That's sort of the easier question, because it's a very engineering answer to the question, yes, that the Turing test can be run without consciousness. But if you were to go further, do you think it's possible that we need to have that kind of self-awareness? We may, yes. So, for instance, I personally think that when we test a machine or a person in a Turing test, in an extended Turing test, I think consciousness is part of what we require in that test, you know, implicitly, to say that this is
intelligent. Christof disagrees? So he does, yeah, despite many other romantic notions he holds, he disagrees with that one, yes, that's right. So, you know, we will see. Do you think, as a quick question, Ernest Becker, fear of death, do you think mortality and those kinds of things are important, well, for consciousness and for intelligence, the finiteness of life, finiteness of existence? Or is that just a side effect, an evolutionary side effect, useful for natural selection? Do you think this kind of thing, that this interview is going to run out of time soon, our life will run out of time soon, do you think that's needed to make this conversation good, and life good? You know, I never thought about it; it is a very interesting question. I think Steve Jobs, in his commencement speech at Stanford, argued that, you know, having a finite life was important for stimulating achievement. So it was a different... yeah, live every day like it's your last, right? Yeah. So, rationally, I don't think strictly you need mortality for consciousness, but they seem to go together in our biological systems. Yeah. You've mentioned before, and students of yours are associated with, AlphaGo and Mobileye, the big recent success stories in AI, and I think they've captivated the entire world in terms of what AI can do. So what do you think will be the next breakthrough, what's your intuition about the next breakthrough? Of course, I don't know where the next breakthrough is. I think that there is a good chance, as I said before, that the next breakthrough will also be inspired by, you know, neuroscience, but which one, I don't know. And MIT has the Quest for Intelligence, you know, and there are a few moonshots. In that spirit, which ones are you excited about, which projects? Well, of course, I'm excited about one of the moonshots, which is our Center for Brains, Minds and Machines, the one which is fully funded by NSF, and it is about visual intelligence. It's
an area, and that moonshot is particularly about understanding visual intelligence, the visual cortex, and visual intelligence in the sense of how we look around ourselves and understand the world around ourselves, you know, meaning what is going on, how we could go from here to there without hitting obstacles, whether there are other agents, people, around. These are all things that we perceive very quickly, and it's something actually quite close to being conscious, not quite, but. Now, there is this interesting experiment that was run at Google X, which in a sense is just a virtual reality experiment, in which they had subjects sitting in a chair with goggles, like an Oculus, and so on, earphones, and they were seeing through the eyes of a robot nearby, two cameras, microphones for ears, so that their sensory system was transported there. And the impression of all the subjects, very strong, they could not shake it off, was that they were where the robot was. They could look at themselves from the robot and still feel they were where the robot is, where they were looking. Their body, their self, had moved. So some aspect of scene understanding has to have the ability to place yourself, have a self-awareness, about your position in the world, and what the world is. Right, so, yeah, we may have to solve the hard problem of consciousness along the way. Yes, but it's quite a moonshot. So, you've been an adviser to some incredible minds, including Demis Hassabis, Christof Koch, Amnon Shashua, like you said, all went on to become seminal figures in their respective fields. From your own success as a researcher, and from your perspective as a mentor of these researchers, having guided them, what does it take to be successful in science and engineering careers? Whether you're talking to somebody in their teens, 20s, or 30s, what does that path look like? It's curiosity and having fun, and I think it's important also having fun with other curious minds, it's the people you surround yourself with, too.
So yeah, fun and curiosity; you mentioned Steve Jobs; is there also an underlying ambition that's unique that you saw, or does it really boil down to insatiable curiosity and fun? Well, of course, you know, it's being curious in an active and ambitious way, yes, definitely. But I think sometimes in science, there are friends of mine who are like this, there are some scientists who like to work by themselves and kind of communicate only when they complete their work or discover something. I think I always found the actual process of, you know, discovering something more fun if it's together with other intelligent and curious and fun people. So if you have fun in that process, a side effect of that process will be that you actually discover something. Yes. So as you've led many incredible efforts here, what's the secret to being a good adviser, mentor, leader in a research setting? Is it a similar spirit, or what advice could you give to people, young faculty, and so on? It's partly repeating what I said about an environment that should be friendly and fun and ambitious, and, you know, I think I learned a lot from some of my advisers and friends, some of whom were physicists, and there was this behavior that was encouraged: when somebody comes with a new idea in the group, unless it's really stupid, you are always enthusiastic, and then only after a few minutes, or a few hours, do you start, you know, asking critically a few questions, testing it. But, you know, this is a process that is, I think, very good. First you have to be enthusiastic; some people are very critical from the beginning, and that's not good. Yes, you have to give it a chance. Yes, let it grow. That said, with some of your ideas, which are quite revolutionary, especially on the human vision side and the neuroscience side, there could be some pretty heated arguments. Do you enjoy these? Is that a part of science and academic pursuits that you
enjoy? Yeah. Is that something that happens in your group as well? Yeah, absolutely. I also spent some time in Germany; again, there is this tradition in which people are more forthright, less kind than here. So, you know, in the U.S., when you write a bad letter, you still say this guy is nice. Yes, here in America it's degrees of nice. Yes, it's all just degrees of niceness, right. Right, so as long as this does not become personal, and it's really like, you know, a football game with its rules, that's great. So if you somehow found yourself in a position to ask one question of an oracle, like a genie, maybe a god, and you're guaranteed to get a clear answer, what kind of question would you ask? What would be the question you would ask? In the spirit of our discussion, it could be: how could I become ten times more intelligent? But see, you only get a clear, short answer; so do you think there's a clear, short answer to that? No. And that's the answer you'll get. Yeah, okay. So you've mentioned Flowers for Algernon. Oh yeah. As a story that inspired you in your childhood, this story of a mouse and a human achieving genius-level intelligence, and then understanding what was happening while slowly becoming not intelligent again, in this tragedy of gaining and losing intelligence: do you think, in the spirit of that story, intelligence is a gift or a curse, from the perspective of happiness and the meaning of life? You try to create intelligent systems that understand the universe, but at an individual level, the meaning of life: do you think intelligence is a gift? It's a good question. I don't know. As one of the people considered among the smartest in the world, in some dimension at the very least, what do you think? No, no, it may be that happiness is invariant to intelligence. It would be nice if it were. That's the hope. Yeah, you could be smart and happy, and clueless and unhappy. Yeah. As always, on the discussion of the meaning of life, it's probably a
good place to end. Tomaso, thank you so much for talking today. Thank you, this was great.
Tuomas Sandholm: Poker and Game Theory | Lex Fridman Podcast #12
the following is a conversation with Tuomas Sandholm. He's a professor at CMU and co-creator of Libratus, which is the first AI system to beat top human players in the game of heads-up no-limit Texas hold'em. He has published over 450 papers on game theory and machine learning, including a best paper in 2017 at NIPS, now renamed to NeurIPS, which is where I caught up with him for this conversation. His research and companies have had wide-reaching impact in the real world, especially because he and his group not only propose new ideas but also build systems to prove that these ideas work in the real world. This conversation is part of the MIT course on artificial general intelligence and the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N. And now, here's my conversation with Tuomas Sandholm. Can you describe, at a high level, the game of poker, Texas hold'em, heads-up Texas hold'em, for people who might not be familiar with this card game? Yeah, happy to. So heads-up no-limit Texas hold'em has really emerged in the AI community as a main benchmark for testing these application-independent algorithms for imperfect-information game solving. And this is a game that's actually played by humans. You don't see it that much on TV or in casinos because, well, for obvious reasons, but you do see it in some expert-level casinos, and you see it in the best poker movies of all time. It's actually an event in the World Series of Poker, but mostly it's played online, and typically for pretty big sums of money, and this is a game that usually only experts play. So if you recall your home game on a Friday night, it probably is not going to be heads-up no-limit Texas hold'em; it might be no-limit Texas hold'em in some cases, but typically with a big group, and it's not as competitive. Well, heads-up means it's two-player, so it's really like me against you, am I better than you, much like chess or go in that sense, but an
imperfect-information game, which makes it much harder, because I have to deal with issues of you knowing things that I don't know, and I know things that you don't know, instead of pieces being nicely laid out on the board for both of us to see. So in Texas hold'em, there are two cards that only you see, dealt to you? Yeah, and then they gradually lay out some cards that add up overall to five cards that everybody can see. Yeah, so the imperfect nature of the information is the two cards that you're holding. Yeah, so as you said, you first get two cards in private each, and then there's a betting round; then you get three cards in public on the table, then there's a betting round; then you get the fourth card in public on the table, there's a betting round; then you get the fifth card on the table, and there's a betting round. So there's a total of four betting rounds and four tranches of information revelation, if you will. Only the first tranche is private, and it's all public from there. And this is probably, by far, the most popular game in AI, and for the general public, in terms of imperfect information; it's probably the most popular spectator game to watch, right? So, which is why it's a super exciting game to tackle. It sits on the order of chess, I would say, in terms of popularity, in terms of AI setting it as the bar of what is intelligence. So in 2017, Libratus, how do you pronounce it? Libratus. Libratus, it's a little bit Latin. Libratus beat four expert human players. Can you describe that event? What did you learn from it? What was it like? What was the process, in general, for people who have not read the papers and studied it? Yeah, so the event was that we invited four of the top-ten players; these are specialist players in heads-up no-limit Texas hold'em, which is very important, because this game is actually quite different from the multiplayer version. We brought them in to Pittsburgh to play at Rivers Casino
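As an aside, the dealing-and-betting structure just described, two private cards each, then five public cards revealed in three tranches, with a betting round after each revelation, can be sketched in a few lines; hand ranking and the betting logic themselves are omitted, and the rank+suit card labels are just the usual shorthand:

```python
import random

random.seed(7)
deck = [rank + suit for rank in "23456789TJQKA" for suit in "cdhs"]
random.shuffle(deck)

# Tranche 1 (private): two hole cards each, then the pre-flop betting round.
p1_hole = [deck.pop(), deck.pop()]
p2_hole = [deck.pop(), deck.pop()]

# Tranches 2-4 (public): flop, turn, river, each followed by a betting round,
# for four betting rounds in total.
board = []
for street, n_cards in [("flop", 3), ("turn", 1), ("river", 1)]:
    board += [deck.pop() for _ in range(n_cards)]

# The imperfect information: each player sees the board plus only
# their own hole cards, never the opponent's.
p1_view = p1_hole + board
p2_view = p2_hole + board
print(len(board), len(p1_view), len(p2_view))  # 5 7 7
```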
for twenty days. We wanted to get a hundred and twenty thousand hands in, because we wanted to get statistical significance. So it's a lot of hands for humans to play, even for these top pros who normally play fairly quickly, so we couldn't just have one of them play so many hands. For twenty days they were playing, basically morning to evening, and we raised two hundred thousand dollars as a little incentive for them to play. And the setting was such that they didn't all get fifty thousand; we actually paid them out based on how each did against the AI, so they had an incentive to play as hard as they could, whether they were way ahead, way behind, or right at the mark of beating the AI. And you don't make any money, unfortunately? Right, no, we can't make any money. So originally, a couple of years earlier, I actually explored whether we could play for money, because that would be, of course, interesting as well, to play against the top people for money, but the Pennsylvania Gaming Board said no. So we couldn't; so this is much like an exhibit, like for a musician or a boxer or something like that. Nevertheless, you're keeping track of the money, and Libratus won close to two million dollars, I think. So if it was for real money, if you were able to earn money, that would have been quite an impressive and inspiring achievement. Just a few details: what were the players looking at? I mean, were they behind a computer? What was the interface like? Yes, they were playing much like they normally do. These top players, when they play this game, they play mostly online, so they're used to playing through a UI, and they did the same thing here. So there was this layout; you could imagine there's a table on the screen, the human sitting there, and then the AI sitting there, and the screen shows everything that's happening: the cards coming out and the bets being made. And we also had the betting history for the human, so the human, for whatever had happened in the hand so far,
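On the statistical-significance point: a back-of-the-envelope way to see why so many hands are needed is a z-test on the mean win rate measured in big blinds (bb) per hand. The numbers below, a 15 bb/100 win rate and a 25 bb/hand standard deviation, are illustrative assumptions for this sketch, not figures from the match:

```python
import math

def hands_needed(win_rate_bb, std_bb, z=1.96):
    """Hands needed before a z-sigma confidence interval on the mean
    win rate (bb per hand) excludes zero: n >= (z * sigma / mu)^2."""
    return math.ceil((z * std_bb / win_rate_bb) ** 2)

# Hypothetical: a 15 bb/100 edge with a 25 bb/hand standard deviation
# (raw no-limit variance is far higher; variance reduction helps).
mu = 15 / 100      # big blinds won per hand
sigma = 25.0       # bb/hand
print(hands_needed(mu, sigma))       # on the order of 100,000 hands

# Halving the standard deviation quarters the required sample size.
print(hands_needed(mu, sigma / 2))
```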
they could actually reference back, and so forth. Is there a reason they were given access to the betting history? Well, it didn't really matter; they wouldn't have forgotten anyway, these are top-quality people. But we just wanted to put it out there, so it's not a question of the human forgetting and the AI somehow trying to get an advantage of better memory. So what was that like? I mean, that was an incredible accomplishment, so what did it feel like before the event? Did you have doubt, hope? Where was your confidence at? Yeah, that's a great question. So eighteen months earlier, I had organized a similar brains-versus-AI competition with our previous AI, called Claudico, and we couldn't beat the humans. So this time around, it was only eighteen months later, and I knew that this new AI, Libratus, was way stronger, but it's hard to say how you'll do against the top humans before you try. So I thought we had about a 50/50 shot, and the international betting sites put us as a four-to-one or five-to-one underdog. So it's kind of interesting that people really believe in people over AI; people don't just overbelieve in themselves, but they have overconfidence in other people as well, compared to the performance of AI. And yeah, so we were a four-to-one or five-to-one underdog, and even when we had been beating the humans in a row, we were still 50/50 on the international betting sites. Do you think there's something special and magical about poker and the way people think about it? In a sense, I mean, even in chess, there are no Hollywood movies; poker is the star of many movies, and there's this feeling that certain human facial expressions and body language, eye movements, all these tells, are critical to poker: you can look into somebody's soul, understand their betting strategy, and so on. Do you think that is why people have the confidence that humans will outperform, because AI systems cannot, in this construct, perceive these kinds
of tells; they're only looking at betting patterns and nothing else, the betting patterns and statistics. So what's more important to you, if you step back on human-versus-human play: what's the role of these tells, these ideas that we romanticize? Yeah, so I'll split it into two parts. One is, why do humans trust humans much more than AI and have overconfidence in humans? I think that's not really related to the tells question; it's just that they've seen these top players, how good they are, and they're really fantastic, so it's just hard to believe, therefore, that an AI could beat them. So I think that's where that comes from, and that's actually maybe a more general lesson about AI: that until you've seen it outperform a human, it's hard to believe that it could. But then the tells: a lot of these top players are so good at hiding tells that, among the top players, it's actually not really worth it for them to invest a lot of effort trying to find tells in each other, because they're so good at hiding them. So yes, at the kind of Friday-evening game, tells are going to be a huge thing; you can read other people, and if you're a good reader, you'll read them like an open book, but at the top levels of poker, tells become a much, much smaller aspect of the game. As you go to the top levels, the amount of strategies, the amount of possible actions, is very large, ten to the power of one hundred plus, so there has to be, I've read a few of the papers related to it, it has to form some abstractions of various hands and actions. So what kind of abstractions are effective for the game of poker? Yeah, so you're exactly right. When you go from a game tree that's ten to the 161, especially in an imperfect-information game, it's way too large to solve directly, even with our fastest equilibrium-finding algorithms, so you want to abstract it first. And abstraction in games is much trickier than abstraction in MDPs or other single-agent
settings, because you have these abstraction pathologies: if I have a finer-grained abstraction, the strategy that I can get from that for the real game might actually be worse than the strategy I can get from the coarse-grained abstraction, so you have to be very careful. Now, the kinds of abstractions, just to zoom out: we're talking about hand abstractions, and then there are betting actions. Yeah, betting actions. So there's information abstraction, to talk about general games, which is the abstraction of what chance does, and this would be the cards in the case of poker; and then there's action abstraction, which is abstracting the actions of the actual players, which would be bets in the case of poker, yourself and the other players. Yes, yourself and the other players. And for information abstraction, we were completely automated; these are algorithms that do what we call potential-aware abstraction, where we don't just look at the value of the hand, but also how it might materialize into good or bad hands over time. It's a certain kind of bottom-up process with integer programming there, and clustering, and various aspects of how you build this abstraction. And then for the action abstraction, it's largely based on how humans and other AIs have played this game in the past. But in the beginning, we actually used an automated action-abstraction technology, which is provably convergent, in that it finds the optimal combination of bet sizes, but it's not very scalable, so we couldn't use it for the whole game, but we used it for the first couple of betting actions. So what's more important, the strength of the hand, the information abstraction, or how you play them, the actions? You know, the romanticized notion, again, is that it doesn't matter what hands you have: the actions, the betting, may be the way you win, no matter what hands you have. Yeah, so that's why you have to play a lot
of hands, so that the role of luck gets smaller. You could otherwise get lucky, get some good hands, and then win the match. Even with thousands of hands you can get lucky, because there's so much variance in no-limit Texas hold'em: if we both go all-in, it's a huge pot, so there are these massive swings in no-limit Texas hold'em. That's why you have to play not just thousands but over a hundred thousand hands to get statistical significance. Let me ask this question another way: if you didn't even look at your hands, but your opponents didn't know that, how well would you be able to do? Oh, that's a good question. I actually heard this story of a Norwegian female poker player, Annette Obrestad, who actually won a tournament by doing exactly that, but that would be extremely rare. So you cannot really play well that way; the hands do have some role to play. So Libratus does not use, as far as I understand, learning methods, deep learning. Is there room for learning? You know, there's no reason why Libratus couldn't be combined with an AlphaGo-type approach for estimating the quality of states, a value-function estimator. What are your thoughts on this, maybe as compared to another algorithm, which I'm not that familiar with, DeepStack, the engine that does use deep learning? It's unclear how well it does, but nevertheless it uses deep learning. So what are your thoughts about learning methods to aid the way that Libratus plays the game of poker? Yeah, so as you said, Libratus did not use learning methods, and it played very well without them. Since then, we actually have a couple of papers here on things that do use learning techniques, deep learning in particular, in sort of the way you're talking about, where it's learning an evaluation function. But in imperfect-information games, unlike, let's say, in Go, or now also in chess and shogi, it's not
sufficient to learn an evaluation for a state, because the value of an information set depends not only on the exact state, but also on both players' beliefs. Like, if I have a bad hand, I'm much better off if the opponent thinks I have a good hand, and vice versa: if I have a good hand, I'm much better off if the opponent believes I have a bad hand. So the value of a state is not just a function of the cards; it depends on, if you will, the path of play, but only to the extent that it is captured in the belief distributions. So that's why it's not as simple as it is in perfect-information games. Not that I'd say it's simple there either; it's of course very complicated computationally there too, but at least conceptually it's very straightforward: there's a state, there's an evaluation function, you can try to learn it. Here, you have to do something more. And what we do is, in one of these papers, we're looking at allowing the opponent to actually take different strategies at the leaves of the search tree, if you will, and that is a different way of doing it. It doesn't assume, therefore, a particular way that the opponent plays, but it allows the opponent to choose from a set of different continuation strategies, and that forces us to not be too optimistic in our lookahead search. And that's one way you can do sound lookahead search in imperfect-information games, which is very difficult. And you were asking about DeepStack. What they did was very different from what we do, either in Libratus or in this new work. They were randomly generating various situations in the game, then they were doing lookahead from there to the end of the game, as if that was the start of a different game, and then they were using deep learning to learn the values of those states. But the states were not just the physical states; they include the belief distributions. When you talk about lookahead, for DeepStack or with Libratus, does it mean
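The claim that a state's value depends on beliefs, not just on the cards, can be made concrete with a toy bluffing spot. Everything here, the pot and bet sizes and the best-responding opponent, is an invented illustration, not how Libratus or DeepStack actually evaluates states:

```python
def ev_of_betting_weak_hand(opp_belief_we_are_strong, pot=2.0, bet=1.0):
    """Our expected value for betting a WEAK hand, against an opponent
    who best-responds to their belief about our hand."""
    # Opponent's EV of calling: risk `bet` to win `pot + bet` when we
    # are weak; lose `bet` when we are strong. Folding is worth 0.
    q = opp_belief_we_are_strong
    opp_ev_call = (1 - q) * (pot + bet) - q * bet
    opponent_calls = opp_ev_call > 0
    # We win the pot if they fold; we lose our bet if they call.
    return -bet if opponent_calls else pot

# Identical cards, different beliefs, different state values:
print(ev_of_betting_weak_hand(0.9))  # feared bluff: opponent folds -> 2.0
print(ev_of_betting_weak_hand(0.1))  # transparent bluff: called -> -1.0
```

The same weak hand is worth +2 under one belief distribution and -1 under another, which is why a plain state-value function is not enough in imperfect-information games.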
considering every possibility of how the game can evolve? Are we talking about this sort of exponential growth of a tree? Yes, we're talking about exactly that, much like you do in alpha-beta search or Monte Carlo tree search, but with different techniques. So there's a different search algorithm, and then we have to deal with the leaves differently. If you think about what Libratus did, we didn't have to worry about this, because we only did it at the end of the game, so we would always terminate into a real situation and we would know what the payout is; it didn't do this depth-limited lookahead. But now, in this new paper, I think it's called depth-limited search for imperfect-information games, we can actually do sound depth-limited lookahead, so we can actually start with the lookahead from the beginning of the game on, because it's too complicated to do the lookahead for this whole long game; in Libratus, we were just doing it for the end. And then the other side, this belief distribution: is it explicitly modeled, what kind of beliefs the opponent might have? Yeah, it is explicitly modeled, but it's not assumed. The beliefs are actually output, not input. Of course, the starting beliefs are input, but they just follow from the rules of the game, because we know that the dealer deals uniformly from the deck, so I know that every pair of cards that you might have is equally likely. I know that for a fact; that follows from the rules of the game. Of course, except the two cards that I have; I know you don't have those. Yes, you have to take that into account; that's called card removal, and that's very important. Is the dealing always coming from a single deck, in heads-up? Yes, so you can assume a single deck, and know that if I have the ace of spades, I know you don't have the ace of spades. Okay, so in the beginning, your belief is basically the fact that it's a fair dealing of hands, but how do you start to adjust
that belief? Well, that's where the beauty of game theory comes in. So, Nash equilibrium, which John Nash introduced in 1950, introduces what rational play is when you have more than one player. These are pairs of strategies, where strategies are contingency plans, one for each player, such that neither player wants to deviate to a different strategy, given that the other doesn't deviate. But as a side effect, you get the beliefs from Bayes' rule. So in these imperfect-information games, Nash equilibrium doesn't just define strategies; it also defines beliefs for both of us, and it defines beliefs for each information set. At each information set in the game, there's a set of different states that we might be in, but I don't know which one we're in. Nash equilibrium tells me exactly what the probability distribution over those real-world states is, in my mind. How does Nash equilibrium give you that distribution? I'll do a simple example. So you know the game rock-paper-scissors. We can draw it as player 1 moving first and then player 2 moving, but of course it's important that player 2 doesn't know what player 1 moved, otherwise player 2 would win every time. So we can draw that as an information set, where player 1 makes one of three moves first, and then there's an information set for player 2: player 2 doesn't know which of those nodes the world is in. But once we know the strategy for player 1, Nash equilibrium will say that you play one-third rock, one-third paper, one-third scissors; from that, I can derive my beliefs at the information set: they're one-third, one-third, one-third. So Bayes' rule gives you that. But is that specific to a particular player, or is it something you quickly update? Well, game theory isn't really player-specific, so that's also why we don't need any data; we don't need any history of how these particular humans played in the past, or how any AI had played before. It's all about
rationality. So the AI just thinks about what a rational opponent would do, and what I would do if I were rational, and that's the idea of game theory. So it's really a data-free, opponent-free approach. So it comes from the design of the game, as opposed to the design of the player? Exactly; there's no opponent modeling per se. I mean, we've done some work on combining opponent modeling with game theory, so you can exploit weak players even more, but that's another strand, and in Libratus we didn't turn that on, because I decided that these players are too good, and when you start to exploit an opponent, you typically open yourself up to exploitation, and these guys have so few holes to exploit, and they're the world's leading experts in counter-exploitation, so I decided that we were not going to turn that stuff on. Actually, I saw a few papers on exploiting opponents; it sounds very interesting to explore. Do you think there's room for exploitation generally, outside of Libratus? Are there individual differences between people that could be exploited, maybe not just in poker, but in general interactions, negotiations, all these other domains that you're considering? Yeah, definitely. We've done some work on that, and I really like the work that hybridizes the two. So you figure out what a rational opponent would do, and by the way, that's safe in these zero-sum games, two-player zero-sum games, because if the opponent does something irrational, yes, it might throw off my beliefs, but the amount that the player can gain by throwing off my beliefs is always less than they lose by playing poorly. So it's safe. But still, if somebody's weak as a player, you might want to play differently to exploit them more. You can think about it this way: a game-theoretic strategy is unbeatable, but it doesn't maximally beat the opponent, so the winnings per hand might be better with a different strategy. And the hybrid is that you start from a game-theoretic
approach, and then, as you gain data about the opponent in certain parts of the game tree, in those parts of the game tree you start to tweak your strategy more and more towards exploitation, while still staying fairly close to the game-theoretic strategy, so as to not open yourself up to exploitation too much. How do you do that? Do you try to vary up strategies, make it unpredictable, like, what is it, tit-for-tat strategies in the prisoner's dilemma? Well, tit-for-tat, that's a repeated-game strategy, the repeated prisoner's dilemma, but even there, there's no proof that says that's the best thing, though experimentally it actually does well. So what kinds of games are there? First of all, I don't know if this is something that you could just summarize: there are perfect-information games, where all the information is on the table; there are imperfect-information games; there are repeated games that you play over and over; there are zero-sum games; there are non-zero-sum games; and then there's a really important distinction you're making, two-player versus more players. So what other games are out there, and what's the difference, for example, between two-player games and games with more players? What are the key differences? Right, so let me start from the basics. A repeated game is a game where the same exact game is played over and over. In these extensive-form games, where you think about the tree form, maybe with these information sets to represent incomplete information, you can have kinds of repetitive interactions; even repeated games are a special case of that, by the way. But the game doesn't have to be exactly the same. Like in sourcing, for example: we kind of see the same supply base year to year, but what I'm buying is a little different every time, and the supply base is a little different every time, and so on, so it's not really repeated. To find a purely repeated game is actually very rare in the world, so they're really a very coarse model of what's
going on. Then, if you move up from just simple repeated matrix games, not all the way to extensive-form games but in between, there are stochastic games, where, you know, think about these little matrix games, and when you take an action and your opponent takes an action, they determine not which next state, which next game, I'm going to, but the distribution over next games where I might be going. So that's a stochastic game. So matrix games, repeated games, stochastic games, extensive-form games: that is from less to more general, and poker is an example of the last one, so it's really the most general setting, extensive-form games. And that's kind of what the AI community has been working on and being benchmarked on, with this heads-up no-limit Texas hold'em. Can you describe extensive-form games? What's the model here? Yeah, so imagine the tree form: it's really the tree form, like in chess there's a search tree, versus a matrix. Yeah, and the matrix is called the matrix form, or bimatrix form, or normal-form game; here you have the tree form. So you can actually do certain types of reasoning there where you would lose information if you went to the normal form. There's a certain form of equivalence: if you go from the tree form and you say every possible contingency plan is a strategy, then I can actually go back to the normal form, but I lose some information from the lack of sequentiality. Then the multiplayer-versus-two-player distinction is an important one. Two-player games, in zero-sum, are conceptually easier and computationally easier. They can still be huge, like this one, but they're conceptually easier and computationally easier, in that, conceptually, you don't have to worry about which equilibrium the other guy is going to play when there are multiple, because any equilibrium strategy is a best response to any other equilibrium strategy, so I can play a different equilibrium from you and
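As an aside on equilibrium finding for these matrix-form games: regret matching, the self-play building block underneath the counterfactual-regret family used in poker programs, can be sketched on rock-paper-scissors. This is a generic textbook sketch, not Libratus's algorithm; each player shifts probability toward actions with positive cumulative regret, and the average strategies approach the unique uniform equilibrium:

```python
# Row player's payoffs for rock-paper-scissors (+1 win, -1 loss, 0 tie).
A = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]

def strategy_from(regrets):
    """Regret matching: play actions in proportion to positive regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1/3, 1/3, 1/3]

reg1, reg2 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]   # asymmetric start
avg1 = [0.0, 0.0, 0.0]
T = 100_000
for _ in range(T):
    s1, s2 = strategy_from(reg1), strategy_from(reg2)
    # Expected payoff of each pure action against the opponent's mix.
    u1 = [sum(A[a][j] * s2[j] for j in range(3)) for a in range(3)]
    u2 = [sum(-A[i][a] * s1[i] for i in range(3)) for a in range(3)]
    v1 = sum(s1[a] * u1[a] for a in range(3))
    v2 = sum(s2[a] * u2[a] for a in range(3))
    for a in range(3):
        reg1[a] += u1[a] - v1                     # accumulate regrets
        reg2[a] += u2[a] - v2
        avg1[a] += s1[a]
avg1 = [x / T for x in avg1]
print(avg1)  # approaches [1/3, 1/3, 1/3], the unique equilibrium
```

The current strategies cycle, but the time-averaged strategy converges toward the equilibrium, which is the guarantee these self-play methods rely on in two-player zero-sum games.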
we'll still get the right values of the game. That falls apart even with two players when you have general-sum games, even without cooperation? Just even without cooperation. So there's a big gap from two-player zero-sum to two-player general-sum, or even to three-player zero-sum; that's a big gap, at least in theory. Can you, maybe not mathematically, provide the intuition for why it all falls apart with three or more players? It seems like you should still be able to have a Nash equilibrium that is instructive, that holds. Okay, so it is true that all finite games have a Nash equilibrium; this is what John Nash actually proved. So they do have a Nash equilibrium; that's not a problem. The problem is that there can be many, and then there's the question of which equilibrium to select. If you select your strategy from a different equilibrium and I select mine, then what does that mean? In these non-zero-sum games, we may lose some joint benefit by simply being stupid: we could actually both be better off if we did something else. Yes. And in three-player, you get other problems also, like collusion: maybe you and I can gang up on a third player, and we can do radically better by colluding. So there are lots of issues that come up there. So Noam Brown, your student who you work with on this, mentioned, I looked through his AMA and read it, he was asked the question, or both of you were asked the question, how would you make the game of poker beyond being solvable by current AI methods, and he said that there are not many ways of making poker more difficult, but collaboration or cooperation between players would make it extremely difficult. So can you provide the intuition behind why that is, if you agree with that idea? Yeah, so we've done a lot of work on coalitional games, and we actually have a paper here with my student Gabriele Farina and
some other collaborators here at NeurIPS on that actually just came back from the poster session where we presented it so when you have collusion it's a different problem yes and it typically gets even harder some of the game representations don't really allow for that computation so we actually introduced a new game representation for that is that kind of cooperation part of the model do you have information about the fact that other players are cooperating or is it just this chaos where nothing is known so something's unknown can you give an example of a collusion type game so think about bridge it's like when you and I are on a team our payoffs are the same the problem is that we can't talk so when I get my cards I can't whisper to you what my cards are that would not be allowed so we have to somehow coordinate our strategies ahead of time and only ahead of time and then there are certain signals we can talk about but they have to be such that the other team also understands them so that's an example where the coordination is already built into the rules of the game but in many other situations like auctions or negotiations or diplomatic relationships poker it's not really built in but it still can be very helpful for the colluders I've read you write somewhere on negotiations you come to the table with a prior like a strategy of what you're willing to do and not willing to do those kinds of things so how do you start moving beyond poker into other applications like negotiations how do you start applying this to other domains yeah even real world domains that you've worked on yeah I actually have two startup companies doing exactly that one is called Strategic Machine and that's for kind of building applications in gaming sports all sorts of things like that any applications of this to
business and to sports and to gaming to various types of things in finance electricity markets and so on and the other is called Strategy Robot where we are taking this to military cybersecurity and intelligence applications I think you worked a little bit in how do you put it advertisement sort of suggesting ads kind of thing yeah auctions that's another company Optimized Markets but that's much more about combinatorial markets and optimization based technology that's not using these game theoretic reasoning technologies I think okay so at a high level how do you think about our ability to use game theoretic concepts to model human behavior do you think human behavior is amenable to this kind of modeling so outside of the poker games where have you seen it done successfully in your work I'm not sure the goal really is modeling humans like for example if I'm playing a zero-sum game yes I don't really care that the opponent is actually following my model of rational behavior because if they're not that's even better for me right so with the opponents in games the prerequisite is that you've formalized the interaction in some way that can be amenable to analysis and you've done this amazing work with mechanism design designing games that have certain outcomes so I'll tell you an example from my world of autonomous vehicles right we're studying pedestrians and cars negotiating this nonverbal communication there's this weird game dance of tension where pedestrians are basically saying I trust that you won't kill me and so as a jaywalker I will step onto the road even though I'm breaking the law and there's this tension and the question is we really don't know how to model that well in trying to model intent and so people sometimes bring up ideas of game theory and so on do you think that aspect of human behavior can use these kinds of imperfect information approaches
how do you start to attack a problem like that when you don't even know how to design the game to describe the situation in order to solve it okay so I haven't really thought about jaywalking but one thing that I think could be a good application in autonomous vehicles is the following so let's say that you have fleets of autonomous cars operated by different companies so maybe here's the Waymo fleet and here's the Uber fleet if you think about the rules of the road they define certain things but that still leaves a huge strategy space open like as a simple example when cars merge you know how humans merge they slow down and look at each other and try to merge wouldn't it be better if these situations were all pre-negotiated so we can actually merge at full speed and we know that this is the situation this is how we do it and it's all gonna be faster but there are way too many situations to negotiate manually so you could use automated negotiation this is the idea at least you could use automated negotiation to negotiate all of these situations or many of them in advance and of course it might be that hey maybe you're not gonna always let me go first maybe you say okay well in these situations I'll let you go first but in exchange you're gonna let me go first in these other situations yes so it's this huge combinatorial negotiation and do you think there's room in that example of merging to model this whole situation as an imperfect information game or do you really want to consider it to be a perfect information game no that's a good question and pay the price of assuming that you don't know everything yeah I don't know it's certainly much easier games with perfect information are much easier so if you can get away with it you should but if the real situation is one of imperfect information then you're going to have to deal with imperfect information great so what
lessons have you learned from the Annual Computer Poker Competition an incredible accomplishment of AI you know you look at the history of Deep Blue Go these kinds of moments when an engineering effort and a scientific effort combined to beat the best human players so what do you take away from this whole experience what have you learned about designing AI systems that play these kinds of games and what does that mean for AI in general for the future of AI development yeah so that's a good question there's so much to say about it I do like this type of performance oriented research although in my group we go all the way from idea to theory to experiments to big system fielding to commercialization so we span that spectrum but I think that in a lot of situations in AI you really have to build the big systems and evaluate them at scale before you know what works and doesn't and we've seen that in the computational game theory community that there are a lot of techniques that look good in the small but then they cease to look good in the large and we've also seen that there are a lot of techniques that look superior in theory and I really mean in terms of convergence rates like first-order methods have better convergence rates than the CFR based algorithms yet the CFR based algorithms are the fastest in practice so it really tells me that you have to test this in reality the theory isn't tight enough if you will to tell you which algorithms are better than others and you have to look at these things in the large because any sort of projections you do from the small can at least in this domain be very misleading so that's kind of from a science and engineering perspective from a personal perspective it's been just a wild experience in that with the first poker competition the first Brains vs AI man-machine poker competition that we organized there had been by the way for other poker
games there had been previous competitions but this was for heads-up No Limit this was the first and I probably became the most hated person in the world of poker and I didn't mean to be a lot of people felt that it was a real threat to the whole game the whole existence of the game if AI becomes better than humans people would be scared to play poker because there are these superhuman AIs running around taking their money and you know all of that so the comments were super aggressive I got everything just short of death threats do you think the same was true for chess because they just completed the World Championship in chess and humans just ignore the fact that there are AI systems now that outperform humans and they still enjoy the game it's still a beautiful game that's what I think yeah and I think the same thing happens in poker and so I didn't think of myself as somebody who was gonna kill the game and I don't think I did yeah I've really learned to love this game I wasn't a poker player before but I've learned so many nuances about it from these AIs and they've really changed how the game is played by the way so they have these very Martian ways of playing poker and the top humans are now incorporating those types of strategies into their own play so if anything to me our work has made poker a richer more interesting game for humans to play not something that is gonna steer humans away from it entirely just a quick comment on something you said which if I may say so in academia is a little bit rare sometimes it's pretty brave to put your ideas to the test in the way you described saying that sometimes good ideas don't work when you actually try to apply them at scale and so where does that come from I mean if you could give advice for people what drives you in that sense were you always this way I mean it takes a brave person I
guess is what I'm saying to test their ideas and to see if this thing actually works against top human players and so on yeah I don't know about brave but it takes a lot of work it takes a lot of work and a lot of time to make something big and to organize an event and stuff like that and what drives you in that effort because you could still I would argue get a best paper award at NIPS as you did in 2017 without doing this that's right yes and so in general I believe it's very important to do things in the real world and at scale and that's really where the proof is in the pudding if you will that's where it is in this particular case it was kind of a competition between different groups for many years as to who can be the first one to beat the top humans at heads-up No Limit Texas Hold'em so it became kind of a competition who can get there first yeah so a little friendly competition can do wonders for progress yes so the topic of mechanism design which is really interesting also kind of new to me except as an observer of you know politics I'm an observer of mechanisms but you write in your paper on automated mechanism design which I quickly read so mechanism design is designing the rules of the game so you get a certain desirable outcome and you have this work on doing so in an automated fashion as opposed to fine-tuning it so what have you learned from those efforts if you look say I don't know at complex systems like our political system can we design our political system in an automated fashion to have outcomes that we want can we design something like traffic lights to be smart where it gets outcomes that we want so what are the lessons you draw from that work yeah so I still very much believe in the automated mechanism design direction yes but it's not a panacea there are impossibility results in mechanism design saying that there is no mechanism that accomplishes
objective X in class C so there's no way using any mechanism design tools manual or automated to do certain things in mechanism design can you describe that again so meaning it's impossible to achieve that yes it's provably impossible so these are not statements about human ingenuity who might come up with something smart these are proofs that if you wanna accomplish properties X in class C that is not possible with any mechanism the good thing about automated mechanism design is that we're not really designing for a class we're designing for specific settings at a time so even if there's an impossibility result for the whole class it just doesn't mean that all of the cases in the class are impossible it just means that some of the cases are impossible so we can actually carve these islands of possibility within these known impossible classes and we've actually done that so one of the famous results in mechanism design is the Myerson-Satterthwaite theorem by Roger Myerson and Mark Satterthwaite from 1983 so it's an impossibility of efficient trade under imperfect information we show that you can in many settings avoid that and get efficient trade anyway depending on how you design the game okay so depending on how you design the game and of course it doesn't in any way contradict the impossibility result the impossibility result is still there but it just finds spots within this impossible class where in those spots you don't have impossibility sorry if I'm going a bit philosophical but what lessons do you draw toward like I mentioned politics or human interaction and designing mechanisms for outside of just these kinds of trading or auctioning or purely formal games our human interaction like a political system how do you think it's applicable to politics or to business to negotiations these kinds of things designing rules that have certain outcomes yeah yeah I do think
so have you seen that successfully done yes and where oh you mean mechanism design or automated mechanism design well so mechanism design itself has had fairly limited success so far there are certain cases but most of the real-world situations are actually not sound from a mechanism design perspective even in those cases where they've been designed by very knowledgeable mechanism design people the people are typically just taking some insights from the theory and applying those insights into the real world rather than applying the mechanisms directly so one famous example is the FCC spectrum auctions I've also had a small role in that and very good economists excellent economists who know game theory have been working on that yet the rules that are designed in practice are such that bidding truthfully is not the best strategy usually in mechanism design we try to make things easy for the participants so telling the truth is the best strategy but even in those very high stakes auctions where you have tens of billions of dollars worth of spectrum being auctioned truth-telling is not the best strategy and by the way nobody knows even a single optimal bidding strategy for those auctions what's the challenge of coming up with an optimum is it because there's a lot of players there's not so much a lot of players but many items for sale and these mechanisms are such that even with just two items or one item bidding truthfully wouldn't be the best strategy if you look at the history of AI it's marked by seminal events AlphaGo beating a world champion human Go player I would put Libratus winning at heads-up no-limit hold'em as one such event thank you and what do you think is the next such event whether it's in your life or in the broader AI community that you think might be out there that would surprise the world so that's a great question and I don't really know the answer in terms of game solving heads-up No
Limit Texas Hold'em really was the one remaining widely agreed-upon benchmark so that was the big milestone now are there other things yes certainly there are but there is not one that the community has kind of focused on so what could be other things there are groups working on StarCraft there are groups working on Dota 2 these are video games yes or you could have like Diplomacy or Hanabi you know things like that these are recreational games but none of them are really acknowledged as kind of the main next challenge problem like chess or Go or heads-up No Limit Texas Hold'em was so I don't really know in the game solving space what is or what will be the next benchmark I kind of hope that there will be a next benchmark because really the different groups working on the same problem really drove these application-independent techniques forward very quickly over ten years do you think there's an open problem that excites you as you start moving away from games into real world games like say stock market trading yeah that's kind of how I am so I am probably not going to work as hard on these recreational benchmarks I'm doing two startups on game solving technology Strategic Machine and Strategy Robot and we're really interested in pushing this stuff into practice what do you think would be a really powerful result that would be surprising if you can say I mean you know five years ten years from now something that statistically you would say is not very likely but if there's a breakthrough what would it achieve yeah so I think that overall we're in a very different situation in game theory than we are in let's say machine learning yes so machine learning is a fairly mature technology that's very broadly applied and has proven success in the real world in game solving there are almost no applications yet we have just become superhuman which in machine learning you could argue happened in the 90s if not earlier and at
least some supervised learning in certain complex supervised learning applications now I think a next challenge problem I know you're not asking about it this way you're asking about the technology breakthrough but I think the big breakthrough is to be able to show that hey maybe most of let's say military planning or most of business strategy will actually be done strategically using computational game theory that's what I would like to see as a next five or ten year goal maybe you can explain to me again forgive me if this is an obvious question but you know machine learning methods neural networks suffer from not being transparent not being explainable game theoretic methods you know Nash equilibria when you see the different solutions when you talk about military operations once you see the strategies do they make sense are they explainable or do they suffer from the same problems as neural networks do so that's a good question I would say a little bit yes and no and what I mean by that is that these game theoretic strategies let's say a Nash equilibrium it has provable properties so it's unlike let's say deep learning where you kind of cross your fingers hopefully it'll work and then after the fact when you have the weights you're still crossing your fingers hoping it will work here you know that the solution quality is there there are provable solution quality guarantees now that doesn't necessarily mean that the strategies are human understandable that's a whole other problem so in that sense I think deep learning and computational game theory are in the same boat in that both are difficult to understand but at least the game theoretic techniques have these guarantees of solution quality so do you see business operations strategic operations of corporations or even militaries in the future at least having strong candidate strategies proposed by automated systems do you see that yeah I do
I do but that's more of a belief than a substantiated fact depending on where you land on optimism or pessimism that's a relief to me that's an exciting future especially if there are provable things in terms of optimality so looking into the future there are a few folks worried especially as you look at the game of poker which is probably one of the last benchmarks in terms of games being solved they worry about the future and the existential threats of artificial intelligence so the negative impact in whatever form on society is that something that concerns you as much or are you more optimistic about the positive impacts of AI oh I am much more optimistic about the positive impacts so just in my own work what we've done so far we run the nationwide kidney exchange hundreds of people are walking around alive today who wouldn't be and it's increased employment you have a lot of people now running kidney exchanges and at the transplant centers interacting with the kidney exchange you have the surgeons nurses anesthesiologists hospitals all of that so employment is increasing from that and the world is becoming a better place another example is combinatorial sourcing auctions we did 800 large-scale combinatorial sourcing auctions from 2001 to 2010 in a previous startup of mine called CombineNet and we increased the supply chain efficiency on that sixty billion dollars of spend by twelve point six percent so that's over six billion dollars of efficiency improvement in the world and this is not just shifting value from somebody to somebody else it's real efficiency improvement like in trucking less empty driving so there's less waste less carbon footprint and so on it's a huge positive impact in the near term but sort of to stay on it for a little longer because I think game theory has a role to play here well let me actually come back and tell you this one thing I think AI is also going to make the world much safer so that's another
aspect that often gets overlooked well let me ask this question maybe you can speak to the safety aspect so I've talked to Max Tegmark and Stuart Russell who are very concerned about existential risk yeah and often the concern is about value misalignment so AI systems basically operating towards goals that are not the same as those of human civilization human beings so it seems like game theory has a role to play there to make sure the values are aligned with human beings I don't know if that's how you think about it if not how do you think AI might help with this problem how do you think AI might make the world safer yeah I think this value misalignment is a fairly theoretical worry and I haven't really seen it because I do a lot of real applications and I don't see it anywhere the closest I've seen was the following type of mental exercise really where I had this argument in the late 80s when we were building these transportation optimization systems and somebody had heard that it's a good idea to have high utilization of assets so they told me hey why don't you put that as the objective and we didn't even put it as an objective because I just showed them you know if you had that as your objective the solution would be to load your trucks full and drive in circles nothing would ever get delivered and you'd have a hundred percent utilization so yeah I know this phenomenon I've known it for over 30 years but I've never seen it actually be a problem in reality and yes if you have the wrong objective the AI will optimize that to the hilt and it's gonna do that more than some human who's kind of doing it in a half-baked way with some human insight but I just haven't seen that materialize in practice there's this gap that you actually put your finger on very clearly just now between theory and reality that's very difficult to put into words between what you can theoretically imagine the worst possible case or even yeah I mean bad cases and what usually
happens in reality so for example maybe it's something you can comment on having grown up I grew up in the Soviet Union you know there are currently 10,000 nuclear weapons in the world and for many decades it's theoretically surprising to me that nuclear war has not broken out do you think about this aspect from a game theoretic perspective in general why is that true why in theory you could see how things would go terribly wrong and yet somehow they have not how do you think about that so I do think about that a lot I think the biggest two threats that we're facing as mankind one is climate change and the other is nuclear war so those are my main two worries that I'm worried about and I've tried to do something about climate I thought about trying to do something for climate change twice actually two of my startups have actually commissioned studies of what we could do on those things and we didn't really find a sweet spot but I'm still keeping an eye out on that if there's something where we could actually provide a market solution or optimization solution or some other technology solution to problems right now like for example pollution credit markets were what we were looking at then and it was much more the lack of political will why those markets were not so successful rather than bad market design so I could go in and make a better market design but that wouldn't really move the needle on the world very much if there's no political will and in the U.S.
you know the market at least the Chicago market was just shut down and so on so then it doesn't really help to create a better market design what about the nuclear side it's more so global warming is a more encroaching problem you know nuclear weapons have been here it's an obvious problem that has just been sitting there so how do you think about it what is the mechanism design there that just made everything seem stable and are you still extremely worried I am still extremely worried so you probably know the simple game theory of MAD so this was mutually assured destruction and it's like it doesn't require any computation with small matrices you can actually convince yourself that the game is such that nobody wants to initiate yeah that's a very coarse-grained analysis and it really works in a situation where you have two superpowers or a small number of superpowers now things are very different you have smaller nukes so the threshold of initiating is lower and you have smaller countries and non-nation actors who may get nukes and so on so I think it's riskier now than it was maybe ever before and what idea or application of AI you've talked about a little bit but what is the most exciting to you right now I mean you're here at NIPS NeurIPS now you have a few excellent pieces of work but what are you thinking into the future with the several companies you're doing what's the most exciting thing or one of the exciting things the number one thing for me right now is coming up with these scalable techniques for game solving and applying them in the real world I'm still very interested in market design as well and we're doing that in Optimized Markets but number one right now is Strategic Machine and Strategy Robot getting that technology out there and seeing as you are in the trenches doing applications what needs to be actually filled what technology gaps still need to be filled it's so hard to just put your feet on
the table and imagine what needs to be done but when you're actually doing real applications the applications tell you what needs to be done and I really enjoy that interaction is it a challenging process to apply some of the state-of-the-art techniques you're working on and having the various players in industry or the military or people who could really benefit from it actually use it what's that process like you know in autonomous vehicles we'll work with automotive companies and in many ways they're a little bit old-fashioned it's difficult they really want to use this technology which clearly will have a significant benefit but the systems aren't quite in place to easily have it integrated in terms of data in terms of compute in terms of all these kinds of things so is that one of the bigger challenges that you're facing and how do you tackle that challenge yeah I think that's always a challenge that's the slowness and inertia really of let's do things the way we've always done it you just have to find the internal champions at the customer who understand that hey things can't be the same way in the future otherwise bad things are going to happen and in autonomous vehicles it's actually very interesting that the car makers are doing that because they're very traditional but at the same time you have tech companies who have nothing to do with cars or transportation like Google and Baidu really pushing on autonomous cars I find it fascinating clearly you're super excited about these ideas having an impact in the world in terms of the technology in terms of ideas and research are there directions that you're also excited about whether that's some of the approaches you talked about for imperfect information games whether it's applying deep learning to some of these problems is there something that you're excited about on the research side of things yeah lots of different things in game solving so
solving even bigger games games where you have more hidden actions the players' actions as well poker is a game where really the chance actions are hidden or some of them are hidden but the player actions are public then multiplayer games of various sorts collusion opponent exploitation and even longer games games that basically go forever but they're not repeated so think extensive form games that go forever whoa what would that even look like how do you represent that how do you solve that what's an example of a game like that or is this some of the stochastic games imagine let's say business strategy so it's not just modeling a particular interaction but thinking about the business from here to eternity or let's say military strategy so it's not like war is going to go away how do you think about military strategy that's going to go forever how do you even model that how do you know whether a move that somebody made was good and so on so that's kind of one direction I'm also very interested in learning much more scalable techniques for integer programming so we had a nice ICML paper this summer on that the first automated algorithm configuration paper that has theoretical generalization guarantees so if I see this many training examples and I tune my algorithm in this way it's going to have good performance on the real distribution which I have not seen which is kind of interesting because you know algorithm configuration has been going on now for at least 17 years seriously and there has not been any generalization theory before well this is really exciting and it's been a huge honor to talk to you thank you so much Tuomas thank you for bringing Libratus to the world and all the great work you've done well thank you very much it's been fun good questions
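As a coda to this episode: the CFR-based algorithms Sandholm singles out as fastest in practice are built on regret matching. Below is a minimal hedged sketch of regret-matching self-play on rock-paper-scissors, a toy stand-in for the enormous games discussed above; the game, iteration count, and starting regrets are illustrative choices, not anything from Libratus.

```python
import numpy as np

# Rock-paper-scissors payoff matrix for the row player (zero-sum: the
# column player's payoff is the negative of this).
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

def strategy_from_regret(regret):
    """Regret matching: play in proportion to positive cumulative regret."""
    pos = np.maximum(regret, 0.0)
    total = pos.sum()
    return pos / total if total > 0 else np.full(len(regret), 1.0 / len(regret))

# Asymmetric starting regrets just so the dynamics are non-trivial.
regret = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
strategy_sum = [np.zeros(3), np.zeros(3)]
T = 100_000

for _ in range(T):
    s0 = strategy_from_regret(regret[0])
    s1 = strategy_from_regret(regret[1])
    strategy_sum[0] += s0
    strategy_sum[1] += s1
    u0 = A @ s1                  # expected payoff of each row action vs s1
    u1 = -(s0 @ A)               # expected payoff of each column action vs s0
    regret[0] += u0 - s0 @ u0    # regret = action value minus strategy value
    regret[1] += u1 - s1 @ u1

avg0 = strategy_sum[0] / T       # the AVERAGE strategy is what converges
avg1 = strategy_sum[1] / T
print(avg0, avg1)                # both approach the uniform equilibrium
```

In two-player zero-sum games the average strategies of regret-matching self-play converge to a Nash equilibrium (here the uniform 1/3-1/3-1/3 mix), while the current strategies may cycle, which is why the average, not the last iterate, is reported.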
Juergen Schmidhuber: Godel Machines, Meta-Learning, and LSTMs | Lex Fridman Podcast #11
the following is a conversation with Juergen Schmidhuber he's the co-director of the IDSIA lab and the co-creator of long short-term memory networks LSTMs are used in billions of devices today for speech recognition translation and much more over 30 years he has proposed a lot of interesting out-of-the-box ideas on meta learning adversarial networks computer vision and even a formal theory of quote creativity curiosity and fun this conversation is part of the MIT course on artificial general intelligence and the artificial intelligence podcast if you enjoy it subscribe on YouTube iTunes or simply connect with me on Twitter at Lex Fridman spelled F-R-I-D-M-A-N and now here's my conversation with Juergen Schmidhuber early on you dreamed of AI systems that self-improve recursively when was that dream born when I was a baby no it's not true I mean when I was a teenager and what was the catalyst for that birth what was the thing that first inspired you when I was a boy I was thinking about what to do in my life and then I thought the most exciting thing is to solve the riddles of the universe and that means you have to become a physicist however then I realized that there's something even grander you can try to build a machine that isn't really a machine any longer that learns to become a much better physicist than I could ever hope to be and that's how I thought maybe I can multiply my tiny little bit of creativity into infinity but ultimately that creativity will be multiplied to understand the universe around us that's the curiosity for that mystery that drove you yes so if you can build a machine that learns to solve more and more complex problems and more and more general problems then you basically have solved all the problems at least all the solvable problems so what does the mechanism for that kind of general solver look like obviously we don't quite yet have one or know how to build one but you have ideas and you have had throughout
your career several ideas about it so how do you think about that mechanism so in the 80s I thought about how to build this machine that learns to solve all these problems that I cannot solve myself and I thought it is clear it has to be a machine that not only learns to solve this problem here and that problem there but it also has to learn to improve the learning algorithm itself so it has to have the learning algorithm in a representation that allows it to inspect it and modify it such that it can come up with a better learning algorithm so I call that meta-learning learning to learn and recursive self-improvement that is really the pinnacle of that where you then not only learn how to improve on that problem and on that one but you also improve the way the machine improves and you also improve the way it improves the way it improves itself and that was my 1987 diploma thesis which was all about that hierarchy of meta-learners that have no computational limits except for the well-known limits that Gödel identified in 1931 and for the limits of physics in recent years meta-learning has gained popularity in a specific kind of form you've talked about how that's not really meta-learning with neural networks that's more basic transfer learning can you talk about the difference between the big general meta-learning and the more narrow sense of meta-learning the way it's used today the way it's talked about today let's take the example of a deep neural network that has learned to classify images and maybe you have trained that network on 100 different databases of images and now a new database comes along and you want to quickly learn the new thing as well so one simple way of doing that is you take the network which already knows 100 types of databases and then you just take the top layer of that and you retrain that using the new labeled data that you have in the new image database and then it turns out that it really really quickly can learn that too one-shot basically
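the top-layer retraining just described can be sketched in a few lines — this is a toy illustration of the idea, not any real pretrained system: a frozen random projection stands in for the lower layers learned on the first 100 databases, and only a new logistic-regression head is trained on the labels of the new task.

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in for pretrained lower layers: a frozen random projection
# (in practice this would be the network's learned features)
W_frozen = rng.normal(size=(2, 16))

def features(x):
    return np.tanh(x @ W_frozen)          # frozen, never updated

# the "new image database": a small labeled toy dataset
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# retrain ONLY the new top layer (logistic regression) on the new task
w, b, lr = np.zeros(16), 0.0, 0.5
for _ in range(300):
    p = 1 / (1 + np.exp(-(features(X) @ w + b)))
    g = p - y                              # gradient of the log-loss
    w -= lr * features(X).T @ g / len(X)
    b -= lr * g.mean()

acc = ((features(X) @ w + b > 0) == y.astype(bool)).mean()
print(f"top-layer-only accuracy on the new task: {acc:.2f}")
```

because the lower layers are reused as-is, only 17 parameters are trained here, which is why this kind of adaptation can be so fast.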
because from the first 100 data sets it already has learned so much about computer vision that it can reuse that and that is then almost good enough to solve the new task except you need a little bit of adjustment on the top so that is transfer learning and it has been done in principle for many decades people have done similar things for decades meta-learning true meta-learning is about having the learning algorithm itself open to introspection by the system that is using it and also open to modification such that the learning system has an opportunity to modify any part of the learning algorithm and then evaluate the consequences of that modification and then learn from that to create a better learning algorithm and so on recursively so that's a very different animal where you are opening the space of possible learning algorithms to the learning system itself right so in this 2004 paper you described Gödel machines programs that rewrite themselves yeah right philosophically and even in your paper mathematically these are really compelling ideas but practically do you see these self-referential programs being successful in the near term and having an impact where it sort of demonstrates to the world that this direction is a good one to pursue in the near term yes we have these two different types of fundamental research how to build a universal problem solver one basically exploiting proof search and things like that that you need to come up with asymptotically optimal theoretically optimal self-improvers and problem solvers however one has to admit that through this proof search comes in an additive constant an overhead an additive overhead that vanishes in comparison to what you have to do to solve large problems however for many of the small problems that we want to solve in our everyday life we cannot ignore this constant overhead and that's why we also have been doing other things non-universal things such as
recurrent neural networks which are trained by gradient descent and local search techniques which aren't universal at all which aren't provably optimal at all like the other stuff that we did but which are much more practical as long as we only want to solve the small problems that we are typically trying to solve in this environment here yes so the universal problem solvers like the Gödel machine but also Marcus Hutter's fastest way of solving all possible problems which he developed around 2002 in my lab they are associated with these constant overheads for proof search which guarantee that the thing that you're doing is optimal for example there is this fastest way of solving all problems with a computable solution which is due to Marcus Hutter and to explain what's going on there let's take traveling salesman problems with traveling salesman problems you have a number of cities n cities and you try to find the shortest path through all these cities without visiting any city twice and nobody knows the fastest way of solving traveling salesman problems TSPs but let's assume there is a method of solving them within n to the fifth operations where n is the number of cities then the universal method of Marcus is going to solve the same traveling salesman problem also within n to the fifth steps plus o of one plus a constant number of steps that you need for the proof search which you need to show that this particular class of problems the traveling salesman problems can be solved within a certain time bound within order n to the fifth steps basically and this additive constant doesn't depend on n which means as n is getting larger and larger as you have more and more cities the constant overhead pales in comparison and that means that almost all large problems are solved in the best possible way already today we already have a universal problem solver like that however it's not practical because the overhead the constant overhead is so large that for the
small kinds of problems that we want to solve in this little biosphere by the way when you say small you're talking about things that fall within the constraints of our computational systems so they can seem quite large to us mere humans right that's right yeah so they seem large and even unsolvable in a practical sense today but they are still small compared to almost all problems because almost all problems are large problems which are much larger than any constant do you find it useful as a person who has dreamed of creating a general learning system has worked on creating one and has had a lot of interesting ideas there to think about P versus NP this formalization of how hard problems are how they scale this kind of worst-case analysis type of thinking do you find that useful or is it just a set of mathematical techniques to give you intuition about what's good and bad so P versus NP that's super interesting from a theoretical point of view and in fact as you are thinking about that problem you can also get inspiration for better practical problem solvers on the other hand we have to admit that at the moment the best practical problem solvers for all kinds of problems that we are now solving through what is called AI at the moment they are not of the kind that is inspired by these questions there we are using general-purpose computers such as recurrent neural networks but we have a search technique which is just local search gradient descent to try to find a program that is running on these recurrent networks such that it can solve some interesting problems such as speech recognition machine translation and something like that and there is very little theory behind the best solutions that we have at the moment that can do that do you think that needs to change do you think that will change or can we create a general intelligence system without ever really proving that that system is intelligent in some kind of
mathematical way solving machine translation perfectly or something like that within some kind of syntactic definition of a language or can we just be super impressed by the thing working extremely well and that's sufficient there's an old saying and I don't know who brought it up first which says there's nothing more practical than a good theory and a good theory of problem solving under limited resources like here in this universe or on this little planet has to take into account these limited resources and so probably what is lacking at the moment is a theory which is related to what we already have these asymptotically optimal problem solvers which tells us what we need in addition to that to come up with a practically optimal problem solver so I believe we will have something like that and maybe just a few little tiny twists are necessary to change what we already have to come up with that as well as long as we don't have that we admit that we are taking suboptimal ways and we are using recurrent neural networks such as the long short-term memory equipped with local search techniques and we are happy that it works better than any competing method but that doesn't mean that we think we are done you've said that an AGI system will ultimately be a simple one a general intelligence system will ultimately be a simple one maybe a pseudocode of a few lines to be able to describe it can you talk through your intuition behind this idea why you feel that at its core intelligence is a simple algorithm experience tells us that the stuff that works best is really simple so the asymptotically optimal ways of solving problems if you look at them they are just a few lines of code it's really true although they have these amazing properties just a few lines of code then the most promising and most useful practical things maybe don't have this proof of optimality associated with them however they are also just a few lines of code the most successful recurrent neural networks you can write them
down in five lines of pseudocode that's a beautiful almost poetic idea but what you're describing there is that the lines of pseudocode are sitting on top of layers and layers of abstractions in a sense so you're saying at the very top it'll be a beautifully written sort of algorithm but do you think that there's many layers of abstractions we have to first learn to construct yeah of course we are building on all these great abstractions that people have invented over the millennia such as matrix multiplications and real numbers and basic arithmetic and calculus and derivatives of error functions and stuff like that so without that language that greatly simplifies our thinking about these problems we couldn't do anything so in that sense as always we are standing on the shoulders of the giants who in the past simplified the problem of problem solving so much that now we have a chance to do the final step and the final step will be a simple one if you take a step back through all of human civilization and just the universe in general how do you think about evolution and what if creating a universe is required to achieve this final step what if going through the very painful and inefficient process of evolution is needed to come up with this set of abstractions that ultimately lead to intelligence do you think there's a shortcut or do you think we have to create something like our universe in order to create something like human-level intelligence so far the only example we have is this one this universe and we live in it maybe but we are part of this whole process right so apparently so it might be that the key is that the code that runs the universe is really really simple everything points to that possibility because gravity and other basic forces are really simple laws that can be easily described also in just a few lines of code basically and then there are these other events the apparently
random events in the history of the universe which as far as we know at the moment don't have a compact code but who knows maybe somebody in the near future is going to figure out the pseudo-random generator which is computing whether the measurement of that spin up or down thing here is going to be positive or negative underlying quantum mechanics yes so you ultimately think quantum mechanics is a pseudo-random number generator that it's all deterministic there's no randomness in our universe does God play dice so a couple of years ago a famous quantum physicist Anton Zeilinger he wrote an essay in nature and it started more or less like this one of the fundamental insights of the 20th century was that the universe is fundamentally random on the quantum level and that whenever you measure spin up or down or something like that a new bit of information enters the history of the universe and while I was reading that I was already typing the response and they had to publish it because I was right that there is no evidence no physical evidence for that so there is an alternative explanation where everything that we consider random is actually pseudo-random such as the decimal expansion of pi pi is interesting because every sequence of three digits appears roughly one in a thousand times and every five-digit sequence appears roughly one in ten thousand times which is what you would really expect if it was truly random but there's a very short algorithm a short program that computes all of that so it's extremely compressible and who knows maybe tomorrow some grad student at CERN goes back over all these data points of beta decay and whatever and figures out oh it's the second billion digits of pi or something like that we don't have any fundamental reason at the moment to believe that this is truly random and not just a deterministic video game if it was a deterministic video game it would be much more beautiful because beauty is
simplicity and many of the basic laws of the universe like gravity and the other basic forces are very simple so very short programs can explain what these are doing and it would be awful and ugly the universe would be ugly the history of the universe would be ugly if for the extra things the seemingly random data points that we get all the time we really needed a huge number of extra bits to describe all these extra bits of information so as long as we don't have evidence that there is no short program that computes the entire history of the entire universe we as scientists are compelled to look further for that short program your intuition says there exists a short program that can backtrack to the creation of the universe the shortest path to the creation yes including all the entanglement things and all the spin up and down measurements that have been taking place since 13.8 billion years ago so yeah we don't have a proof that it is random and we don't have a proof that it is compressible to a short program but as long as we don't have that proof we are obliged as scientists to keep looking for that simple explanation absolutely so you said simplicity is beautiful or beauty is simple either one works but you also work on curiosity discovery you know the romantic notion of randomness of serendipity of being surprised by things around you we kind of in our poetic notion of reality think as humans that we require randomness so you don't find randomness beautiful you find simple determinism beautiful yeah okay so why because the explanation becomes shorter a universe that is compressible to a short program is much more elegant and much more beautiful than another one which needs an almost infinite number of bits to be described as far as we know many things that are happening in this universe are really simple in terms of short programs that compute gravity and the interaction between elementary
particles and so on so all of that seems to be very very simple every electron seems to reuse the same subprogram all the time as it is interacting with other elementary particles if we now require an extra oracle injecting new bits of information all the time for these extra things which are currently not understood such as beta decay then the whole description length of the data that we can observe of the history of the universe would become much longer and therefore uglier again simplicity is elegant and beautiful all the history of science is a history of compression progress yes so you've described as we build up abstractions you've talked about the idea of compression how do you see the history of science the history of humanity our civilization and life on earth as some kind of path towards greater and greater compression what do you mean by that how do you think of that indeed the history of science is a history of compression progress what does that mean hundreds of years ago there was an astronomer whose name was Kepler and he looked at the data points that he got by watching planets move and then he had all these data points and suddenly it turned out that he can greatly compress the data by predicting it through an ellipse law so it turns out that all these data points are more or less on ellipses around the sun and another guy came along whose name was Newton and before him Hooke and they said the same thing that is making these planets move like that is what makes the apples fall down and it also holds for stones and for all kinds of other objects and suddenly many of these observations became much more compressible because as long as you can predict the next thing given what you have seen so far you can compress it you don't have to store that data extra this is called predictive coding and then there was still something wrong with that theory of the universe and you had deviations from these
predictions of the theory and 300 years later another guy came along whose name was Einstein and he was able to explain away all these deviations from the predictions of the old theory through a new theory which was called the general theory of relativity which at first glance looks a little bit more complicated and you have to warp space and time but you can phrase it within one single sentence which is no matter how fast you accelerate and how hard you decelerate and no matter what is the gravity in your local framework light speed always looks the same and from that you can calculate all the consequences so it's a very simple thing and it allows you to further compress all the observations because suddenly there are hardly any deviations any longer that you can measure from the predictions of this new theory so all of science is a history of compression progress you never arrive immediately at the shortest explanation of the data but you're making progress whenever you are making progress you have an insight you say ah first I needed so many bits of information to describe the data to describe my falling apples my video of falling apples so many pixels have to be stored but then suddenly I realize no there is a very simple way of predicting the third frame in the video from the first two and maybe not every little detail can be predicted but more or less most of these blobs that are coming down they accelerate in the same way which means that I can greatly compress the video and the amount of compression progress that is the depth of the insight that you have at that moment that's the fun that you have the scientific fun the fun in that discovery and we can build artificial systems that do the same thing they measure the depth of their insights as they are looking at the data which is coming in through their own experiments and we give them a reward an intrinsic reward in proportion to this depth of
insight and since they are trying to maximize the rewards they get they are suddenly motivated to come up with new action sequences with new experiments that have the property that the data that is coming in as a consequence of these experiments has the property that they can learn something see a pattern in there which they hadn't seen yet before so there's an idea of PowerPlay you've described a way of training a general problem solver in this kind of way of looking for the unsolved problems yeah can you describe that idea a little further it's another very simple idea so normally what you do in computer science you have some guy who gives you a problem and then there is a huge search space of potential solution candidates and you somehow try them out and you have more or less sophisticated ways of moving around in that search space until you finally find a solution which you consider satisfactory that's what most of computer science is about PowerPlay just goes one little step further and says let's not only search for solutions to a given problem but let's search through pairs of problems and their solutions where the system itself has the opportunity to phrase its own problem so we are suddenly looking at pairs of problems and modifications of the problem solver that is supposed to generate a solution to that new problem and this additional degree of freedom allows us to build curious systems that are like scientists in the sense that they not only try to solve and find answers to existing questions they are also free to pose their own questions so if you want to build an artificial scientist you have to give it that freedom and PowerPlay is exactly doing that so that's a dimension of freedom that's important to have but how hard do you think that problem is how multi-dimensional and difficult is the space of coming up with your own questions it's one of the things that as human beings we consider to be
the thing that makes us special the intelligence that makes us special is that brilliant insight that can create something totally new yes so now let's look at the extreme case let's look at the set of all possible problems that you can formally describe which is infinite which should be the next problem that a scientist or PowerPlay is going to solve well it should be the easiest problem that goes beyond what you already know so it should be the simplest problem that the current problem solver that you have which can already solve a hundred problems cannot solve yet just by generalizing so it has to be new it has to require a modification of the problem solver such that the new problem solver can solve this new thing but the old problem solver cannot do it and in addition to that we have to make sure that the problem solver doesn't forget any of the previous solutions right and so by definition PowerPlay is always trying to search in the set of pairs of problems and problem solver modifications for a combination that minimizes the time to achieve these criteria so it is always trying to find the problem which is easiest to add to the repertoire so just like grad students and academics and researchers can spend their whole career in a local minimum stuck trying to come up with interesting questions but ultimately doing very little do you think it's easy in this approach of looking for the simplest unsolvable problem to get stuck in a local minimum never really discovering new you know really jumping outside of the hundred problems you've already solved in a genuinely creative way no because that's the nature of PowerPlay that it's always trying to break its current generalization abilities by coming up with a new problem which is beyond the current horizon just shifting the horizon of knowledge a little bit out there breaking the existing rules such that the new thing becomes solvable but wasn't solvable by the old thing
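the loop just described — always add the simplest problem just beyond the current repertoire while making sure nothing previously solved is forgotten — can be caricatured in a few lines; this is my own toy illustration of the PowerPlay-style criterion, not the actual system: problems are "count up to n", the solver is just a counting limit, and a modification raises that limit.

```python
def solves(limit, n):
    # a solver with capacity `limit` can solve task n iff n <= limit
    return n <= limit

def powerplay(steps):
    limit = 0          # initial solver solves nothing
    repertoire = []    # problems added so far
    for _ in range(steps):
        # search problems in order of simplicity for the first unsolved one
        n = 1
        while solves(limit, n):
            n += 1
        new_limit = n  # the minimal solver modification that solves task n
        # accept only if no previously solved task is forgotten
        if all(solves(new_limit, m) for m in repertoire):
            limit = new_limit
            repertoire.append(n)
    return repertoire

print(powerplay(5))  # each step adds the simplest task just out of reach
```

in this degenerate world the repertoire grows one step at a time, 1 2 3 4 5, which is exactly the "easiest problem beyond the current horizon" behavior, minus all the real difficulty of searching over actual solver modifications.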
like adding a new axiom like what Gödel did when he came up with these new theorems that didn't have a proof in the formal system which means you can add them to the repertoire hoping that they are not going to damage the consistency of the whole thing so in the paper with the amazing title formal theory of creativity fun and intrinsic motivation you talk about discovery as intrinsic reward so if you view humans as intelligent agents what do you think is the purpose and meaning of life for us humans you've talked about this discovery do you see humans as an instance of PowerPlay agents yeah so humans are curious and that means they behave like scientists not only the official scientists but even the babies behave like scientists and they play around with their toys to figure out how the world works and how it is responding to their actions and that's how they learn about gravity and everything and in 1990 we had the first systems like that which would just try to play around with the environment and come up with situations that go beyond what they knew at that time and then get a reward for creating these situations and then becoming more general problem solvers and being able to understand more of the world so I think in principle that curiosity strategy or more sophisticated versions of what I just described they are what we have built in as well because evolution discovered that's a good way of exploring the unknown world and a guy who explores the unknown world has a higher chance of solving problems that he needs to survive in this world on the other hand those guys who were too curious they were weeded out as well so you have to find the trade-off evolution found a certain trade-off apparently in our society there is a certain percentage of extremely explorative guys and it doesn't matter if they die because many of the others are more conservative and so yeah it would be surprising to me if that principle of
artificial curiosity wouldn't be present in almost exactly the same form here in our brains so you're a bit of a musician and an artist so continuing on this topic of creativity what do you think is the role of creativity in intelligence so you've kind of implied that it's essential for intelligence if you think of intelligence as a problem-solving system as an ability to solve problems but do you think it's essential this idea of creativity we never have a program a subprogram that is called creativity or something it's just a side effect of what our problem solvers do they are searching a space of problems or a space of solution candidates until they hopefully find a solution to a given problem but then there are these two types of creativity and both of them are now present in our machines the first one has been around for a long time which is human gives problem to machine machine tries to find a solution to that and this has been happening for many decades and for many decades machines have found creative solutions to interesting problems where humans were not aware of these particularly creative solutions but then appreciated that the machine found them the second one is the pure creativity what I just mentioned I would call the applied creativity like applied art where somebody tells you now make a nice picture of this pope and you will get money for that okay so here is the artist and he makes a convincing picture of the pope and the pope likes it and gives him the money and then there is the pure creativity which is more like the PowerPlay and the artificial curiosity thing where you have the freedom to select your own problem like a scientist who defines his own question to study and so that is the pure creativity if you will as opposed to the applied creativity which serves another and in that distinction there's almost echoes of narrow AI versus general AI so this kind of constrained painting of a pope seems
like the approaches of what people are calling narrow AI and pure creativity seems to be maybe I'm just biased as a human but it seems to be an essential element of human-level intelligence is that what you're implying to a degree if you zoom back a little bit and you just look at a general problem-solving machine which is trying to solve arbitrary problems then this machine will figure out in the course of solving problems that it's good to be curious so all of what I said just now about this prewired curiosity and this will to invent new problems that the system doesn't know how to solve yet should be just a byproduct of the general search however apparently evolution has built it into us because it turned out to be so successful a pre-wired bias a very successful exploratory bias that we are born with and you've also said that consciousness in the same kind of way may be a byproduct of problem solving you know do you find it an interesting byproduct do you think it's a useful byproduct what are your thoughts on consciousness in general or is it simply a byproduct of greater and greater capabilities of problem solving that's similar to creativity in that sense yeah we never have a procedure called consciousness in our machines however we get as side effects of what these machines are doing things that seem to be closely related to what people call consciousness so for example in 1990 we had simple systems which were basically recurrent networks and therefore universal computers trying to map incoming data into actions that lead to success maximizing reward in a given environment always finding the charging station in time whenever the battery is low and negative signals are coming from the battery without bumping against painful obstacles on the way so complicated things but very easily motivated and then we give this agent a separate recurrent neural network which is just predicting
what's happening if I do this and that what will happen as a consequence of these actions that I'm executing and it is just trained on the long history of interactions with the world so it becomes a predictive model of the world basically and therefore also a compressor of the observations of the world because whatever you can predict you don't have to store extra so compression is a side effect of prediction and how does this recurrent network compress well it's inventing little subprograms little subnetworks that stand for everything that frequently appears in the environment like bottles and microphones and faces maybe lots of faces in my environment so I'm learning to create something like a prototype face and a new face comes along and all I have to encode are the deviations from the prototype so it's compressing all the time the stuff that frequently appears there's one thing that appears all the time that is present all the time when the agent is interacting with its environment which is the agent itself so just for data compression reasons it is extremely natural for this recurrent network to come up with little subnetworks that stand for the properties of the agent the hands you know the other actuators and all the stuff that you need to better encode the data which is influenced by the actions of the agent so just as a side effect of data compression during problem solving you have internal self models now you can use this model of the world to plan your future and that's what we have been doing since 1990 so the recurrent network which is the controller which is trying to maximize reward can use this model of the world this predictive model of the world to plan ahead and say let's not do this action sequence let's do this action sequence instead because it leads to more predicted reward and whenever it's waking up these little subnetworks that stand for itself it's thinking about
itself and it's exploring mentally the consequences of its own actions and now you tell me what is still missing the gap to consciousness yeah there isn't that's a really beautiful idea that you know if life is a collection of data and life is a process of compressing that data to act efficiently then in that data you yourself appear very often so it's useful to form compressions of yourself and it's a really beautiful formulation of what consciousness is a necessary side effect it's actually quite compelling to me you've developed LSTMs long short-term memory networks they're a type of recurrent neural networks that have gotten a lot of success recently so these are networks that model the temporal aspects in the data temporal patterns in the data and you've called them the deepest of the neural networks right so what do you think is the value of depth in the models that we use to learn since you mentioned the long short-term memory and the LSTM I have to mention the names of the brilliant students of course first of all my first student ever Sepp Hochreiter who had fundamental insights already in his diploma thesis then Felix Gers who had additional important contributions Alex Graves is a guy from Scotland who is mostly responsible for this CTC algorithm which is now often used to train the LSTM to do speech recognition on all the Google Android phones and whatever and Siri and so on so without these guys I would be nothing it's a lot of incredible work now what is the importance of depth well most problems in the real world are deep in the sense that the current input doesn't tell you all you need to know about the environment so instead you have to have a memory of what happened in the past and often important parts of that memory are dated they are pretty old and so when you're doing speech recognition for example and
somebody says eleven, then that's about half a second or something like that, which means it's already fifty time steps. and another guy, or the same guy, says seven, so the ending is the same, "-even", but now the system has to see the distinction between seven and eleven, and the only way it can see the difference is it has to store that fifty steps ago there was the beginning of an eleven or of a seven. so there you have already a problem of depth fifty, because for each time step you have something like a virtual layer in the expanded, unrolled version of this recurrent network which is doing the speech recognition. so these long time lags translate into problem depth, and most problems in this world are such that you really have to look far back in time to understand what the problem is and to solve it. but just like with LSTMs, when you look back in time you don't necessarily need to remember every aspect, you just need to remember the important aspects. that's right, the network has to learn to put the important stuff into memory and to ignore the unimportant noise. so in that sense, is deeper and deeper better, or is there a limitation? I mean, the LSTM is one of the great examples of architectures that do something beyond just deeper and deeper networks: there's clever mechanisms for filtering data, for remembering and forgetting. so do you think that kind of thinking is necessary? if you think about the LSTM as a big leap forward over traditional vanilla RNNs, what do you think is the next leap within this context? so the LSTM is a very clever improvement, but LSTMs still don't have the same kind of ability to see as far back in the past as us humans do. the credit assignment problem across way back, not just fifty time steps or a hundred or a thousand, but millions and billions. it's not clear what are the practical limits of the LSTM when it comes to looking back. already in 2006, I think, we had examples where it not only looked back tens of thousands of
steps but really millions of steps, and Juan Pérez-Ortiz in my lab, I think, was the first author of a paper, 2006 or something, where we had examples where it learned to look back for more than 10 million steps. so for most problems of speech recognition it's not necessary to look that far back, but there are examples where it does. now, the looking back thing, that's rather easy, because there is only one past, but there are many possible futures, and so a reinforcement learning system which is trying to maximize its future expected reward, and doesn't know yet which of these many possible futures it should select given this one single past, is facing problems that the LSTM by itself cannot solve. so the LSTM is good for coming up with a compact representation of the history so far, of the history of observations and actions so far. but now, how do you plan in an efficient and good way among all these, how do you select one of these many possible action sequences that a reinforcement learning system has to consider to maximize reward in this unknown future? so again we have this basic setup where you have one recurrent network which gets in the video and the speech and whatever, and is executing actions, and is trying to maximize reward, so there is no teacher who tells it what to do at which point in time. and then there's the other network, which is just predicting what's going to happen if I do that and that, and that could be an LSTM network, and it learns to look back all the way to make better predictions of the next time step. so essentially, although it's predicting only the next time step, it is motivated to learn to put into memory something that happened maybe a million steps ago, because it's important to memorize that if you want to predict the next time step, the next event. how can a model of the world like that, a predictive model of the world, be used by the first guy, let's call it the controller and the model,
how can the model be used by the controller to efficiently select among these many possible futures? so the naive way we had about 30 years ago was, let's just use the model of the world as a stand-in, as a simulation of the world, and millisecond by millisecond we plan the future. that means we have to roll it out really in detail, and it will work only if the model is really good, and it will still be inefficient, because we have to look at all these possible futures, and there are so many of them. so instead, what we do now, since 2015, in our CM systems, controller-model systems, we give the controller the opportunity to learn by itself how to use the potentially relevant parts of M, the model network, to solve new problems more quickly. and if it wants to, it can learn to ignore M, and sometimes it's a good idea to ignore M, because it's really bad, it's a bad predictor in this particular situation of life where the controller is currently trying to maximize reward. however, it can also learn to address and exploit some of the subprograms that came about in the model network through compressing the data by predicting it. so it now has an opportunity to reuse that code, the algorithmic information in the model network, to reduce its own search space, such that it can solve a new problem more quickly than without the model. so you're ultimately optimistic and excited about the power of RL, of reinforcement learning, in the context of real systems? absolutely, yeah. so you see RL as potentially having a huge impact beyond just the sort of supervised learning methods. you see RL for problems of self-driving cars or any kind of applied robotics as the correct, interesting direction for research, in your view? I do think so. we have a company called NNAISENSE, which has applied reinforcement learning to little Audis, which learn to park without a teacher. the same principles were used, of course. so these little
Audis, they are small, maybe like that, so much smaller than the real Audis, but they have all the sensors that you find in the real Audis: you find the cameras, the lidar sensors. they go up to 120 kilometres an hour if they want to, and they have pain sensors, basically, and they don't want to bump against obstacles and other Audis, and so they must learn, like little babies, to park, to take the raw vision input and translate that into actions that lead to successful parking behavior, which is a rewarding thing. and yes, they learn that. so we have examples like that, and it's only the beginning, this is just the tip of the iceberg, and I believe the next wave of AI is going to be all about that. so at the moment, the current wave of AI is about passive pattern observation and prediction, and that's what you have on your smartphone, and what the major companies on the Pacific Rim are using to sell you ads, to do marketing. that's the current sort of profit in AI, and that's only one or two percent of the world economy, which is big enough to make these companies pretty much the most valuable companies in the world, but there's a much, much bigger fraction of the economy going to be affected by the next wave, which is really about machines that shape the data through their own actions. and you think simulation is ultimately the biggest way that those methods will be successful in the next 10, 20 years? we're not talking about a hundred years from now, we're talking about sort of the near-term impact of RL. do you think really good simulation is required, or are there other techniques, like imitation learning, you know, observing other humans operating in the real world? where do you think the success will come from? so at the moment we have a tendency of using physics simulations to learn behavior for machines that learn to solve problems that humans also do not know how to solve. however, this is not the future, because the future is about
what little babies do; they don't use a physics engine to simulate the world, no, they learn a predictive model of the world, which maybe sometimes is wrong in many ways, but captures all kinds of important, abstract, high-level predictions which are really important to be successful. and that's what was the future thirty years ago, when we started that type of research, but it's still the future, and now we know much better how to go there, to move forward, and to really make working systems based on that, where you have a learning model of the world, a model of the world that learns to predict what's going to happen if I do that and that, and then the controller uses that model to more quickly learn successful action sequences. and then, of course, there's always this crazy thing: in the beginning the model is stupid, so the controller should be motivated to come up with experiments, with action sequences, that lead to data that improve the model. do you think improving the model, constructing an understanding of the world in this connection... now the popular approaches have been successful, you know, grounded in ideas of neural networks, but in the 80s, with expert systems, there were symbolic AI approaches, which to us humans are more intuitive, in the sense that it makes sense that you build up knowledge in this knowledge representation. what kind of lessons can we draw into our current approaches from expert systems, from symbolic AI? yeah, so I became aware of all of that in the 80s, and back then logic programming was a huge thing. was it inspiring to you, did you find it compelling? because a lot of your work was not so much in that realm, it was more in learning systems. yes and no, but we did all of that. so my first publication ever, actually, in 1987, was the implementation of a genetic algorithm, of a genetic programming system, in Prolog. Prolog, that's what you learned back then, which is a logic programming language, and the Japanese then
announced this huge fifth-generation AI project, which was mostly about logic programming back then, although neural networks existed and were well known back then, and deep learning has existed since 1965, since this guy in the Ukraine, Ivakhnenko, started it. but the Japanese, and many other people, really focused on this logic programming, and I was influenced to the extent that I said, okay, let's take these biologically inspired rules, like evolutionary programs, and implement that in the language which I knew, which was Prolog, for example, back then. and in many ways that came back later, because the Gödel machine, for example, has a proof searcher on board, and without that it would not be optimal. also, Marcus Hutter's universal algorithm for solving all well-defined problems has a proof searcher on board, so that's very much logic programming; without that it would not be asymptotically optimal. but then, on the other hand, because we are very pragmatic guys also, we focused on recurrent neural networks and suboptimal stuff such as gradient-based search in program space, rather than provably optimal things. so logic programming certainly has a usefulness when you're trying to construct something provably optimal, or provably good, or something like that, but is it useful for practical problems? it's really useful for theorem proving. the best theorem provers today are not neural networks, right? no, they are logic programming systems, and they are much better theorem provers than most math students in the first or second semester. but for reasoning, for playing games of go or chess, or for robots, autonomous vehicles that operate in the real world, or object manipulation, you think learning... yeah, as long as the problems have little to do with theorem proving themselves, as long as that is not the case, you just want to have better pattern recognition. so to build a self-driving car, you want to have better pattern recognition, and pedestrian
recognition and all these things, and you want to minimize the number of false positives, which currently is slowing down self-driving cars in many ways, and all of that has very little to do with logic programming. yeah. what are you most excited about, in terms of directions of artificial intelligence, at this moment, in the next few years, in your own research and in the broader community? so I think in the not-so-distant future we will have, for the first time, little robots that learn like kids, and I will be able to say to the robot, look here, robot, we are going to assemble a smartphone. let's take this slab of plastic and the screwdriver, and let's screw in the screw like that. no, not like that, like so. not like that, like that. and I don't have a data glove or something; he will see me and he will hear me, and he will try to do something with his own actuators, which will be really different from mine, but he will understand the difference, and he will learn to imitate me, but not in the supervised way where a teacher is giving target signals for all his muscles all the time, no, by doing this high-level imitation, where he first has to learn to imitate me, and then to interpret these additional noises coming from my mouth as helpful signals to do that. and then he will by himself come up with faster ways and more efficient ways of doing the same thing, and finally I stop his learning algorithm and make a million copies and sell it. and so, at the moment, this is not possible, but we already see how we are going to get there, and you can imagine, to the extent that this works economically and cheaply, it's going to change everything. almost all of production is going to be affected by that, and a much bigger AI wave is coming than the one that we are currently witnessing, which is mostly about passive pattern recognition on your smartphone. this is about active machines that shape the data through the actions they are executing, and they
learn to do that in a good way. so many of the traditional industries are going to be affected by that. all the companies that are building machines will equip these machines with cameras and other sensors, and they are going to learn to solve all kinds of problems through interaction with humans, but also a lot on their own, to improve what they already can do. and lots of old economy is going to be affected by that, and in recent years I have seen that the old economy is actually waking up and realizing these implications. and are you optimistic about that future, are you concerned? there's a lot of people concerned in the near term about the transformation of the nature of work; the kind of ideas that you just suggested would have a significant impact on what kind of things could be automated. are you optimistic about that future, are you nervous about that future? and, looking a little bit farther into the future, there's people like Elon Musk and Stuart Russell concerned about the existential threats of that future. so in the near term, job loss, and in the long term, existential threat: are these concerning to you, or are you ultimately optimistic? so let's first address the near future. we have had predictions of job losses for many decades. for example, when industrial robots came along, many people predicted that lots of jobs are going to get lost, and in a sense they were right, because back then there were car factories, and hundreds of people in these factories assembled cars, and today the same car factories have hundreds of robots, and maybe three guys watching the robots. on the other hand, those countries that have lots of robots per capita, Japan, Korea, Germany, Switzerland, a couple of other countries, they have really low unemployment rates. somehow, all kinds of new jobs were created. back then, nobody anticipated those jobs, and decades ago I already said it's really easy to say which jobs are going to get lost, but it's really hard to predict the new ones. 30 years ago, who would have
predicted all these people making money as YouTube bloggers? 200 years ago, 60% of all people used to work in agriculture; today maybe 1%, but still only, I don't know, 5% unemployment. lots of new jobs were created, and homo ludens, the playing man, is inventing new jobs all the time. most of these jobs are not existentially necessary for the survival of our species; there are only very few existentially necessary jobs, such as farming and building houses and warming up the houses, but less than 10% of the population is doing that. and most of these newly invented jobs are about interacting with other people in new ways, through new media and so on, getting new types of kudos and forms of likes and whatever, and even making money through that. so homo ludens, the playing man, doesn't want to be unemployed, and that's why he is inventing new jobs all the time, and he keeps considering these jobs as really important, and is investing a lot of energy and hours of work into those new jobs. that's quite beautifully put. so we're really nervous about the future, because we can't predict what kind of new jobs will be created, but you're ultimately optimistic that we humans are so restless that we create and give meaning to newer and newer jobs, things that get likes on Facebook, or whatever the social platform is. so what about the long-term existential threat of AI, where our whole civilization may be swallowed up by these ultra-superintelligent systems? maybe it's not going to be swallowed up, but I'd be surprised if we humans were the last step in the evolution of the universe. you've actually had this beautiful comment somewhere that I've seen, quite insightful, saying that artificial general intelligence systems, just like us humans, will likely not want to interact with humans, they'll just interact amongst themselves, just like ants interact amongst themselves and only tangentially interact with humans. and it's quite an interesting idea that
once we create AGI, they will lose interest in humans and compete for their own Facebook likes on their own social platforms. so within that quite elegant idea, how do we know, in a hypothetical sense, that there's not already intelligent systems out there? how do you think broadly of general intelligence greater than us? how do we know it's out there, how would we know it's around us, and could it already be? I'd be surprised if within the next few decades, or something like that, we won't have AIs that are truly smart in every single way and better problem solvers in almost every single important way, and I'd be surprised if they wouldn't realize what we have realized a long time ago, which is that almost all physical resources are not here in this biosphere, but further out. the rest of the solar system gets 2 billion times more solar energy than our little planet. there's lots of material out there that you can use to build robots, and self-replicating robot factories, and all this stuff, and they are going to do that. and they will be scientists, and curious, and they will explore what they can do, and in the beginning they will be fascinated by life, and by their own origins, and our civilization. they will want to understand that completely, just like people today would like to understand how life works, and also the history of our own existence and civilization, and also the physical laws that created all of that. so in the beginning they will be fascinated by life, but once they understand it, they lose interest, like anybody who loses interest in things he understands. and then, as you said, the most interesting sources of information for them will be others of their own kind. so, at least in the long run, there seems to be some sort of protection through lack of interest on the other side. and now it seems also clear, as far as we understand physics, you need matter and energy to compute and to build more robots and infrastructure and more AI civilizations, and AI
ecologies consisting of trillions of different types of AIs. and so it seems inconceivable to me that this thing is not going to expand, some AI ecology, not controlled by one AI, but by trillions of different types of AIs competing in all kinds of quickly evolving and disappearing ecological niches, in ways that we cannot fathom at the moment. but it's going to expand, limited by light speed and physics, it's going to expand. and now we realize that the universe is still young; it's only 13.8 billion years old, and it's going to be a thousand times older than that. so there's plenty of time to conquer the entire universe and to fill it with intelligence, and senders and receivers, such that AIs can travel the way they are traveling in our labs today, which is by radio, from sender to receiver. and let's call the current age of the universe one eon. it will take just a few eons from now, and the entire visible universe is going to be full of that stuff. and let's look ahead to a time when the universe is going to be one thousand times older than it is now: they will look back and they will say, look, almost immediately after the big bang, only a few eons later, the entire universe started to become intelligent. now, to your question: how do we see whether anything like that has already happened, or is already in a more advanced stage, in some other part of the visible universe? we are trying to look out there, and nothing like that has happened so far. or has it? do you think we'll recognize it? how do we know it's not among us? how do we know planets aren't in themselves intelligent beings? how do we know ants, seen as a collective, are not a much greater intelligence than our own? these kinds of ideas. when I was a boy, I was thinking about these things, and I thought, hmm, maybe it has already happened, because back then I knew, I learned from popular physics books, that the large-scale structure of the universe is not homogeneous, and you
have these clusters of galaxies, and then in between there are these huge empty spaces, and I thought, hmm, maybe they aren't really empty, it's just that in the middle of that, some AI civilization already has expanded and then has covered a bubble of a billion light-years diameter, and is using all the energy of all the stars within that bubble for its own unfathomable purposes. so it has already happened, and we just failed to interpret the signs. but then I learned that gravity by itself explains the large-scale structure of the universe, and that this is not a convincing explanation. and then I thought, maybe it's the dark matter, because as far as we know today, 80% of the measurable matter is invisible, and we know that because otherwise our galaxy, or other galaxies, would fall apart, they are rotating too quickly. and then the idea was, maybe all of these are AI civilizations already out there, and they're just invisible, because they are really efficient in using the energies of their own local systems. but this is also not a convincing explanation, because then the question becomes, why are there still any visible stars left in our own galaxy, which also must have a lot of dark matter? so that is also not a convincing thing, and today I like to think it's quite plausible that maybe we are the first, at least in our local light cone, within a few hundreds of millions of light years that we can reliably observe. is that exciting to you, that we might be the first? and it would make us much more important, because if we mess it up, through a nuclear war, then maybe this will have an effect on the development of the entire universe. so let's not mess it up. let's not mess it up. Jürgen, thank you so much for talking today, I really appreciate it. it's my pleasure.
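the LSTM gating idea discussed in this conversation, a forget gate near one that lets the network store something like the onset of "eleven" versus "seven" across fifty time steps, can be sketched in a few lines. this is a minimal scalar cell with hand-picked, untrained gate activations chosen purely for illustration, not Schmidhuber's actual trained architecture:

```python
import math

# minimal scalar "LSTM cell" with hand-picked (untrained, illustrative)
# gate activations, showing how gating lets a cue survive 50 noise steps

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(c, x, cue):
    f = sigmoid(6.0)                   # forget gate ~0.998: keep old memory
    i = sigmoid(6.0 if cue else -6.0)  # input gate: write only on the cue
    g = math.tanh(3.0 * x)             # candidate value to store
    return f * c + i * g               # standard LSTM cell-state update

c = lstm_step(0.0, 1.0, cue=True)      # the cue ("eleven" vs "seven") arrives
for _ in range(50):
    c = lstm_step(c, 0.3, cue=False)   # 50 steps of irrelevant input

print(c > 0.5)                         # the cue is still readable 50 steps later
```

because the forget gate stays close to one and the input gate stays closed on the noise steps, the cell state decays only slightly per step, which is exactly the mechanism that turns the "depth fifty" credit assignment problem into something learnable.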
Pieter Abbeel: Deep Reinforcement Learning | Lex Fridman Podcast #10
the following is a conversation with Pieter Abbeel. he's a professor at UC Berkeley and the director of the Berkeley robot learning lab. he's one of the top researchers in the world working on how we make robots understand and interact with the world around them, especially using imitation and deep reinforcement learning. this conversation is part of the MIT course on artificial general intelligence and the artificial intelligence podcast. if you enjoy it, please subscribe on YouTube, iTunes, or your podcast provider of choice, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D. and now, here's my conversation with Pieter Abbeel. you've mentioned that if there was one person you could meet, it would be Roger Federer. so let me ask, when do you think we will have a robot that fully autonomously can beat Roger Federer at tennis, a Roger Federer-level player at tennis? well, first, if you can make it happen for me to meet Roger, let me know. in terms of getting a robot to beat him at tennis, it's kind of an interesting question, because for a lot of the challenges we think about in AI, the software is really the missing piece, but for something like this, the hardware is nowhere near either. to really have a robot that can physically run around, the Boston Dynamics robots are starting to get there, but still not really human-level ability to run around and then swing a racket. that's a hardware problem? I don't think it's a hardware problem only; I think it's a hardware and a software problem, and I think they'll have independent progress. so I'd say the hardware, maybe in 10-15 years. on clay, not grass? I was thinking about the sliding, yeah. I'm not sure what's harder, grass or clay; the clay involves sliding, which might be harder to master, actually. but you're not limited to bipedal robots; I mean, I'm sure you could build a machine. it's a whole different question, of course. if you can say, okay, this robot can be on wheels, they can
move around on wheels and can be designed differently, then I think that can be done sooner, probably, than a full humanoid type of setup. what do you think about swinging a racket? you've worked at basic manipulation. how hard do you think is the task of swinging a racket, to be able to hit a nice backhand or a forehand? okay, let's say we just set up, stationary, a nice robot arm, let's say a standard industrial arm, and it can watch the ball come and then swing the racket. it's a good question. I'm not sure it would be super hard to do. I mean, if we do it with reinforcement learning, it would require a lot of trial and error; it's not going to swing it right the first time around, but I don't see why it couldn't swing it the right way. I think it's learnable. I think if you set up a ball machine, let's say, on one side, and then a robot with a tennis racket on the other side, I think it's learnable, maybe with a little bit of pre-training in simulation. I think that's feasible. I think the swinging of the racket is feasible. it'd be very interesting to see how much precision it can get. I mean, that's where, I mean, some of the human players can hit it on the lines, which is very high precision. with spin. the spin is interesting, whether RL can learn to put a spin on the ball. well, you got me interested, maybe someday we'll set this up. so your answer is basically, okay, for this problem it sounds fascinating, but for the general problem of a tennis player we might be a little bit farther away. what's the most impressive thing you've seen a robot do in the physical world? so physically, for me, it's the Boston Dynamics videos; they always just hit home, and I'm just super impressed. recently, the robot running up the stairs, doing the parkour-type thing. I mean, yes, we don't know what's underneath, they don't really share a lot of detail, but even if it's hard-coded underneath, which it might or might not be, just the
physical abilities of doing that parkour, that's very impressive. have you met Spot Mini, or any of those robots, in person? I met Spot Mini last spring, in April, at the Mars event that Jeff Bezos organizes. they brought it out there, and it was nicely following around Jeff; when Jeff left the room, they had it follow him along, which was pretty impressive. so I think there's some confidence in knowing that there's no learning going on in those robots, in the psychology of it. so while knowing that there's no learning going on, or if there's any learning going on it's very limited, I met Spot Mini earlier this year, and knowing everything that's going on, having a one-on-one interaction, so I got to spend some time alone with it, there's immediately a deep connection on the psychological level, even though you know the fundamentals of how it works. there's something magical. so do you think about the psychology of interacting with robots in the physical world? even, you just showed me the PR2 robot, and there was a little bit something like a face, a head, a little bit something like a face; there's something that immediately draws you to it. do you think about that aspect of the robotics problem? well, with BRETT here, we gave him a name, Berkeley Robot for the Elimination of Tedious Tasks, it's very hard to not think of the robot as a person, and it seems like everybody calls him a he, for whatever reason, but that also makes it more of a person than if it was an it. and it seems pretty natural to think of it that way. this past weekend it really struck me. I've seen Pepper many times in videos, but then I was at an event, this was organized by Fidelity, and they had scripted Pepper to help moderate some sessions, and they'd scripted Pepper to have the personality of a child, a little bit, and it was very hard to not think of it as its own person in some sense, because it would just jump into the conversation, making it very interactive. the moderator would be
talking, and Pepper would just jump in: hold on, how about me, can I participate in this? just like a person would, and it was 100% scripted, and even then it was hard not to have that sense of, somehow, there is something there. so as we have robots interact in this physical world, is that a signal that can be used in reinforcement learning? you've worked a little bit in this direction, but do you think that psychology can be somehow pulled in? so that's a question, I would say, a lot of people ask, and I think part of why they ask it is they're thinking about how unique are we really. people still ask, after they see some results, they see a computer play Go, they see a computer do this or that, they're like, okay, but can it really have emotion, can it really interact with us in that way? and then, once you're around robots, you already start feeling it. and I think, maybe methodologically, the way that I think of it is, if you run something like reinforcement learning, it's about optimizing some objective, and there's no reason that the objective couldn't be tied into how much a person likes interacting with the system. and why could the reinforcement learning system not optimize for the robot being fun to be around, and why wouldn't it then naturally become more and more interactive, and more and more, maybe, like a person, or like a pet? I don't know what it would exactly be, but more and more have those features, and acquire them automatically. as long as you can formalize an objective of what it means to like something. how do you exhibit, what's the ground truth, how do you get the reward from the human? because you have to somehow collect that information from the human. but you're saying, if you can formulate it as an objective, it can be learned? there is no reason it couldn't emerge through learning. and maybe one way to formulate it as an objective, you wouldn't have to necessarily score it explicitly. so standard rewards are numbers, and numbers are
hard to come by: is this a 1.5 or a 0.7 on some scale? it's very hard to do for a person. but much easier is for a person to say, okay, what you did the last five minutes was much nicer than what you did the previous five minutes, and that now gives a comparison. and in fact there have been some results on that. for example, Paul Christiano and collaborators at OpenAI had the hopper, the MuJoCo hopper, a one-legged robot, learn to do backflips purely from feedback: I like this better than that, that's kind of equally good, and after a bunch of interactions it figured out what it was the person was asking for, namely a backflip. and so, the robot wasn't trying to do a backflip, it was just getting a comparison score from the person, and the person had in their own mind, I want it to do a backflip, but the robot didn't know what it was supposed to be doing. it just knew that sometimes the person said this is better, this is worse, and then the robot figured out that what the person was actually after was a backflip. and I'd imagine the same would be true for things like more interactive robots, that the robot would figure out over time, oh, this kind of thing apparently is appreciated more than this other kind of thing. so when I first picked up Sutton's, Richard Sutton's, reinforcement learning book, before sort of this deep learning, before the re-emergence of neural networks as a powerful mechanism for machine learning, RL seemed to me like magic. it was beautiful. so that seemed like what intelligence is: RL, reinforcement learning. so how do you think we can possibly learn anything about the world when the reward for the actions is delayed, is so sparse? why do you think RL works, why do you think you can learn anything under such sparse rewards, whether it's regular reinforcement learning or deep reinforcement learning? what's your intuition? so kind of part of that is, why does RL need so many samples, so many
experiences to learn from. Because really what's happening is, when you have a sparse reward, you do something, maybe you take a hundred actions, and then you get a reward, and maybe you get a score of three, and you're like, okay, three, not sure what that means. You go again and now you get a two, and now you know that the sequence of a hundred actions you did the second time around was somehow worse than the sequence of a hundred actions you did the first time around, but it's tough to know which ones of those were better or worse; some might have been good and bad in either one. And so that's why you need so many experiences. But once you have enough experiences, effectively RL is teasing that apart, starting to say, okay, what is consistently there when you get a higher reward, and what's consistently there when you get a lower reward? And then kind of the magic of, let's say, the policy gradient update is to say, now let's update the neural network to make the actions that were present when things were good more likely, and make the actions that were present when things were not as good less likely. So that is the counterpoint, but it seems like you would need to run it a lot more than you do. Even though right now people would say that RL is very inefficient, it seems to be way more efficient than one would imagine on paper. That the simple updates to the policy, the policy gradient, can somehow learn, exactly as you said, what are the common actions that seem to produce good results, that that can learn anything, seems counterintuitive, at least. Is there some intuition behind it? Yeah, so I think there are a few ways to think about this. The way I think about it, mostly originally, when we started working on deep reinforcement learning here at Berkeley, which was maybe 2011, '12, '13, around that time, John Schulman was a PhD student initially kind of driving it forward here, and the
way we thought about it at the time was: if you think about rectified linear units, kind of ReLU-type neural networks, what do you get? You get something that's piecewise linear feedback control. And if you look at the literature, linear feedback control is extremely successful, can solve many, many problems surprisingly well. I remember, for example, when we did helicopter flight: if you're in a stationary flight regime, like hover, you can use linear feedback control to stabilize the helicopter, a very complex dynamical system, but the controller is relatively simple. And so I think that's a big part of it: if you do feedback control, even though the system you control can be very, very complex, often relatively simple control architectures can already do a lot. But then also just linear is not good enough, and so one way you can think of these neural networks is that in some sense they tile the space, which people were already trying to do more by hand or with finite state machines, saying this linear controller here, this linear controller there. The neural network learns to tile the space itself: a linear controller here, another linear controller there. But it's more subtle than that. Yeah, and so it's benefiting from this linear control aspect, it's benefiting from the tiling, but it's somehow tiling it one dimension at a time. Because if, let's say, you have a two-layer network, then in the hidden layer you make a transition from active to inactive, or the other way around, and that's essentially one axis, not an axis-aligned line, but one direction that you change. And so you have this kind of very gradual tiling of the space, with a lot of sharing between the linear controllers that tile the space. And that was always my intuition as to why to expect that this might work pretty well. It's essentially leveraging the fact that linear feedback control is so good, but of course not enough, and this is a gradual tiling of the space with linear feedback controls that
share a lot of expertise across them. That's a really nice intuition. Do you think that scales to the more and more general problems, when you start going up in the number of dimensions, when you start going down in terms of how often you get a clean reward signal? Does that intuition carry forward to those crazier, weirder worlds that we think of as the real world? So I think where things get really tricky in the real world, compared to the things we've looked at so far with great success in reinforcement learning, is the time scales, which take this to an extreme. When you think about the real world, I mean, I don't know, maybe some student decided to do a PhD here, right? Okay, that's a decision, a very high-level decision. But if you think about their lives, any person's life, it's a sequence of muscle fiber contractions and relaxations. That's how you interact with the world, and that's a very high-frequency control thing, but it's ultimately what you do and how you affect the world, until, I guess, we have brain readouts and you can maybe do it slightly differently, but typically that's how you affect the world. And the decision of doing a PhD is so abstract relative to what you're actually doing in the world, and I think that's where credit assignment becomes just completely beyond what any current RL algorithm can do, and we need hierarchical reasoning at a level that is just not available at all yet. Where do you think we can pick up hierarchical reasoning, by which mechanisms? Yeah, so maybe let me highlight what I think the limitations are of what was already done 20, 30 years ago. Back then you'd find reasoning systems that reason over relatively long horizons, but the problem is that they were not grounded in the real world. People would have to hand-design some kind of logical, dynamical description of the world, and that didn't tie into perception, didn't tie to real objects, and so forth. So that was a big gap. Now
with deep learning we start having the ability to really see with sensors, process that, and understand what's in the world, and so it's a good time to try to bring these things together. I see a few ways of getting there. One way to get there would be to say deep learning can get bolted on somehow to some of these more traditional approaches. Now, bolted on would probably mean you need to do some kind of end-to-end training, where you say my deep learning processing somehow leads to a representation that then plugs into some kind of traditional underlying dynamical system that can be used for planning. That's, for example, the direction Aviv Tamar and Thanard Kurutach here have been pushing with Causal InfoGAN, and of course other people too. That's one way: can we somehow force it into a form factor that is amenable to reasoning? Another direction we'd been thinking about for a long time, and didn't make much progress on, was more information-theoretic approaches. The idea there was that what it means to take a high-level action is to choose a latent variable that tells you a lot about what's going to be the case in the future, because that's what it means to take a high-level action. Let's say I decide I'm going to navigate to the gas station because I need to get gas for my car; well, it'll take five minutes to get there, but the fact that I get there, you could already tell from the high-level action I took much earlier. We had a very hard time getting success with that. I'm not saying it's a dead end necessarily, but we had a lot of trouble getting it to work. And then we started revisiting the notion of what we're really trying to achieve. What we're trying to achieve is not necessarily hierarchy per se, but you could think about what hierarchy gives us. What we hope it would give us is better credit assignment, and what better credit assignment gives us is faster learning, right? And so faster learning is
ultimately maybe what we're after, and so that's how we ended up with the RL² paper on learning to reinforcement learn, which at the time Rocky Duan led, and that's exactly the meta-learning approach, where you say, okay, we don't know how to design hierarchy, we know what we want to get from it, let's just end-to-end optimize for what we want to get from it and see if it might emerge. And we saw things emerge: in maze navigation there was consistent motion down hallways, which is what you want; a hierarchical controller should say, I want to go down this hallway, and then when there is an option to take a turn, I can decide whether to take the turn or not, and repeat. It even had the notion of where you had been before or not, to not revisit places you'd been before. It still didn't scale yet to the real-world kind of scenarios I think you had in mind, but it was some sign of life that maybe you can meta-learn these hierarchical concepts. It seems like these meta-learning concepts get at what I think is one of the hardest and most important problems of AI, which is transfer learning, generalization. How far along this journey towards building general systems are we, being able to do transfer learning well? There are some signs that you can generalize a little bit, but do you think we're on the right path, or are totally different breakthroughs needed to be able to transfer knowledge between different learned models? Yeah, I'm pretty torn on this. I think there are just some very impressive results already, right? I mean, even with the initial big breakthrough in 2012 with AlexNet, the initial thing is, okay, great, this does better on ImageNet, hence image recognition. But then immediately thereafter there was of course the notion that, wow, what was learned on ImageNet, if you now want to solve a new task, you can fine-tune AlexNet for new tasks, and that was often found to be the even bigger deal: that you
learned something that was reusable, which was not often the case before. Usually in machine learning you learned something for one scenario and that was it. And that's really exciting, I mean, that's just a huge application. That's probably the biggest success of transfer learning today in terms of scope and impact. That was a huge breakthrough. And then recently I feel like something similar, but by scaling things up, it seems like this has been expanded upon, like people training even bigger networks, they might transfer even better. If you look, for example, at some of the OpenAI results on language models, and some of the recent Google results on language models, they are learned for just prediction and then they get reused for other tasks, and so I think there is something there, where somehow, if you train a big enough model on enough things, it seems to transfer. Some DeepMind results I thought were very impressive, the UNREAL results, where it learned to navigate mazes in ways where it wasn't just reinforcement learning; it had other objectives it was optimizing for. So I think there are a lot of interesting results already. I think maybe where it's hard to wrap my head around is to which extent, or when, do we call something generalization, what the levels of generalization involved in these different tasks are. So, just to frame things, I've heard you say somewhere it's the difference between learning to master versus learning to generalize. It's a nice line to think about, and I guess you're saying that it's a gray area, what learning to master and learning to generalize is. I think I might have heard that somewhere else. I think it might have been in one of your interviews, maybe the one with Yoshua Bengio, not a hundred percent sure. I like the example, I'm not sure who it was, but the example was essentially: if you use current deep learning techniques to predict, let's say, the
relative motion of our planets, it would do pretty well. But if a massive new mass enters our solar system, it wouldn't properly predict what will happen, right? And that's a different kind of generalization, a generalization that relies on the simplest explanation that we have available today to explain the motion of planets, whereas just pattern recognition could predict our current solar system's motion pretty well, no problem. And so I think that's an example of a kind of generalization that's a little different from what we've achieved so far, and it's not clear if just, you know, regularizing more, forcing it to come up with a simpler and simpler explanation, gets you there. But that's what physics researchers do, right? They say, can I make this even simpler, how simple can I get this, what's the simplest equation that can explain everything? Right, yeah, the master equation for the entire dynamics of the universe. We haven't really pushed that direction as hard in deep learning, I would say. I'm not sure if it should be pushed, but it seems like a kind of generalization you get from that that you don't get in our current methods so far. So I just talked to Vladimir Vapnik, for example, who is a statistician of statistical learning, and he kind of dreams of creating the E = mc² for learning, right, the general theory of learning. Do you think that's a fruitless pursuit in the near term, within the next several decades? I think that's a really interesting pursuit, in the following sense: there is a lot of evidence that the brain is pretty modular, and so I wouldn't maybe think of it as the theory, but more kind of the underlying principle. There have been findings where people who are blind will use the part of the brain usually used for vision for other functions, and even if people get rewired in some way, they might be able to reuse parts of their brain for other
functions. And so what that suggests is some kind of modularity, and I think it's a pretty natural thing to strive for, to see, can we find that modularity? Of course, not every part of the brain is exactly the same, not everything can be rewired arbitrarily, but if you think of things like the neocortex, which is a pretty big part of the brain, that seems fairly modular from the findings so far. Can you design something equally modular? And if you can just grow it, it becomes more capable, probably. I think that would be the kind of interesting underlying principle to shoot for that is not unrealistic. Do you prefer math or empirical trial and error for the discovery of the essence of what it means to do something intelligent? Reinforcement learning embodies both, right? You can prove that something converges, prove the bounds, and at the same time a lot of the successes are, well, let's try this and see if it works. So which do you gravitate towards? How do you think of those two parts of your brain? So maybe I would prefer we could make the progress with mathematics, and the reason is that often, if you have something you can mathematically formalize, you can leapfrog a lot of experimentation, and experimentation takes a long time to get through: a lot of trial and error, kind of a reinforcement-learning-like research process, where you need a lot of trial and error before you get to a success. If we can leapfrog that, in my mind, that's what the math is for. Hopefully, once you do a bunch of experiments, you start seeing a pattern, and you can do some derivations that leapfrog some experiments. But I agree with you, in practice a lot of the progress has been such that we have not been able to find the math that allows us to leapfrog ahead, and we are kind of making gradual progress, one step at a time, a new experiment here, a new experiment there, that gives us new insights, and gradually
building up, but not getting to something yet where we just say, okay, here's an equation that now explains what would have been two years of experimentation to get to; this tells us what the result is going to be. Unfortunately, not so much yet. Yes, not so much. But your hope is there. In trying to teach robots or systems to do everyday tasks, or even in simulation, what are you more excited about, imitation learning or self-play? So letting robots learn from humans, or letting robots learn on their own, trying to figure it out in their own way and eventually interact with humans, or solve whatever the problem is. What's more exciting to you, what's more promising, do you think, as a research direction? So when we look at self-play, what's so beautiful about it goes back to the challenges in reinforcement learning. The challenge of reinforcement learning is getting signal, and if you never succeed, you don't get any signal. In self-play, you're on both sides, so one of you succeeds, and the beauty is, also one of you fails, and so you see the contrast: you see the one version of you that did better than the other version. And so every time you play yourself, you get signal. So whenever you can turn something into self-play, you're in a beautiful situation where you can naturally learn much more quickly than in most other reinforcement learning environments. I think if somehow we can turn more reinforcement learning problems into self-play formulations, that would go really far. So far self-play has been largely around games, where there are natural opponents, but if we could do self-play for other things, let's say, I don't know, a robot learns to build a house, I mean, that's a pretty advanced thing to try to do for a robot, but maybe it tries to build a hut or something, if that could be done through self-play, it would learn a lot more quickly, if somebody can figure that out. And I think that would be something where it goes closer
to kind of the mathematical leapfrogging, where somebody figures out a formalism where it's, okay, take any RL problem, and by applying this and this idea you can turn it into a self-play problem where you get signal a lot more easily. The reality is, for many problems we don't know how to turn them into self-play, and so either we need to provide detailed reward, that doesn't just reward achieving a goal but rewards making progress, and that becomes time-consuming. And once you're starting to do that, let's say you want a robot to do something and you need to give all this detailed reward, well, why not just give a demonstration? Why not just show the robot? And now the question is how you show the robot. One way to show it is to teleoperate the robot, and then the robot really experiences things, and that's nice because that's really high signal-to-noise-ratio data, and we've done a lot of that. In just ten minutes you can teach a robot a new basic skill: okay, pick up the bottle, place it somewhere else; that's a skill, no matter where the bottle starts, maybe it always goes onto a target or something; that's fairly easy to teach a robot with teleoperation. Now, what's even more interesting is if you can teach a robot through third-person learning, where the robot watches you do something and doesn't experience it, but just watches it and says, okay, well, if you're showing me that, that means I should be doing this, and I'm not going to be using your hand, because I don't get to control your hand, but I'm going to use my hand; it does that mapping. And so that's where I think one of the big breakthroughs happened this past year, led by Chelsea Finn here. It's almost like machine translation for demonstrations, where you have a human demonstration and the robot learns to translate it into what it means for the robot to do it, and that was a meta-learning formulation: learn from one to get the other. And that, I think, opens up a lot of opportunities to learn a lot more quickly. So my
focus is on autonomous vehicles. Do you think autonomous driving is amenable to this kind of third-person-watching approach? So for autonomous driving, I would say third-person is slightly easier, and the reason I say slightly easier is because the car dynamics are very well understood. Easier than first-person, you mean? So I think the distinction between third-person and first-person is not a very important distinction for autonomous driving; they're very similar, because the distinction is really about who turns the steering wheel. Or maybe, let me put it differently: how to get from the point where you are now to a point, let's say, a couple of meters in front of you, is a problem that's very well understood, and that's the only distinction between third- and first-person there, whereas with robot manipulation, interaction forces are very complex, and it's still a very different thing. For autonomous driving, I think there is still the question of imitation versus RL. Imitation gives you a lot more signal. I think where imitation is lacking, and needs some extra machinery, is that in its normal format it doesn't think about goals or objectives, and of course there are versions of imitation learning, inverse-reinforcement-learning-type imitation, which also think about goals; I think then we're getting much closer. But I think it's very hard to think of a fully reactive car generalizing well if it really doesn't have a notion of objectives, generalizing to the kind of generality that you would want; you'd want more than just the reactivity you get from behavioral cloning, slash, supervised learning. So a lot of this work, whether it's self-play or imitation learning, would benefit significantly from simulation, from effective simulation, and you're doing a lot of stuff in the physical world and in simulation. Do you have hope for greater and greater power of simulation, it being boundless
eventually, to where most of what we need to operate in the physical world could be simulated to a degree that's directly transferable to the physical world? Or are we still very far away from that? So I think we could even rephrase that question, in some sense. Please. So, the power of simulation: simulators get better and better, and of course, as they become stronger, we can learn more in simulation. But there's also another version, where you say the simulator doesn't even have to be that precise, as long as it is somewhat representative. Instead of trying to get one simulator that is sufficiently precise to learn in and transfer really well to the real world, I'm going to build many simulators, an ensemble of simulators. Not any single one of them is sufficiently representative of the real world such that it would work if you train in it, but if you train in all of them, then there is something that's good in all of them, and the real world will just be another one: not identical to any one of them, just another sample from the distribution of simulators. Exactly. If we do live in a simulation, then this is just, like, one other one. I'm not sure about that, but it's definitely a very advanced simulator, if it is. Yeah, it's a pretty good one. I talked about this with Stuart Russell, and it's something you think about a little bit too. Of course, you're really trying to build these systems, but do you think about the future of AI? A lot of people are concerned about safety. How do you think about AI safety as you build robots that are operating in the physical world? How do you approach this problem, in an engineering kind of way, in a systematic way? So when a robot is doing things, you have a few notions of safety to worry about. One is that the robot is physically strong and of course could do a lot of damage, same for cars, which we can think of as robots in some way, and this could be completely
unintentional. So it could be not the kind of long-term AI safety concern of, okay, AI is smarter than us and now what do we do, but it could be just very practical: okay, if this robot makes a mistake, what are the results going to be? Of course, simulation comes in a lot there too, to test in simulation. It's a difficult question, and I'm always wondering, let's go back to driving, which a lot of people know well. What do we do to test somebody for driving, to get a driver's license? What do they really do? You fill out some test and then you drive. For me, in suburban California, the driving test was just, you drive around the block, pull over, you do a stop sign successfully, then you pull over again, and you're pretty much done. And you're like, okay, if a self-driving car did that, would you trust it to drive? You'd be like, no, that's not enough for me to trust it. But somehow, for humans, we've figured out that somebody being able to do that is representative of them being able to do a lot of other things. And so I think somehow, for humans, we've figured out representative tests of what it means, if you can do this, what you can really do. Of course, with testing humans, humans don't want to be tested at all times; self-driving cars, the robots, can be tested more often; you can have replicas that get tested and are known to be identical because they use the same neural net, and so forth. But still, I feel like we don't have the kind of unit tests or proper tests for robots, and I think there's something very interesting to be thought about there, especially as you update things. Your software improves, you have a better self-driving car software suite, you update it; how do you know it's indeed more capable on everything than what you had before, that no bad things crept in? So I think that's a very interesting direction of research where there is no real solution yet, except that somehow for humans we do it, because
we say, okay, you have a driving test, you passed, you can go on the road now, and humans have accidents roughly every million or ten million miles, something pretty phenomenal compared to that short test that is being done. So let me ask: you've mentioned that Andrew Ng, by example, showed you the value of kindness. Do you think the space of policies, good policies for humans and for AI, is populated by policies with kindness, or ones that are the opposite, exploitation, even evil? So if you just look at the sea of policies we operate under as human beings, or that an AI system would have to operate under in this real world, do you think it's really easy to find policies that are full of kindness, like we naturally fall into them, or is it a very hard optimization problem? I mean, there are kind of two optimizations happening for humans, right? For humans, there was the very long-term optimization, which evolution has done for us, and we're kind of predisposed to like certain things, and that's in some sense what makes our learning easier. I mean, we know things like pain and hunger and thirst, and the fact that we know about those is not something we were taught; that's kind of innate. When we're hungry, we're unhappy; when we're thirsty, we're unhappy; when we have pain, we're unhappy. Ultimately, evolution built that into us to care about those things. So I think there is a notion that humans somehow evolved in general to prefer to get along in some ways, but at the same time also to be very territorial and kind of centric to their own tribe. It seems like that's the kind of space we converged to. I mean, I'm not an expert in anthropology, but it seems like we're very good within our own tribe, but need to be taught to be nice to other tribes. Well, if you look at Steven Pinker, he highlights this pretty nicely in The Better Angels of Our Nature, where he talks about violence decreasing over time consistently. So whatever
tensions arise, whatever teams we pick, it seems that the long arc of history goes towards us getting along more and more. I hope so. So do you think it's possible to teach RL-based robots this kind of kindness, this kind of ability to interact with humans, this kind of policy? Even, let me ask a fun one: do you think it's possible to teach an RL-based robot to love a human being, and to inspire that human to love the robot back? So an RL-based algorithm that leads to a happy marriage? That's an interesting question. Maybe I'll answer it with another question, and then I'll come back to it. So another question you can ask is: how much happiness do some people get from interacting with just a really nice dog? I mean, dogs, you come home, that's what dogs do, they greet you, they're excited, and it makes you happy when you're coming home to your dog; you're just like, okay, this is exciting, they're always happy when I'm here. And if they don't greet you, because maybe your partner took them on a trip or something, you might not be nearly as happy when you get home, right? And so it seems like the level of reasoning a dog has is pretty sophisticated, but it's still not at the level of human reasoning. So it seems like we don't even need to achieve human-level reasoning to get very strong affection with humans, and so my thinking is, why not? Why couldn't we, with an AI, achieve the kind of level of affection that humans feel among each other, or with friendly animals, and so forth? Whether that's a good thing for us or not is another question, but I don't see why not. So they say love is the answer; maybe love is the objective function, and then RL is the answer. Maybe. Pieter, thank you so much, I don't want to take up more of your time. Thank you so much for talking today. Well, thanks for coming by, great to have you visit.
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9
the following is a conversation with Stuart Russell. He's a professor of computer science at UC Berkeley and a co-author of the book that introduced me and millions of other people to the amazing world of AI, called Artificial Intelligence: A Modern Approach. So it was an honor for me to have this conversation as part of the MIT course on artificial general intelligence and the Artificial Intelligence Podcast. If you enjoy it, please subscribe on YouTube, iTunes, or your podcast provider of choice, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D. And now, here's my conversation with Stuart Russell. You've mentioned that in 1975, in high school, you created one of your first AI programs, which played chess. Were you ever able to build a program that beat you at chess, or another board game? So my program never beat me at chess. I actually wrote the program at Imperial College, so I used to take the bus every Wednesday with a box of cards this big and shove them into the card reader, and they gave us eight seconds of CPU time. It took about five seconds to read the cards in and compile the code, so we had three seconds of CPU time, which was enough to make one move, you know, with a not very deep search, and then we would print that move out, and then we'd have to go to the back of the queue and wait to feed the cards in again. How deep was the search? I think we got to depth eight, you know, with alpha-beta, and we had some tricks of our own about move ordering and some pruning of the tree, and I was still able to beat that program. Yeah, I was a reasonable chess player in my youth. I did an Othello program and a backgammon program. So when I got to Berkeley, I worked a lot on what we call meta-reasoning, which really means reasoning about reasoning, and in the case of a game-playing program, you need to reason about what parts of the search tree you're actually going to explore, because the search tree is enormous, you know, bigger than the
number of atoms in the universe, and the way programs succeed and the way humans succeed is by only looking at a small fraction of the search tree. and if you look at the right fraction, you play really well; if you look at the wrong fraction, if you waste your time thinking about things that are never gonna happen, the moves that no one's ever gonna make, then you're gonna lose, because you won't be able to figure out the right decision. so that question of how machines can manage their own computation, that is, how they decide what to think about, is the meta-reasoning question. we developed some methods for doing that, and very simply, a machine should think about whatever thoughts are going to improve its decision quality. we were able to show that both for Othello, which is a standard two-player game, and for backgammon, which includes dice, so it's a two-player game with uncertainty—for both of those cases we could come up with algorithms that were actually much more efficient than the standard alpha-beta search, which chess programs at the time were using, and that those programs could beat me. and I think you can see the same basic ideas in AlphaGo and AlphaZero today: the way they explore the tree is using a form of meta-reasoning to select what to think about, based on how useful it is to think about it. is there any insight you can describe, without Greek symbols, of how we select which paths to go down? there's really two kinds of learning going on. so as you say, AlphaGo learns to evaluate board positions, so it can look at a go board and it actually has probably a superhuman ability to instantly tell how promising that situation is. to me, the amazing thing about AlphaGo is not that it can beat the world champion with its hand tied behind its back, but the fact that if you stop it from searching altogether—so you say, okay, you're not allowed to do any thinking ahead, you can just consider each of your legal moves and then look at the resulting situation and evaluate it.
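The depth-eight alpha-beta search Russell describes prunes branches of the game tree that cannot affect the final minimax decision. A minimal sketch (the toy tree and its leaf evaluations are invented for illustration, not from his program):

```python
# Minimal alpha-beta search over a toy game tree. Leaves are static
# evaluations; interior nodes alternate between the maximizing player
# and the minimizing opponent. Branches that provably cannot change
# the decision are cut off (the "pruning of the tree" mentioned above).

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, (int, float)):      # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # beta cutoff: opponent avoids this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:               # alpha cutoff
                break
        return value

# Classic textbook three-branch tree: the minimax value is 3.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree))  # 3
```

Good move ordering, which Russell says they had tricks for, matters because alpha-beta prunes the most when the best moves are searched first.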
so what we call a depth-one search, just the immediate outcome of your moves, and decide if that's good or bad—that version of AlphaGo can still play at a professional level. right, and human professionals are sitting there for five, ten minutes deciding what to do, and AlphaGo, in less than a second, instantly intuits what is the right move to make based on its ability to evaluate positions, and that is remarkable, because, you know, we don't have that level of intuition about go; we actually have to think about the situation. so anyway, that capability that AlphaGo has is one big part of why it beats humans. the other big part is that it's able to look ahead 40, 50, 60 moves into the future, and, you know, if it was considering all possibilities 40 or 50 or 60 moves into the future, that would be, you know, 10 to the 200 possibilities, so way, way more than, you know, atoms in the universe and so on. so it's very, very selective about what it looks at. so let me try to give you an intuition about how you decide what to think about. it's a combination of two things. one is how promising it is, right—so if you're already convinced that a move is terrible, there's no point spending a lot more time convincing yourself that it's terrible, because it's probably not gonna change your mind. so the real reason you think is because there's some possibility of changing your mind about what to do, right, and it's that changing of your mind that would result in a better final action in the real world. so that's the purpose of thinking: to improve the final action in the real world. and so if you think about a move that is guaranteed to be terrible, you can convince yourself it's terrible, and you're still not gonna change your mind, right. but on the other hand, suppose you had a choice between two moves. one of them you've already figured out is guaranteed to be a draw, let's say, and then the other one looks a little bit worse: it looks fairly likely that if you make that move
you're gonna lose, but there's still some uncertainty about the value of that move, there's still some possibility that it will turn out to be a win, right. then it's worth thinking about that. so even though it's less promising on average than the other move, which is guaranteed to be a draw, there's still some purpose in thinking about it, because there's a chance that you will change your mind and discover that in fact it's a better move. so it's a combination of how good the move appears to be and how much uncertainty there is about its value: the more uncertainty, the more it's worth thinking about, because there's a higher upside, if you want to think of it that way. and of course in the beginning, especially in the AlphaGo Zero formulation, everything is shrouded in uncertainty, so you're really swimming in a sea of uncertainty, so it benefits you to, I mean, actually follow the same process as you described, but because you're so uncertain about everything, you basically have to try a lot of different directions. yeah, so the early parts of the search tree are fairly bushy—it will look at a lot of different possibilities—but fairly quickly the degree of certainty about some of the moves... I mean, if a move is really terrible, you'll pretty quickly find out, right: you lose half your pieces or half your territory, and then you'll say, okay, this is not worth thinking about any more. and then, further down, the tree becomes very long and narrow, and you're following various lines of play, you know, 10, 20, 30, 40, 50 moves into the future, and, you know, that's again something that human beings have a very hard time doing, mainly because they just lack the short-term memory: you just can't remember a sequence of moves that's 50 moves long, and you can't imagine the board correctly for that many moves into the future. of course, the top players—I'm much more familiar with chess, but the top players probably have echoes of the same kind of intuition.
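The selection rule Russell describes—spend your thinking on the move that is both promising and uncertain enough that more thought might change your mind—can be sketched as picking the move with the best optimistic value, the same flavor of rule that UCT-style tree search uses. A toy illustration (the move names and numbers are invented, not from any real program):

```python
# Each candidate move carries a current value estimate (mean) and an
# uncertainty about that estimate (std). A fully resolved move -- like a
# guaranteed draw -- has zero uncertainty, so thinking about it further
# cannot change our mind. We therefore expand the move with the best
# optimistic value (mean + k * std) rather than simply the best mean.

def pick_move_to_think_about(moves, k=2.0):
    # moves: dict of name -> (mean_value, uncertainty)
    return max(moves, key=lambda m: moves[m][0] + k * moves[m][1])

moves = {
    "guaranteed_draw": (0.0, 0.0),   # fully resolved: no point thinking more
    "risky_attack":    (-0.2, 0.4),  # looks worse on average, but uncertain
    "known_loser":     (-0.9, 0.05), # terrible, and we're sure of it
}
print(pick_move_to_think_about(moves))  # risky_attack
```

Note that with `k=0` (ignore uncertainty) the rule would pick the guaranteed draw, which is exactly the mistake Russell is warning against: thinking about a move you cannot change your mind about.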
an instinct that in a moment's time AlphaGo applies when they see a board. I mean, they've seen those patterns, human beings have seen those patterns before, at the top, at the grandmaster level. it seems that there are some similarities—or maybe it's our imagination that creates a vision of those similarities—but it feels like this kind of pattern recognition that the AlphaGo approaches are using is similar to what human beings at the top level are using. I think there's some truth to that, but not entirely. yeah, I mean, I think the extent to which a human grandmaster can reliably, instantly recognize the right move, instantly recognize the value of a position, I think that's a little bit overrated. but if you sacrifice a queen, for example—I mean, there's these beautiful games of chess with Bobby Fischer or somebody, where it's seeming to make a bad move, and I'm not sure there's a perfect degree of calculation involved, where they've calculated all the possible things that happen, but there's an instinct there, right, that somehow adds up to the— yeah, so I think what happens is you get a sense that there's some possibility in the position, even if you make a weird-looking move, that it opens up some lines of calculation that otherwise would be definitely bad, and it's that intuition that there's something here in this position that might yield a win down the line. and then you follow that, right. and in some sense, when a chess player is following a line, in his or her mind they're mentally simulating what the other person is gonna do, what the opponent is gonna do, and they can do that as long as the moves are kind of forced, right, as long as there's—you know, there's what we call a forcing variation, where the opponent doesn't really have much choice how to respond, and then you see if you can force them into a situation where you win. you know, we see plenty of mistakes, even in grandmaster games, where they just
miss some simple three, four, five move combination that, you know, wasn't particularly apparent in the position but was still there. that's the thing that makes us human. yeah. so when you mentioned that in Othello, after some meta-reasoning improvements and research, the program was able to beat you—how did that make you feel? part of the meta-reasoning capability that it had was based on learning, and you could sit down the next day and you could just feel that it had gotten a lot smarter, you know, and all of a sudden you really felt like you were sort of pressed against the wall, because it was much more aggressive and was totally unforgiving of any minor mistake that you might make, and actually it seemed it understood the game better than I did. and, you know, Garry Kasparov has this quote where, during his match against Deep Blue, he said he suddenly felt that there was a new kind of intelligence across the board. do you think that's a scary or an exciting possibility for yourself, in the context of chess, purely sort of in that feeling, whatever that is? I think it's definitely an exciting feeling. you know, this is what made me work on AI in the first place: as soon as I really understood what a computer was, I wanted to make it smart. you know, I started out—the first program I wrote was for the Sinclair programmable calculator, and I think you could write a 21-step algorithm, that was the biggest program you could write, something like that, and do little arithmetic calculations. so I think I implemented Newton's method for square roots and a few other things like that. but then, you know, I thought, okay, if I just had more space, I could make this thing intelligent, and so I started thinking about AI. and I think the thing that's scary is not the chess program, because, you know, chess programs, they're not in the taking-over-the-world business. but if you extrapolate, you know, there are things about chess that
don't resemble the real world, right. we know the rules of chess, the chess board is completely visible to the program; of course, the real world is not—most of the real world is not visible from wherever you're sitting, so to speak. and to overcome those kinds of problems, you need qualitatively different algorithms. another thing about the real world is that, you know, we regularly plan ahead on timescales involving billions or trillions of steps. now, we don't plan that out in detail, but, you know, when you choose to do a PhD at Berkeley, that's a five-year commitment, and that amounts to about a trillion motor-control steps that you will eventually be committed to, including going up the stairs, opening doors, drinking water, typing—yeah, I mean, every finger movement while you're typing every character of every paper and the thesis and everything else. so you're not committing in advance to the specific motor-control steps, but you're still reasoning on a timescale that will eventually reduce to trillions of motor-control actions. and so, for all these reasons, you know, AlphaGo and Deep Blue and so on don't represent any kind of threat to humanity, but they are a step towards it, right, and progress in AI occurs by essentially removing, one by one, these assumptions that make problems easy, like the assumption of complete observability of the situation. right, when we remove that assumption, you need a much more complicated kind of computing design, and you need something that actually keeps track of all the things you can't see and tries to estimate what's going on, and there's inevitable uncertainty in that, so it becomes a much more complicated problem. but, you know, we are removing those assumptions: we are starting to have algorithms that can cope with much longer timescales, that can cope with uncertainty, that can cope with partial observability, and so each of those steps sort of magnifies by a thousand the range of things that we can do with AI systems. so the way
I started—I wanted to be a psychiatrist for a long time, to understand the mind, in high school, and of course programmed and so on. and then I showed up at the University of Illinois, to an AI lab, and they said, okay, we don't have time for you, but here's a book—AI: A Modern Approach, I think it was the first edition at the time—here, go learn this. and I remember the lay of the land was, well, it's incredible that we solved chess, but we'll never solve go. I mean, it was pretty certain that go, in the way we thought about systems that reason, was impossible to solve, and now we've solved it. so—yeah, I think I would have said that it's unlikely we could take the kind of algorithm that was used for chess and just get it to scale up and work well for go. and at the time, what we thought was that in order to solve go, we would have to do something similar to the way humans manage the complexity of go, which is to break it down into kind of sub-games. so when a human thinks about a go board, they think about different parts of the board as sort of weakly connected to each other, and they think about, okay, within this part of the board, here's how things could go, and in that part of the board, here's how things could go, and now you try to sort of couple those two analyses together and deal with the interactions, and maybe revise your views of how things are going to go in each part, and then you've got maybe five, six, seven, ten parts of the board. and that actually resembles the real world much more than chess does, because in the real world, you know, we have work, we have home life, we have sport, you know, whatever, different kinds of activities, you know, shopping—these are all connected to each other, but they're weakly connected. so when I'm typing a paper, you know, I don't simultaneously have to decide which order I'm gonna get, you know, the milk and the butter—that doesn't affect the typing—but I do need to realize, okay, better finish this before the shops close, because I don't have
any food at home, right. so there's some weak connection, but not in the way that chess works, where everything is tied into a single stream of thought. so the thought was that to solve go, we'd have to make progress on stuff that would be useful for the real world, and in a way AlphaGo is a little bit disappointing, right, because the program design for AlphaGo was actually not that different from Deep Blue, or even from Arthur Samuel's checkers-playing program from the 1950s. and in fact, the two things that make AlphaGo work: one is its amazing ability to evaluate the positions, and the other is the meta-reasoning capability, which allows it to explore some paths in the tree very deeply and to abandon other paths very quickly. so this word meta-reasoning, while technically correct, inspires perhaps the wrong degree of power that AlphaGo has—for example, the word reasoning is a powerful word. so let me ask you, sort of—you were part of the symbolic AI world for a while, like whatever the AI was, there's a lot of excellent, interesting ideas there that unfortunately met a winter. so do you think it re-emerges? well, I would say, yeah, it's not quite as simple as that. so the AI winter—the first winter that was actually named as such was the one in the late 80s, and that came about because in the mid 80s there was really a concerted attempt to push AI out into the real world using what was called expert-system technology, and for the most part, that technology was just not ready for prime time. they were trying, in many cases, to do a form of uncertain reasoning—you know, judgment, combinations of evidence, diagnosis, those kinds of things—which was simply invalid, and when you try to apply invalid reasoning methods to real problems, you can fudge it for small versions of the problem, but when it starts to get larger, the thing just falls apart. so many companies found that the stuff just didn't work, and they were
spending tons of money on consultants to try to make it work. and there were, you know, other practical reasons, like, you know, they were asking the companies to buy incredibly expensive Lisp machine workstations, which were literally between fifty and a hundred thousand dollars in, you know, 1980s money, which would be like between a hundred and fifty and three hundred thousand dollars per workstation in current prices. so the bottom line: they weren't seeing a profit from it. yeah, in many cases. I think there were some successes, there's no doubt about that, but people, I would say, over-invested. every major company was starting an AI department, just like now, and I worry a bit that we might see similar disappointments—not because the technology is invalid, but it's limited in its scope, and it's almost the dual of, you know, the scope problems that expert systems had. so what have you learned from that hype cycle, and what can we do to prevent another winter, for example? yeah, so when I'm giving talks these days, that's one of the warnings that I give—two warning slides. one is that, you know, rather than data being the new oil, data is the new snake oil. that's a good line. and then the other is that we might see a kind of very visible failure in some of the major application areas, and I think self-driving cars would be the flagship. and I think when you look at the history—so the first self-driving car was on the freeway, driving itself, changing lanes, overtaking, in 1987, and so it's more than 30 years, and that kind of looks like where we are today, right? you know, prototypes on the freeway, changing lanes and overtaking. now, I think significant progress has been made, particularly on the perception side. so we worked a lot on autonomous vehicles in the early-to-mid 90s at Berkeley, you know, and we had our own big demonstrations—you know, we put congressmen into self-driving cars and had them zooming along the freeway—and the problem was clearly
perception at the time. the problem was perception? yeah, so in simulation, with perfect perception, you could actually show that you can drive safely for a long time, even if the other cars are misbehaving and so on. but simultaneously we worked on machine vision for detecting cars and tracking pedestrians and so on, and we couldn't get the reliability of detection and tracking up to a high enough level, particularly in bad weather conditions—nighttime, rainfall. good enough for demos, but perhaps not good enough to cover the general case? yeah, the thing about driving is, you know, suppose you're a taxi driver, you know, and you drive every day, eight hours a day, for ten years, right—that's a hundred million seconds of driving, you know, and any one of those seconds you can make a fatal mistake. so you're talking about eight nines of reliability. right, now, if your vision system only detects ninety-eight point three percent of the vehicles, that's sort of, you know, one and a bit nines of reliability, so you have another seven orders of magnitude to go, and this is what people don't understand. they think, oh, because I had a successful demo, I'm pretty much done—but, you know, you're not even within seven orders of magnitude of being done, and that's the difficulty. and it's not the "can I follow a white line" that's the problem, right—we can follow a white line all the way across the country—but it's the weird stuff that happens. it's some of the edge cases, yeah. the edge cases, other drivers doing weird things. you know, so if you talk to Google—right, so they had actually a very classical architecture, where, you know, you had machine vision, which would detect all the other cars and pedestrians and the white lines and the road signs, and then basically that was fed into a logical database, and then you had a classical 1970s rule-based expert system telling you, okay, if you're in the middle lane and there's a bicyclist in the right lane who is signaling this, then do such-and-such.
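The reliability arithmetic in this passage can be checked directly: ten years of eight-hour days is about 10^8 seconds, so at most one fatal mistake per career means roughly "eight nines" of per-second reliability, while a 98.3% detection rate is only "one and a bit nines". A quick back-of-the-envelope (with these round numbers the gap comes out a bit over six orders of magnitude; Russell rounds it to seven):

```python
import math

# ~10 years of 8-hour driving days, in seconds
seconds = 10 * 365 * 8 * 3600
print(f"{seconds:.2e} seconds of driving")     # ~1.05e+08

# At most one fatal mistake over the whole career -> required per-second
# failure rate, versus the failure (miss) rate of a 98.3%-accurate detector.
required_failure_rate = 1 / seconds            # ~1e-8, i.e. "eight nines"
demo_failure_rate = 1 - 0.983                  # 1.7e-2, "one and a bit nines"

gap = math.log10(demo_failure_rate / required_failure_rate)
print(f"~{gap:.1f} orders of magnitude to go")
```

The exact number of "nines" depends on how you count hours and what counts as a fatal mistake, but the conclusion is insensitive to those choices: a successful demo leaves you many orders of magnitude short.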
yeah, right. and what they found was that every day they'd go out and there'd be another situation that the rules didn't cover. you know, so they come to a traffic circle, and there's a little girl riding a bicycle the wrong way around the traffic circle—okay, what do you do? we don't have a rule. oh my god, okay, stop. and then, you know, they come back and add more rules, and they just found that this was not really converging. and if you think about it, right, how do you deal with an unexpected situation—meaning one that you've never previously encountered, and the sort of reasoning required to figure out the solution for that situation has never been done, it doesn't match any previous situation in terms of the kind of reasoning you have to do? well, you know, in chess programs this happens all the time. you're constantly coming up with situations you haven't seen before, and you have to reason about them. you have to think about, okay, here are the possible things I could do, here are the outcomes, here's how desirable the outcomes are, and then pick the right one. you know, in the 90s we were saying, okay, this is how you're gonna have to do automated vehicles: they're gonna have to have a look-ahead capability. but the look-ahead for driving is more difficult than it is for chess, because of humans? right, there's humans, and they're less predictable than—well, then again, you have an opponent in chess who's also somewhat unpredictable. but, for example, in chess, you always know the opponent's intention: they're trying to beat you, right? whereas in driving, you don't know—is this guy trying to turn left, or has he just forgotten to turn off his turn signal, or is he drunk, or is he, you know, changing the channel on his radio, or whatever it might be? you've got to try and figure out the mental state, the intent, of the other drivers to forecast the possible evolutions of their trajectories, and then you've got to figure out, okay, which is the
trajectory for me that's going to be safest, and those all interact with each other, because the other driver's going to react to your trajectory, and so on. so, you know, they've got the classic merging-onto-the-freeway problem, where you're kind of racing a vehicle that's already on the freeway, and are you gonna pull ahead of them, or are you gonna let them go first and pull in behind, and you get this sort of uncertainty about who's going first. so all those kinds of things mean that you need a decision-making architecture that's very different from either a rule-based system or, it seems to me, a kind of end-to-end neural network system. you know, so just as AlphaGo is pretty good when it doesn't do any look-ahead but is way, way better when it does, I think the same is going to be true for driving: you can have a driving system that's pretty good when it doesn't do any look-ahead, but that's not good enough. you know, and we've already seen multiple deaths caused by poorly designed machine learning algorithms that don't really understand what they're doing. yeah, and on several levels. I think on the perception side there's mistakes being made by those algorithms where the perception is very shallow; on the planning side, the look-ahead, like you said. and the thing that we come up against, that's really interesting when you try to deploy systems in the real world, is you can't think of an artificial intelligence system as a thing that only responds to the world—you have to realize that it's an agent that others will respond to as well. so in order to drive successfully, you can't just try to do obstacle avoidance. you can't pretend that you're invisible. right, you're the invisible car—it doesn't work that way. I mean, you have to assert, the others have to be scared of you—there's this tension, there's this game. so there's a lot of work with pedestrians: if you approach pedestrians as purely an obstacle-avoidance problem, so you're doing look-ahead but not modeling the intent, they're going to take advantage of you, they're not going to respect you at all. there has to be a tension, a fear, some amount of uncertainty—that's what we create—or at least just a kind of resoluteness. right, so you have to display a certain amount of resoluteness, you can't be too tentative. and, yeah, so the solutions then become pretty complicated, right—you get into game-theoretic analyses. and so at Berkeley now we're working a lot on this kind of interaction between machines and humans. and that's exciting. yeah, and so my colleague Anca Dragan, actually—you know, if you formulate the problem game-theoretically and you just let the system figure out the solution, you know, it does interesting, unexpected things. like, sometimes at a stop sign, if no one is going first, right, the car will actually back up a little, just to indicate to the other cars that they should go. and that's something it invented entirely by itself. that's interesting. you know, we didn't say, this is the language of communication at stop signs—it figured it out. that's really interesting. so let me just step back for a second to this beautiful philosophical notion. so Pamela McCorduck in 1979 wrote, "AI began with the ancient wish to forge the gods." so when you think about the history of our civilization, do you think that there is an inherent desire to create—let's not say gods, but to create superintelligence? is it inherent to us, is it in our genes, that the natural arc of human civilization is to create things that are of greater and greater power, and perhaps echoes of ourselves—so to create the gods, as Pamela said? maybe—I mean, you know, we're all individuals, but certainly we see over and over again in history individuals who thought about this possibility. hopefully I'm not being too philosophical here, but if you look at the arc of this,
you know, where this is going—and we'll talk about AI safety, we'll talk about greater and greater intelligence—do you see that, when you created the Othello program and you felt this excitement, what was that excitement? was it the excitement of a tinkerer who created something cool, like a clock, or was there a magic, or was it more like a child being born? yeah, so, I mean, I certainly understand that viewpoint, and if you look at the Lighthill report—so in the 70s there was a lot of controversy in the UK about AI, you know, whether it was for real and how much money the government should invest, and it's a long story, but the government commissioned a report by Lighthill, who was a physicist, and he wrote a very damning report about AI, which I think was the point. and he said that these are, you know, frustrated men who, unable to have children, would like to create life, you know, as a kind of replacement—which I think is really pretty unfair. but there is a kind of magic, I would say, when you build something, and what you're building in is really just some understanding of the principles of learning and decision-making, and to see those principles actually then turn into intelligent behavior in specific situations—it's an incredible thing, and, you know, that is naturally going to make you think, okay, where does this end? and so there's magical, optimistic views of where this can go, and whatever your view of optimism is, whatever your view of utopia is, it's probably different for everybody. but you've often talked about concerns you have of how things might go wrong. so I've talked to Max Tegmark—there's a lot of interesting ways to think about AI safety. you're one of the seminal people thinking about this problem; while being in the weeds of actually solving specific AI problems, you also think about the big picture of
where we're going. so can you talk about several elements of it? let's just talk about maybe the control problem, so this idea of losing the ability to control the behavior of an AI system. so how do you see that coming about? what do you think we can do to manage it? well, so it doesn't take a genius to realize that if you make something that's smarter than you, you might have a problem. you know, Alan Turing wrote about this, gave lectures about this—in 1951 he gave a lecture on the radio—and he basically says, you know, once the machine thinking method starts, you know, very quickly they'll outstrip humanity, and, you know, if we're lucky—I think he says we may be able to turn off the power at strategic moments, but even so, our species would be humbled. yeah, and actually, I think he was wrong about that, right, because, you know, a sufficiently intelligent machine is not gonna let you switch it off—it's actually in competition with you. so what do you think he meant—just for a quick tangent—if we shut off this superintelligent machine, that our species would be humbled? I think he means that we would realize that we are inferior, right, that we only survive by the skin of our teeth, because we happened to get to the off switch just in time, you know, and if we hadn't, then we would have lost control over the earth. so are you more worried, when you think about this stuff, about superintelligent AI, or are you more worried about superpowerful AI that's not aligned with our values—so the paperclip scenarios, kind of? I think so—the main problem I'm working on is the control problem, the problem of machines pursuing objectives that are, as you say, not aligned with human objectives. and this has been the way we've thought about AI since the beginning: you build a machine for optimizing, and then you put in some objective, and it optimizes, right. and, you know, we can think of this as
the King Midas problem, right, because King Midas put in this objective, right—everything I touch turns to gold—and the gods, you know, that's like the machine, they said, okay, done, you know, you now have this power. and of course his food and his drink and his family all turned to gold, and then he dies in misery and starvation. and this is, you know, a warning—it's a failure mode that pretty much every culture in history has had some story along the same lines. you know, there's the genie that gives you three wishes, and, you know, the third wish is always, you know, please undo the first two wishes, because I messed up. and, you know, when Arthur Samuel wrote his checkers-playing program, which learned to play checkers considerably better than Arthur Samuel could play, and actually reached a pretty decent standard, Norbert Wiener, who was one of the major mathematicians of the 20th century, sort of a father of modern automation control systems, you know, he saw this and he basically extrapolated, as Turing did, and said, okay, this is how we could lose control—and specifically, that we have to be certain that the purpose we put into the machine is the purpose which we really desire, and the problem is we can't do that. right—you mean it's very difficult to encode? to put our values on paper is really difficult, or are you just saying it's impossible? so, theoretically it's possible, but in practice it's extremely unlikely that we could specify correctly, in advance, the full range of concerns of humanity. you talked about cultural transmission of values—I think that's how human-to-human transmission of values happens, right? right, what we learn—yeah, I mean, as we grow up, we learn about the values that matter, how things should go, what is reasonable to pursue and what isn't reasonable to pursue. machines can learn in the same kind of way? yeah, so I think that what we need to do is to get away from
this idea that you build an optimizing machine and you put the objective into it, because if it's possible that you might put in a wrong objective—and we already know this is possible, because it's happened lots of times, right—that means that the machine should never take an objective that's given as gospel truth. because once it takes the objective as gospel truth, then it believes that whatever actions it's taking in pursuit of that objective are the correct things to do. so you could be jumping up and down and saying, no, no, no, you're gonna destroy the world, but the machine knows what the true objective is and it's pursuing it, and tough luck to you, you know. and this is not restricted to AI, right—this is, you know, I think, many of the 20th-century technologies, right. so in statistics, you minimize a loss function—the loss function is exogenously specified; in control theory, you minimize a cost function; in operations research, you maximize a reward function; and so on. so in all these disciplines, this is how we conceive of the problem, and it's the wrong problem, because we cannot specify with certainty the correct objective. right—we need uncertainty, we need the machine to be uncertain about what objective it is that it's pursuing. it's my favorite idea of yours—I've heard you say somewhere, well, I shouldn't pick favorites, but it just sounds beautiful—we need to teach machines humility. yeah, it's a beautiful way to put it, I love it. that they're humble. yeah, they know that they don't know what it is they're supposed to be doing, and those objectives—I mean, they exist, they are within us, but we may not be able to explicate them; we may not even know, you know, how we want our future to go. exactly. and a machine that's uncertain is going to be deferential to us. so if we say, don't do that, well, now the machine's learned something a bit more about our true objectives, because something that it thought was reasonable in pursuit of our objectives turns out not to be.
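The "humble machine" idea—keep a belief over what the human's true objective is, and treat a human correction as evidence about it—can be sketched as a tiny Bayesian update. Everything here (the candidate objectives, priors, and likelihoods) is invented for illustration; this is not Russell's actual formulation, just its flavor:

```python
# A toy sketch: the robot doesn't know which of two candidate objectives
# the human truly has, so it maintains a belief over them. When the human
# vetoes a plan ("don't do that"), the robot treats the veto as evidence
# and updates, rather than treating its initial objective as gospel truth.

def bayes_update(belief, likelihood):
    # belief: dict objective -> prior probability
    # likelihood: dict objective -> P(observed human veto | objective)
    posterior = {o: belief[o] * likelihood[o] for o in belief}
    z = sum(posterior.values())
    return {o: p / z for o, p in posterior.items()}

# Hypothetical starting belief about what the human wants.
belief = {"wants_coffee_fast": 0.7, "wants_coffee_safely": 0.3}

# The robot plans to cut across traffic; the human vetoes the plan.
# A veto is very likely if the human cares about safety, unlikely otherwise.
veto_likelihood = {"wants_coffee_fast": 0.1, "wants_coffee_safely": 0.9}

belief = bayes_update(belief, veto_likelihood)
print(belief)                      # belief shifts toward "wants_coffee_safely"
best = max(belief, key=belief.get)
print(best)                        # wants_coffee_safely
```

The key property Russell describes falls out of the structure: because the machine's belief can be changed by human behavior, the human's choices carry information, and deferring to the human is the value-maximizing thing to do.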
pursuit of our objectives turns out not to be. So now it's learned something, and it's going to defer, because it wants to be doing what we really want. And that point, I think, is absolutely central to solving the control problem. And it's a different kind of AI. When you take away this idea that the objective is known, then in fact a lot of the theoretical frameworks that we're so familiar with, Markov decision processes, goal-based planning, standard games research, all of these techniques actually become inapplicable, and you get a more complicated problem. Because now the interaction with the human becomes part of the problem, because the human, by making choices, is giving you more information about the true objective, and that information helps you achieve the objective better. And so that really means that you're mostly dealing with game-theoretic problems, where you've got the machine and the human and they're coupled together, rather than a machine going off by itself with a fixed objective. Which is fascinating, on the machine and the human level, that when you don't have an objective, it means you're together coming up with an objective. I mean, there's a lot of philosophy; you could argue that life doesn't really have meaning, we together agree on what gives it meaning, and we kind of culturally create things that give, why the heck we are on this earth anyway, we together as a society create that meaning, and you have to learn that objective. And one of the biggest, I thought that's what you were going to go for a second, one of the biggest troubles we've run into, outside of statistics and machine learning and AI, in just human civilization, is when you look at, I was born in the Soviet Union, and the history of the 20th century: we ran into the most trouble, us humans, when there was certainty about the objective, and you do whatever it takes to achieve that objective, whether you're talking about Germany or
communist Russia. Yeah, and the trouble, I would say, with, you know, corporations. In fact, some people argue that we don't have to look forward to a time when AI systems take over the world; they already have, and they're called corporations. Corporations happen to be using people as components right now, but they are effectively algorithmic machines, and they're optimizing an objective, which is quarterly profit, that isn't aligned with the overall well-being of the human race, and they are destroying the world. They are primarily responsible for our inability to tackle climate change. So I think that's one way of thinking about what's going on with corporations. But I think the point you're making is valid, that there are many systems in the real world where we've sort of prematurely fixed on the objective and then decoupled the machine from those that it's supposed to be serving. And I think you see this with government. Government is supposed to be a machine that serves people, but instead it tends to be taken over by people who have their own objective and use government to optimize that objective, regardless of what people want. Do you find appealing the idea of almost arguing machines, where you have multiple AI systems with a clear fixed objective? We have in government the red team and the blue team that are very fixed on their objectives, and they argue, and maybe you would disagree, but it kind of seems to make it work somewhat, the duality of it. Okay, let's go a hundred years back, or to the founding of this country: there was disagreement, and that disagreement is where, so there's a balance between certainty and forced humility, because the power was distributed. Yeah, I think that the nature of debate and disagreement, of argument, takes as a premise the idea that you could be wrong, which means that you're not necessarily absolutely convinced that your objective is
the correct one. If you were absolutely certain, there would be no point in having any discussion or argument, because you would never change your mind, and there wouldn't be any sort of synthesis or anything like that. So I think you can think of argumentation as an implementation of a form of uncertain reasoning. And I've been reading recently about utilitarianism and the history of efforts to define, in a sort of clear mathematical way, I feel like, a formula for moral or political decision-making. And it's really interesting, the parallels between the philosophical discussions going back 200 years and what you see now in discussions about existential risk, because it's almost exactly the same. So someone would say, okay, well, here's a formula for how we should make decisions. So utilitarianism: each person has a utility function, and then we make decisions to maximize the sum of everybody's utility. And then people point out, well, in that case, the best policy is one that leads to an enormously vast population, all of whom are living a life that's barely worth living. And this is called the repugnant conclusion. And another version is that we should maximize pleasure, and that's what we mean by utility, and then you'll get people effectively saying, well, in that case, we might as well just have everyone hooked up to a heroin drip. And they didn't use those words, but that debate was happening in the 19th century, as it is now about AI: that if we get the formula wrong, we're going to have AI systems working towards an outcome that, in retrospect, would be exactly wrong. That's beautifully put, so the echoes are there. But do you think, I mean, if you look at Sam Harris, his worry is about the AI version of that, because of the speed at which the things going wrong in the utilitarian context could happen.
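The repugnant conclusion mentioned above falls straight out of the sum-of-utilities formula as simple arithmetic. Here is a minimal sketch; both worlds and all the population and utility numbers are invented purely for illustration:

```python
# Sum-of-utilities rule: prefer whichever world has the larger total utility.
# World A: 1 billion people, each with a rich life (utility 100 per person).
# World B: 1 trillion people, lives barely worth living (utility 0.2 each).
pop_a, u_a = 1_000_000_000, 100.0
pop_b, u_b = 1_000_000_000_000, 0.2

total_a = pop_a * u_a  # 1e11 total utility
total_b = pop_b * u_b  # about 2e11 total utility

# The formula prefers the enormously vast, barely-happy population.
print(total_b > total_a)  # True
```

The point of the toy numbers is that per-person quality of life can shrink toward zero while the aggregate still grows, which is exactly the failure mode the 19th-century critics identified.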
Is that a worry for you? Yeah, I think that, in most cases, not in all, if we have a wrong political idea, we see it starting to go wrong, and we're not completely stupid, so we say, okay, maybe that was a mistake, let's try something different. And also, we're very slow and inefficient about implementing these things, and so on. So you have to worry when you have corporations or political systems that are extremely efficient. When we look at AI systems, or even just computers in general, they have this different characteristic from ordinary human activity in the past. So let's say you were a surgeon, and you had some idea about how to do some operation, and let's say you were wrong, that that way of doing the operation would mostly kill the patient. Well, you'd find out pretty quickly, like after maybe three or four tries. But that isn't true for pharmaceutical companies, because they don't do three or four operations; they manufacture three or four billion pills and they sell them, and then they find out maybe six months or a year later that, oh, people are dying of heart attacks or getting cancer from this drug. And so that's why we have the FDA, right? Because of the scalability of pharmaceutical production. And there have been some unbelievably bad episodes in the history of pharmaceuticals, and adulteration of products, and so on, that have killed tens of thousands or paralyzed hundreds of thousands of people. Now, with computers we have that same scalability problem: you can sit there and type for i equals 1 to 5 billion do, and all of a sudden you're having an impact on a global scale. And yet we have no FDA; there are absolutely no controls at all over what a bunch of undergraduates with too much caffeine can do to the world. And we look at what happened with Facebook, well, social media in general, and click-through optimization. So
you have a simple feedback algorithm that's trying to just optimize click-through. That sounds reasonable, because you don't want to be feeding people ads that they don't care about, aren't interested in. And you might even think of that process as simply adjusting the feeding of ads, or news articles, or whatever it might be, to match people's preferences, which sounds like a good idea. But in fact, that isn't how the algorithm works. You make more money, the algorithm makes more money, if it can better predict what people are going to click on, because then it can feed them exactly that. So the way to maximize click-through is actually to modify the people, to make them more predictable. And one way to do that is to feed them information which will change their behavior and preferences towards extremes that make them predictable. Whatever is the nearest extreme, or the nearest predictable point, that's where you're going to end up; the machines will force you there. And I think there's a reasonable argument to say that this, among other things, is contributing to the destruction of democracy in the world. And where was the oversight of this process? Where were the people saying, okay, you would like to apply this algorithm to five billion people on the face of the earth, can you show me that it's safe, can you show me that it won't have various kinds of negative effects? No, there was no one asking that question. There was no one placed between the undergrads with too much caffeine and the human race. They just did it. This is way outside the scope of my knowledge, but economists would argue that, what is it, the invisible hand, so the capitalist system, is the oversight. So if you're going to corrupt society with whatever decision you make as a company, then that's going to be reflected in people not using your product. That's one model of oversight. We shall see, but, you know, in the meantime,
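The dynamic described here, a recommender rewarded for predictable clicks nudging its user toward the nearest extreme, can be sketched as a toy simulation. The model, the step size, and the starting opinions below are all invented for illustration; this is not any real platform's algorithm:

```python
# Toy model: a user's opinion lives in [-1, 1]. Clicks are easiest to
# predict when the opinion is extreme, so a greedy recommender serves
# content that pushes the opinion toward whichever extreme is nearer.
def serve_greedy(opinion, step=0.05, rounds=50):
    for _ in range(rounds):
        direction = 1.0 if opinion >= 0 else -1.0  # nearest extreme
        # Each served item shifts the opinion a little in that direction.
        opinion = max(-1.0, min(1.0, opinion + step * direction))
    return opinion

# Users starting slightly off-center end up pinned at an extreme.
print(serve_greedy(0.2))   # 1.0
print(serve_greedy(-0.1))  # -1.0
```

The recommender never needs to "intend" radicalization: maximizing predictability alone is enough to drag the simulated opinion to a pole, which is the loophole-finding behavior Russell warns about.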
you might even have broken the political system that enables capitalism to function. Well, you've changed it, so we should see. Yeah, change is often painful. So my question is, absolutely, it's fascinating, you're absolutely right that there is zero oversight on algorithms that can have a profound civilization-changing effect. So do you think it's possible, I mean, have you seen government, do you think it's possible to create regulatory bodies, oversight over AI algorithms, which are inherently such a cutting-edge set of ideas and technologies? Yeah, but I think it takes time to figure out what kind of oversight, what kinds of controls. I mean, it took time to design the FDA regime. Some people still don't like it and want to fix it, and I think there are clear ways that it could be improved. But the whole notion that you have stage one, stage two, stage three, and here are the criteria for what you have to do to pass a stage one trial, we haven't even thought about what those would be for algorithms. So I think there are things we could do right now with regard to bias, for example. We have a pretty good technical handle on how to detect algorithms that are propagating bias that exists in data sets, how to de-bias those algorithms, and even what it's going to cost you to do that. So I think we could start having some standards on that. I think there are things to do with impersonation and falsification that we could work on. So, on impersonation, in a very simple point: impersonation is a machine acting as if it were a person. I can't see a real justification for why we shouldn't insist that machines self-identify as machines. Where is the social benefit in fooling people into thinking that this is really a person when it isn't? I don't mind if it uses a human-like voice that's easy to understand, that's fine, but it should just say, I'm a machine, in some form, when people are speaking to it. That, I would think, is relatively obvious. I think, mostly, yeah. I mean, there is actually a law in California that bans impersonation, but only in certain restricted circumstances: for the purpose of engaging in a commercial transaction, and for the purpose of modifying someone's voting behavior. So those are the circumstances where machines have to self-identify, but arguably it should be in all circumstances. And then when you talk about deepfakes, we're just beginning, but already it's possible to make a movie of anybody saying anything in ways that are pretty hard to detect. Including yourself, because you're on camera now and your voice is coming through with high resolution. So you could take what I'm saying and replace it with pretty much anything else you wanted me to be saying, and even, it will change my lips and expressions to fit. And there's actually not much in the way of real legal protection against that. I think in the commercial area you could say, you're using my brand, and so on; there are rules about that. But in the political sphere, I think, at the moment, anything goes. So that could be really, really damaging. And let me just try to make, not an argument, but try to look back at history and say something dark, in essence, which is: while regulation, oversight, seems to be exactly the right thing to do here, it seems that human beings, what they naturally do, is wait for something to go wrong. If you're talking about nuclear weapons, you can't talk about nuclear weapons being dangerous until somebody actually, like the United States, drops the bomb, or Chernobyl melting down. Do you think we will have to wait for things going wrong, in a way that's obviously damaging to society, not an existential risk, but obviously damaging? Or do you have faith that... I hope not, but I mean, I think we do have to look at history. The two examples you gave,
nuclear weapons and nuclear power, are very, very interesting. Because with nuclear weapons, we knew in the early years of the 20th century that atoms contained a huge amount of energy. We had E equals mc squared, we knew the mass differences between the different atoms and their components, and we knew that you might be able to make an incredibly powerful explosive. So H.G. Wells wrote a science fiction book, I think in 1912. Frederick Soddy, who was the guy who discovered isotopes, a Nobel Prize winner, gave a speech in 1915 saying that this new explosive would be the equivalent of 150 tons of dynamite, which turns out to be about right. And this was in World War One, so he was imagining how much worse the world would be if we were using that kind of explosive. But the physics establishment simply refused to believe that these things could be made. Including the people who were making it? Well, so they were doing the nuclear physics; I mean, eventually they were the ones who made it, Fermi or whoever. Well, so up to then, the development was mostly theoretical; it was people using sort of primitive kinds of particle acceleration and doing experiments at the level of single particles or collections of particles. They weren't yet thinking about how to actually make a bomb or anything like that. But they knew the energy was there, and they figured if they understood it better, it might be possible. But the physics establishment's view, and I think because they did not want it to be true, their view was that it could not be true, that this could not provide a way to make a super weapon. And there was this famous speech given by Rutherford, who was the sort of leader of nuclear physics, on September 11th, 1933, and he said, anyone who talks about the possibility of obtaining energy from the transformation of atoms is talking complete moonshine. And the next morning, Leo Szilard read about that speech
and then invented the nuclear chain reaction. And as soon as he had that idea, that you could make a chain reaction with neutrons, because neutrons were not repelled by the nucleus, so they could enter the nucleus and then continue the reaction, as soon as he had that idea, he instantly realized that the world was in deep doo-doo. Because this is 1933: Hitler had recently come to power in Germany. Szilard was in London, and eventually became a refugee and came to the US. And in the process of having the idea about the chain reaction, he figured out basically how to make a bomb, and also how to make a reactor. And he patented the reactor in 1934, but because of the great-power conflict situation that he could see happening, he kept it a secret. And so between then and the beginning of World War Two, people were working, including the Germans, on how to actually create neutron sources: what specific fission reactions would produce neutrons of the right energy to continue the reaction. And that was demonstrated in Germany, I think in 1938, if I remember correctly. The first nuclear weapon patent was 1939, by the French. So this was actually going on well before World War Two really got going. And then the British probably had the most advanced capability in this area, but for safety reasons, among others, and just resources, they moved the program from Britain to the US, and that became the Manhattan Project. So the reason why we couldn't have any kind of oversight of nuclear weapons and nuclear technology was because we were basically already in an arms race and a war. But you mentioned in the 20s and 30s, so what are the echoes? The way you've described this story, I mean, there's clearly echoes. Why do you think most AI researchers, folks who are really close to the metal, really are not concerned about it, and they don't think about
it, or they don't want to think about it? What are the echoes of the nuclear situation to the current situation, and what can we do about it? I think there is a kind of motivated cognition, which is a term in psychology: it means that you believe what you would like to be true, rather than what is true. And it's unsettling to think that what you're working on might be the end of the human race, obviously. So you would rather instantly deny it and come up with some reason why it couldn't be true. And I have collected a long list of reasons that extremely intelligent, competent AI scientists have come up with for why we shouldn't worry about this. For example: calculators are superhuman at arithmetic, and they haven't taken over the world, so there's nothing to worry about. Well, okay, my five-year-old could have figured out why that was an unreasonable, and really quite weak, argument. Another one was: while it's theoretically possible that you could have superhuman AI destroy the world, it's also theoretically possible that a black hole could materialize right next to the earth and destroy humanity. I mean, yes, it's theoretically possible, quantum theoretically, but extremely unlikely that it would just materialize right there. But that's a completely bogus analogy, because if the whole physics community on earth was working to materialize a black hole in near-earth orbit, wouldn't you ask them, is that a good idea, is that going to be safe, what if you succeed? Right. And that's the thing: AI has sort of refused to ask itself, what if you succeed? And initially I think that was because it was too hard, but Alan Turing asked himself that, and he said, we'd be toast. If we were lucky, we might be able to switch off the power, but probably we'd be toast. But there's also an aspect that, because we're not exactly
sure what the future holds, it's not clear exactly, technically, what to worry about, sort of how things go wrong. And so there is something, it feels like, maybe you can correct me if I'm wrong, but there's something paralyzing about worrying about something that logically is inevitable, but you don't really know what that will look like. Yeah, I think that's a reasonable point. And, you know, in terms of existential risks, it's different from, say, an asteroid collides with the earth, which, again, is quite possible: it's happened in the past, it'll probably happen again, we just don't know right now. But if we did detect an asteroid that was going to hit the earth in 75 years' time, we'd certainly be doing something about it. Well, it's clear, there's a big rock, and we'll probably have a meeting and see what we do about the big rock. With AI... Right, with AI. I mean, there are very few people who think it's not going to happen within the next 75 years. I know Rod Brooks doesn't think it's going to happen; maybe Andrew Ng doesn't think it's going to happen. But a lot of the people who work day to day, as you say, at the rock face, they think it's going to happen. I think the median estimate from AI researchers is somewhere in 40 to 50 years from now, and I think in Asia they think it's going to be even faster than that. I'm a little bit more conservative, I think it'd probably take longer than that. But, as happened with nuclear weapons, it can happen overnight that you have these breakthroughs. And we need more than one breakthrough, but it's on the order of half a dozen, this is a very rough scale, but half a dozen breakthroughs of that nature would have to happen for us to reach superhuman AI. But the AI research community is vast now, there are massive investments from governments, from corporations, tons of really, really smart
people. You just have to look at the rate of progress in different areas of AI to see that things are moving pretty fast. So to say, oh, it's just going to be thousands of years, I don't see any basis for that. You see, for example, the Stanford Hundred Year Study on AI, which is supposed to be sort of the serious establishment view; their most recent report actually said it's probably not even possible. Wow. Right, which, if you want a perfect example of people in denial, that's it. Because for the whole history of AI, we've been saying to philosophers who said it wasn't possible: well, you have no idea what you're talking about, of course it's possible, give me an argument for why it couldn't happen, and there isn't one. And now, because people are worried that maybe AI might get a bad name, or, I just don't want to think about this, they're saying, okay, well, of course it's not really possible. Imagine if the leaders of the cancer biology community got up and said, well, of course, curing cancer is not really possible: there'd be complete outrage and dismay. I find this a really strange phenomenon. So, okay, if you accept that it's possible, and if you accept that it's probably going to happen, the point you're making, how does it go wrong, is a valid question. Without an answer to that question, you're stuck with what I call the gorilla problem, which is the problem that the gorillas face: they made something more intelligent than them, namely us, a few million years ago, and now they're in deep doo-doo. There's really nothing they can do; they've lost the control. They failed to solve the control problem of controlling humans, and so they've lost. We don't want to be in that situation, and if the gorilla problem is the only formulation you have, there's not a lot you can do, other than to say, okay, we
should try to stop, we should just not make the humans, or, in this case, not make the AI. And I think that's really hard to do. I'm not actually proposing that that's a feasible course of action. I also think that, if properly controlled, AI could be incredibly beneficial. But it seems to me that there's a consensus that one of the major failure modes is this loss of control: that we create AI systems that are pursuing incorrect objectives, and because the AI system believes it knows what the objective is, it has no incentive to listen to us anymore, so to speak. It's just carrying out the strategy that it has computed as being the optimal solution. And it may be that in the process, it needs to acquire more resources, to increase the possibility of success, or to prevent various failure modes by defending itself against interference. And so that collection of problems, I think, is something we can address. The other problems are, roughly speaking, misuse. So even if we solve the control problem, we make perfectly safe, controllable AI systems, well, Dr. Evil is going to use those; he wants to just take over the world, and he'll make unsafe AI systems that then get out of control. So that's one problem, which is partly a policing problem, partly a sort of cultural problem for the profession, of how we teach people what kinds of AI systems are safe. You talk about autonomous weapon systems, and how pretty much everybody agrees there are too many ways that that can go horribly wrong. There's this great Slaughterbots movie that kind of illustrates that beautifully. I want to talk about that, that's another topic I'm happy talking about. I just want to mention that what I see as the third major failure mode is overuse, not so much misuse but overuse of AI: that we become overly dependent. I call this the WALL-E problem. If you've seen WALL-E, the movie, all the humans are on the spaceship, and the machines look after everything for them, and they just watch TV and drink Big Gulps, and they're all sort of obese and stupid, and they've sort of totally lost any notion of human autonomy. And, in effect, this would happen like the slow-boiling frog: we would gradually turn over more and more of the management of our civilization to machines, as we are already doing. If this process continues, we sort of gradually switch from being the masters of technology to just being the guests. So we become guests on a cruise ship, which is fine for a week, but not for the rest of eternity. And it's almost irreversible. Once you lose the incentive to, for example, learn to be an engineer, or a doctor, or a sanitation operative, or any other of the infinitely many ways that we maintain and propagate our civilization, if you don't have the incentive to do any of that, you won't, and then it's really hard to recover. And of course, AI is just one of the technologies that could
result in that third failure mode; there are probably others. Technology in general detaches us from it. It does a bit, but the difference is that, in terms of the knowledge to run our civilization, up to now we've had no alternative but to put it into people's heads. Or with Google... I mean, software in general, computers in general. But the knowledge of how, you know, a sanitation system works, an AI has to understand that; it's no good putting it into Google. I mean, we've always put knowledge on paper, but paper doesn't run our civilization; it only runs when it goes from the paper into people's heads again. So we've always propagated civilization through human minds, and we've spent about a trillion person-years doing that. You can work it out: it's about just over a hundred billion people who've ever lived, and each of them has spent about ten years learning stuff to keep their civilization going. So that's a trillion person-years we put into this effort. Beautiful way to describe all of civilization. And now we're in danger of throwing that away. So this is a problem that AI can't solve; it's not a technical problem. If we do our job right, the AI systems will say, you know, the human race doesn't, in the long run, want to be passengers on a cruise ship; the human race wants autonomy, this is part of human preferences. So we, the AI systems, are not going to do this stuff for you; you've got to do it for yourself. I'm not going to carry you to the top of Everest in an autonomous helicopter; you have to climb it if you want to get the benefit, and so on. But I'm afraid that, because we are short-sighted and lazy, we're going to override the AI systems. And there's an amazing short story that I recommend to everyone that I talk to about this, called The Machine Stops, written in 1909 by E.M. Forster, who, you know, wrote
novels about the British Empire, and sort of things that became costume dramas on the BBC. But he wrote this one science fiction story, which is an amazing vision of the future. It has basically iPads, it has video conferencing, it has MOOCs, it has computer-induced obesity. I mean, literally the whole thing: what people spend their time doing is giving online courses or listening to online courses and talking about ideas, but they never get out there in the real world. They don't really have a lot of face-to-face contact; everything is done online. So all the things we're worrying about now were described in this story, and then the human race becomes more and more dependent on the machine, loses knowledge of how things really run, and then becomes vulnerable to collapse. So it's a pretty unbelievably amazing story for someone writing in 1909 to imagine all this. Yeah. So there are very few people that represent artificial intelligence more than you, Stuart Russell. So it's all my fault, right? You're often brought up as the person: well, Stuart Russell, like, the AI person, is worried about this, that's why you should be worried about it. Do you feel the burden of that? I don't know if you feel that at all, but when I talk to people, people outside of computer science, when they think about this: Stuart Russell is worried about AI safety, you should be worried too. Do you feel the burden of that? I mean, in a practical sense, yeah, because I get, you know, a dozen, sometimes 25, invitations a day to talk about it, to give interviews, to write press articles, and so on. So in that very practical sense, I'm seeing that people are concerned and really interested about this. Are you worried that you could be wrong, as all good scientists are? Of course, I worry about that all the time. I mean, that's always been the way that I've worked: I have an argument in my head with myself. So I have some idea, and then I
think, okay, how could that be wrong, or did someone else already have that idea? So I'll go and search as much literature as I can to see whether someone else already thought of that, or even refuted it. So right now I'm reading a lot of philosophy, because, in the form of the debates over utilitarianism and other kinds of moral formulas, shall we say, people have already thought through some of these issues. But one of the things I'm not seeing in a lot of these debates is this specific idea about the importance of uncertainty in the objective: that this is the way we should think about machines that are beneficial to humans. So this idea of provably beneficial machines, based on explicit uncertainty in the objective, it seems to be, my gut feeling is, this is the core of it. It's going to have to be elaborated in a lot of different directions, and there are a lot of... Provably beneficial? Yeah, but, I mean, it has to be. We can't afford, you know, hand-wavy beneficial, because whenever we do hand-wavy stuff, there are loopholes, and the thing about superintelligent machines is they find the loopholes. Just like tax evaders: if you don't write your tax law properly, people will find loopholes and end up paying no taxes. So you should think of it this way. And getting those definitions right is really a long process. You can define mathematical frameworks, and within that framework you can prove mathematical theorems that, yes, this theoretical entity will be provably beneficial to that theoretical entity, but that framework may not match the real world in some crucial way. So it's a long process of thinking it through, of iterating, and so on. The last question: you have ten seconds to answer it. What is your favorite sci-fi movie about AI? I would say Interstellar has my favorite robots. So it beat Space
Odyssey yeah yeah so TARS one of the robots in interstellar is the way a robot should behave and I would say ex machina is in some ways the one that makes you think in a nervous kind of way about where we're going well Stuart thank you so much for talking today it's been a pleasure
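Russell's "uncertainty in the objective" idea has a minimal quantitative illustration in the off-switch game studied by his group. The sketch below is my own toy version with made-up numbers: the robot is unsure of the human's true utility U for a proposed action, and it compares acting now, switching itself off, and deferring (letting the human veto actions with U < 0).

```python
import random

# Toy off-switch game (a sketch, not Russell's formal model; the
# Gaussian belief and its parameters are made-up illustrative numbers).
random.seed(0)

def expected_payoffs(belief_samples):
    n = len(belief_samples)
    act = sum(belief_samples) / n                              # E[U]: act without asking
    off = 0.0                                                  # switch self off
    defer = sum(max(u, 0.0) for u in belief_samples) / n       # E[max(U, 0)]: human vetoes harm
    return act, off, defer

# Robot's belief: the action is probably good (mean 0.5) but might be harmful.
samples = [random.gauss(0.5, 1.0) for _ in range(100_000)]
act, off, defer = expected_payoffs(samples)
print(f"act: {act:.3f}  off: {off:.3f}  defer: {defer:.3f}")

# Since max(u, 0) >= u pointwise, E[max(U, 0)] >= max(E[U], 0): deferring
# to the human is never worse. The robot's incentive to stay correctable
# comes precisely from its uncertainty about the objective; if the robot
# were certain of U, the advantage of deferring would vanish.
```

The point of the sketch is the inequality in the final comment: with any residual uncertainty about the human's objective, keeping the human in the loop weakly dominates both unilateral action and shutdown.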
Eric Schmidt: Google | Lex Fridman Podcast #8
- The following is a conversation with Eric Schmidt. He was the CEO of Google for 10 years and a chairman for six more, guiding the company through an incredible period of growth and a series of world-changing innovations. He is one of the most impactful leaders in the era of the internet and a powerful voice for the promise of technology in our society. It was truly an honor to speak with him as part of the MIT course on artificial general intelligence and the Artificial Intelligence podcast. And now, here's my conversation with Eric Schmidt. What was the first moment when you fell in love with technology? - I grew up in the 1960's as a boy where every boy wanted to be an astronaut and part of the space program. So like everyone else of my age, we would go out to the cow pasture behind my house, which was literally a cow pasture, and we would shoot model rockets off, and that I think is the beginning. And of course generationally today, it would be video games and all of the amazing things that you can do online with computers. - [Lex] There's a transformative inspiring aspect of science and math that maybe rockets would instill in individuals. You mentioned yesterday that eighth grade math is where the journey through the mathematical universe diverges for many people. It's this fork in the roadway. There's a professor of math at Berkeley, Edward Frenkel. I'm not sure if you're familiar with him. - I am. - [Lex] He has written this amazing book I recommend to everybody called Love and Math. Two of my favorite words. (laughs) He says that if painting was taught like math, then students would be asked to paint a fence. It's just his analogy of essentially how math is taught. So you never get a chance to discover the beauty of the art of painting or the beauty of the art of math. So how, when, and where did you discover that beauty?
- I think what happens with people like myself is that you're math-enabled pretty early, and all of the sudden you discover that you can use that to discover new insights. The great scientists will all tell a story. The men and women who are fantastic today, it's somewhere when they were in high school or in college they discovered that they could discover something themselves. And that sense of building something, of having an impact that you own drives knowledge acquisition and learning. In my case, it was programming and the notion that I could build things that had not existed, that I had built that had my name on it. And this was before open-source, but you could think of it as open-source contributions. So today if I were a 16 or a 17-year-old boy, I'm sure that I would aspire as a computer scientist to make a contribution like the open-source heroes of the world today. That would be what would be driving me, and I would be trying and learning, and making mistakes and so forth in the ways that it works. The repository that GitHub represents and that open-source libraries represent is an enormous bank of knowledge of all of the people who are doing that. And one of the lessons that I learned at Google was that the world is a very big place, and there's an awful lot of smart people. And an awful lot of them are underutilized. So here's an opportunity, for example, building parts or programs, building new ideas, to contribute to the greater of society. - [Lex] So in that moment in the 70's, the inspiring moment where there was nothing and then you created something through programming, that magical moment. So in 1975, I think, you created a program called Lex, which I especially like because my name is Lex. So thank you, thank you for creating a brand that established a reputation that's long-lasting, reliable, and has a big impact on the world and is still used today. So thank you for that.
But more seriously, in that time, in the 70's as an engineer personal computers were being born. Did you think you would be able to predict the 80's, 90's and the noughts of where computers would go? - I'm sure I could not and would not have gotten it right. I was the beneficiary of the great work of many many people who saw it clearer than I did. With Lex, I worked with a fellow named Michael Lesk who was my supervisor, and he essentially helped me architect and deliver a system that's still in use today. After that, I worked at Xerox Palo Alto Research Center where the Alto was invented, and the Alto is the predecessor of the modern personal computer, or Macintosh and so forth. And the Altos were very rare, and I had to drive an hour from Berkeley to go use them, but I made a point of skipping classes and doing whatever it took to have access to this extraordinary achievement. I knew that they were consequential. What I did not understand was scaling. I did not understand what would happen when you had 100 million as opposed to 100. And so since then, and I have learned the benefit of scale, I always look for things which are going to scale to platforms, so mobile phones, Android, all of those things. The world is enormous; there are many many people in the world. People really have needs. They really will use these platforms, and you can build big businesses on top of them. - [Lex] So it's interesting, so when you see a piece of technology, now you think what will this technology look like when it's in the hands of a billion people. - That's right. So an example would be that the market is so competitive now that if you can't figure out a way for something to have a million users or a billion users, it probably is not going to be successful because something else will become the general platform and your idea will become a lost idea or a specialized service with relatively few users. So it's a path to generality. It's a path to general platform use.
It's a path to broad applicability. Now there are plenty of good businesses that are tiny, so luxury goods for example, but if you want to have an impact at scale, you have to look for things which are of common value, common pricing, common distribution, and solve common problems. They're problems that everyone has. And by the way, people have lots of problems. Information, medicine, health, education, and so forth, work on those problems. - [Lex] Like you said, you're a big fan of the middle class-- - 'Cause there's so many of them. - [Lex] There's so many of them. - By definition. - [Lex] So any product, any thing that has a huge impact and improves their lives is a great business decision, and it's just good for society. - And there's nothing wrong with starting off in the high-end as long as you have a plan to get to the middle class. There's nothing wrong with starting with a specialized market in order to learn and to build and to fund things. So you start luxury market to build a general purpose market. But if you define yourself as only a narrow market, someone else can come along with a general purpose market that can push you to the corner, can restrict the scale of operation, can force you to be a lesser impact than you might be. So it's very important to think in terms of broad businesses and broad impact, even if you start in a little corner somewhere. - [Lex] So as you look to the 70's but also in the decades to come and you saw computers, did you see them as tools, or was there a little element of another entity? I remember a quote saying AI began with our dream to create the gods. Is there a feeling when you wrote that program that you were creating another entity, giving life to something? - I wish I could say otherwise, but I simply found the technology platforms so exciting. That's what I was focused on. 
I think the majority of the people that I've worked with, and there are a few exceptions, Steve Jobs being an example, really saw this as a great technological play. I think relatively few of the technical people understood the scale of its impact. So I used NCP which is a predecessor to TCP/IP. It just made sense to connect things. We didn't think of it in terms of the internet and then companies and then Facebook and then Twitter and then politics and so forth. We never did that build. We didn't have that vision. And I think most people, it's a rare person who can see compounding at scale. Most people can see, if you ask people to predict the future, they'll give you an answer of six to nine months or 12 months because that's about as far as people can imagine. But there's an old saying, which actually was attributed to a professor at MIT a long time ago, that we overestimate what can be done in one year. We underestimate what can be done in a decade. And there's a great deal of evidence that these core platforms of hardware and software take a decade. So think about self-driving cars. Self-driving cars were thought about in the 90's. There were projects around them. The first DARPA Grand Challenge was roughly 2004. So that's roughly 15 years ago. And today we have self-driving cars operating at a city in Arizona, so 15 years. And we still have a ways to go before they're more generally available. - [Lex] So you've spoken about the importance, you just talked about predicting into the future. You've spoken about the importance of thinking five years ahead and having a plan for those five years. - The way to say it is that almost everybody has a one-year plan. Almost no one has a proper five-year plan. And the key thing to have on the five-year plan is having a model for what's going to happen under the underlying platforms. So here's an example.
Moore's law as we know it, the thing that powered improvement in CPUs has largely halted in its traditional shrinking mechanisms because the costs have just gotten so high and it's getting harder and harder. But there's plenty of algorithmic improvements and specialized hardware improvements. So you need to understand the nature of those improvements and where they'll go in order to understand how it will change the platform. In the area of network connectivity, what are the gains that are to be possible in wireless? It looks like there's an enormous expansion of wireless connectivity at many different bands and that we will primarily, historically I'd always thought that we were primarily going to be using fiber, but now it looks like we're going to be using fiber plus very powerful high bandwidth sort of short distance connectivity to bridge the last mile. That's an amazing achievement. If you know that, then you're going to build your systems differently. By the way, those networks have different latency properties because they're more symmetric. The algorithms feel faster for that reason. - [Lex] And so when you think about, whether it's fiber or just technologies in general, so there's this Barbara Wootton poem or quote that I really like. It's from the champions of the impossible, rather than the slaves of the possible, that evolution draws its creative force. So in predicting the next five years, I'd like to talk about the impossible and the possible. - Well, and again, one of the great things about humanity is that we produce dreamers. We literally have people who have a vision and a dream. They are, if you will, disagreeable in the sense that they disagree with what the sort of zeitgeist is. They say there is another way. They have a belief. They have a vision.
If you look at science, science is always marked by such people who went against some conventional wisdom, collected the knowledge at the time, and assembled it in a way that produced a powerful platform. - [Lex] And you've been amazingly honest about, in an inspiring way, about things you've been wrong about predicting, and you've obviously been right about a lot of things. But in this kind of tension, how do you balance as a company predicting the next five years planning for the impossible, listening to those crazy dreamers, letting them run away and make the impossible real, make it happen, and you know that's how programmers often think, and slowing things down and saying well this is the rational, this is the possible, the pragmatic, the dreamer versus the pragmatist that is. - So it's helpful to have a model which encourages a predictable revenue stream as well as the ability to do new things. So in Google's case, we're big enough and well enough managed and so forth that we have a pretty good sense of what our revenue will be for the next year or two, at least for a while. And so we have enough cash generation that we can make bets. And indeed, Google has become Alphabet, so the corporation is organized around these bets. And these bets are in areas of fundamental importance to the world, whether it's artificial intelligence, medical technology, self-driving cars, connectivity through balloons, on and on and on. And there's more coming and more coming. So one way you could express this is that the current business is successful enough that we have the luxury of making bets. And another one that you could say is that we have the wisdom of being able to see that a corporate structure needs to be created to enhance the likelihood of the success of those bets. So we essentially turned ourselves into a conglomerate of bets and then this underlying corporation, Google, which is itself innovative.
So in order to pull this off, you have to have a bunch of belief systems, and one of them is that you have to have bottoms up and tops down. The bottoms up we call 20% time, and the idea is that people can spend 20% of the time on whatever they want. And the top down is that our founders in particular have a keen eye on technology, and they're reviewing things constantly. So an example would be they'll hear about an idea or I'll hear about something and it sounds interesting. Let's go visit them, and then let's begin to assemble the pieces to see if that's possible. And if you do this long enough, you get pretty good at predicting what's likely to work. - [Lex] So that's a beautiful balance that's struck. Is this something that applies at all scale? - Seems to be. Sergey, again 15 years ago, came up with a concept called 10% of the budget should be on things that are unrelated. It was called 70/20/10. 70% of our time on core business, 20% on adjacent business, and 10% on other. And he proved mathematically, of course he's a brilliant mathematician, that you needed that 10% to make the sum of the growth work. And it turns out that he was right. - [Lex] So getting into the world of artificial intelligence, you've talked quite extensively and effectively to the impact in the near term, the positive impact of artificial intelligence, especially machine learning in medical applications and education and just making information more accessible. In the AI community, there is a kind of debate. There's this shroud of uncertainty as we face this new world of artificial intelligence. And there is some people like Elon Musk you've disagreed on, at least in the degree of emphasis he places on the existential threat of AI. So I've spoken with Stuart Russell, Max Tegmark, who share Elon Musk's view, and Yoshua Bengio, Steven Pinker who do not. And so there's a lot of very smart people who are thinking about this stuff, disagreeing, which is really healthy, of course. 
So what do you think is the healthiest way for the AI community to, and really for the general public to think about AI and the concern of the technology being mismanaged in some kind of way. - So the source of education for the general public has been robot killer movies and Terminator, etcetera. And the one thing I can assure you we're not building are those kinds of solutions. Furthermore, if they were to show up, someone would notice and unplug them. So as exciting as those movies are, and they're great movies, were the killer robots to start, we would find a way to stop them, so I'm not concerned about that. And much of this has to do with the timeframe of conversation. So you can imagine a situation 100 years from now when the human brain is fully understood and the next generation of brilliant MIT scientists have figured all this out, we're gonna have a large number of ethics questions around science and thinking and robots and computers and so forth and so on. So it depends on the question of the timeframe. In the next five to 10 years, we're not facing those questions. What we're facing in the next five to 10 years is how do we spread this disruptive technology as broadly as possible to gain the maximum benefit of it? The primary benefit should be in healthcare and in education. Healthcare because it's obvious. We're all the same even though we somehow believe we're not. As a medical matter, the fact that we have big data about our health will save lives, allow us to deal with skin cancer and other cancers, ophthalmological problems. There's people working on psychological diseases and so forth using these techniques. I can go on and on. The promise of AI in medicine is extraordinary. There are many many companies and start-ups and funds and solutions and we will all live much better for that. The same argument in education. Can you imagine that for each generation of child and even adult you have a tutor educator.
It's AI based that's not a human but is properly trained that helps you get smarter, helps you address your language difficulties or your math difficulties or what have you. Why don't we focus on those two? The gain societally of making humans smarter and healthier are enormous. And those translate for decades and decades, and we'll all benefit from them. There are people who are working on AI safety, which is the issue that you're describing, and there are conversations in the community that should there be such problems what should the rules be like? Google, for example, has announced its policies with respect to AI safety, which I certainly support, and I think most everybody would support. And they make sense. So it helps guide the research. But the killer robots are not arriving this year, and they're not even being built. - [Lex] And on that line of thinking, you said the timescale. In this topic or other topics have you found a useful, on the business side or the intellectual side, to think beyond five to 10 years, to think 50 years out? Has it ever been useful or productive-- - In our industry there are essentially no examples of 50 year predictions that have been correct. Let's review AI. AI, which was partially invented here at MIT and a couple of other universities in 1956, 1957, 1958, the original claims were a decade or two. And when I was a PhD student, I studied AI, and it entered, during my time looking at it, a period which is known as the AI winter which went on for about 30 years, which is a whole generation of scientists, and a whole group of people who didn't make a lot of progress because the algorithms had not improved and the computers had not improved. It took some brilliant mathematicians starting with a fellow named Geoff Hinton at Toronto and Montreal who basically invented this deep learning model which empowers us today. The seminal work there was 20 years ago, and in the last 10 years it's become popularized.
So think about the timeframes for that level of discovery. It's very hard to predict. Many people think that we'll be flying around in the equivalent of flying cars. Who knows? My own view, if I want to go out on a limb, is to say we know a couple of things about 50 years from now. We know that they'll be more people alive. We know that we'll have to have platforms that are more sustainable because the earth is limited in the ways we all know, and that the kind of platforms that are gonna get built will be consistent with the principles that I've described. They will be much more empowering of individuals. They'll be much more sensitive to the ecology 'cause they have to be. They just have to be. I also think that humans are going to be a great deal smarter, and I think they're gonna be a lot smarter because of the tools that I've discussed with you, and of course people will live longer. Life extension is continuing apace, a baby born today has a reasonable chance of living to 100, which is pretty exciting. It's well past the 21st century, so we better take care of them. - [Lex] And you've mentioned an interesting statistic on some very large percentage, 60%, 70% of people may live in cities. - Today more than half the world lives in cities, and one of the great stories of humanity in the last 20 years has been the rural to urban migration. This has occurred in the United States. It's occurred in Europe. It's occurring in Asia, and it's occurring in Africa. When people move to cities, the cities get more crowded, but believe it or not their health gets better. Their productivity gets better. Their IQ and educational capabilities improve. So it's good news that people are moving to cities, but we have to make them livable and safe. - [Lex] So first of all, you are one, but you've also worked with some of the greatest leaders in the history of tech.
What insights do you draw from the difference in leadership styles of yourself, Steve Jobs, Elon Musk, Larry Page, now the new CEO, Sundar Pichai and others, from the I would say calm sages to the mad geniuses. - One of the things that I learned as a young executive is that there's no single formula for leadership. They try to teach one, but that's not how it really works. There are people who just understand what they need to do and they need to do it quickly. Those people are often entrepreneurs. They just know, and they move fast. There are other people who are systems thinkers and planners. That's more who I am, somewhat more conservative, more thorough in execution, a little bit more risk-averse. There's also people who are sort of slightly insane in the sense that they are emphatic and charismatic and they feel it and they drive it and so forth. There's no single formula to success. There is one thing that unifies all of the people that you named, which is very high intelligence. At the end of the day, the thing that characterizes all of them is that they saw the world quicker, faster. They processed information faster. They didn't necessarily make the right decisions all the time, but they were on top of it. And the other thing that's interesting about all of those people is that they all started young. So think about Steve Jobs starting Apple roughly at 18 or 19. Think about Bill Gates starting at roughly 20, 21. Think about by the time they were 30, Mark Zuckerberg a good example at 19 or 20, by the time they were 30, they had 10 years, at 30 years old they had 10 years of experience of dealing with people and products and shipments and the press and business and so forth. It's incredible how much experience they had compared to the rest of us who are busy getting our PhDs. - [Lex] Yes, exactly. - So we should celebrate these people because they've just had more life experience and that helps them form the judgment.
At the end of the day, when you're at the top of these organizations, all of the easy questions have been dealt with. How should we design the buildings? Where should we put the colors on our products? What should the box look like? That's why it's so interesting to be in these rooms. The problems that they face in terms of the way they operate, the way they deal with their employees, their customers, their innovation are profoundly challenging. Each of the companies is demonstrably different culturally. They are not, in fact, cut from the same cloth. They behave differently based on input. Their internal cultures are different. Their compensation schemes are different. Their values are different. So there's proof that diversity works. - [Lex] So when faced with a tough decision in need of advice, it's been said that the best thing one can do is to find the best person in the world who can give that advice and find a way to be in a room with them one-on-one and ask. So here we are. And let me ask in a long-winded way. I wrote this down. In 1998, there were many good search engines: Lycos, Excite, AltaVista, InfoSeek, Ask Jeeves maybe, Yahoo even. So Google stepped in and disrupted everything. They disrupted the nature of search, the nature of our access to information, the way we discover new knowledge. So now it's 2018, actually 20 years later. There are many good personal AI assistants, including, of course, the best from Google. So you've spoken in medical and education the impact of such an AI assistant could bring. So we arrive at this question. So it's a personal one for me, but I hope my situation represents that of many others, as we said, the dreamers and the crazy engineers. So my whole life I've dreamed of creating such an AI assistant. Every step I've taken has been towards that goal. Now I'm a research scientist in human-centered AI here at MIT. So the next step for me as I sit here facing my passion is to do what Larry and Sergey did in '98, the simple start-up.
And so here's my simple question. Given the low odds of success, the timing and luck required, the countless other factors that can't be controlled or predicted, which is all the things that Larry and Sergey faced, is there some calculation, some strategy to follow in the step? Or do you simply follow the passion just because there's no other choice? - I think the people who are in universities are always trying to study the extraordinarily chaotic nature of innovation and entrepreneurship. My answer is that they didn't have that conversation. They just did it. They sensed a moment when in the case of Google, there was all of this data that needed to be organized, and they had a better algorithm. They had invented a better way. So today, with human-centered AI, which is your area of research, there must be new approaches. It's such a big field. There must be new approaches different from what we and others are doing. There must be start-ups to fund. There must be research projects to try. There must be graduate students to work on new approaches. Here at MIT, there are people who are looking at learning from the standpoint of looking at child learning. How do children learn starting at age one and two-- - Josh Tenenbaum and others. - And the work is fantastic. Those approaches are different from the approach that most people are taking. Perhaps that's a bet that you should make, or perhaps there's another one. But at the end of the day, the successful entrepreneurs are not as crazy as they sound. They see an opportunity based on what's happened. Let's use Uber as an example. As Travis tells the story, he and his co-founder were sitting in Paris, and they had this idea 'cause they couldn't get a cab. And they said we have smartphones, and the rest is history. So what's the equivalent of that Travis Eiffel Tower 'where is a cab' moment that you could as an entrepreneur take advantage of, whether it's in human-centered AI or something else?
That's the next great start-up. - [Lex] And the psychology of that moment. So when Sergey and Larry talk about, in listening to a few interviews, it's very nonchalant. Well here's a very fascinating web data, and here's an algorithm we have. We just kind of want to play around with that data, and it seems like that's a really nice way to organize this data. - Well I should say what happened, remember, is that they were graduate students at Stanford, and they thought this was interesting. So they built a search engine and they kept it in their room. And they had to get power from the room next door 'cause they were using too much power in their room, so they ran an extension cord over and then they went and they found a house and they had Google world headquarters of five people to start the company. And they raised $100,000 from Andy Bechtolsheim, who is the Sun founder to do this and Dave Cheriton and a few others. The point is their beginnings were very simple, but they were based on a powerful insight. That is a replicable model for any start-up. It has to be a powerful insight, the beginnings are simple, and there has to be an innovation. In Larry and Sergey's case, it was PageRank, which was a brilliant idea, one of the most cited papers in the world today. What's the next one? - [Lex] So you're one of, if I may say, richest people in the world, and yet it seems that money is simply a side effect of your passions and not an inherent goal. But you're a fascinating person to ask. So much of our society at the individual level and at the company level and as nations is driven by the desire for wealth. What do you think about this drive, and what have you learned about, if I may romanticize the notion, the meaning of life having achieved success on so many dimensions? - There have been many studies of human happiness, and above some threshold, which is typically relatively low for this conversation, there's no difference in happiness about money.
The happiness is correlated with meaning and purpose, a sense of family, a sense of impact. So if you organize your life, assuming you have enough to get around and have a nice home and so forth, you'll be far happier if you figure out what you care about and work on that. It's often being in service to others. There's a great deal of evidence that people are happiest when they're serving others and not themselves. This goes directly against the sort of press-induced excitement about powerful and wealthy leaders of the world, and indeed these are consequential people. But if you are in a situation where you've been very fortunate as I have, you also have to take that as a responsibility and you have to basically work both to educate others and give them that opportunity but also use that wealth to advance human society. In my case, I'm particularly interested in using the tools of artificial intelligence and machine learning to make society better. I've mentioned education. I've mentioned inequality in middle class and things like this, all of which are a passion of mine. It doesn't matter what you do. It matters that you believe in it, that it's important to you, and your life can be far more satisfying if you spend your life doing that. - [Lex] I think there's no better place to end than a discussion of the meaning of life. - Eric, thank you so much. - Thank you very much, Lex.
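The PageRank idea Eric credits as Larry and Sergey's core innovation can be sketched as simple power iteration. This is a minimal illustration of the concept, not Google's production algorithm; the toy web graph and the damping value are my own choices.

```python
# Minimal PageRank by power iteration: a page's rank is the chance a
# random surfer lands on it, following links with probability `damping`
# and jumping to a random page otherwise.
def pagerank(links, damping=0.85, iters=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}  # teleport share
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank everywhere
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:  # split this page's rank evenly among its out-links
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Hypothetical four-page web: everyone links (directly or not) to "c".
toy_web = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
ranks = pagerank(toy_web)
for p in sorted(ranks, key=ranks.get, reverse=True):
    print(p, round(ranks[p], 3))
```

Here "c" ends up ranked highest because three pages point at it, and "a" inherits much of that importance because "c" links only to "a": rank flows along links rather than just counting them, which was the insight that set Google apart from keyword-only engines.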
Jeff Atwood: Stack Overflow and Coding Horror | Lex Fridman Podcast #7
the following is a conversation with Jeff Atwood he is the co-founder of Stack Overflow and Stack Exchange websites that are visited by millions of people every single day much like with Wikipedia it is difficult to overstate the impact on global knowledge and productivity that these networks of sites have created Jeff is also the author of the famed blog coding horror and the founder of discourse an open-source software project that seeks to improve the quality of our online community discussions this conversation is part of the MIT course on artificial general intelligence and the artificial intelligence podcast if you enjoy it subscribe on youtube itunes or your podcast provider of choice or simply connect with me on twitter at Lex Friedman spelled F-R-I-D and now here's my conversation with Jeff Atwood having co-created and managed for a few years the world's largest community of programmers in Stack Overflow ten years ago what do you think motivates most programmers is it fame fortune glory process of programming itself or is it the sense of belonging to a community it's puzzles really I think it's this idea of working on puzzles independently of other people and just solving a problem sort of like on your own almost although you know nobody really works alone in programming anymore but I will say there's that there's an aspect of sort of hiding yourself away and just sort of beating on a problem until you solve it like brute force basically to me a lot of programming is like the computer is so fast right you can do things that would take forever for a human but you just do them like so many times and so often that you get the answer right you're saying just the pure act of tinkering with the code is the thing that drives most programmers the joy the struggle the balance within the joy of overcoming the brute-force process of pain and suffering that eventually leads to something that actually works well data is fun too like there's this thing called
the the shuffling problem like the naive shuffle that most programmers write has a huge flaw and there's a lot of articles online about this because it can be really bad if you're like a casino and you have an unsophisticated programmer writing your shuffle algorithm there's surprising ways to get this wrong but the neat thing is the way to figure that out is just to run your shuffle a bunch of times and see like how many orderings of cards you get you should get an equal distribution of all the cards and with the naive method of shuffling if you just look at the data if you just brute force and say okay I don't know what's gonna happen you just write a program that does it a billion times and then see what the buckets look like of the data and the Monty Hall problem is another example of that where you have three doors and somebody gives you information about another door so the correct answer is you should always switch in the Monty Hall problem which is not intuitive and it freaks people out all the time right but you can solve it with data if you write a program that does the Monty Hall you know game and then never switches and always switches and just compares you would immediately see that you don't have to be smart right you know to figure out the answer algorithmically you can just brute force it out with data and say well I know the answer is this because I ran the program a billion times and these are the data buckets that I got from it right so empirically find it but what's the joy of that so for you for you personally outside of family what motivates you in this process well to be honest I don't really write a lot of code anymore like what I do at discourse is like managerial stuff which I always kind of despised right like as a programmer you think of managers as people who don't really do anything themselves but the weird thing about code is like you realize that like language is code like the ability to direct other people lets you
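The brute-force checks described here can be sketched in a few lines of Python (a minimal sketch for illustration: the three-card deck, the 100,000-trial count, and the function names are arbitrary choices, not anything from the conversation):

```python
import random
from collections import Counter

def naive_shuffle(items):
    # The flawed shuffle many programmers write: every position can
    # swap with ANY position. That yields n**n equally likely swap
    # sequences, which cannot divide evenly into the n! orderings,
    # so some orderings come up more often than others.
    a = list(items)
    n = len(a)
    for i in range(n):
        j = random.randrange(n)  # bug: should be randrange(i, n)
        a[i], a[j] = a[j], a[i]
    return a

def fisher_yates(items):
    # The correct (Fisher-Yates) shuffle: position i only swaps with
    # a position from i onward, giving exactly n! equally likely orderings.
    a = list(items)
    n = len(a)
    for i in range(n - 1):
        j = random.randrange(i, n)
        a[i], a[j] = a[j], a[i]
    return a

def bucket_counts(shuffle_fn, trials=100_000):
    # Brute force: shuffle a tiny 3-card deck many times and bucket
    # the resulting orderings. A fair shuffle gives 6 equal buckets.
    counts = Counter()
    for _ in range(trials):
        counts[tuple(shuffle_fn([1, 2, 3]))] += 1
    return counts

def monty_hall(trials=100_000):
    # Brute-force Monty Hall: staying wins only when the first pick
    # was the car (1/3); switching wins whenever it was not (2/3).
    stay_wins = switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        if pick == car:
            stay_wins += 1
        else:
            switch_wins += 1
    return stay_wins / trials, switch_wins / trials
```

Printing the buckets makes the skew obvious without any analysis: with the naive shuffle some of the six orderings land near 4/27 of the trials and others near 5/27, the Fisher-Yates buckets all hover around 1/6, and the Monty Hall tallies settle near 1/3 for staying versus 2/3 for switching.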
get more stuff done than you could by yourself anyway so you're saying language is code language is communication yeah between humans yes you can think of it as a system so what is it like to be, before we get into programming, what makes a good manager what makes a good leader well I think a leader it's all about leading by example first of all like sort of doing and being the things that you want to be now this can be kind of exhausting particularly when you have kids because you realize that your kids are watching you like all the time like even in ways that you've stopped seeing yourself like the hardest person to see on the planet is really yourself right it's easier to see other people and and make judgments about them but yourself like you're biased you don't actually see yourself the way other people see you often you're very very hard on yourself in a way that other people really aren't going to be so you know that's one of the insights is you know you've got to be really diligent about thinking like am i behaving in a way that represents how I want other people to behave right like leading through example there's a lot of examples of leaders that really mess this up right like they make decisions that are like wow why would you you know it's just it's it's a bad example for other people so I think leading by example is one the other one I believe in is working really hard now I don't mean like working exhaustively but like showing a real passion for the problem like you know not necessarily your solution to the problem but the problem itself is just one that you really believe in like with discourse for example the problem that we're looking at which is my current project is how do you get people in groups to communicate in a way that doesn't like break down into the howling of wolves right like how do you deal with trolling not like technical problems of how do I get people to post paragraphs how do I get people to
use bold how do I get people to use complete sentences although those are problems as well but like how do I get people to get along with each other right like and then solve whatever problem it is they set out to solve you know reach some consensus on a discussion or just like not hurt each other even right like maybe the discussion doesn't really matter but are people like yelling at each other right and why right like that's not the purpose of this kind of communication so I would say you know leadership is about you know setting an example you know doing the things that represent what you want to be and making sure that you're actually doing those things and there's a trick to that too because the things you don't do also say a lot about what you are yeah so let's pause on that one so those two things are fascinating so how do you have as a leader that self-awareness so you just said it's really hard to be self-aware so for you personally or maybe for other leaders you've seen or look up to how do you know both that the things you're doing are the wrong things to be doing the way you speak to others the way you behave and the things you're not doing how do you how do you get that sense there's two aspects to that one is like processing feedback that you're getting so how do you get feedback well right are you getting feedback right like so one way we do it for example at discourse is we have three co-founders and we periodically talk about decisions before we make them so it's not like one person can make a mistake or like that's you know there can be misunderstandings things like this so it's part of like group consensus leadership I think systems where there's one leader and that leader has the rule of absolute law are just really dangerous in my experience for communities for example I've seen a few communities run by one person where that one person makes all the decisions that person's gonna have a bad day something could happen to
that person you know something you know there's a lot of variables so like first when you think about leadership have multiple people doing leadership and have them talk amongst each other so giving each other feedback about the decisions that they're making and then when you do get feedback I think there's that little voice in your head right like or your gut or wherever you want to put it in your body I think that voice is really important like I think most people who have any kind of moral compass or like want to do most people want to do the right thing I do believe that I mean there might be a handful of sociopaths out there that don't but most people they want other people to think of them as a good person and why wouldn't you right like do you want people to despise you I mean that's just weird right so you have that little voice that sort of the angel and devil on your shoulder sort of talking to you about like what you're doing how you're doing how does it make you feel to make these decisions right and I think having some attunement to that voice is important but you said that voice also I think this is a programmer situation too where sometimes the devil on the shoulder is a little a little too loud so you're a little too self-critical for a lot of developers and especially when you have an introverted personality how do you struggle with the self-criticism or the criticism of others so one of the things of leadership is to do something that's potentially unpopular or where people doubt you and you still go through with the decision so what's that balance like I think you have to walk people through your decision-making right like if if this is where blogging is really important communication is so important again language is just another kind of code it's like here is the program by which I arrived at the conclusion that I'm gonna reach right it's one thing to say like this decision is final deal with it right that's not usually satisfying
people but if you say look you know we've been thinking about this problem for a while here's some stuff that's happened here's what we think is right here's our goals here's what we want to achieve and we've looked at these options and we think this of the available options is the best option people will be like oh okay alright maybe I don't totally agree with you but I can kind of see where you're coming from and like see it's not just an arbitrary decision delivered from a cloud of flames in the sky right it's like a human trying to reach some kind of consensus about you know goals and their goals might be different than yours that's completely legit right but if you're making that clear it's like oh well the reason we don't agree is because we have totally different goals right like how could we agree it's not that you're a bad person it's that we have radically different goals in mind when we started looking at this problem and the other one you said is passion or hard work sorry well those are tied together to me in my mind hard work and passion like for me like I just really love the problem discourse is setting out to solve because in a way it's like there's a there's a vision of the world where it all devolves into Facebook basically owning everything and every aspect of human communication right and this has always been kind of a scary world for me first because I don't I think Facebook is really good at execution I gotta compliment them they're very competent in terms of what they're doing but Facebook has not much of a moral compass in terms of Facebook cares about Facebook really they don't really care about you and your problems what they care about is how big they can make Facebook right is that you're talking about the company or just the mechanism of how Facebook works kind of both really right like and the idea with discourse the reason I'm so passionate about it is because I believe every community should have the right to own themselves right like they should have their
own software that they can run that belongs to them that's their space where they can set the rules and if they don't like it they can move to different hosting or you know whatever they need to have can happen but like this this idea of a company town where all human communication is implicitly owned by whatsapp Instagram and Facebook it's really disturbing too because Facebook is really smart like I said they're great at execution buying whatsapp and buying Instagram were incredibly smart decisions and they also do this thing onavo if you know it they have this VPN software that they give away for free on smartphones and it indirectly feeds all the data about the traffic back to Facebook so they can see what's actually getting popular through the VPNs right they have low level access to the network data because users have let them have that so ok let's let's take a small pause here first of all discourse can you talk about can you lay out the land of all the different ways you can have community so there's Stack Overflow that you've built there's discourse yeah so Stack Overflow is kind of like a wiki Wikipedia you talk and it's a very specific scalpel very focused so what is the purpose of discourse and maybe contrast that with Facebook first of all say what is discourse yeah start from the beginning well let me start at the very beginning so Stack Overflow is very structured wiki style QA for programmers right and that was the problem we first worked on when we started we thought it was discussions because we looked at like programming forums and other things but we quickly realized we were doing QA which is a very narrow subset of human communication so when you started Stack Overflow you didn't even know it would be QA you know well we didn't know we had an idea of like ok these are things that we see working online we had a goal right our goal was there was this site experts exchange with a very
unfortunate name thank you for killing that site yeah I know right like a lot of people don't remember it anymore which is great like that's the measure of success when people don't remember the thing that you were trying to replace then you've totally won so it was a place to get answers to programming questions but it wasn't clear if it was like focused Q&A if it was a discussion there were plenty of programming forums so we weren't really sure we were like ok we'll take aspects of Digg and reddit like voting which was very important reordering answers based on votes wiki style stuff of like being able to edit posts not just your posts but other people's posts to make them better and keep them more up-to-date ownership of blogging of like ok this is me I'm saying this is my voice you know this is the stuff that I know and you know the reputation accrues to you and it's peer recognition so you asked earlier what motivates programmers I think peer recognition motivates them a lot that was one of the key insights of Stack Overflow was like recognition from your peers is why things get done not money necessarily not your boss but like your peers saying wow this person really knows their stuff has a lot of value so the reputation system came from that so we were sort of frankensteining a bunch of stuff together in Stack Overflow of like stuff we had seen working and we knew worked and that became Stack Overflow and over time we realized it wasn't really discussion it was very focused questions and answers there wasn't a lot of room on the page for let me talk about this tangential thing it was more like ok is it answering the question is it clarifying the question or could it be an alternative answer to the same question because there's usually more than one way to do it in programming there's say five to ten ways and one of the patterns we got into early on stackoverflow was there are questions where there would be like hundreds of answers and we were like wow how can there be
a programming question with 200 500 answers and we looked at those and we realized those were not really questions in the traditional sense they were discussions it was stuff that we allowed early on that we eventually decided wasn't allowed such as what's your favorite programming font you know what's the funniest programming cartoon you've seen and we had to sort of backfill rules about like why isn't this allowed such as is this a real problem you're facing like nobody goes to work and says wow I can't work because I don't know what the funniest programming cartoon is so sorry can't compile this code now right it's not a real problem you're facing in your job that was one rule and the second was like what can you really learn from that it's like what i call accidental learning or reddit style learning where you just you know browse some things oh wow you know did you know tree frogs only live three years I mean I just made that up I don't know if that's true but uh I didn't really set out to learn that I don't need to know that right that's accidental learning whereas Stack Overflow was more intentional learning we were like okay I have a problem and I want to learn about stuff around this problem I'm having right and it could be theory it could be compiler theory it could be other stuff but I'm having a compiler problem hence I need to know the compiler theory the aspect of it that gets me to my answer right so kind of a directed learning so we had to backfill all these rules as we sort of figured out what the heck it was we were doing and the system became very strict over time and a lot of people still complain about that and I wrote in my latest blog entry what does Stack Overflow want to be when it grows up celebrating the 10-year anniversary yeah yeah so ten years and that system has trended towards strictness there's a variety of reasons for this one is people don't like to see other people get reputation for stuff that they view as frivolous
which I can actually understand because if you saw a programmer got like five hundred up votes for funniest programming cartoon or funniest comment they had seen in code it's like well why do they have that reputation is it because they wrote the joke probably not I mean if they did maybe or the cartoon right they're getting a bunch of reputation based on someone else's work that's not even like programming it's just a joke right it's only tangentially related so you begin to resent that like well that's not fair and it isn't at some level they're correct I mean I empathize because like it's not correct that you get reputation for that versus here's a really gnarly regular expression problem and here's a really you know clever insightful you know detailed answer laying out oh here's why you're seeing the behavior that you're seeing here let me teach you some things about how to avoid that in the future that's that's great like that's gold right you want people to gain reputation for that not so much for wow look at this funny thing I saw alright great so there's this very specific Q&A format and then take me through the journey towards discourse and Facebook and Twitter so you start at the beginning Stack Overflow evolved to have a purpose so where does discourse this passion you have for creating community for discussion what is that when was that born well part of it is based on the realization that Stack Overflow is only good for very specific subjects where it's it's based on data facts and science where answers can be kind of verified to be true another form of that is there's the book of knowledge like the tome of knowledge that defines like whatever it is you can refer to that book and it'll give you the answer it only works on subjects where there's like semi clear answers to things that can be verified in some form now again there's always more than one way to do it there's complete flexibility in the system around that but where it
falls down is stuff like poker and Lego like if you go to stackexchange.com we have an engine that tries to launch different Q&A topics right and people can propose Q&A topics sample questions and and if it gets enough support within the network we launch that Q&A site so some ones we launched were poker and Lego and they did horribly right because I mean they might still be there lingering on in some form but it was an experiment this is like a test right and some subjects work super well in the stack engine and some don't but the reason Lego and poker don't work is because they're so social really it's not about you know what's the rule here in poker it's like well you know what kind of cigars do we like to smoke while playing poker or you know what's what's a cool set of cards to use when playing poker or you know what are some strategies like say I have this hand come up with some strategies I could use it's more of a discussion around like what's happening like with Lego you know same thing like here's this cool Lego set I found look how awesome this is and it's like yeah that's freaking awesome right it's not a question right there's all these social components discussions that don't fit at all like we literally had to just disallow those on Stack Overflow because it's not about being social it's about problems that you're facing in your work that you need concrete answers for right like you have a real demonstrated problem that's sort of blocking you in something nobody's blocked by you know what should I do when I have a straight flush right it's not like a blocking problem in the world it's just an opportunity to hang out and discuss so discourse was a way to address that and say look you know discussion forum software I had seen was very very bad when I came out of Stack Overflow in late 2012 early 2013 it was still very very bad I expected it to have improved in the four years since I last looked but it had not improved at all and I was like well that's kind of
terrible because I love these communities of people talking about things that they love you know there's just communities of interest right and there's no good software for them like startups would come to me and say hey Jeff I wanna you know I have this startup here's my idea and the first thing I would say to them is like well first why are you asking me like I don't really know your field right like why aren't you asking the community like the people that are interested in this problem the people that are using your product why aren't you talking to them and then they say oh great idea like how do I do that and then that's when I started playing sad trombone because I realized all the software involving talking to your users customers audience patrons whatever it is it was all really bad you know it was like stuff that I would be embarrassed to recommend to other people and yet that's where I felt they could get the biggest and strongest most effective input for what they should be doing with their product right it's from their users from their community right that's what we did on Stack Overflow so we're talking about forums the what is it the dark matter of the Internet it's still I don't know if it's still but for the longest time it had some of the most passionate and fascinating discussions and what's the usual structure it's linear so it's sequential you're posting one after the other and there's pagination so after every 10th post you go to the next page and that format still is used by like we're doing a lot of research with Tesla vehicles and there's Tesla Motors Club forum which is extremely active we really wanted to run that actually they pinged us about it I don't think we got it but I really would have liked to have gotten that one but they started before even 2012 I believe I mean they've been running for a long time it's still an extremely rich source of information so what what's
broken about that system and how are you trying to fix it I think there's a lot of power in in connecting people that love the same stuff around that specific topic meaning Facebook's idea of connection is just any human that's related to another human right like like through friendship or you know any other reason Facebook's idea of the world is sort of the status update right like a friend of yours did something ate at a restaurant right whereas discussion forums were traditionally around the interest graph like I love electric cars specifically I love Tesla right like I love the way they approach the the problem I love the style of the founder I just love the the design ethic there's a lot to like about Tesla if you saw The Oatmeal he did a whole love comic to Tesla and it was actually kind of cool because I learned some stuff it was about how great Tesla cars were specifically like how they were built differently and he went into a lot of great detail that was really interesting to me that oatmeal post if you read it is the genesis of pretty much all interest communities I just really love this stuff like for me it's yo-yos right like I'm into the yo-yo communities and these interest communities are just really fascinating to me and I feel more connected to the yo-yo communities than I do to you know friends that I don't see that often right like to me the powerful thing is the interest graph and Facebook kind of dabbles in the interest graph I mean they have groups you can sign up for groups and stuff but it's really about the relationship graph like this is my coworker this is my relative this is my friend but not so much about the interests so I think that's the linchpin which forums and communities are built on that I personally love like I like I said leadership is about passion right and being passionate about stuff is is a really valid way to look at the world and I think it's the way a lot of stuff in the world gets done like
someone once described me he's like Jeff you're a guy who you just get super passionate about a few things at a time and you just go super deep on those things and I was like oh that's kind of right that's kind of what I do I'll get into something and just be super into that for a couple years or whatever I just learn all I can about it and go super deep in it and that's how I enjoy experiencing the world right like not being shallow on a bunch of things but being really deep on a few things that I'm interested in so forums kind of unlock that right and you know you don't want a world where everything belongs to Facebook at least I don't I want a world where communities can kind of own themselves set their own norms set their own rules control the experience because community is also about ownership right like if if you're meeting at the Barnes & Noble every Thursday and Barnes & Noble says get out of here you guys don't buy enough books well you know you're kind of hosed right Barnes and Noble owns you right but if you have your own meeting space you know your own clubhouse you can set your own rules decide what you want to talk about there and just really generate a lot better information than you could like hanging out at Barnes & Noble every Thursday at 3:00 p.m.
right so that's kind of the vision of discourse it's a place where it's fully open source you can take the software you can install it anywhere and you know you and a group of people can go deep on whatever it is that you're into and this works for startups right startups are a group of people who go super deep on a specific problem right and they want to talk to their community it's like well install discourse right that's what we do at discourse that's what I did at Stack Overflow I spent a lot of time on meta Stack Overflow which is our internal well public community feedback site and just experiencing what the users were experiencing right because they're the ones doing all the work in the system and they had a lot of interesting feedback and there's that 90/10 rule of like 90% of the feedback you get is not really actionable for a variety of reasons it might be bad feedback it might be crazy feedback it might be feedback you just can't act on right now but there's 10% of it that's like gold it's like literally gold and diamonds where it's like feedback of really good improvements to your core product that are not super hard to get to and actually make a lot of sense and my favorite is about 5% of it is stuff I didn't even see coming it's like oh my god I never even thought of that but that's a brilliant idea right and I can point to so many features of Stack Overflow that we derived from meta Stack Overflow feedback and meta discourse right same exact principle at discourse you know we're getting ideas from the community like oh my god I never thought of that but that's fantastic right like I love that relationship with the community from having built these communities what have you learned about what's the process of getting a critical mass of members in a community is it luck skill timing persistence what is it is it the tools like discourse that empower that community what's the key aspect of starting with one guy or gal and then building it to 2 10
100 and a thousand and so on I think starting with an n of one I mean I think it's persistence and and also you have to be interesting like somebody I really admire once said something that I always liked about blogging he's like here's how you blog you have to have something interesting to say and have an interesting way of saying it right yeah and then do that for like 10 years so that's the genesis is like you have to have sort of something interesting to say that's not exactly what everyone else is saying and an interesting way of saying it which is you know a kind of entertaining way of saying it and then as far as growing it it's like ritual you know like you have to like say you're starting a blog you have to say look I'm gonna blog every week three times a week and you have to stick to that schedule right because until you do that for like several years you're never gonna get anywhere like it just takes years to get to where you need to get to and part of that is having the discipline to stick with the schedule and it helps if it's something you're passionate about because this won't feel like work like I love this I could talk about this all day every day right you just have to do it in a way that's interesting to other people and then as you're growing the community there's that pattern of participation within the community of like generating these artifacts and inviting other people to help you like collaborate on these artifacts like even in the case of blogging like I felt in the early days of my blog which I started in 2004 which is really the genesis of Stack Overflow if you look at my blog it all leads up to Stack Overflow I had all this energy in my blog like 40,000 people were subscribing to me and I was like I want to do something and then I met Joel and said hey Joel I want to do something take this ball of energy from my blog and do something and all the people reading my blog saw that like oh cool you're involving us you're
saying look you're part of this community let's build this thing together like they picked the name like we voted on the name for Stack Overflow on my blog and naming is super hard the hardest problem in computer science is coming up with a good name for stuff right yeah you can go back to my blog there's the poll where we voted and Stack Overflow became the name of the site and all the early beta users of Stack Overflow were the audience of my blog plus Joel's blog right so we started from like if you look at the genesis okay I was just a programmer who said hey I love programming but I have no outlet to talk about it so I'm just gonna blog about it because I don't have enough people at work to talk to about it because at the time I worked at a place where you know programming wasn't the core output of the company it was a pharmaceutical company and I just loved this stuff you know to an absurd degree so I was like I'll just blog about it and then I'll find an audience and eventually I found an audience eventually I found Joel and eventually built Stack Overflow from that one core of activity right but it was that repetition of feeding back in feedback from my blog comments feedback from Joel feedback from the early Stack Overflow community when people see that you're doing that they will follow along with you right they say look cool you're here in good faith you're actually you know not listening to everything because that's impossible but you're actually you know weighing our feedback in what you're doing and why wouldn't I because who does all the work on Stack Overflow me Joel no it's the other programmers that are doing all the work so you gotta have some respect for that and then you know discipline around look you know we're trying to do a very specific thing here on Stack Overflow we're not trying to solve all the world's problems we're trying to solve this very specific QA problem in a very specific way not
because we're jerks about it but because this strict set of rules helps us get really good results right and for programmers that's an easy sell for the most part because programmers are used to dealing with ridiculous systems of rules like constantly that's basically their job so they're they're very oh yeah a super strict system of rules that lets me get stuff done that's programming right that's what Stack Overflow is so you're making it sound easy but let's go back there in 2004 you started the blog coding horror was it called that at the beginning at the very beginning it was one of the smart things I did it's from a book by Steve McConnell code complete which is my favorite programming book still probably my number one programming book for anyone to read one of the smart things I did back then I don't always do smart things when I start stuff I contacted Steve and said hey I really like this it was a sidebar illustration indicating danger in code right coding horror was like watch out and I love that illustration because it spoke to me I saw that illustration and went oh my god that's me like I'm always my own worst enemy and a key insight in programming is every time you write something think how am I gonna screw myself because you will constantly right so that icon was like oh yeah I need to constantly hold that mirror up and look and say look you're very fallible you're gonna screw this up like how can you build this in such a way that you're not gonna screw it up later like how can you get that discipline around making sure at every step I'm thinking through all the things that I could do wrong or that other people could do wrong because that is actually how you get to be a better programmer a lot of times right so that sidebar illustration I loved it so much and I wrote Steve before I started my blog to say hey can I have permission to use this because I just really like this illustration and Steve was kind enough to give me permission to
do that, and he just continues to give me permission. So, yeah.

That's awesome. But in 2004 you started this blog. You look at Stephen King and his book On Writing, or Steven Pressfield's The War of Art. I mean, it seems like writers suffer. It's a hard process of writing, right? There's gonna be suffering.

I mean, I won't kid you, the work is suffering, right? Doing the work. Even when, every week, you're like, okay, that blog post wasn't very good, or people didn't like it, or people wrote disparaging things about it, you have to have the attitude of, no matter what happens, I want to do this for me, right? It's not about you, it's about me. I mean, in the end it is about everyone, because this is how good work gets out into the world, but you have to be pretty strict about saying, I'm selfish in the sense that I have to do this for me. You mentioned Stephen King's book On Writing. One of the things I do, for example, when writing, is read it out loud. One of the best pieces of advice for writing anything is to read it out loud, multiple times, and make it sound like you're talking, because that is the goal of good writing. It should sound like you said it, with slightly better phrasing, because you have a little more time to think about what you're saying, but it should sound natural when you say it. I think that's probably the single best writing advice I can give anyone: just read it over and over, out loud, and make sure it sounds like something you would normally say, and that it sounds good.

And what's your process of writing? So there's usually a pretty good idea behind the blog post. So, ideas.

Right. So I think you gotta have the concept that there are so many interesting things in the world. I mean, my god, the world is amazing, right? You could never write about everything that's going on, because it's so incredible. But if you can't come up with, let's say, one interesting thing per day to talk about, then you're not trying hard enough, because the world is full of just super interesting stuff. And one great way to mine stuff is to go back to old books, because they bring up old stuff that's still super relevant. I did that a lot, because I was reading classic programming books, and a lot of the early blog posts were like, oh, I was reading this programming book, and they brought up this really cool concept, and I want to talk about it some more. And, I mean, you're not claiming credit for the idea, but it gives you something interesting to talk about that's kind of evergreen, right? You don't have to go, what should I talk about? Just go dig up some old classic programming books and find something where you go, oh wow, that's interesting, or, how does that apply today, or, what about X and Y? So pull a couple of sentences from that book and then sort of play off of it, agree or disagree with it.

So in 2007 you wrote that you were offered a significant amount of money to sell the blog. You chose not to. What were all the elements you were thinking about? Because I'd like to take you back. It seems like there are a lot of non-linear decisions you made through life. So what was that decision like?

Right. So one of the things I love is the Choose Your Own Adventure books, which I loved as a kid, and I feel like they're the early programmer books, because they're all about if-then statements, right? If this, then this. And they're also very, very unforgiving. There are all these sites that map the classic Choose Your Own Adventure books and how many outcomes are bad, and there are a lot of bad outcomes. So part of the game is, oh, I got a bad outcome, go back one step, or go back a few further steps. How did I get here, right? It's a sequence of decisions. And this is true of life, right? Every decision is part of a sequence. Any individual decision is not really right or wrong, but they lead you down a path, right? So I do think there's some truth
to that. So, this particular decision: the blog had gotten fairly popular, there were a lot of RSS readers, I discovered, and this guy contacted me out of the blue from this bug tracking company, like, I really want to buy your blog. I think it was around a hundred thousand dollars, or maybe eighty thousand, but it was a lot, right? At the time, that would have been a year's worth of salary all at once, so I did really think about it. And I remember talking to people at the time, like, wow, that's a lot of money. But then I'm like, I really like my blog, right? Do I want to sell my blog? Because it wouldn't really belong to me anymore at that point. And, you know, I don't like to give advice to people a lot, but one piece of advice I do give, because I do think it's really true and it's generally helpful, is this: whenever you're looking at a set of decisions, like, should you do A, B, or C, you gotta pick the thing in that list that's a little scarier. Not, you know, jump-off-a-cliff scary, but the thing that makes you nervous. Because if you pick the safe choice, you're usually not really pushing yourself, you're not choosing the thing that's gonna help you grow. So for me, the scarier choice was to say no. I was like, well, no, let's just see where this is going, right? Because then I own it. It belongs to me, it's my thing, and I can take it to some other logical conclusion, right? Because imagine how different the world would have been had I said yes and sold the blog. There would probably be no Stack Overflow, you know? A lot of other stuff would have changed. So for that particular decision, I think it was that same rule: what scares me a little bit more? Do the thing that scares you.

Yeah. So, speaking of which: startups. There are some specific and some more general questions that a lot of people would be interested in. You've started Stack Overflow, you started Discourse. So what's the, here's one, two, three guys, whatever it is, in the beginning, what was that process like? Do you start talking about it? Do you start programming? Where is the birth and the catalyst?

Actually, I can talk about it in the context of both Stack Overflow and Discourse. I think the key thing, initially, is that there is a problem, some state of the world that's unsatisfactory, to the point that you're upset about it, right? In that case, it was Experts Exchange. I mean, it was Joel's original idea, because I approached him. I was like, look, Joel, I have all this energy from my blog, I want to do something, I want to build something, but I don't know what it is, because, honestly, I'm not a good idea person. I'm really not. I'm the execution guy. I'm really good at execution, but I'm not good at blue-skying ideas, it's not my forte. Which is another reason why I like the community feedback, because they blue-sky all day long for you, right? I can just go in and cherry-pick a blue-sky idea from the community, and even if I have to spend three hours reading to get one good idea, it's worth it, man. But anyway, the idea from Joel was, hey, Experts Exchange: it's got great data, but the experience is hideous, right? It's trying to trick you, it feels like used-car salesmen, it's just bad. And I was like, oh, that's awesome. It feeds into community, it feeds into, you know, we can make it Creative Commons. So I think the core is to have a really good idea that you feel very strongly about in the beginning, that there's a wrong in the world, an injustice, that we will right through the process of building this thing. For Discourse, it was, look, there's no good software for communities to just hang out and do stuff, right? Whether it's problem solving, a startup, whatever. Forums are such a great building block of online community, and they were hideous. They were so bad, right? It was embarrassing. I was literally embarrassed to be
associated with this software, right? I was like, we have to have software we can be proud of. It has to be competitive with Reddit, competitive with Twitter, competitive with Facebook, right? I would be proud to have this software on my site. So that was the genesis of Discourse: feeling very strongly that there needs to be a good solution for communities. So that's step one, the genesis: something you feel super strongly about, right? And then people galvanize around the idea. Joel was already super excited about the idea, I was excited about the idea. With the forum software, I was posting on Twitter, and as part of my research I had started researching the problem, right? And I found a game called Forum Wars, which was a parody of forums. It's still very, very funny, a parody of forum behavior circa, I would say, 2003. And it's aged some, right? The behavior is a little different in the era of Twitter, but it was awesome, it was very funny, and it was a game, like an RPG, and it had a forum attached to it. So it was a game about forums, with a forum attached. I was like, this is awesome, right? This is so cool. And the founder of that company, or that project, it wasn't really a company, contacted me, this guy Robin Ward from Toronto, saying, hey, I saw you've been talking about forums, and I really love that problem space. He's like, I'd still love to build really good forum software, because I don't think anything out there is any good. And I was like, awesome. At that point I was like, we're starting a company, because I couldn't have wished for a better person to walk through the door and say, I'm excited about this. Same thing with Joel, right? I mean, Joel is a legend in the industry, right? So when he walks through the door and says, I'm excited about this problem, I'm like, me too, man, we can do this, right? So that, to me, is the most important step: having an idea you're super excited about, and another person, a co-founder, right? Because, again, you get that dual leadership, of, like, am I making a bad decision? Sometimes it's nice to have checks, like, is this a good idea? I don't know, right? So those are the crucial seeds. But then, starting to build stuff, whether it's programming or prototypes, there's tons of research. There's tons of research, like, what's out there that failed? Because a lot of people look at the successes. Everybody looks at the successes. Those are boring. Show me the failures, because that is what's interesting. That's where people were experimenting, that's where people were pushing, and they failed, but they probably failed for reasons that weren't directly about the quality of their idea, right? So look at all the failures. Don't just look at what everybody looks at, which is, oh, gosh, look at all these successful people. Look at the failures. Research the entire field. And that's the research that I was doing that led me to Robin Ward. And then, for example, when we did Stack Overflow, we were like, okay, I really like elements of voting in Digg and Reddit. I like the Wikipedia thing, where everything is up to date, nothing is an old tombstone with horrible out-of-date information. We know that works, Wikipedia is an amazing resource. Blogging, the idea of ownership, is so powerful, right? Like, oh, I, Joe, wrote this, and look how good Joe's answer is, right? All these concepts were rolling in, researching all the things that were out there that were working, and why they were working, and trying to fold them into, again, that Frankenstein's monster that is Stack Overflow. And by the way, those weren't free decisions, because there's still a ton of tension in the Stack Overflow system. There are reasons people complain about Stack Overflow, because it's so strict, right? Why is it so strict? Why are you guys always closing my questions? It's because there's so much tension that we built into the system around trying to get good results out of the
system. And that stuff doesn't come for free, right? It's not like we can all have perfect answers and nobody ever has to get their feelings hurt, nobody ever has to get downvoted. It doesn't work that way, right?

So this is an interesting point, a small tangent. You've written about anxiety. I've posted a lot of questions and answers on Stack Overflow, and the questions are usually about something very specific, something I'm working on. This is something you talk about: really, the goal of Stack Overflow is to write a question that's not about you, it's about the question, one that will help the community in the future.

Right, but that's a tough sell, right? Because people are like, well, I don't really care about the community. What I care about is my problem, my problem. And that's fair, right? It's, again, that tension, that balancing act of, we want to help you, but we also want to help everybody that comes behind you, right? The long line of people who are gonna come up and say, oh, I kind of have that problem too, right? And if nobody is ever going to come up and say, I have this problem too, then that question shouldn't exist on Stack Overflow, because the question is too specific. And even that's tension, right? How do you judge that? How do you know that nobody's ever gonna have this particular question again? So there's a lot of tension in the system.

Do you think that anxiety of asking the question, the anxiety of answering, that tension, is inherent to programmers, inherent to this kind of process? Or can it be improved? Can there be a happy land where that tension is not quite so harsh?

I don't think Stack Overflow can totally change how it works. One thing they are working on, finally, is the ask page, which had not changed since 2011. I'm still kind of bitter about this, because, I feel like, you have a Q&A system, and what are the core pages in a Q&A system? Well, first of all, the question, all the answers, and also the ask page, particularly when you're a new user, or someone trying to ask a question. That's the point at which you need the most help, and we just didn't adapt with the times. But the good news is they're working on it, from what I understand, and it's gonna be a more wizard-based format. And you could envision a world where, as part of this wizard-based flow, when you're asking a question, it's like, okay, come up with a good title. What are good words to put in the title? One word that's not good to put in the title is "problem", for example. I have a problem. Oh, you have a problem? Okay, a problem, that's great, right? You need specifics, right? So it's trying to help you make a good question title, for example. That step will be broken out, all that stuff. But one of those steps in that wizard of asking could say, hey, I'm a little nervous, you know, I've never done this before. Can you put me in a queue for special mentoring? Right? You could opt into a special mentor. I think that would be fantastic. I don't have any objection to that at all, as an opt-in system, because there are people who are like, no, I just want to help. I want to help a person, no matter what. I want to go above and beyond. I want to spend hours with this person, whatever their goals are. Great idea, who am I to judge, right? So that's fine, it's not precluded from happening. But there's a certain big-city ethos that we started with, like, look, we're in New York City. You don't come to New York City and expect them to be, oh, welcome to the city, Joe, how's it going? Come on in, let me show you around. That's not how New York City works, right? And, you know, New York City has a reputation for being rude, which I actually don't think it is, having been there fairly recently. It's not rude, people are just going about their business. Like, look, I have things to do, I'm busy, I'm a busy professional, as are you. And since you're a busy professional, certainly, when you ask a question, you're gonna ask
the best possible question, right? Because you're a busy professional, and you would not accept anything less than a very well-written question, with a lot of detail about why you're doing it, what you're doing, what you researched, what you found, right? Because you're a professional, like me, right? And this rubs people the wrong way sometimes, and I don't think it's wrong to say, look, I don't want that experience, I want a more chill place for beginners. And I still think Stack Overflow was never designed for beginners, right? There's this misconception, you know, even Joel says sometimes, oh, yeah, Stack Overflow is for beginners. And I think if you're a prodigy, it can be, but that's not really representative, right? As a beginner, you want a totally different set of tools. You want live screen sharing, live chat, you want access to resources, you want a playground, a playground you can experiment in and test things, and all this stuff that we just don't give people, because that was never really the audience we were designing Stack Overflow for. That doesn't mean it's wrong, and I think it would be awesome if there was a site like that on the internet. Or if Stack Overflow says, hey, you know, we're gonna start doing this, that's fine too. I'm not there, I'm not making those decisions. But I do think the pressure, the tension that you describe, is there: people feeling, look, I'm a little nervous, because I know I gotta do my best work.

Right. The other one, something you talk about, which is also really interesting to me, is duplicate questions, or dupes. It's a really difficult problem that you highlight.

It's super hard.

It's super hard. You could take one little topic, and you could probably write ten, twenty, thirty ways of asking about that topic, and they would all be different. I don't know if there should be one page that answers all of them. Is there a way that Stack Overflow can help disambiguate, like, separate these duplicate questions, or connect them together? Or is it a totally hopeless, difficult, impossible task?

I think it's a very, very hard computer science problem, partly because people are very good at using completely different words. It always amazed me on Stack Overflow: you'd have two questions that were functionally identical, and one question had zero words in common with the other question. Like, oh my god, from a computer science perspective, how do you even begin to solve that? And it happens all the time. People are super good at this, right? Accidentally asking the same thing in ten, twenty different ways. And the other complexity is, we want some of those duplicates to exist, because if there are five versions with different words, have those five versions point to the one centralized answer, right? It's like, okay, this is a duplicate, no worries. Here's the answer that you wanted, over here, on this canonical version of the question, rather than having ten copies of the question and the answer. Because if you have ten copies of the question and the answer, this also devalues the reputation system, which programmers hate, as I previously mentioned. You're getting reputation for an answer that somebody else already gave. It's like, well, it's an answer, but somebody already gave that answer, so why are you getting reputation for the same answer the other guy gave four years ago? People get offended by that, right? So the reputation system itself adds tension to the system, in that the people who have a lot of reputation become very incentivized to enforce the reputation system. And for the most part, that's good. I know it sounds weird, but for the most part, look: to use Stack Overflow, you have to buy into the idea that strict systems ultimately work better. And I do think that in programming.
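To make the duplicate-detection difficulty above concrete, here is a minimal sketch, with made-up example questions, of why purely lexical matching breaks down: two functionally identical questions can share zero words, so any word-overlap score (Jaccard similarity here) comes out as zero. This is an illustration, not Stack Overflow's actual duplicate-detection code.

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Word-overlap score: 1.0 = identical word sets, 0.0 = no words shared."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    return len(words_a & words_b) / len(words_a | words_b)

# Two hypothetical questions with the same intent and zero shared words:
q1 = "how do i remove duplicates from a list"
q2 = "eliminating repeated elements in an array"

print(jaccard_similarity(q1, q1))  # 1.0 - trivially a duplicate of itself
print(jaccard_similarity(q1, q2))  # 0.0 - lexical overlap sees no duplicate at all
```

Anything that only compares words will score these two questions as completely unrelated, which is exactly the "zero words in common" failure mode described above; closing that gap requires some notion of meaning, not just vocabulary.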
You're familiar with loose typing versus strict typing, right? The idea that you don't declare a variable; rather, you just start using a variable, and, okay, I see it's implicitly an integer, BAM, awesome. Duck equals 5; well, duck is now an int holding 5, right? And you're like, cool, awesome, simpler, right? Why would I want to worry about typing? And for a long time, in the Ruby community, for example, they were like, yeah, this is awesome, you just do a bunch of unit testing, which is testing your program's validity after the fact, to catch any bugs that strict typing of variables would have caught. And now you have this thing called TypeScript, from Microsoft, from Anders Hejlsberg, the guy who built C#, who's one of the greatest minds in software development in terms of language design, and he says, no, no, no, we want to bolt a strict type system onto JavaScript, because it makes things better. And now everybody's like, oh my god, we deployed TypeScript and found 50 latent bugs that we didn't know about. This is super common. So I think there is a truth in programming about strictness. It's not the goal; we're not saying be super strict because strictness is correct. No: strictness produces better results. That's what I'm saying, right? So strict typing of variables, I would say, you almost universally have consensus now, is basically correct, and should be that way in every language, right? Duck equals 5 should throw an error, because you didn't declare it, you didn't tell me that duck was an integer. That's a bug, right? Or maybe you mistyped: you typed deck instead of duck, right? You never know; this happens all the time. So, with that in mind, I will say that the strictness of the system is correct. Now, that doesn't mean cruel. That doesn't mean mean. That doesn't mean angry. It just means strict, okay? So I think that's where there's misunderstanding, and people get cranky. Like, another question you asked is, why are programmers kind of mean sometimes? Well, who do programmers work with all day long? I have a theory that if you're at a job and you work with a jerk all day long, what do you eventually become? A jerk. And what is the computer except the world's biggest jerk?
The computer has no time for your mistakes. The minute you make a mistake, everything comes crashing down, right? One semicolon has crashed space missions, right? So that's normal, and you begin to internalize it. You begin to think, oh, my coworker, the computer, is super strict and kind of a jerk about everything, so that's kind of how I'm gonna be, because I work with this computer, and I have to accede to its terms on everything. So you start to absorb that, and you start to think, oh, well, being arbitrarily strict is really good. An error code of 56249 is a completely good error message, because that's what the computer gave me, right? So you kind of forget to be a person at some level. And, you know, they say great detectives internalize criminals, and kind of are criminals themselves; that trope of the master detective who's good because he can think like the criminal. Well, I do think that's true of programmers. Really good programmers think like the computer, because that's their job. But if you internalize it too much, you become the computer. You kind of become a jerk to everybody, because that's what you've internalized. You're almost not a jerk; you just have no patience for a lack of strictness.

As you said, it's not out of a sense of meanness.

It's accidental. But I do believe it's an occupational hazard of being a programmer: you start to behave like the computer. You're very unforgiving, very terse. Wrong. Correct. Move on. It's like, well, can you help me? What could I do to fix it? Nope: wrong. Next question, right? That's normal for the computer. Just fail, next, right? Do you remember on Saturday Night Live, in the nineties, they had this character who was an IT guy, the "move" guy? Move! Was that Jimmy Fallon? I can't place him; it might have been MADtv, actually. But anyway, he had no patience, and that's always been the perception, right? You start to behave like the computer. It's like, oh, you're wrong, out of the way.
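The loose-versus-strict typing point above ("duck equals 5", typing deck instead of duck) can be sketched in Python, which is dynamically typed. This is an illustrative toy, not code from the conversation; the variable names come from the duck/deck example above.

```python
# The typo hazard in a loosely typed language: no declarations means
# a misspelled name is silently accepted as a brand-new variable.
duck = 5         # no declaration needed: duck is implicitly an integer
deck = duck + 1  # meant to update duck? too bad, "deck" now quietly exists too

# Worse, type mismatches only explode at runtime, when that code path runs:
def add_tax(price):
    return price * 1.05

try:
    add_tax("19.99")  # a string slipped in; nothing flagged it before running
except TypeError as err:
    print("caught only at runtime:", err)

# A strict type system (TypeScript, C#, or Python annotations plus a checker
# like mypy) would reject both mistakes before the program ever ran, which is
# the "found 50 latent bugs" effect described above.
```

The point is the one made in the conversation: strictness here isn't a virtue in itself, it's that moving these errors from runtime to before-you-run produces better results.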
You've written so many blog posts about programming, about programmers. What do you think makes a good, well, let's start with: what makes a good solo programmer?

Well, I don't think you should be a solo programmer. To be a good solo programmer, it's kind of like what I talked about, well, not on mic, but one of the best points John Carmack makes in the book Masters of Doom, which is a fantastic book, anybody listening to this who hasn't read it, please read it, it's such a great book, is that at the time they were working on stuff like Wolfenstein and Doom, they didn't have the resources that we have today. They didn't have Stack Overflow, they didn't have Wikipedia, they didn't have Discourse forums, they didn't have places to go to get people to help them, right? They had to work on their own. And that's why it took a genius like Carmack to do this stuff, because you had to be a genius to invent a lot of that stuff from first principles. The hacks he was coming up with were genius, right? Genius-level stuff. But you don't need to be a genius anymore, and that means not working by yourself. You have to be good at researching stuff online. You have to be good at asking questions, really good questions that are really well researched, which implies, oh, I went out and researched for three hours before I wrote this question. That's what you should be doing, because that's what's gonna make you good, right? To me, this is the big difference between programming in, like, the eighties versus programming today: you kind of had to be by yourself back then. Where would you go for answers? I remember, in the early days, when I was learning Visual Basic for Windows, I would call the Microsoft help line on the phone when I had programming problems, because I was like, I don't know what to do. So I would go and call, and they
had these huge phone banks. Can you imagine how alien that is now? Who would do that, right? That's crazy. There was just nowhere else to go when you got stuck, right? I had the books that came with it, and I read those, studied those religiously. I just saw a post from Steve Sinofsky that said C++ version 7 came with like 10,000 pages of written material, because where else were you gonna figure that stuff out? Go to the library? You didn't have, you know, Reddit, you didn't have anywhere to go to answer these questions.

So you've talked about, through the years, basically not having an ego, not thinking that you're the best programmer in the world, always just looking to improve, to become a better programmer than you were yesterday. So how have you changed, as a programmer, and as a thinker and designer around programming, over the past, what is it, 15 years, really, of being a public figure?

I would say the big insight that I had is that, eventually, as a programmer, you have to kind of stop writing code to be effective, which is kind of disturbing, because you really love it. But you realize that being effective at programming, in the general sense, doesn't mean writing code. A lot of the time, you can be much more successful by not writing code than by writing code, in terms of just solving the problems you have: essentially, hiring people that are really good, setting them free, and giving them basic direction, right, on strategy and stuff. Because a lot of the problems you encounter aren't necessarily solved through really gnarly code. They're solved by conceptual solutions, which can then be turned into code. But are you even solving the right problem? So I would say, for me, the main insight is that, to succeed as a programmer, you eventually kind of stop writing code. That's gonna sound discouraging, probably, to people hearing it, but I don't mean it that way. What I mean is that you're coding in a higher-level language, eventually. Okay, so we start coding in assembly language, right? That's the beginning. You're hard-coded to the architecture. Then you have stuff like C, where, cool, we can abstract across architectures. I can write code and then compile that code for ARM, or x86, or whatever else is out there. And then even higher level than that, right, you're looking at Python, Ruby, interpreted languages. And then, to me, as a programmer, it's like, okay, I want to go even higher. How do I abstract higher than the language? Well, you abstract in spoken language and written language, right? You're sort of inspiring people to get things done, giving them guidance, like, what if we did this? What if we did that? You're writing in the highest-level language that there is, which is, for me, English, or whatever your spoken language is. So it's all about being effective, right? And Patrick McKenzie, patio11 on Hacker News, who works at Stripe, has a great post about this, about how calling yourself a programmer is a career-limiting move at some level, once you get far enough into your career. I really believe that. And again, I apologize if this sounds discouraging, I don't mean it to be, but he's so right. Because it's all the stuff that goes on around the code, like the people. That's another thing: if you look at my early blog pieces, they're about, wow, programming is about people more than it's about code. Which doesn't really make sense, right? But it's about, can these people even get along together? Can they understand each other? Can you even explain to me what it is you're working on? Are you solving the right problem? Peopleware, right? Another classic programming book, again, up there with Code Complete, please read Peopleware. It's that software is people, right? People are the software, first and foremost. So a lot of the skills that I was working on early on the blog were about figuring out the people
parts of programming, which were the harder parts. The hard part of programming: once you get to a certain skill level in programming, you can pretty much solve any reasonable problem that's put in front of you. You're not writing algorithms from scratch, right? That just doesn't happen. So any sort of reasonable problem put in front of you, you're gonna be able to solve. But what you can't solve is, our manager is a total jerk. You cannot solve that with code. That is not a code problem. And yet that will hurt you way more than, oh, we had to use this stupid framework I don't like, or, Sam keeps writing bad code that I hate, or, Dave is off there in the wilderness writing god knows what, right? Those are not your problems. Your problem is that your manager, or a coworker, is so toxic to everybody else on your team that nobody can get anything done, because everybody's so stressed out and freaked out, right? Those are the problems you have to attack.

Absolutely. And so, as you developed as a programmer, you went to higher and higher levels of abstraction, up into natural language. But you're also the guy who kind of preached building it, diving in and doing it, learning by doing. Do you worry that as you get to higher and higher levels of abstraction, you lose track of the lower level, of just building? Do you worry about that, maybe not now, but 10 years from now, 20 years from now?

Well, no. I mean, there is always that paranoia, oh, gosh, I don't feel as valuable since I'm not writing code. But for me, when we started the Discourse project, it was Ruby, and I didn't really know Ruby. I mean, as you pointed out, and this is another valuable thing you get from Stack Overflow, you can be super proficient in, for example, C#, which I was working in, that's what we built Stack Overflow in, and it's still written in it, and then switch to Ruby, and you're a newbie again, right? But you have the framework. I know what a for loop is, I know what recursion is, I know what a stack trace is, right? I have all the fundamental concepts to be a programmer. I'm just not a beginner beginner, like you're saying. I just need to apply the programming concepts I already know to Ruby.

So there's a question that's really interesting there. Looking at Ruby, how do you go about learning enough that your intuition can carry over?

Well, the carryover, that's what I'm trying to get to. What I realized, when it was just me and Robin, is that if I bother Robin, I am now costing us productivity, right? Every time I go to Robin, rather than building our first alpha version of Discourse, he's now answering my stupid questions about Ruby. Is that a good use of his time? Is that a good use of my time? The answer to both of those was resoundingly no, right? We were trying to get to an alpha, and it was pretty much, okay, we'll hire more programmers. We eventually hired Neil, and then eventually Sam, who came in as a co-founder, actually it was Sam first, then Neil later. The answer to the problem is, just hire other competent programmers. It's not, now I shall pull myself up by my bootstraps and learn Ruby. At some point, writing code becomes a liability to you in terms of getting things done. There are so many other things that go on in a project, like building the prototype. Like you mentioned, how do you, if you're not writing code, help everybody keep focused on what we're building? Well, first, basic mock-ups and research, right? What do we even want to build? There's a little bit of that that goes on, and then very quickly you get to the prototype stage: build a prototype, and iterate on it really, really rapidly. That's what we did at Discourse, and that's what we demoed to get our seed funding for Discourse, the alpha version of Discourse that we had
running and ready to go and it was very it was bad I mean it was I'll just tell you it was bad we have screenshots and I'm just embarrassed to look at it now but it was the prototype we were figuring out like what's working what's not working because there's such a broad gap between the way you think things will work in your mind or even on paper and the way they work once you sit and live in the software like actually spend time living and breathing in the software they're so different so my philosophy is get to a prototype and then what you're really optimizing for is speed of iteration like how fast you can turn the crank how quickly can we iterate that's the absolutely critical metric of any software project and I had a tweet recently that people liked and I totally believe this it's so fundamental to what I do it's like if you want to measure the core competency of any software tech company it's the speed at which somebody can say hey we really need this word in the product changed right because it will be more clear to the users like instead of respond it's reply or something from the conception of that idea to how quickly that single word can be changed in your software and rolled out to users that is your lifecycle that's your health your heartbeat if your heartbeat is like super slow you're basically dead no seriously like if it takes two weeks or even a month to get that single word changed that was oh my god this is a great idea that word is so much clearer I'm talking like everybody's on board for this change it's not like let's just change a word cuz we're bored it's like this is an awesome change and then it takes you know months to roll out it's like what you're dead like you can't iterate you can't do anything right like so anyway about the heartbeat it's like get the prototype and then iterate on it that's what I view as like the central tenet of modern software development that's fascinating you
put it that way it's actually so I work in I build autonomous vehicles and when you look at maybe compare Tesla to most other automakers the heartbeat for Tesla is literally days now in terms of they can over-the-air deploy software updates to all their vehicles which is markedly different than every other automaker which takes years to update a piece of software and that's reflected in everything that's in the final product that's reflected in really how slowly they adapt to the times to be clear I'm not saying being a hummingbird is the goal either it's like you don't want a heartbeat that's like so fast it's like your wings you know you're just freaking out but like it is a measure of health you should have a healthy heartbeat it's up to the people listening to this to decide what that means but it has to be healthy it has to be reasonable because otherwise you just get frustrated because like that's how you build software you make mistakes you roll it out you live with it you see what it feels like and say oh god that was a terrible idea oh my gosh this could be even better if we did y right you turn the crank and then the more you do that the faster you get ahead of your competitors ultimately because it's rate of change right delta-v right how fast are you moving well within a year you're gonna be miles away by the time they catch up with you right like that's the way it works and plus users like I as a software developer I love software that's constantly changing because I don't understand people who get super pissed off when like oh they changed the software on me how dare they I'm like yes change the software change it all the time man that's what makes this stuff great is that it can be changed so rapidly and become something that is greater than it is now now granted there's some changes that suck I admit I've seen it many times but in general it's like that's what makes software cool right is that it is so
malleable like fighting that is like weird to me because it's like well you're fighting the essence of the thing that you're building like that doesn't make sense you want to really embrace that not to be a hummingbird but like embrace it with a healthy cycle a healthy heartbeat right so you talk about how people really don't change it's true that's why probably a lot of the stuff you write about in your blog probably will remain true there's a flip side of the coin people don't change so investing in understanding people is like learning Unix in 1970 because nothing has changed right like yeah all those things you've learned about people will still be valid 30 40 years from now whereas if you learn the latest JavaScript framework that's gonna be good for like two years right yeah exactly so but if you look at the future of programming so there's a people component but there's also the technology itself what do you see as the future of programming will it change significantly or as far as you can tell people are ultimately programming and so it's not something that you foresee changing in any fundamental way well you gotta look at sort of the basics of programming and one thing that always shocked me is like source control like I didn't learn anything about source control when I graduated from college in 1992 but I remember hearing from people in like 1998 99 like even maybe today they're not learning source control and to me it's like well how can you not learn source control that is so fundamental to working with other programmers working in a way that you don't lose your work just basic the literal bedrock of software development is source control now you compare today like github right like Microsoft bought github which I think was an incredibly smart acquisition move on their part now anybody who wants like reasonable source control can go sign up on github it's all set up for you right
there's tons of walkthroughs tons of tutorials so on the question of like has programming advanced from say 1999 it's like well hell we have github I mean my god yes right like it's massively advanced over what it was now as to whether programming is significantly different I'm gonna say no but I think the baseline of like what we view as the fundamentals will continue to go up and actually get better like source control that's one of the fundamentals that has gotten I mean orders of magnitude better than it was 10 20 years ago so those are the fundamentals let me introduce two things that maybe you can comment on so one is mobile phones so that could fundamentally transform what programming is or maybe not maybe you can comment on that and the other one is artificial intelligence which promises in some ways to do some of the programming for you is one way to think about it so really what a programmer is is using the intelligence that's inside your skull to do something useful the hope with artificial intelligence is that it does some of the useful parts for you so you don't have to think about it so do you see smartphones the fact that everybody has one and they're getting more and more powerful as potentially changing programming and do you see AI as potentially changing programming okay so that's good so smartphones have definitely changed I mean since you know I guess 2010 that's when they really started getting super popular I mean in the last eight years the world has literally changed like everybody carries a computer around and that's normal I mean that is such a huge change in society I think we're still dealing with a lot of the positive and negative ramifications of that right like everybody's connected all the time everybody's on the computer all the time that was my dream world as a geek right but it's like be careful what you ask for right like now everybody has a computer it's not quite the utopia that we thought it
would be right computers can be used for a lot of stuff that's not necessarily great so to me that's the central focus of the smartphone it's just that it puts a computer in front of everyone granted a small touchscreen smallish touchscreen computer but as for programming like I don't know I've kind of over time come to subscribe to the UNIX view of the world when it comes to programming it's like you want to teach these basic command line things and that is just what programming is gonna be for I think a long long time I don't think there's any magical like visual programming that's gonna happen I just I don't know over time I've become a believer in that UNIX philosophy it was just you know they kind of had it right with UNIX that's gonna be the way it is for a long long time and we'll continue to like I said raise the baseline the tools will get better it'll get simpler but it's still fundamentally gonna be command-line tools you know maybe fancy IDEs that's kind of it for the foreseeable future I'm not seeing any visual programming stuff on the horizon because I think like what do you do on a smartphone that would be directly analogous to programming like I'm trying to think right and there's really not much so not necessarily analogous to programming but the kind of things the kind of programs you would need to write might need to be very different yeah and the kind of languages I mean I probably also subscribe to the same bet just because everything in this world might be written in JavaScript oh yeah that's different that's already happening I mean discourse is a bet on that discourse itself javascript is another bet on that side of the table and I still strongly believe in that so I would say smartphones are mostly a cultural shift more than a programming shift now your other question was about artificial intelligence and like sort of devices predicting what you're gonna do and I do think there's some strengths to
that I think artificial intelligence is kind of overselling it in terms of what it's doing it's more like people are predictable right people do the same things like let me give you an example one check we put in discourse that's in a lot of big commercial websites is say you log in from New York City now and then an hour later you log in from San Francisco like well hmm that's interesting how did you get from New York to San Francisco in one hour so at that point you're like okay this is a suspicious login at that point so we would alert you it's like okay but that's not AI right that's just a heuristic of like how did you in one hour go 2,000 miles right that doesn't happen maybe you're on a VPN there are other reasons it can happen but that's just a basic prediction based on the idea that people pretty much don't move around that much like they may travel occasionally but like nobody I mean unless you're a traveling salesman who's literally traveling the world every day like there's so much repetition and predictability in terms of things you're going to do and I think good software anticipates your needs like for example Google I think it's called Google now or whatever that Google thing is that predicts your commute and predicts it based on your phone location like where are you every day well that's probably where you work that kind of stuff I do think computers can get a lot better at that but I hesitate to call it like full-blown AI it's just computers getting better at like first of all they have a ton of data because everybody has a smartphone now suddenly they have all this data that we didn't have before about location about like you know communication and feeding that into some basic heuristics and maybe some fancy algorithms that turn it into predictions of anticipating your needs like a friend would right like oh hey I see you're home would you like some dinner right like let's go get some food because that's usually what we do this time of
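The impossible-travel check described here is just distance over elapsed time compared against a plausibility threshold; a minimal sketch in Python (the function names and the 500 mph cutoff are illustrative assumptions, not Discourse's actual implementation):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    # great-circle distance between two lat/lon points, in miles
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3959 * asin(sqrt(a))  # 3959 ~ Earth's radius in miles

def suspicious_login(prev, curr, max_mph=500):
    # prev/curr: (lat, lon, unix_time) of two consecutive logins.
    # Flag the login if covering the distance would require an
    # implausible speed, e.g. New York to San Francisco in one hour.
    miles = haversine_miles(prev[0], prev[1], curr[0], curr[1])
    hours = max((curr[2] - prev[2]) / 3600, 1e-9)
    return miles / hours > max_mph

# New York at noon, San Francisco an hour later: thousands of miles in 1 hour
nyc = (40.71, -74.01, 0)
sf = (37.77, -122.42, 3600)
print(suspicious_login(nyc, sf))  # True
```

In practice you would also allow for VPNs, known devices, and coarse IP geolocation before alerting, since legitimate logins can appear to jump continents.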
day right and in the context of actually the act of programming do you see IDEs improving and making the life of programmers better I do think that is possible cuz there's a lot of repetition in programming right oh you know Clippy would be the bad example of oh I see it looks like you're writing a for loop um but there are patterns in code right and actually libraries are kind of like that right like rather than go you know code up your own HTTP request library it's like why you'd use one of the existing ones that we have that's already been troubleshot right it's not AI per se it's just you know building better Lego bricks bigger Lego bricks that have more functionality in them so people don't have to worry about the low-level stuff as much anymore like WordPress for example to me is like a tool for someone who isn't a programmer to do something I mean you can turn WordPress into anything it's kind of crazy actually through all the plugins right and that's not programming per se it's just Lego bricks stacking WordPress elements right a little bit of configuration glue so I would say maybe in a broader sense what I'm seeing is like there'll be more gluing and less like actual programming and that's a good thing right because most of the stuff you need is kind of out there already you said 1970s Unix do you see PHP and these kind of old remnants of the early birth of programming remaining with us for a long time like you said Unix itself do you see ultimately you know this stuff just being there out of momentum I kind of do I mean I was a big believer in Windows early on and I was you know I was like UNIX what a waste of time but over time I've completely flipped on that where I was like okay the UNIX guys were right and pretty much Microsoft and windows were kind of wrong at least on the server side not on the desktop right you need a GUI and all that stuff and yeah the two philosophies like Apple built on UNIX effectively Darwin and on the desktop is a
slightly different story even on the server side where are you going to be programming now it's a question of where the programming is gonna be there's gonna be a lot more like client-side programming because technically discourse is client-side programming the way you get discourse is we deliver a big ball of JavaScript which is then executed locally so we're really using a lot more local computing power we'll still retrieve the data obviously we have to display the posts on the screen and so forth but in terms of like sorting and a lot of the basic stuff we're using the host processor but to the extent that a lot of programming is still gonna be server-side I would say yeah the UNIX philosophy definitely won and there'll be different veneers over the UNIX but it's still if you peel away one or two layers it's gonna be UNIX for a long time I think UNIX won I mean definitively it's interesting to hear you say that because you've done so much excellent work on the Microsoft side in terms of back-end development cool so what's the future hold for Jeff Atwood is it discourse continuing discourse in trying to improve conversation on the web discourse for sure it is and originally I called it a five-year project then really quickly revised that to a ten-year project so we started in early 2013 that's when we launched the first version so we're still you know five years in this is the part where it starts getting good like we have a good product out discourse in any project building software it takes three years to build what you wanted to build anyway like v1 is gonna be terrible which it was but you ship it anyway cuz that's how you get better at stuff it's about turning the crank it's not about v1 being perfect because that's ridiculous it's about v1 then let's get really good at v1.1 1.2 1.3 like how fast can we iterate and I think we're iterating like crazy on discourse to the point that like it's a really good product now we have serious
momentum and my original vision was I want to be the wordpress of discussion meaning if someone came to you and said I want to start a blog although the very question is kind of archaic now it's like who actually blogs anymore I wanted the answer to that to be WordPress normally because that's the obvious choice for blogging most of the time but if someone said hey I need a group of people to get together and do something the answer should be discourse right that should be the default answer for people cuz it's open source it's free doesn't cost you anything you control it you can run it your minimum server cost for discourse is five bucks a month at this point they've actually got the VPS prices down it used to be ten dollars a month for one gigabyte of RAM which is what we're dependent on we have a kind of heavy stack like there's a lot of stuff in discourse you need Postgres you need Redis you need Ruby on Rails you need Sidekiq for scheduling it's not a trivial amount of stuff because we architected it for like look we're building for the next ten years I don't care about shared PHP hosting that's not my model my idea is like hey you know eventually this is gonna be very cheap for everybody and I want to build it right using again you know higher bigger building block levels right that have more requirements and is there a wordpress model of wordpress.org versus wordpress.com is there a central hosting for discourse or not there is we're not strictly segmenting into the open source versus the commercial side we have a hosting business that's how discourse makes money is we host discourse instances and we have a really close relationship with our customers there's a symbiosis of them giving us feedback on the product we definitely weight feedback from customers a lot heavier than feedback from somebody who just wanders by and gives feedback but that's where we make all our money but we don't have a strict division we encourage people to use this
course like the whole point is that it's free right anybody can set it up I don't want to be the only person that hosts discourse that's absolutely not the goal but it is a primary way for us to build a business and it's actually kind of a great business I mean the business is going really really well in terms of hosting so I used to work at Google research it's a company that's basically funded on advertisement so is Facebook let me ask if you can comment on it you could be extremely critical on what ads are but at its best it's actually serving you in a sense it's giving you it's connecting you to what you would want to explore so it's like related posts or related content it's the same thing that's the best of advertisement so discourse is connecting people based on their interests it seems like a place where advertisement at its best could actually serve the users is that something that you're considering thinking about as a way to financially support the platform that's interesting because I actually have a contrarian view of advertising which I kind of agree with you I recently installed an ad blocker like reluctantly because I don't like to do that but like the performance of the ads man like they're so heavy now and like it's just crazy so it's almost like a performance argument more than anything I actually am pro ads and I have a contrarian viewpoint I agree with you if you do ads right it's showing you stuff you'll be interested in anyway like I don't mind that that actually is kind of a good thing so plus I think it's rational to want to support the people that are doing this work through seeing their ads but that said I run adblock now which I didn't want to do but I was convinced by all these articles like 30 40 megabytes of stuff just to serve you ads yeah it feels like ads now are like the experts exchange of when you started Stack Overflow it's a little bit there's so many
companies in ad tech it's embarrassing like if you see those logo charts of like just a whole page of logos you can't even see them they're so small there's so many companies in the space but since you brought it up I do want to point out that very very few discourse sites run using an ad-supported model it's not effective like it's too diluted it's too weird it doesn't pay well and like users hate it so it's a combination of like users hate it and it doesn't actually work that well in practice like in theory yes I agree with you if you had clean fast ads that were exactly the stuff you would be interested in awesome we're so far from that though right like Google does an okay job retargeting and stuff like that but in the real world discourse sites rarely can make ads work it just doesn't work for so many reasons but you know what does work is subscriptions patreon affiliate codes for like Amazon like just oh here's a cool yo-yo and then you click and go to Amazon they get a small percentage of that which is fair I think because you saw the yo-yo on that site and you clicked through and you bought it right that's fair for them to get 5% of that or 2% of that or whatever it is those things definitely work in fact a site that I used to participate on a lot I helped the owner one of the things I did was get them switched to discourse paid them to switch to discourse because I was like look you guys gotta switch I can't come here anymore with this terrible old software and I was like look on top of that like you're serving people ads that they hate you should just go full on patreon because he had a little bit of patreon go full on patreon do the Amazon affiliates thing for any Amazon links that get posted and just do that and just triple down on that stuff and that's worked really well for them and this creator in particular so that stuff works but traditional ads I mean definitely not working at least on discourse so last
question you've created the code keyboard I've programmed most of my adult life on a Kinesis keyboard I have one upstairs now can you describe what a mechanical keyboard is and why is it something that makes you happy well you know this is another fetish item really like it's not required you can do programming on any kind of keyboard right even like an on-screen keyboard oh god that's terrifying right like well you could but if you look back to the early days of computing there were chiclet keyboards which are I think those are awful right what's a chiclet keyboard oh god okay well it's just like thin rubber membranes the rubber ones oh no super bad right yeah so it's a fetish item all it really says is look I care really about keyboards because the keyboard is the primary method of communication with the computer right so it's just like having a nice mic for this podcast you want a nice keyboard right because it has that very tactile feel I can tell exactly when I press the key I get that little click so oh it feels good and it's also kind of a fetish thing it's like wow I care enough about programming that I care about the tool the primary tool that I use to communicate with the computer making sure it's as good as it can be it feels good to use for me and like I can be very productive with it so to be honest it's a little bit of a fetish item but a good one it indicates that you're serious and that you're interested it indicates that you care about the fundamentals because you know what makes you a good programmer being able to type really fast right like this is true right so a core skill is just being able to type fast enough to get your ideas out of your head into the codebase so just practicing your typing can make you a better programmer it is also something that makes you well makes you enjoy typing correct the actual act something about the process I played piano at one time so there's a tactile feel that ultimately feeds the passion makes you happy right
no totally that's it I mean and it's funny because artisanal keyboards have exploded like Massdrop has gone ballistic with this stuff there's probably like 500 keyboard projects on Massdrop alone and there's some other guy I follow on Twitter I used to write for the site the tech report way back in the day and he's like every week he's just posting like what I call keyboard porn of like just cool keyboards like oh my god they look really cool right like that's like how many keyboards does this guy have yeah it's like me with yo-yos how many yo-yos do you have how many do you need well technically one but I like a lot I don't know why so same thing with keyboards so yeah they're awesome like I highly recommend anybody who doesn't have a mechanical to research it look into it and see what you like and you know it's ultimately a fetish item but I think these sort of items these religious artifacts that we have are part of what make us human like that part of you is important right it kind of makes life worth living and yes it's not necessary in the strictest sense but ain't nothing necessary if you think about it right like and so yeah why not so sure Jeff thank you so much for talking today yeah you're welcome thanks for having me
Guido van Rossum: Python | Lex Fridman Podcast #6
the following is a conversation with guido van rossum creator of Python one of the most popular programming languages in the world used in almost any application that involves computers from web back-end development to psychology neuroscience computer vision and robotics deep learning natural language processing and almost any subfield of AI this conversation is part of MIT course on artificial general intelligence and the artificial intelligence podcast if you enjoy it subscribe on YouTube iTunes or your podcast provider of choice or simply connect with me on Twitter at lex friedman spelled F R I D M A N and now here's my conversation with guido van rossum you were born in the Netherlands in 1956 your parents and the world around you were deeply impacted by world war ii as was my family from the soviet union so with that context well what is your view of human nature are some humans inherently good and some inherently evil or do we all have both good and evil within us ouch I did not expect such a deep one I guess we all have good and evil potential in us and a lot of it depends on circumstances and context out of that world at least on the Soviet Union side in Europe sort of out of suffering out of challenge out of that kind of set of traumatic events often emerges beautiful art music literature in an interview I read or heard you said you enjoy Dutch literature when you were a child can you tell me about the books that had an influence on you in your childhood well as a teenager my favorite Dutch author was a guy named Willem Frederik Hermans whose writing certainly his early novels were all about sort of ambiguous things that happened during World War two I think he was a young adult during that time and he wrote about it a lot and very interesting very good books I thought I think in a nonfiction way no it was all fiction but it was very much set in the ambiguous world of resistance against the Germans where often you couldn't tell
whether someone was truly in the resistance or really a spy for the Germans and some of the characters in his novels sort of crossed that line and you never really find out what exactly happened and in his novels is there always a good guy and a bad guy is the nature of good and evil clear is there a hero no his main characters are often anti-heroes and so they're not very heroic they often fail at some level to accomplish their lofty goals and looking at the trajectory through the rest of your life has literature Dutch or English or in translation had an impact outside the technical world that you existed in I still read novels I don't think that it impacts me that much directly it doesn't impact your work it's just it's a separate world my work is highly technical and sort of the world of art and literature doesn't really directly have any bearing on it you don't think there's a creative element to the design you know some would say the design of a language is art I'm not disagreeing with that I'm just saying that sort of I don't feel direct influences from more traditional art on my own creativity right of course you don't feel it but that doesn't mean it's not somehow deeply there and your subconscious knows who knows so let's go back to your early teens your hobbies were building electronic circuits building mechanical models if you could just put yourself back in the mind of that young Guido 12 13 14 was that grounded in a desire to create a system so to create something or was it more just tinkering just the joy of puzzle solving uh I think it was more the latter actually maybe towards the end of my high school period I felt confident enough that I designed my own circuits that were sort of interesting somewhat but a lot of that time I literally just took a model kit and followed the instructions putting the things together I mean I think the first few years that I built
electronics kits I really did not have enough understanding of sort of electronics to really understand what I was doing I mean I could debug it and I could sort of follow the instructions very carefully which has always stayed with me but I had a very naive model of like how a transistor works and I don't think that in those days I had any understanding of coils and capacitors which actually sort of was a major problem when I started to build more complex digital circuits because I was unaware of sort of the analog part of how they actually work and I would have things where the schematic looked fine everything looked fine and it didn't work and what I didn't realize was that there was some megahertz level oscillation that was throwing the circuit off because sort of two wires were too close or the switches were kind of poorly built but that time I think is really interesting and instructive to think about because echoes of it are in this time now so in the 1970s the personal computer was being born so did you sense in tinkering with these circuits did you sense the encroaching revolution in personal computing so if at that point we were to sit you down and ask you to predict the 80s and the 90s do you think you would be able to do so successfully to unroll the process no I had no clue I remember I think in the summer after my senior year or maybe it was the summer after my junior year well at some point I think when I was 18 I went on a trip to the Math Olympiad in Eastern Europe and I was part of the Dutch team and there were other nerdy kids that sort of had different experiences and one of them told me about this amazing thing called a computer and I had never heard that word my own explorations in electronics were sort of about very simple digital circuits and I had sort of I had the idea that I somewhat understood how a digital calculator worked hmm
and so there is maybe some echoes of computers there but I never made that connection I didn't know that when my parents were paying for magazine subscriptions using punched cards that there was something called a computer that was involved that read those cards and transferred the money between accounts I was also not really interested in those things it was only when I went to university to study math that I found out that they had a computer and students were allowed to use it and you were supposed to talk to that computer by programming it what did that feel like yeah that was the only thing you could do with it I think the computer wasn't really connected to the real world the only thing you could do was sort of you typed your program on a bunch of punched cards you gave the punched cards to the operator and an hour later the operator gave you back your printout and so all you could do was write a program that did something very abstract and I don't even remember what my first forays into programming were but they were sort of doing simple math exercises and just to learn how a programming language worked did you sense okay first year of college you see this computer you're able to write a program and it generates some output did you start seeing the possibility of this or was it a continuation of the tinkering with circuits did you start to imagine the personal computer did you see it as something that is a tool a tool like a word processing tool maybe for gaming or something or did you start to imagine that it could be you know going to the world of robotics like you know the Frankenstein picture that you could create an artificial being like another entity in front of you you did not see that I don't think I really saw it that way I was really more interested in the tinkering it's maybe not sort of a complete coincidence that I ended up sort of creating a programming language which is
a tool for other programmers I've always been very focused on the activity of programming itself and not so much what happens with the program you write right I do remember and I don't remember exactly when maybe in my second or third year probably my second actually someone pointed out to me that there was this thing called Conway's Game of Life you're probably familiar with it I think the seventies I think yeah he came up with it so there was a Scientific American column by someone who did a monthly column about mathematical diversions I'm also blanking out on the guy's name it was very famous at the time and I think up to the 90s or so and one of his columns was about Conway's Game of Life and he had some illustrations and he wrote down all the rules and there was the suggestion that this was philosophically interesting that that was why Conway had called it that and all I had was the two page photocopy of that article I don't even remember where I got it but it spoke to me and I remember implementing a version of that game for the batch computer we were using where I had a whole Pascal program that read an initial situation from input and read some numbers that said do so many generations and print every so many generations and then out would come pages and pages of patterns of different kinds and I remember much later I've done a similar thing using Python but that original version I wrote at the time I found interesting because I combined it with some trick I had learned during my electronics hobbyist times I essentially first on paper designed a simple circuit built out of logic gates that took nine bits of input which is the cell and its neighbors and produced a new value for that cell and it's like a combination of a half adder and some other logic you know it's actually a full adder and so I had worked that out and then I translated that into a series of boolean
operations on Pascal integers where you could use the integers as bitwise values and so I could basically generate 60 bits of a generation in like eight instructions or so nice I was proud of that it's funny that you mention it so for people who don't know Conway's Game of Life is a cellular automaton where there are single compute units that look at their neighbors and figure out what they look like in the next generation based on the state of their neighbors and this is a deeply distributed system in concept at least and there are simple rules that all of them follow and somehow out of these simple rules when you step back and look at what occurs it's beautiful there's an emergent complexity even though the underlying rules are simple there's an emergent complexity now the funny thing is you've implemented this and the thing you're commenting on is you're proud of a hack you did to make it run efficiently you're not commenting on what a beautiful implementation it is you're not commenting on the fact that there's an emergent complexity that you've coded a simple program and when you step back and print out generation after generation that's stuff that you may have not predicted would happen is that magic I mean that's the magic that all of us feel when we program when you create a program and then you run it whether it's hello world or showing something on screen if there's a graphical component are you seeing the magic in the mechanism of creating that I think I went back and forth as a student we had an incredibly small budget of computer time that we could use it was actually metered I once got in trouble with one of my professors because I had overspent the department's budget it's a different story but so I actually wanted the efficient implementation because I also wanted to explore what would happen with a larger number of
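As an aside, the bit-parallel trick described above, applying full-adder logic to whole rows packed into integers, might look something like this in modern Python (a sketch in the spirit of the Pascal version, not a reconstruction of it; all names are mine):

```python
def life_step(rows, width):
    """One generation of Conway's Life; each row is an int, bit i = cell i."""
    mask = (1 << width) - 1
    n = len(rows)

    def trip(x, center):
        # 2-bit per-cell count of left/right (and optionally center) bits,
        # returned as (sum plane, carry plane) like a hardware adder
        l, r = (x << 1) & mask, x >> 1
        if center:
            return l ^ x ^ r, (l & x) | (l & r) | (x & r)
        return l ^ r, l & r

    out = []
    for i, row in enumerate(rows):
        above = rows[i - 1] if i > 0 else 0
        below = rows[i + 1] if i < n - 1 else 0
        s_a, c_a = trip(above, True)
        s_b, c_b = trip(below, True)
        s_r, c_r = trip(row, False)        # exclude the cell itself
        t0 = s_a ^ s_b ^ s_r               # bit 0 of the neighbor count
        carry = (s_a & s_b) | (s_a & s_r) | (s_b & s_r)
        u0, u1 = c_a ^ c_b, c_a & c_b
        v0, v1 = c_r ^ carry, c_r & carry
        t1 = u0 ^ v0                       # bit 1 of the neighbor count
        high = (u0 & v0) | u1 | v1         # any cell with count >= 4
        # alive next generation: exactly 3 neighbors, or alive with exactly 2
        out.append((t1 & ~high & (t0 | row)) & mask)
    return out

# a horizontal blinker flips to vertical and back
print([format(r, '05b') for r in life_step([0b00000, 0b01110, 0b00000], 5)])
# -> ['00100', '00100', '00100']
```

Every cell in a row gets its 2-bit neighbor count at once, so a whole row advances in a handful of integer operations, the same idea as the eight-instruction Pascal version Guido describes.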
generations and a larger size of the board and so once the implementation was flawless I would feed it different patterns and then I think maybe there was a follow-up article where there were patterns that were like gliders patterns that repeated themselves after a number of generations but translated one or two positions to the right or up or something like that and I remember things like glider guns well you can google Conway's Game of Life people still pore over it for a reason because it's not really well understood why I mean this is what Stephen Wolfram is obsessed about yeah we don't have the mathematical tools to describe the kind of complexity that emerges in these kinds of systems and the only way to do it is to run it I'm not convinced that it's a problem that lends itself to classic mathematical analysis no and so one theory of how you create an artificial intelligence or artificial being is that like with the Game of Life you kind of have to create a universe and let it run because creating it from scratch in a designed way you know coding up a Python program that creates a full intelligent system may be quite challenging you might need to create a universe just like the Game of Life well you might have to experiment with a lot of different universes before there is a set of rules that doesn't essentially always just end up repeating itself in a trivial way yeah and Stephen Wolfram who works with these simple rules says that it's kind of surprising how quickly you find rules that create interesting things you shouldn't be able to but somehow you do and so maybe our universe is laden with rules that will create interesting things that might not look like humans but you know emergent phenomena that's interesting may not be as difficult to create as we think sure but let me sort of ask at that time
you know some of the world at least the popular press was kind of captivated by the idea of artificial intelligence that these computers would be able to think pretty soon did that touch you at all in science fiction or in reality in any way I didn't really start reading science fiction until much much later I think as a teenager I read maybe one bundle of science fiction stories was it in your background somewhere like in your thoughts no using computers to build something intelligent always felt unlikely to me because I felt I had so much understanding of what actually goes on inside a computer I knew how many bits of memory it had and how difficult it was to program and I didn't believe at all that you could just build something intelligent out of that that would really satisfy my definition of intelligence I think the most influential thing that I read in my early 20s was Gödel Escher Bach that was about consciousness and that was a big eye-opener in some sense in what sense so on your own brain did you at the time or do you now see your own brain as a computer or is there a total separation of the two you very pragmatically practically know the limits of memory the limits of this sequential computing or weakly parallelized computing and you just know what we have now and it's hard to see how it creates intelligence but it was also easy to see in the 40s 50s 60s and now at least similarities between the brain and our computers oh yeah I mean I totally believe that brains are computers in some sense I mean the rules they play by are pretty different from the rules we can implement in our current hardware but I don't believe in like a separate thing that infuses us with intelligence or consciousness or any of that there's no soul I've been an atheist probably from when I was 10 years old just by thinking a bit
about math and the universe and then well my parents were atheists now I know that you could be an atheist and still believe that there is something about intelligence or consciousness that cannot possibly emerge from a fixed set of rules I am not in that camp I totally see that given how many millions of years evolution took its time DNA is a particular machine that encodes information an unlimited amount of information in chemical form and has figured out a way to replicate itself I thought that that was maybe 300 million years ago but I think it was closer to half a billion years ago that that sort of originated and it hasn't really changed the structure of DNA hasn't changed ever since that is like our binary code that we have in hardware I mean the basic programming language hasn't changed but maybe the programming itself has it happened to be a set of rules that was good enough to develop endless variability and the idea of self-replicating molecules competing with each other for resources and one type eventually always taking over that happened before there were any fossils so we don't know how that exactly happened but I believe it's clear that that did happen can you comment on consciousness and how you see it because I think we'll talk about programming quite a bit we'll talk about intelligence connecting to programming fundamentally but consciousness is this whole other thing do you think about it often as a developer of a programming language and as a human those are pretty separate topics my line of work working with programming does not involve anything that goes in the direction of developing intelligence or consciousness but privately as an avid reader of popular science writing I have some thoughts which is mostly
that I don't actually believe that consciousness is an all-or-nothing thing I have a feeling and I forget what I read that influenced this but I feel that if you look at a cat or a dog or a mouse they have some form of intelligence if you look at a fish it has some form of intelligence and that evolution just took a long time but I feel that the evolution of more and more intelligence that led to the human form of intelligence followed the evolution of the senses especially the visual sense I mean there is an enormous amount of processing that's needed to interpret a scene and humans are still better at that than computers yeah and I have a feeling that the reason that mammals in particular developed the levels of consciousness that they have and that eventually sort of led from intelligence to self-awareness and consciousness has to do with being a robot that has very highly developed senses that has a lot of rich sensory information coming in so it's a really interesting thought that whatever that basic mechanism of DNA whatever that basic building block of programming is if you just add more abilities more high resolution sensors more sensors and you keep stacking those things on top of that basic programming of trying to survive it develops very interesting things that start to appear to us humans like intelligence and consciousness yeah so as far as robots go I think that self-driving cars have the greatest opportunity of developing something like that because when I drive myself I don't just pay attention to the rules of the road I also look around and I get clues from that oh this is a shopping district oh here's an old lady crossing the street oh here is someone carrying a pile of mail there's a mailbox they're going to cross the street to reach that mailbox and I slow down and I don't
even think about that yeah and so there's so much where you turn your observations into an understanding of what other consciousnesses are going to do or what other systems in the world are going to be oh that tree is going to fall yeah I sort of expect somehow that if anything is going to become conscious it's going to be the self-driving car and not the network of a bazillion computers in a Google or Amazon data center that are all networked together to do whatever they do so in that sense that's interesting I work in autonomous vehicles and you highlight a big gap between what we currently can do and what we truly need to be able to do to solve the problem under that formulation then consciousness and intelligence is something that basically a system should have in order to interact with us humans as opposed to some kind of abstract notion of consciousness consciousness is something that you need to have to be able to empathize to be able to understand what the fear of death is all these aspects that are important for interaction with pedestrians you need to be able to do basic computation based on our human desires and flaws sort of yeah if you look at a dog the dog clearly knows I mean I'm not a dog owner myself but I have friends who have dogs the dogs clearly know what the humans around them are going to do or at least they have a model of what those humans are going to do and they learn some dogs know when you're going out and they want to go out with you they're sad when you leave them alone they cry they're afraid because they were mistreated when they were younger we don't assign consciousness to dogs or at least not all that much but I also don't think they have none of that so I think consciousness and intelligence are not all or nothing they're a spectrum it's really interesting but in returning to programming languages and the way we
think about building these kinds of things about building intelligence building consciousness building artificial beings I think one of the exciting ideas came in the 17th century with Leibniz Hobbes and Descartes where there's this feeling that you can convert all thought all reasoning all the things that we find very special in our brains into logic you can formalize it formal reasoning and then once you formalize everything all of knowledge then you can just calculate and that's what we're doing with our brains is we're calculating so there's this whole idea that this is possible were they aware of the concept of pattern matching in the sense that we are aware of it now I don't think so they had discovered incredible bits of mathematics like Newton's calculus and their sort of idealism their extension of what they could do with logic and math went along those lines and they thought there's like yeah logic there's a bunch of rules and a bunch of input they didn't realize that how you recognize a face is not just a bunch of rules but a ton of data plus a circuit that interprets the visual clues and the context and everything else and somehow can massively parallel pattern match against stored rules I mean if I see you tomorrow here in front of the Dropbox office I might recognize you even if I'm wearing a different shirt yeah but if I see you tomorrow in a coffee shop in Belmont I might have no idea that it was you or on the beach or whatever hey I make those mistakes myself all the time I see someone that I only know as like oh this person is a colleague of my wife's yeah and then I see them at the movies and I don't recognize them but so you call it pattern matching do you see rules as unable to encode that everything you see all the pieces of information you look around this room I'm wearing a black shirt I have
a certain height I'm a human all these there are probably tens of thousands of facts you pick up moment by moment about this scene you take them for granted and you aggregate them together to understand the scene you don't think all that could be encoded to where at the end of the day you can just put it all on the table and calculate oh I don't know what that means I mean yes in the sense that there is no actual magic there but there are enough layers of abstraction from the facts as they enter my eyes and my ears to the understanding of the scene that I don't think that AI has really covered enough of that distance it's like if you take a human body and you realize it's built out of atoms well that is a uselessly reductionist view right the body is built out of organs the organs are built out of cells the cells are built out of proteins the proteins are built out of amino acids the amino acids are built out of atoms and then you get to quantum mechanics so that's a very pragmatic view I mean obviously as an engineer I agree with that kind of view but you also have to consider the Sam Harris view of well intelligence is just information processing just like you said you take in sensory information you do some stuff with it and you come up with actions that are intelligent that makes it sound so easy I don't know who Sam Harris is oh he's a philosopher so this is how philosophers often think right and essentially that's what Descartes was saying wait a minute if there is like you said no magic so he basically says it doesn't appear like there is any magic but we know so little about it that it might as well be magic so just because we know that we're made of atoms just because we know we're made of organs the fact that we know very little about how to get from the atoms to organs in a way that's recreatable means that you shouldn't get too excited just yet
about the fact that you figured out that we're made of atoms right and the same about taking facts as our sensory organs take them in and turning that into reasons and actions there are a lot of abstractions there that we haven't quite figured out how to deal with I mean sometimes I don't know if I can go on a tangent or not go ahead I'll drag you back in sure so if I take a simple program that parses say a compiler it parses a program in a sense the input routine of that compiler of that parser is a sensing organ and it builds up a mighty complicated internal representation of the program it just saw it doesn't just have a linear sequence of bytes representing the text of the program anymore it has an abstract syntax tree and I don't know how many of your viewers or listeners are familiar with compiler technology there are fewer and fewer these days right that's also true probably people want to take a shortcut but this abstraction is a data structure that the compiler then uses to produce output that is relevant like a translation of the program to machine code that can be executed by hardware and then the data structure gets thrown away when a fish or a fly sees gets visual impulses I'm sure it also builds up some data structure and for the fly that may be very minimal a fly may have only a few I mean in the case of a fly's brain I could imagine that there are few enough layers of abstraction that it's not much more than it's darker here than it is there but I can sense motion because a fly responds when you move your arm towards it so clearly its visual processing is intelligent well not intelligent but it has an abstraction for motion and we have similar things but much more complicated in our brains I mean otherwise you couldn't drive a car if you didn't have an incredibly good abstraction for motion yeah in some sense the same
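The parser-as-sensing-organ picture above can be seen directly with Python's own standard-library ast module, which exposes the abstract syntax tree a compiler front end builds from a flat string of text (the example expression is mine):

```python
import ast

# the parser turns a flat sequence of characters into a structured tree
tree = ast.parse("total = price * quantity + tax")

# the assignment is no longer text: its value is an Add node whose
# left operand is itself a Mult node, mirroring operator precedence
assign = tree.body[0]
print(type(assign).__name__)                # Assign
print(type(assign.value.op).__name__)       # Add
print(type(assign.value.left.op).__name__)  # Mult
```

From here a compiler walks the tree to emit bytecode or machine code, and then, as Guido notes, the data structure is thrown away.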
abstraction for motion is probably one of the primary sources of information for us we just know what to do with that we've built up other abstractions on top much more complicated data structures based on that and we build more persistent data structures after some processing some information gets stored in our memory pretty much permanently and is available on recall I mean there are some things that you're conscious of remembering like if you give me your phone number well at my age I have to write it down but I could imagine I could remember those seven numbers or ten digits and reproduce them in a while if I repeat them to myself a few times so that's a fairly conscious form of memorization on the other hand how do I recognize your face I have no idea my brain has a whole bunch of specialized hardware that knows how to recognize faces I don't know how much of that is coded in our DNA and how much of that is trained over and over between the ages of 0 and 3 but somehow our brains know how to do lots of things like that that are useful in our interactions with other humans without really being conscious of how it's done anymore right so in our actual day-to-day lives we're operating at the very highest level of abstraction we're just not even conscious of all the little details underlying it there are compilers on top of compilers turtles on top of turtles it's turtles all the way down compilers all the way down but essentially you see that there's no magic that's what I was trying to get at I think Descartes started this whole train of saying that there's no magic well Descartes also had the notion though that the soul and the body were fundamentally separate yeah I think he had to write God in there for political reasons I'm actually not a historian but there are notions in there that all
of reasoning all of human thought can be formalized I think that continued in the 20th century with Russell and with Gödel's incompleteness theorem this debate of what are the limits of the things that could be formalized that's where the Turing machine came along and this exciting idea I mean underlying a lot of computing that you can do quite a lot with a computer you can encode a lot of the stuff we're talking about in terms of recognizing faces and so on theoretically in an algorithm that can then run on a computer and in that context I'd like to ask about programming in a philosophical way so what does it mean to program a computer so you write a Python program or a C++ program that compiles to some machine code it's forming layers you're programming at a higher layer of abstraction how do you see programming in that context can it keep getting higher and higher levels of abstraction I think at some point the higher levels of abstraction will not be called programming and they will not resemble what we call programming at the moment there will not be source code I mean there will still be source code at a lower level of the machine just like there are still molecules and electrons and proteins in our brains and so there's still programming and system administration and who knows what to keep the machine running but what the machine does is a different level of abstraction in a sense and as far as I understand the way that for the last decade or more people have made progress with things like facial recognition or the self-driving cars is all by endless endless amounts of training data where at least as a layperson and I feel myself totally a layperson in that field it looks like the researchers who publish the results don't necessarily know exactly how their algorithms work and I often get upset when I read a fluff
piece about Facebook in the newspaper or social networks and they say well the algorithm and that's like a totally different interpretation of the word algorithm yeah because for me the way I was trained or what I learned when I was eight or ten years old an algorithm is a set of rules that you completely understand that can be mathematically analyzed and you can prove things you can prove that Eratosthenes' sieve produces all prime numbers and only prime numbers yes so I don't know if you know who Andrej Karpathy is I'm afraid not so he's the head of AI at Tesla now he was at Stanford before and he has this cheeky way of calling this concept software 2.0 so let me disentangle that for a second so kind of what you're referring to is the traditional concept of an algorithm something that is clear you can read it you understand it you can prove its functioning that's kind of software 1.0 and what software 2.0 is is exactly what you described which is you have neural networks which is a type of machine learning that you feed a bunch of data and that neural network learns to do a function all you specify is the inputs and the outputs you want and you can't look inside you can't analyze it all you can do is train this function to map the inputs to the outputs by giving it a lot of data in that sense programming becomes collecting a lot of data cleaning a lot of data that's what programming is in this well that would be programming 2.0 programming 2.0 I wouldn't call that programming it's just a different activity just like building organs out of cells is not called chemistry well so let's step back and think sort of more generally of course but you know as a parent teaching your kids things can be called programming in that same sense that's how programming has been used you're providing them data examples use cases so imagine writing a function not with for loops and clearly
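The sieve of Eratosthenes mentioned above is a good concrete example of an algorithm in that classic fully analyzable sense (a standard textbook version in Python):

```python
def sieve(limit):
    """Return all primes <= limit, by Eratosthenes' sieve."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]          # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # cross out every multiple of p, starting at p*p
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, prime in enumerate(is_prime) if prime]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Every line can be reasoned about and the whole thing proved correct, which is exactly the property the trained-network style of programming gives up.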
readable text but more saying well here's a lot of examples of what this function should take and here's a lot of examples of what it should do when it takes those inputs and then figure out the rest so that's the 2.0 concept and the question I have for you is it's a very fuzzy way and this is the reality of a lot of these pattern recognition systems and so on it's a fuzzy way of quote-unquote programming what do you think about this kind of world should it be called something totally different than programming if you're a software engineer does that mean you're designing systems that can be systematically tested and evaluated that have a very specific specification and then this other fuzzy software 2.0 world the machine learning world is something else totally or is there some intermixing that's possible well the question is probably only being asked because we don't quite know what that software 2.0 actually is and I think there is a truism that every task that AI has tackled in the past at some point we realized how it was done and then it was no longer considered part of artificial intelligence because it was no longer necessary to use that term it was just oh now we know how to do this and a new field of science or engineering has been developed and I don't know if every form of learning or controlling computer systems should always be called programming I don't know maybe I'm focused too much on the terminology but I expect that there will just be different concepts where people with different education and a different model of what they're trying to do will develop those concepts yeah and another way to put this concept is I think the kind of functions that neural networks provide is as opposed to being able to up front prove that this should work for all cases you throw at it
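The give-examples-and-figure-out-the-rest style described above can be sketched in a few lines of plain Python, with gradient descent standing in for the training loop of a real neural network (a toy illustration; the hidden rule and all names are mine):

```python
# the hidden rule is fahrenheit = celsius * 1.8 + 32; we never write it
# down as code, we only hand the "program" input/output examples
examples = [(c, c * 1.8 + 32) for c in range(-10, 11, 2)]

w, b = 0.0, 0.0   # the learned "program" is just two numbers
lr = 0.01
for _ in range(20000):
    # gradient descent: nudge w and b to shrink the average squared error
    gw = sum(((w * x + b) - y) * x for x, y in examples) / len(examples)
    gb = sum(((w * x + b) - y) for x, y in examples) / len(examples)
    w -= lr * gw
    b -= lr * gb

print(round(w, 3), round(b, 3))  # recovers 1.8 and 32.0 from data alone
```

We specified only inputs and desired outputs, and the parameters were fit rather than written, which is the essence of the 2.0 idea scaled down from millions of weights to two.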
it's worst case analysis versus average case analysis all you're able to say is it seems on everything we've tested to work 99.9 percent of the time but we can't guarantee it and it fails in unexpected ways you can't even give examples of how it fails in unexpected ways but it's really good most of the time yeah but there's no room for that in current ways we think about programming well programming 1.0 is actually getting to that point too where the ideal of a bug-free program has been abandoned long ago by most software developers we only care about bugs that manifest themselves often enough to be annoying and we're willing to take the occasional crash or outage or incorrect result for granted because we can't possibly we don't have enough programmers to make all the code bug-free and it would be an incredibly tedious business and if you try to throw formal methods at it it becomes even more tedious so every once in a while the user clicks on a link and somehow they get an error and the average user doesn't panic they just click again and see if it works better the second time which often magically it does or they go and try some other way of performing their task so that's sort of an end-to-end recovery mechanism and inside systems there are all sorts of retries and timeouts and fallbacks and I imagine that biological systems are even more full of that because otherwise they wouldn't survive do you think programming should be taught and thought of as exactly what you just said because where I come from it's kind of you're almost denying that fact well in sort of basic programming education the programs you're having students write are so small and simple that if there is a bug you can always find it and fix it because programming as it's being taught in some even elementary middle schools in high school introduction to programming classes in college typically it's
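The retries, timeouts, and fallbacks mentioned above often look something like this inside a system (a generic sketch; the helper and its parameters are made up for illustration, not any particular library's API):

```python
import random
import time

def with_retries(operation, attempts=3, base_delay=0.1, fallback=None):
    """Run operation(); retry with jittered exponential backoff on
    failure, then fall back to a degraded result instead of crashing."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                break  # out of retries
            # back off 0.1s, 0.2s, 0.4s, ... with jitter to avoid stampedes
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.0))
    if fallback is not None:
        return fallback()
    raise RuntimeError("operation failed after %d attempts" % attempts)
```

A caller might pass something like fallback=lambda: cached_copy so the user sees slightly stale data rather than an error page, the software analogue of the user just clicking the link again.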
programming in the small very few classes actually teach software engineering building large systems I mean every summer here at Dropbox we have a large number of interns every tech company on the west coast has the same thing these interns are always amazed because this is the first time in their life that they see what goes on in a really large software development environment and everything they've learned in college was almost always about a much smaller scale and somehow the difference in scale makes a qualitative difference in how you do things and how you think about it if you then take a few steps back into the decades the seventies and eighties when you were first thinking about Python or just that world of programming languages did you ever think that there would be systems as large as those underlying Google Facebook and Dropbox when you were thinking about Python no I was actually always caught by surprise by pretty much every stage of computing so maybe just because you've spoken about this in other interviews but I think the evolution of programming languages is fascinating especially because it leads from my perspective towards greater and greater degrees of intelligence the first programming language I played with in Russia was Logo with the turtle yeah and if you look I just have a list of programming languages all of which I've known and played with a little bit and they're all beautiful in different ways from Fortran COBOL Lisp Algol 60 Basic Logo and C as a few then object-oriented came along in the 60s Simula Pascal Smalltalk all the classics the classics yeah the classic hits right Scheme that's built on top of Lisp on the database side SQL then C++ and all that leads up to Python Pascal too and that's before Python MATLAB these kinds of different communities different languages so if we talk about that world I know that Python came out of ABC which actually I never knew that language I just having researched
this conversation went back to ABC and it looks remarkable it has a lot of annoying qualities like all caps and so on but underneath that there are elements of Python that are already there that's where I got all the good stuff all the good stuff so in that world you were swimming in these programming languages were you focused on just the good stuff in your specific circle or did you have a sense of what is everyone chasing you said that every programming language is built to scratch an itch mm-hmm were you aware of all the itches in the community and if not or if yes what itch were you trying to scratch with Python well I'm glad I wasn't aware of all the itches because I would probably not have been able to do anything I mean if you're trying to solve every problem at once you solve nothing well yeah it's too overwhelming and so I had a very focused problem I wanted a programming language that sat somewhere in between shell scripting and C and now arguably one is higher level one is lower level and Python is sort of a language of an intermediate level although it's still pretty much at the high level and I was thinking much more about I want a tool that I can use to be more productive as a programmer in a very specific environment and I also had given myself a time budget for the development of the tool and that was about three months for both the design like thinking through what are all the features of the language syntactically and semantically and how do I implement the whole pipeline from parsing the source code to executing it so with both the timeline and the goals it seems like productivity was at the core of it as a goal so like for me in the 90s and the first decade of the 21st century I was always doing machine learning AI programming for my research it was always in C++ and then the other people who were a little more mechanical engineering
Electrical Engineering our MATLAB II they're a little bit more MATLAB focus those are the world and maybe a little bit Java too but people who are more interested in and emphasizing the object oriented nature of things so but then in last 10 years or so especially with a calming of neural networks and these packages are built on Python to interface with with neural networks I switch to Python and it's just I've noticed a significant boost that I can't exactly because I don't think about it but I can't exactly put into words why I'm just except much much more productive just being able to get the job done much much faster so how do you think whatever that qualitative difference is I don't know if it's quantitative it could be just a feeling I don't know if I'm actually more productive but how do you think about Layar yeah well that that's right I think there's elements let me just speak to one aspect that I think those affect that productivity is C++ was I really enjoyed creating performant code and creating a beautiful structure where everything that you know this kind of going into this especially with the newer newer standards of templated programming of just really creating this beautiful formal structure that I found myself spending most of my time doing that as opposed to get you parsing a file and extracting a few key words or whatever the task was trying to do so what is it about Python how do you think of productivity in general as you were designing it now sort of through the decades last three decades what do you think it means to be a productive programmer and how did you try to design it into the language there are different tasks and as a programmer it's it's useful to have different tools available that sort of are suitable for different tasks so I still write C code I still write shellcode but I write most of my things in Python why do I still use those other languages because sometimes the task just demands it and well I would say most of the time 
the task actually demands a certain language because the task is not write a program that solves problem x from scratch but it's more like fix bug in existing program X or add a small feature to an existing large program but even if if you sort of if you're not constrained in your choice of language by context like that there is still the fact that if you write it in a certain language then you sort of you you have this balance between how long does it time does it take you to write the code and how long does the code run and when you're in sort of in the face of exploring solutions you often spend much more time writing the code than running it because every time you've sort of you've run it you see that the output is not quite what you wanted and you spend some more time Cody and a language like Python just makes death iteration much faster because there are fewer details there is a large library sort of there are fewer details that that you have to get right before your program compiles and runs there are libraries that do all sorts of stuff for you so you can sort of very quickly take a bunch of existing components put them together and get your prototype application running just like when I was building electronics I was using a breadboard most of the time so I had this like sprawl out circuit that if you shook it it would stop working because it was not put together very well but it functioned and all I wanted was to see that it worked and then move on to the next next schematic or design or add something to it once you've sort of figured out oh this is the perfect design for my radio or light sensor or whatever then you can say okay how do we design a PCB for this how do we solder the components in a small space how do we make it so that it is robust against say voltage fluctuations or mechanical disruption I mean I know nothing about that when it comes to designing electronics but I know a lot about that when it comes to to writing code so the initial 
initial steps are efficient fast and there's not much stuff that gets in the way but you're kind of describing from a like Darwin described the evolution of species right you're you're observing of what is about true about Python now if you take step back if the art of if the act of creating languages is art and you had three months to do it and initial steps and ha so you just specified a bunch of goals sort of things that you observe about Python perhaps you had those goals but how do you create the rules the syntactic structure the the features that result in those so I have in the beginning and I have follow-up questions about through the evolution of Python 2 but in the very beginning when you're sitting there creating the lexical analyzers or whatever evolution was still a big part of it because I I sort of I said to myself I don't want to have to design everything from scratch I'm going to borrow features from other languages that I like Oh interesting so you basically exactly you first observe what you like yeah and so that's why if you're 17 years old and you want to sort of create a programming language you're not going to be very successful at it because you have no experience with other languages whereas I was in my let's say mid-30s I had written parsers before so I had worked on the implementation of ABC I had spent years debating the design of ABC with its authors its with its designers I had nothing to do with the design it was designed fully as it was ended up being implemented when I joined the team but so you borrow ideas and concepts and very concrete sort of local rules from different languages like the indentation and certain other syntactic features from ABC but I chose to borrow string literals and how numbers work from C and various other things so in then if you take that further so yet you've had this funny sounding but I think surprisingly accurate and or at least practical title of a benevolent dictator for life for quite you know for 
last three decades whatever or no not the actual title but functionally speaking so you had to make decisions design decisions can you maybe let's take Python - there's a Python releasing Python 3 as an example mm-hmm it's not backward-compatible - Python - in ways that a lot of people know so what was that deliberation discussion decision like we have what was the psychology of that experience do you regret any aspects of how that experiments undergone that else yeah so it was a group process really it at that point even though I was be DFL in nine a name and and certainly everybody sort of respected my my position as the creator and and the current sort of owner of the language design I was looking at everyone else for feedback sort of Python 300 in some sense was sparked by other people in the community pointing out oh well there are a few issues that sort of bite users over and over can we do something about that and for Python three we took a number of those Python wards as they were called at the time and we said can we try to sort of make small changes to the language that address those warts and we had sort of in the past we had always taken backwards compatibility very seriously and so many Python warts in earlier versions had already been resolved because they could be resolved while maintaining backwards compatibility or sort of using a very gradual path of evolution of the language in a certain area and so we were stuck with a number of warts that were widely recognized as problems not like road blocks but nevertheless sort of things that some people trip over and you know that that's always the same thing that that people trip over when they trip and we could not think of a backwards compatible way of resolving those issues but it's still an option to not resolve the issues and so yes for for a long time we had sort of resigned ourselves to well okay the language is not going to be perfect in this way and that way that way and we sort of certain of 
these I mean there are still plenty of things where you can say well that's that particular detail is better in Java or in R or in Visual Basic or whatever and we're okay with that because well we can't easily change it it's not too bad we can do a little bit with user education or we can have a static analyzer or warnings in in the parser or something but there were things where we thought well these are really problems that are not going away they are getting worse in the future we should do something about do something but ultimately there is a decision to be made right yes so was that the toughest decision in the history of Python yet to make as the benevolent dictator for life or if not what are there maybe even on a smaller scale what was a decision where you were really torn up about well the toughest decision was probably to resign all right let's go there hold on a second then let me just because in the interest of time too because I have a few cool questions for you I let's touch a really important one because it was quite dramatic and beautiful in certain kinds of ways then in July this year three months ago you wrote now that pepp 572 is done I don't ever want to have to fight so hard for a and find that so many people despise my decisions I would like to remove myself entirely from the decision process I'll still be there for a while as an ordinary core developer and I'll still be available to mentor people possibly more available but I'm basically giving myself a permanent vacation for being be DFL yeah but not well in dictator for life and you all will be on your own it's just this it's a it's almost Shakespearean I'm not going to appoint a successor so water you're all going to do create a democracy anarchy a dictatorship a federation so that was a very dramatic and beautiful set of statements it's almost it's open-ended nature called the community to create a future for Python this is kind of a beautiful aspect to it well so what end and dramatic 
what was making that decision like? What was on your heart, on your mind? Stepping back now, a few months later.

I'm glad you liked the writing, because it was actually written pretty quickly. It was literally something like: after months and months of going around in circles, I had finally approved PEP 572, which I had a big hand in the design of, although I didn't initiate it originally; I gave it a bunch of nudges in a direction that would be better for the language.

Which one is that? Is it the async one?

Oh, no. PEP 572 was actually a small feature, which is assignment expressions.

Assignment expressions.

There had just been a lot of debate, where a lot of people claimed that they knew what was Pythonic and what was not Pythonic, and they knew that this was going to destroy the language, that this was a violation of Python's most fundamental design philosophy. And I thought that was all wrong, because I was in favor of it, and I would think I know something about Python's design philosophy. So I was really tired, and also stressed, by that thing. And literally, after announcing I was going to accept it, on a certain Wednesday evening I had finally sent the email: it's accepted, now let's just go implement it. So I went to bed feeling really relieved: that's behind me. And I wake up Thursday morning, 7:00 a.m.,
and I think, well, that was the last one that's going to be such a terrible debate. And, I said, that's the last time that I let myself be so stressed out about a PEP decision; I should just resign. I had been thinking about retirement for half a decade. I had been joking about it and mentioning retirement, telling the community, at some point in the future I'm going to retire; don't take that "for life" part of my title too literally. And I thought, okay, this is it, I'm done. I had the day off, I wanted to have a good time with my wife, we were going to a little beach town nearby, and in, I think, maybe 15 or 20 minutes I wrote that thing that you just called Shakespearean.

The funny thing is, I didn't even realize what a monumental decision it was, because five minutes later I saw a link to my message on Twitter, where people were already discussing it: Guido resigned as the BDFL. I had posted it on an internal forum that I thought was only read by core developers, so I thought I would at least have one day before the news got out.

The "on your own" aspect also had an element, quite a powerful element, of the uncertainty that lies ahead. But can you also just briefly talk about, you know, for example, I play guitar as a hobby for fun, and whenever I play, people are super positive, super friendly; they're like, this is awesome, this is great. But sometimes I enter, as an outside observer, the programming community, and there sometimes seem to be camps on whatever the topic, and the two or more camps are often pretty harsh in criticizing the opposing camps. As an onlooker, I may be totally wrong on this.

Yeah, well, flame wars are sort of a favorite activity in the programming community.

And what is the psychology behind that? Is that okay for a healthy community to have? Is that a productive force, ultimately, for the evolution of the language?

Well, if everybody were patting each other on the back and never telling the truth, it would not be a good thing. I think there is a middle ground, where being nasty to each other is not okay, but there is healthy, ongoing criticism and feedback that is very productive. And you see that at every level. I mean, someone proposes to fix a very small issue in a codebase; chances are that some reviewer will respond by saying, well, actually, you can do it better the other way.

Right. When it comes to deciding on the future of the Python core developer community, we now have, I think, five or six competing proposals for a constitution. So, regarding that future: do you have a fear of that future, do you have a hope for that future?

I'm not very worried about that future. By and large, I think the debate has been very healthy and productive. And, actually, when I wrote that resignation email, I knew that Python was in a very good spot, and that the Python core development community, the group of fifty or a hundred people who write or review most of the code that goes into Python, those people get along very well most of the time. A large number of different areas of expertise are represented, different levels of experience in the Python core dev community, different levels of experience completely outside of it, in software development in general: large systems, small systems, embedded systems. So I felt okay resigning, because I knew that the community could really take care of itself.

And out of a grab bag of future developments, let me ask if you can comment, maybe, very quickly on each: concurrent programming, parallel computing, asyncio. These are things that people have expressed hope about, complained about, whatever, have discussed on Reddit. Asyncio, and parallelization in general. Packaging: I was totally clueless on this, I just used pip to install stuff, but apparently there's Pipenv and Poetry, these dependency packaging systems that manage dependencies and so on; they're emerging, and there's a lot of confusion about what's the right thing to use. Then also functional programming: are we going to get more functional programming, or not, that kind of idea. And, of course, the GIL, which is connected to parallelization, I suppose: the global interpreter lock problem. Can you just comment on whichever of those you want to comment on?

Well, let's take the GIL and parallelization and asyncio as one topic. I'm not that hopeful that Python will develop into a high-concurrency, high-parallelism language. The way the language is designed, the way most users use the language, and the way the language is implemented all make that a pretty unlikely future.

So you think it might not even need to? Really, the way people use it, it might not be something that should be of great concern?

I think asyncio is a special case, because it allows overlapping I/O, and only I/O, and that is a sort of best practice for supporting very high-throughput I/O: many connections per second. I'm not worried about that; I think asyncio will evolve. There are a couple of competing packages, and we have some very smart people who are pushing to make asyncio better. Parallel computing: I think Python is not the language for that. There are ways to work around it, but you can't expect to write an algorithm in Python and have a compiler parallelize it. What you can do is use a package like NumPy, and there are a bunch of other very powerful packages, that use all the CPUs available, because you tell the package, here's the data, here's the abstract operation to apply over it, go at it, and then we're back in the C++ world.

But those packages are themselves implemented, usually, in C++.

That's right. So that's where TensorFlow and all these packages come in, where
they parallelize across GPUs, for example, and they take care of all that for you. So, in terms of packaging, can you comment on that?

Yeah. Packaging has always been my least favorite topic. It's a really tough problem, because the OS and the platform want to own packaging, but their packaging solution is not specific to a language. If you take Linux, there are two competing packaging solutions for Linux, or for Unix in general, but they work across all languages. And several languages, like Node.js and JavaScript, and Ruby, and Python, all have their own packaging solutions that only work within the ecosystem of that language. Well, what should you use? That is a tough problem. My own approach is: I use the system packaging system to install Python, and then I use the Python packaging system to install third-party Python packages. That's what most people do. Ten years ago, Python packaging was really a terrible situation; nowadays, pip is the future. There is a separate ecosystem for numerical and scientific Python, based on Anaconda; those two can live together. I don't think there is a need for more than that.

Great. So that's packaging, and, well, at least for me, that's where I've been extremely happy; I didn't even know this was an issue until it was brought up. Well, in the interest of time, I've sort of skipped through a million other questions I have. I watched the five, five-and-a-half-hour oral history you did with the Computer History Museum, and the nice thing about it, because of the linear progression of the interview, is that it gave this feeling of a life, you know, a life well-lived, with interesting things in it; sort of, I would say, a good spend of this little existence we have on Earth. So, outside of your family, looking back, what about this journey are you really proud of? Are there moments that stand out, accomplishments, ideas? Is it the creation of Python itself that stands out as a thing that you look back on and say, damn, I did pretty good there?

Well, I would say that Python is definitely the best thing I've ever done. And I wouldn't say just the creation of Python, but the way I sort of raised Python, like a baby. I didn't just conceive a child; I raised the child, and now I'm setting the child free in the world, and I've set up the child to be able to take care of himself. And I'm very proud of that.

And, as the announcer of Monty Python's Flying Circus used to say: and now for something completely different. Do you have a favorite Monty Python moment, or a moment from Hitchhiker's Guide, or any other literature, show, or movie that cracks you up when you think about it?

Oh, you can always play me the dead parrot sketch.

Oh, that's brilliant. That's my favorite as well. "Pushing up the daisies."

Okay. Guido, thank you so much for talking with me today.

Thank you. It's been a great conversation.
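[Transcript note: PEP 572, the "assignment expressions" proposal Guido discusses above, shipped in Python 3.8 as the walrus operator `:=`. A minimal sketch of the repetition it removes; the regex and data here are just an illustration, not taken from the conversation:]

```python
import re

lines = ["x = 1", "# comment", "y = 2"]

# Without PEP 572: bind the match, then test it, in two separate statements.
names_old = []
for line in lines:
    m = re.match(r"(\w+) = \d+", line)
    if m:
        names_old.append(m.group(1))

# With an assignment expression (Python 3.8+): binding and testing fuse,
# so the match object is usable inside the comprehension itself.
names_new = [m.group(1) for line in lines if (m := re.match(r"(\w+) = \d+", line))]

assert names_old == names_new == ["x", "y"]
```

The condition `(m := re.match(...))` both assigns and evaluates, which is exactly the kind of small convenience that sparked the debate described above.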
Vladimir Vapnik: Statistical Learning | Lex Fridman Podcast #5
The following is a conversation with Vladimir Vapnik. He is the co-inventor of support vector machines, support vector clustering, VC theory, and many foundational ideas in statistical learning. He was born in the Soviet Union and worked at the Institute of Control Sciences in Moscow. Then, in the United States, he worked at AT&T, NEC Labs, Facebook Research, and now is a professor at Columbia University. His work has been cited over 170,000 times. He has some very interesting ideas about artificial intelligence and the nature of learning, especially on the limits of our current approaches and the open problems in the field. This conversation is part of the MIT course on Artificial General Intelligence and the Artificial Intelligence Podcast. If you enjoy it, please subscribe on YouTube, rate it on iTunes or your podcast provider of choice, or simply connect with me on Twitter or other social networks at Lex Fridman, spelled F-R-I-D-M-A-N. And now, here's my conversation with Vladimir Vapnik.

Lex: Einstein famously said that God doesn't play dice.

Vladimir: Yeah.

Lex: You have studied the world through the eyes of statistics, so let me ask you about the fundamental nature of reality: does God play dice?

Vladimir: We don't know some factors. And because we don't know some factors, which could be important, it looks like God plays dice. But we should describe it. In philosophy, they distinguish between two positions: the position of instrumentalism, where you create theories for prediction, and the position of realism, where you try to understand what God did.

Lex: Can you describe instrumentalism and realism a little bit?

Vladimir: For example, if you have some mechanical laws, what is that? Is it a law which is true always and everywhere, or is it a law which allows you to predict the position of moving elements? What do you believe?
Do you believe that it is God's law, that God created the world with this physical law, or is it just a law for predictions?

Lex: And which one is instrumentalism?

Vladimir: For predictions. If you believe that this is the law of God, and it is always true everywhere, that means that you're a realist: you're trying to understand God's thought.

Lex: So the way you see the world is as an instrumentalist?

Vladimir: You know, I'm working with some models, models of machine learning. So in these models, there is a setting, and you try to solve the problem. And you can do it in two different ways. From the point of view of the instrumentalist, and that's what everybody does now, the goal of machine learning is to find a rule for classification. That is true, but it is an instrument for prediction. But I can say that the goal of machine learning is to learn about conditional probability: how God plays, what is the probability for one and what is the probability for another in a given situation. For prediction, I don't need this; I need the rule. But for understanding, I need conditional probability.

Lex: So let me just step back a little bit first, to talk about something you mentioned, which I read last night: parts of the 1960 paper by Eugene Wigner, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences." It's such a beautiful paper, by the way. To be honest, to confess, my own work in the past two years on deep learning is heavily applied, and the paper made me feel that I was missing out on some of the beauty of nature that math can uncover. So let me just step away from the poetry of that for a second. How do you see the role of math in your life? Is it a tool? Is it poetry? Where does it sit? And does math, for you, have limits?

Vladimir: Some people are saying that math is a language which God uses.

Lex: Speaks to God, or uses God?

Vladimir: Uses God.

Lex: Uses God.

Vladimir: I believe that the point of this article about the unreasonable effectiveness of math is that mathematical structures know something about reality. Most scientists from natural science look at equations and try to understand reality from them, and it is the same with machine learning: if you look very carefully at the equations which define conditional probability, you can understand something about reality, more than from your fantasy.

Lex: So math can reveal the simple underlying principles of reality, perhaps.

Vladimir: You know, they may seem simple, but it is very hard to discover them. Then, when you discover them and look at them, you see how beautiful they are, and it is surprising why people did not see them before, when you look at the equations and derive from the equations. For example, I talked yesterday about the least squares method, and people had a lot of fantasies about improving the least squares method. But if you go step by step, solving some equations, you suddenly get some terms which, after thinking, you understand describe the position of the observation points. The least squares method throws out a lot of information: you don't look at the configuration of the observation points; you look only at the deviations. But when you understand that very simple idea, which is not too simple to understand, you can derive it just from the equations.

Lex: So a few steps of simple algebra will take you to something surprising, when you think about it.

Vladimir: Absolutely, yes. And that is proof that human intuition is not too rich, and is very primitive: it does not see very simple situations.

Lex: So let me take a step back. In general, what about human ingenuity, as opposed to intuition, the moments of brilliance? Do you have to be so hard on human intuition? Are there moments of brilliance in human intuition that can leap ahead of math, and then the math will catch up?
Vladimir: I don't think so. I think the best human intuition is put into axioms; then it is technical, where you have to arrive.

Lex: See where the axioms take you.

Vladimir: Yeah. But you have to take the axioms correctly. Axioms are polished during generations of scientists, and this is integral wisdom.

Lex: That's beautifully put. When you think of Einstein and, especially, relativity, what is the role of imagination coming first there, in the moment of discovery of an idea? That's obviously a mix of math and out-of-the-box imagination.

Vladimir: That, I don't know. Whatever I did, I excluded any imagination, because whatever I saw in machine learning that came from imagination, like features, like deep learning, is not really relevant to the problem. When you look very clearly from mathematical equations, you arrive at a very simple story, which goes, theoretically, far beyond whatever people can imagine, because fantasy is not good. It is just interpretation; it is just fantasy; it is not what you need. You don't need any imagination to derive the main principles of machine learning.

Lex: When you think about learning and intelligence, maybe thinking about the human brain, in trying to describe mathematically a process of learning that is something like what happens in the human brain, do you think we have the tools currently? Do you think we will ever have the tools to try to describe that process of learning?

Vladimir: It is not a description of what's going on; it is interpretation. It is your interpretation, and your vision can be wrong. You know, when Leeuwenhoek invented the microscope, for the first time, only he had this instrument, and he kept it secret. But he wrote reports to the London Academy of Science. In his reports, he described what he saw everywhere: in the water, in the blood, in thin films. But he described blood like a fight between queens and kings. He saw blood cells, red cells, and he imagined it like armies fighting each other. That was his interpretation of the situation, and he sent it as a report to the Academy of Science. They looked at it very carefully, because they believed that he was right; he saw something, but he gave a wrong interpretation. And I believe the same can happen with the brain. The most important part, you know, is that I believe in human language. In some proverbs there is so much wisdom. For example, people say that one day with a great teacher is better than a thousand days of diligent study. But if you ask what the teacher does, nobody knows. And that is intelligence. But we know from history, and now from machine learning, that a teacher can do a lot.

Lex: So, from a mathematical point of view, what is a great teacher?

Vladimir: I don't know, but we can say what a teacher can do. He can introduce some invariants, some predicates for creating invariants. How is he doing it? I don't know, because a teacher knows reality and can describe, from this reality, predicates and invariants. But we know that when you use invariants, you can decrease the number of observations a hundred times.

Lex: Maybe try to pull that apart a little bit. I think you mentioned, like, a piano teacher saying to the student, "Play like a butterfly." I played piano, I played guitar for a long time, and maybe it's romantic and poetic, but it feels like there's a lot of truth in that statement, like there's a lot of instruction in that statement. Can you pull that apart? What is that? The language itself may not contain this information.

Vladimir: It's not blah, blah, blah, because it affects you.

Lex: It's what?

Vladimir: It affects you; it affects your playing.

Lex: Yes, it does, but what is the information being exchanged there? What is the nature of the information? What is the representation in that information?

Vladimir: I believe that it is a sort of predicate, but I don't know. That is exactly what intelligence in machine learning should be, because the rest is just mathematical technique. I think what was discovered recently is that there are two mechanisms of learning: one is called the strong convergence mechanism, and the other the weak convergence mechanism. Before, people used only one convergence. In weak convergence, you can use predicates. That's what "fly like a butterfly" is, and it immediately affects your playing. You know, there is an English proverb: "If it looks like a duck, swims like a duck, and quacks like a duck, then it is probably a duck." This is exactly about predicates. "It looks like a duck": what does it mean? You saw many ducks; that's your training data. You have a description of what a duck looks like.

Lex: Yeah, the visual characteristics of a duck.

Vladimir: Yeah. And you have a model for recognizing ducks. So you would like the theoretical description from the model to coincide with the empirical description which you saw. So "it looks like a duck" is general. But what about "swims like a duck"? You should know that ducks swim. You can't say "it plays chess like a duck": okay, ducks don't play chess. It's a completely legal predicate, but it is useless. So how can a teacher recognize a non-useless predicate? Up to now, we don't use these predicates in existing machine learning, and that is why we need zillions of data. But this English proverb says to use only three predicates: looks like a duck, swims like a duck, and quacks like a duck.

Lex: But you can't deny the fact that "swims like a duck" and "quacks like a duck" has humor in it, has ambiguity?

Vladimir: Let's talk about "swims like a duck." It does not say "jumps like a duck." Why?

Lex: It's not relevant.

Vladimir: It means that you know ducks, and you know different birds, you know animals, and you derived from this that it is relevant to say "swims like a duck."
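[Transcript note: the duck proverb above can be sketched as successive predicates shrinking a set of candidate hypotheses. This toy example is my own construction to illustrate the idea, not an algorithm from Vapnik's work:]

```python
# Candidate hypotheses, each a bundle of observable properties.
hypotheses = [
    {"looks": "duck",  "swims": True,  "quacks": True},
    {"looks": "duck",  "swims": True,  "quacks": False},
    {"looks": "goose", "swims": True,  "quacks": False},
    {"looks": "duck",  "swims": False, "quacks": True},
]

# The three predicates from the proverb; each is applied in turn and
# removes every candidate that fails it, shrinking the admissible set.
predicates = [
    lambda h: h["looks"] == "duck",   # looks like a duck
    lambda h: h["swims"],             # swims like a duck
    lambda h: h["quacks"],            # quacks like a duck
]

admissible = hypotheses
for predicate in predicates:
    admissible = [h for h in admissible if predicate(h)]

# Three well-chosen predicates leave a single candidate; no zillions of data.
assert admissible == [{"looks": "duck", "swims": True, "quacks": True}]
```

A useless predicate such as "plays chess like a duck" would remove nothing (or everything), which is why, in Vapnik's telling, choosing the predicates is where the teacher's intelligence lives.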
Lex: So in order for us to understand "swims like a duck," it feels like we need to know millions of other little pieces of information we pick up along the way. You don't think so? That doesn't need to be this knowledge-based, in those statements, carry some rich information that helps us understand the essence of duck? Vladimir: Yeah. Lex: How far are we from integrating predicates? Vladimir: You know that when you can see the complete story of machine learning, so what it does, you have a lot of functions, and then you're talking it looks like a duck. You see your training data. From the training data, you recognize what the expected duck should look like. Then, you remove all functions which do not look like what you think it should look from the training data. So, you decrease the amount of function from which you pick up one. Then, you give a second predicate and again, they create a set of functions. And after that, you pick up the best function you can. It is standard machine learning. So, why do you need not too many examples? Lex: Because your predicates are very good. Vladimir: Yeah, that's exactly basic predicate because every predicate is invented to decrease the admissible set of functions. Lex: So you talk about admissible set of functions and you talk about good functions. So what makes a good function? Vladimir: So admissible set of function is a set of function which has a small capacity or small diversity, a small dimension, which contains good functions inside. Lex: By the way, for people who don't know VC, you're the V in the VC. So how would you describe to a lay person what VC theories are? How would you describe VC? Vladimir: When you have a machine, a machine capable to pick up one function from the admissible set of function. But the set of admissible functions can be big. They contain all continuous functions and theories. You don't have so many examples to pick up functions. 
But it can be small--what we call capacity, or maybe diversity--so not very different functions in the set: an infinite set of functions, but not very diverse. So, if it has a small VC dimension, and when the VC dimension is small, you need a small amount of training data. So the goal is to create an admissible set of functions which has small VC dimension and contains good functions. Then, you'll be able to pick up the function using a small amount of observations. Lex: So the task of learning is creating a set of admissible functions that has a small VC dimension, and then you figure out a clever way of picking up the good one. Vladimir: That is the goal of learning which I formulated yesterday. Statistical learning theory does not involve creating admissible sets of functions. In classical learning theory everywhere, in 100% of textbooks, the admissible set of functions is given, but this tells us nothing, because the most difficult problem is to create the admissible set of functions: given, say, a lot of functions, a continuous set of functions, create an admissible set of functions--that means finite VC dimension, small VC dimension, and it contains good functions. So, this was out of consideration. Lex: So what's the process of doing that, I mean, that's fascinating? What is the process of creating this admissible set of functions? Vladimir: That is invariance. Lex: That's invariance. Can you describe invariance? Vladimir: Yeah. You have to think of properties of the training data. Properties means you have some function and you just count what is the average value of the function on the training data. You have a model, and what is the expectation of this function on the model, and they should coincide. So, the problem is about how to pick up functions. It can be any function. In fact, it is true for all functions, but when I say a duck doesn't jump, so you don't ask a question on "jumps like a duck" because it is trivial.
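The relationship described here--small VC dimension permits learning from a small amount of data--is captured by Vapnik's classical generalization bound (one common form; constants vary across texts): with probability at least $1-\delta$, for every function $f$ in a class of VC dimension $h$,

```latex
R(f) \;\le\; R_{\mathrm{emp}}(f) \;+\;
\sqrt{\frac{h\left(\ln\frac{2\ell}{h}+1\right)+\ln\frac{4}{\delta}}{\ell}}
```

where $\ell$ is the number of training examples, $R(f)$ the true risk, and $R_{\mathrm{emp}}(f)$ the empirical risk; the smaller $h$ is relative to $\ell$, the tighter the guarantee.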
It does not jump, so it does not help you at all. But you know something on which questions to ask like when you ask "swims like a duck." But "looks like a duck," it is a general situation. But, looks like, say, a guy who has this illness, this disease, it is legal. So, there is a general type of predicate, "It looks like," and a special type of predicate which is related to this specific problem. And that is the intelligence part of this business and that is where a teacher is involved. Lex: Incorporating the specialized predicates. Vladimir: Yes. Lex: Okay. What do you think about deep learning as neural networks, these architectures, as helping accomplish some of the tasks you're thinking about? Their effectiveness or lack thereof, what are the weaknesses and what are the possible strengths? Vladimir: You know, I think that this is fantasy, everything like deep learning, like features. Let me give you this example. One of the greatest books is Churchill's book about the history of the Second World War. He starts his book describing that in the old times, when a war was over, the great kings, they gathered together--and most of them were relatives--and they discussed what should be done to create peace, and they came to an agreement. And what happened in the First World War? The general public came in power. They were so greedy that they robbed Germany. It was clear for everybody that it is not peace, that peace will only last for 20 years because they were not professionals. I see the same in machine learning. There are mathematicians looking at the problem from a very deep mathematical point of view, and there are computer scientists that mostly do not know mathematics. They just have interpretations of that and they invented a lot of blah, blah interpretations like deep learning. Why do you need deep learning? Mathematics does not know deep learning. Mathematics does not know neurons; it is just functions.
If you like to say piecewise linear function, say that and do it in a class of piecewise linear functions. But they invented something and then they tried to prove the advantage of that through interpretations, which were mostly wrong. And when it is not enough, they appeal to the brain, and they say they know nothing about that. Nobody knows what's going on in the brain. So, I think it is more reliable to work on math. This is a mathematical problem, do your best to solve this problem. Try to understand that there is not only one way of convergence, which is the strong way of convergence. There is a weak way of convergence which requires predicates. And if you will go through all this stuff, you will see that you don't need deep learning. Even more, I would say one of the theorems, which is called the Representer theorem, says that the optimal solution of mathematical problems which describe learning is on a shallow network, not on deep learning. Lex: On a shallow network. Yeah, the problem is there. Absolutely. So, in the end, what you're saying is exactly right. The question is, is there value in throwing something on the table, playing with it--not math? It's like a neural network where you said throwing something in the bucket, or the biological example in looking at kings and queens, or the cells under the microscope--you don't see value in imagining the cells or the kings and queens and using that as inspiration, an imagination for where the math will eventually lead you? Do you think that interpretation basically deceives you in a way that's not productive? Vladimir: I think that if you're trying to analyze this business of learning, and especially the discussion about deep learning, it is a discussion about interpretations and not about things, about what you can say about things. Lex: That's right. But, aren't you surprised by the beauty of it, not mathematical beauty but the fact that it works at all?
Or, are you criticizing that very beauty, our human desire to interpret, to find our silly interpretations in these constructs? Like, let me ask you this, are you surprised or does it inspire you, how do you feel about the success of a system like AlphaGo at beating the game of Go, using neural networks to estimate the quality of a board? Vladimir: That is your interpretation--quality of the board. Lex: Yes. It is not our interpretation. The fact is a neural network system--it doesn't matter--a learning system that we don't, I think, mathematically, understand that well, beats the best human player, that's something that was thought impossible. Vladimir: That means it's not a very difficult problem. That's it. Lex: So we've empirically discovered that this is not a very difficult problem. That's true. I can't argue. Vladimir: Even more, I would say, if they used deep learning, it is not the most effective way of learning theory. And usually, when people use deep learning, they're using zillions of training data, but you don't need this. So when I describe a challenge: can we do some problems that you did well with deep learning methods, with a deep net, using a hundred times less training data? Even more, there are some problems that deep learning cannot solve, because it's not necessarily that they created an admissible set of functions. To create a deep architecture means to create an admissible set of functions. You cannot say that you're creating a good admissible set of functions. It's your fantasy. It does not come from us. But, it is possible to create an admissible set of functions because you have your training data. Actually, for mathematicians, when you consider invariants, you need to use the law of large numbers. When you do training in existing algorithms, you need a uniform law of large numbers, which is much more difficult. It requires VC dimension and all that stuff.
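The invariance condition described above--the average value of a predicate on the training data should coincide with its expectation on the model--can be written roughly as follows (the notation here is a sketch, loosely following Vapnik's work on statistical invariants): for each chosen predicate $\psi_k$,

```latex
\frac{1}{\ell}\sum_{i=1}^{\ell}\psi_k(x_i)\,f(x_i)
\;\approx\;
\frac{1}{\ell}\sum_{i=1}^{\ell}\psi_k(x_i)\,y_i,
\qquad k=1,\dots,m
```

Each such constraint is reliable under the ordinary law of large numbers, whereas selecting $f$ from the whole admissible set is what requires the uniform law.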
But nevertheless, if you use both weak and strong ways of convergence, you can decrease a lot of training data. Lex: Yeah, you could do the three--that swims like a duck and quacks like a duck. So let's step back and think about human intelligence in general. And clearly, that has evolved in a non-mathematical way. As far as we know, God or whoever didn't come up with a model of admissible functions and place it in our brain; it kind of evolved. I don't know your view on this, but Alan Turing in the 50's in his paper asked and interjected the question: Can machines think? It's not a very useful question, but can you briefly entertain this useless question "Can machines think?" So, talk about intelligence and your view of it. Vladimir: I don't know that. I know that Turing described imitation--if a computer can imitate a human being. Let's call it intelligence, and he understood that it is not a thinking computer. He completely understood what he was doing, but he set up a problem of imitation. So now we understand that it is a problem not of imitation. I'm not sure that intelligence is just inside of us. It may also be outside of us. I have several observations: when I prove some theorems--very difficult theorems--in a couple of years, in several places, people prove the same theorem. Say, Sauer's lemma: after ours was done, another guy proved the same theorem. In the history of science, it has happened all the time. For example, geometry: it happens simultaneously. First Lobachevsky, and then Gauss and Bolyai and then other guys, approximately in a ten-year period of time. And I saw a lot of examples like that. And when mathematicians think, when they develop something, they develop something in general which affects everybody. So, maybe our model that intelligence is only inside of us is incorrect. Lex: It's our interpretation. Yeah. Vladimir: It may be that it exists with some connection to world intelligence. I don't know that.
Lex: You're almost like plugging in into... Vladimir: Yeah, exactly. Lex: ...and contributing to this. Vladimir: ...into a big network. Lex: Into a big, maybe a neural network. On the flip side of that, maybe you can comment on the big O complexity and how you see classifying algorithms by worst-case running time in relation to their input. So, that way of thinking about functions, do you think P equals NP? Do you think that's an interesting question? Vladimir: Yeah, it is an interesting question. But let me talk about complexity and about worst-case scenario. There is a mathematical setting. When I came to the United States in 1991, people did not know this. They did not know statistical learning theory. In Russia, it was published in our monographs, but in America, they did not know, and then, they learned it. Somebody told me that it was worst-case theory and they will create real-case theory, but until now, they haven't. Because it is a mathematical tool, you can do only what you can do using mathematics, which is clear understanding and clear description. For this reason, we introduced complexity. With VC dimension you can prove some theorems. But we also created theory for cases when you know the probability measure, and that is the best case which can happen. So from a mathematical point of view, you know the best possible case and the worst possible case. You can derive different models in the middle, but it's not so interesting. Lex: Do you think the edges are interesting? Vladimir: The edges are interesting because it is not so easy to get the exact bounds. In many cases the bounds are not exact, but that is where the most interesting principles are discovered. Lex: Do you think it's interesting because it's challenging and reveals interesting principles that allow you to get those bounds, or do you think it's interesting because it's actually very useful for understanding the essence of a function of an algorithm?
So, it's like me judging your life as a human being by the worst thing you did and the best thing you did versus all the stuff in the middle. It seems not productive. Vladimir: I don't think so because you cannot describe situations in the middle, or it will not be general. So you can describe edge cases and it is clear it has some models, but you cannot describe a model for every new case. So, you'll never be accurate when you're using models. Lex: But, from a statistical point of view, the way you studied functions and the nature of learning and the world, don't you think that the real world has a very long tail, that the edge cases are very far away from the mean, the stuff in the middle, or no? Vladimir: I don't know that. I think that from my point of view, if you will use formal statistics, you need the uniform law of large numbers; if you will use this invariance business, you need just the law of large numbers. And there's a huge difference between the uniform law of large numbers and the law of large numbers. Lex: Is it useful to describe that a little more or shall we just take it at... Vladimir: No. For example, when I'm talking about ducks, I get three predicates and that was enough. But, if you will try to do it formally, to distinguish, you will need a lot of observations. So that means that information about "looks like a duck" contains a lot of bits of information, formal bits of information. So we don't know how many bits of information are contained in intelligence, and that is a subject of analysis. Until now, in this business, people consider artificial intelligence as some codes which imitate activities of human beings. It is not science. It is applications. You would like to imitate Go. Okay, it's very useful and a good problem, but you need to learn something more, how people came to develop, say, predicates "swims like a duck" or "fly like a butterfly" or something like that.
It's not that the teacher tells you how it came to his mind, how he chooses the image. That is a problem of intelligence. Lex: That is the problem of intelligence. And you see that connected to the problem of learning? Are they? Vladimir: Absolutely, because you immediately give this predicate, like specific predicates "swims like a duck" or "quacks like a duck." It was chosen somehow. Lex: So what is the line of work, would you say, if you were to formulate it as a set of open problems that will take us there--to "fly like a butterfly"--how do we get a system to be able to do that? Vladimir: Let's separate two stories--one mathematical story that if you have predicates you can do something, and another story on how to get predicates. It is an intelligence problem, and people even did not start understanding intelligence. Because to understand intelligence, first of all, try to understand what it is to teach, how a teacher teaches, why one teacher is better than another one. Lex: Yeah. And so, do you think we really even haven't started on the journey of generating the predicates? Vladimir: No. We don't understand. We even don't understand that this problem exists. Lex: You do. Vladimir: No. I just know the name. I want to understand why one teacher is better than another and how the teacher affects the student. It is not because he is repeating the problem which is in the textbooks. He makes some remarks. He makes some philosophy of reasoning. Lex: Yeah, that's beautiful. It is a formulation of a question that is the open problem: Why is one teacher better than another? Vladimir: Right. What does he do about it? Lex: "Why" at every level. How did they get better? What does it mean to be better? Vladimir: Yeah. From whatever model I have, one teacher can give a very good predicate. One teacher can say "swims like a duck" and another can say "jumps like a duck." And jumps like a duck carries zero information.
Lex: So what is the most exciting problem in statistical learning you ever worked on or are working on now? Vladimir: I just finished this invariance story and I'm happy that I believe that it is an ultimate learning story. At least, I can show that there are no other mechanisms. There are only two mechanisms, but they separate the statistical part from the intelligence part, and I know nothing about the intelligence part. And if you will know the intelligence part, it will help us a lot in teaching and in learning. Lex: And we'll know it when we see it? Vladimir: So for example, in my talk, the last slide was a challenge. So you have the MNIST digit recognition problem, and deep learning claims that they did it very well, say 99.5% correct answers, but they used 60,000 observations. Can you do the same using a hundred times less, but incorporating invariants, what it means, you know, digit 1, 2, 3? Just looking at that, explain which invariants I should keep, to use a hundred times less examples, to do the same job. Lex: Yeah, that last slide, unfortunately, your talk ended quickly, but that last slide was a powerful open challenge and a formulation of the essence there. Vladimir: That is the exact problem of intelligence, because everybody, when machine learning started and it was developed by mathematicians, they immediately recognized that they use much more training data than humans needed. But now, again, we came to the same story of how to decrease. That is a problem of learning. It is not like in deep learning, where they use zillions of training data, because maybe zillions are not enough if you don't have good invariants. Maybe, you'll never collect some number of observations. But now, it is a question of intelligence, how to do that, because the statistical part is ready. As soon as you supply us with predicates, we can do a good job with a small amount of observations, and the very first challenge is about digit recognition--you know digits, and you need invariants.
I'm thinking about that and I can say for digit 3, I would introduce the concept of horizontal symmetry, so digit 3 has horizontal symmetry more than digit 2 or something like that. But as soon as I get the horizontal symmetry, I can mathematically invent a lot of measures of horizontal symmetry, or the vertical symmetry, or the diagonal symmetry, whatever, if I have the idea of symmetry. What would it tell us? Looking on digits, I see that there is a meta-predicate which is not about shape--something like symmetry, like how dark is the whole picture, something like that, which can serve as a predicate. Lex: Do you think such a predicate could rise out of something that's not general, meaning, it feels like for me to be able to understand the difference between the two and the three, I would need to have had a childhood of 10 to 15 years playing with kids, going to school, being yelled at by parents, all of that, walking, jumping, looking at ducks. And now, then, I would be able to generate the right predicate for telling the difference between a two and a three, or do you think there's a more efficient way? Vladimir: I don't know. I know for sure that you must know something more than digits. Lex: Yes, and that's a powerful statement. Vladimir: Yeah, but maybe there are several languages of description around these elements of digits. So, I'm talking about symmetry, about some properties of geometry. I'm talking about something abstract. I don't know about that, but it is a problem of intelligence. So in one of our articles, it is trivial to show that every example can carry not more than one bit of information, because when you show an example and you say this is a one, you can remove functions which don't say one. The best strategy, if you can do it perfectly, is to remove half of the functions.
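The horizontal-symmetry idea above can be turned into a toy measure (purely illustrative, not a real MNIST experiment--the tiny "digit" arrays here are invented):

```python
# Toy measure of horizontal symmetry for a small binary "digit" image
# (a list of rows): compare the image with its top-bottom flip.

def horizontal_symmetry(img):
    """Return a score in [0, 1]; 1.0 means perfectly symmetric
    about the horizontal midline."""
    flipped = img[::-1]  # top-bottom flip
    total = sum(len(row) for row in img)
    agree = sum(a == b
                for row, frow in zip(img, flipped)
                for a, b in zip(row, frow))
    return agree / total

three = [[1, 1, 1],
         [0, 0, 1],
         [1, 1, 1],
         [0, 0, 1],
         [1, 1, 1]]   # crude "3": symmetric top to bottom

two = [[1, 1, 1],
       [0, 0, 1],
       [1, 1, 1],
       [1, 0, 0],
       [1, 1, 1]]     # crude "2": less symmetric

print(horizontal_symmetry(three))  # 1.0
print(horizontal_symmetry(two))    # lower than 1.0
```

The crude "3" scores 1.0 while the crude "2" scores lower, so such a score could act as one predicate among several.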
But when you use one predicate which is "looks like a duck," you can remove much more functions than half, and that means it contains a lot of bits of information from a formal point of view. But, when you have a general picture of what you want to recognize and a general picture of the world, can you invent this predicate? And, that predicate carries a lot of information. Lex: Beautifully put. Maybe it's just me, but in all the math you show in your work, which is some of the most profound mathematical work in the field of learning, AI, and just math in general, I hear a lot of poetry and philosophy. You really kind of talk about philosophy of science. There's a poetry and music to a lot of the work you're doing and the way you're thinking about it, so where does that come from? Do you escape to poetry? Do you escape to music? Vladimir: I think that there exist ground truths and that can be seen everywhere. The smart guy philosophers, sometimes I'm surprised how deeply they see. Sometimes I see that some of them are completely out of subject. But the ground truths, I see in music. Lex: Music is the ground truth? Vladimir: Yeah. And in poetry, many poets, they believe that they take dictation. Lex: So what piece of music, as a piece of empirical evidence, gave you a sense that they are touching something in the ground truth? Vladimir: It is structure. Lex: The structure, the math of music. Vladimir: Because when you're listening to Bach, you see the structure--very clear, very classic, very simple. And the same it was when you have axioms in geometry, you have the same feeling. And in poetry, sometimes, this is the same. Lex: Yeah. And if you look back to your childhood, you grew up in Russia. You maybe were born as a researcher in Russia, you developed as a researcher in Russia. You came to the United States and a few places. If you look back, what were some of your happiest moments as a researcher?
Some of the most profound moments, not in terms of their impact on society, but in terms of their impact on how damn good you feel that day and you remember that moment? Vladimir: You know, every time when you found something, it is the greatest moment in life, every simple thing. But, my general feeling is that most of the time it was wrong. You should go again and again and again and try to be honest in front of yourself, not to my interpretation, but try to understand how it is related to ground truth, and that it is not my blah, blah, blah interpretation or something like that. Lex: But, you're allowed to get excited at the possibility of discovery. Vladimir: Oh, yeah. Lex: You have to double check it. Vladimir: No, but how it relates to the ground truth. Is it just temporary or is it forever? You know, you always have a feeling when you found something. How big is that? So 20 years ago, when we discovered statistical learning theory, nobody believed, except for one guy, Dudley from MIT. And then, in 20 years, it became in fashion, and the same with Support Vector Machines. Lex: So, with support vector machines and learning theory, when you were working on it, you had a sense, a sense of the profundity of it, how this seems to be right, this seems to be powerful? Vladimir: Right. Absolutely. Immediately. I recognized that it will last forever. And now, when I found this invariance story, I have a feeling that this is complete learning, because I have proved that there are no different mechanisms. You can have some cosmetic improvements, but you need both invariants and statistical learning, and they should work together. But, also, I'm happy that you can formulate what is intelligence from that and separate it from the technical part. That is completely different. Lex: Absolutely. Well, Vladimir, thank you so much for talking today. Vladimir: Thank you. Lex: It's an honor.
Yoshua Bengio: Deep Learning | Lex Fridman Podcast #4
what difference between biological neural networks and artificial neural networks is most mysterious captivating and profound for you first of all there's so much we don't know about biological neural networks and that's very mysterious and captivating because maybe it holds the key to improving our artificial neural networks one of the things I studied recently something that we don't know how biological neural networks do but would be really useful for artificial ones is the ability to do credit assignment through very long time spans there are things that we can in principle do with artificial neural nets but it's not very convenient and it's not biologically plausible and this mismatch I think this kind of mismatch may be an interesting thing to study to a understand better how brains might do these things because we don't have good corresponding theories with artificial neural nets and b maybe provide new ideas that we could explore about things that brains do differently and that we could incorporate in artificial neural nets so let's break credit assignment up a little bit yeah it's a beautifully technical term but it could incorporate so many things so is it more on the RNN memory side that thinking like that or is it something about knowledge building up common sense knowledge over time or is it more in the reinforcement learning sense that you're picking up rewards over time to achieve certain kinds of goals so I was thinking more about the first two meanings whereby we store all kinds of memories episodic memories in our brain which we can access later in order to help us both infer causes of things that we are observing now and assign credit to decisions or interpretations we came up with a while ago when you know those memories were stored and then we can change the way we would have reacted or interpreted things in the past and now that's credit assignment used for learning so in which way do you think artificial neural networks
the current LSTM the current architectures are not able to capture the presumably you're thinking of very long term yes so current recurrent nets are doing a fairly good job for sequences with dozens or say hundreds of time steps and then it gets harder and harder and depending on what you have to remember and so on as you consider longer durations whereas humans seem to be able to do credit assignment through essentially arbitrary times like I could remember something I did last year and then now because I see some new evidence I'm gonna change my mind about the way I was thinking last year and hopefully not do the same mistake again I think a big part of that is probably forgetting you're only remembering the really important things it's very efficient forgetting yes so there's a selection of what we remember and I think there is a really cool connection to higher-level cognition here regarding consciousness deciding and emotions like sort of deciding what comes to consciousness and what gets stored in memory which are not trivial either so you've been at the forefront there all along showing some of the amazing things that neural networks deep neural networks can do in the field of artificial intelligence just broadly in all kinds of applications but we can talk about that forever but what in your view because we're thinking towards the future is the weakest aspect of the way deep neural networks represent the world what is that what in your view is missing so currently current state-of-the-art neural nets trained on large quantities of images or texts have some level of understanding of you know what explains those datasets but it's very basic it's very low-level and it's not nearly as robust and abstract and general as our understanding okay so that doesn't tell us how to fix things but I think it encourages us to think about how we can maybe train our neural nets differently so that they would focus for example on causal explanations
something that we don't do currently with neural net training also one thing I'll talk about in my talk this afternoon is instead of learning separately from images and videos on one hand and from text on the other hand we need to do a better job of jointly learning about language and about the world to which it refers so that you know both sides can help each other we need to have good world models in our neural nets for them to really understand sentences which talk about what's going on in the world and I think we need language input to help provide clues about what high-level concepts like semantic concepts should be represented at the top levels of these neural nets in fact there is evidence that the purely unsupervised learning of representations doesn't give rise to high level representations that are as powerful as the ones we are getting from supervised learning and so the clues we're getting just with the labels not even sentences is already very powerful do you think that's an architecture challenge or is it a data set challenge neither I'm tempted to just end it there of course data sets and architectures are something you want to always play with but I think the crucial thing is more the training objectives the training frameworks for example going from passive observation of data to more active agents which learn by intervening in the world the relationships between causes and effects the sort of objective functions which could be important to allow the highest level explanations to rise from the learning which I don't think we have now the kinds of objective functions which could be used to reward exploration the right kind of exploration so these kinds of questions are neither in the dataset nor in the architecture but more in how we learn under what objectives and so on yeah you mentioned in several contexts the idea sort of the way children learn they interact with objects of the world and
it seems fascinating because in some sense except with some cases in reinforcement learning that idea is not part of the learning process in artificial neural networks so it's almost like do you envision something like an objective function saying you know what if you poke this object in this kind of way it would be really helpful for me to further learn right right sort of almost guiding some aspect of learning right right so I was talking to Rebecca Saxe just an hour ago and she was talking about lots and lots of evidence for infants seem to clearly take what interests them in a directed way and so they're not passive learners they focus their attention on aspects of the world which are most interesting surprising in a non-trivial way that makes them change their theories of the world so that's a fascinating view of the future progress but on the more maybe boring question do you think going deeper and larger so do you think just increasing the size of the things that have been increasing a lot in the past few years will also make significant progress so some of the representational issues that you mentioned that is they're kind of shallow in some sense oh higher in a sense of abstraction abstract in a sense of abstraction they're not getting some I don't think that having more depth in the network in the sense of instead of a hundred layers we have ten thousand is going to solve our problem you don't think so is that obvious to you yes what is clear to me is that engineers and companies and labs grad students will continue to tune architectures and explore all kinds of tweaks to make the current state of the art ever slightly better but I don't think that's gonna be nearly enough I think we need some fairly drastic changes in the way that we're considering learning to achieve the goal that these learners actually understand in a deep way the environment in which they are you know observing and acting but I guess I was
trying to ask a question that is more interesting than just more layers it's basically once you figure out a way to learn through interacting how many parameters does it take to store that information so I think our brain is quite bigger than most neural networks right right oh I see what you mean oh I'm with you there so I agree that in order to build neural nets with the kind of broad knowledge of the world that typical adult humans have probably the kind of computing power we have now is going to be insufficient so well the good news is there are hardware companies building neural net chips and so it's gonna get better however the good news in a way which is also a bad news is that even our state-of-the-art deep learning methods fail to learn models that understand even very simple environments like some grid worlds that we have built even these fairly simple environments I mean of course if you train them with enough examples eventually they get it but it's just that instead of what humans might need just dozens of examples these things will need millions for very very simple tasks and so I think there's an opportunity for academics who don't have the kind of computing power that say Google has to do really important and exciting research to advance the state-of-the-art in training frameworks learning models agent learning in even simple environments that are synthetic that seem trivial but yet current machine learning fails on we've talked about priors and common-sense knowledge it seems like we humans take a lot of knowledge for granted so what's your view of these priors of forming this broad view of the world this accumulation of information and how we can teach neural networks or learning systems to pick that knowledge up so knowledge you know for a while in artificial intelligence maybe in the 80s there was a time of knowledge representation knowledge acquisition expert systems I mean the symbolic AI was a view
was an interesting problem set to solve and it was kind of put on hold a little bit it seems like because it doesn't work it doesn't work that's right but the goals of that remain important yes remain important so how do you think those goals can be addressed right so first of all I believe that one reason why the classical expert systems approach failed is because a lot of the knowledge we have so you talked about common sense intuition there's a lot of knowledge like this which is not consciously accessible there are lots of decisions we're taking that we can't really explain even if sometimes we make up a story and that knowledge is also necessary for machines to take good decisions and that knowledge is hard to codify in expert systems rule-based systems and you know classical AI formalisms and there are other issues of course with the old AI like not really good ways of handling uncertainty I would say something more subtle which we understand better now but I think still isn't enough in the minds of people there is something really powerful that comes from distributed representations the thing that really makes neural nets work so well and it's hard to replicate that kind of power in a symbolic world the knowledge in expert systems and so on is nicely decomposed into like a bunch of rules whereas if you think about a neural net it's the opposite you have this big blob of parameters which work intensely together to represent everything the network knows and it's not sufficiently factorized and so I think this is one of the weaknesses of current neural nets that we have to take lessons from classical AI in order to bring in another kind of compositionality which is common in language for example and in these rules but that isn't so native to neural nets and on that line of thinking disentangled representations yes so let me connect with disentangled representations if you don't mind yes exactly so for many years I've thought and I
still believe that it's really important that we come up with learning algorithms either unsupervised or supervised or reinforcement whatever that build representations in which the important factors hopefully causal factors are nicely separated and easy to pick up from the representation so that's the idea of disentangled representations it says transform the data into a space where everything becomes easy we can maybe just learn with linear models about the things we care about and I still think this is important but I think this is missing out on a very important ingredient which classical AI systems can remind us of so let's say we have these disentangled representations you still need to learn about the relationships between the variables those high-level semantic variables they're not going to be independent I mean this is like too much of an assumption they're gonna have some interesting relationships that allow us to predict things in the future to explain what happened in the past the kind of knowledge about those relationships in a classical AI system is encoded in the rules like a rule is just like a little piece of knowledge that says oh I have these two three four variables that are linked in this interesting way then I can say something about one or two of them given a couple of others right in addition to disentangling the elements of the representation which are like the variables in a rule-based system you also need to disentangle the mechanisms that relate those variables to each other so like the rules so the rules are neatly separated like each rule is you know living on its own and when I change a rule because I'm learning it doesn't need to break other rules whereas current neural nets for example are very sensitive to what's called catastrophic forgetting where after I've learned some things and then I learn new things they can destroy the old things that I had learned right if the knowledge was better factorized and
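The catastrophic forgetting described here can be demonstrated in a few lines. This is a hypothetical minimal setup invented for illustration: a single linear model trained on task A, then on a task B whose labels conflict with A's, with no protection against overwriting:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(sign, n=200):
    # task A (sign=+1): label 1 iff x0 > 0; task B (sign=-1) flips that rule
    x = rng.normal(size=(n, 2))
    y = (sign * x[:, 0] > 0).astype(float)
    return x, y

def train(w, x, y, lr=0.5, steps=200):
    # plain full-batch gradient descent on logistic loss
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x @ w)))
        w -= lr * x.T @ (p - y) / len(y)
    return w

def accuracy(w, x, y):
    return float(((x @ w > 0).astype(float) == y).mean())

xa, ya = make_task(+1)
xb, yb = make_task(-1)

w = train(np.zeros(2), xa, ya)
acc_A_before = accuracy(w, xa, ya)   # high: the model has learned task A
w = train(w, xb, yb)                 # continue training on task B only
acc_A_after = accuracy(w, xa, ya)    # collapses: B's updates overwrote A

print(acc_A_before, acc_A_after)
```

Because all the knowledge lives in one shared blob of parameters, learning B destroys A, which is exactly the lack of factorization the passage argues against.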
separated and disentangled then you would avoid a lot of that now you can't do this in the sensory domain at least not in pixel space but my idea is that when you project the data in the right semantic space it becomes possible to now represent this extra knowledge beyond the transformation from input to representations which is how representations act on each other and predict the future and so on in a way that can be neatly disentangled so now it's the rules that are disentangled from each other and not just the variables that are disentangled from each other and you draw a distinction between semantic space and pixel space does there need to be an architectural difference well yeah so there's the sensory space like pixels where everything is entangled the information like the variables are completely interdependent in very complicated ways and also computation like it's not just the variables it's also how they are related to each other that is all intertwined but I'm hypothesizing that in the right high-level representation space both the variables and how they relate to each other can be disentangled and that will provide a lot of generalization power generalization power yes the distribution of the test set is assumed to be the same as the distribution of the training set right this is where current machine learning is too weak it doesn't tell us anything it is not able to tell us anything about how we're let's say gonna generalize to a new distribution and you know people may think well there's nothing we can say if we don't know what the new distribution will be the truth is humans are able to generalize to new distributions how are we able to do that so yeah because these new distributions even though they could look very different from the training distributions they have things in common so let me give you a concrete example you read a science fiction novel the science fiction novel maybe you know brings you to some other
planet where things look very different on the surface but it's still the same laws of physics all right and so you can read the book and you understand what's going on so the distribution is very different but because you can transport a lot of the knowledge you had from Earth about the underlying cause and effect relationships and physical mechanisms and all that and maybe even social interactions you can now make sense of what is going on on this planet where like visually for example things are totally different taking that analogy further and distorting it let's enter a science fiction world of say 2001 A Space Odyssey with HAL yeah which is probably one of my favourite AI movies and then there's another one that a lot of people love that may be a little bit outside of the AI community which is Ex Machina right I don't know if you've seen it yes yes but what are your views on that movie there are things I like and things I hate so maybe you could talk about that in the context of a question I want to ask which is uh there's quite a large community of people from different backgrounds often outside of AI who are concerned about the existential threat of artificial intelligence right you've seen this community develop over time you have a perspective so what do you think is the best way to talk about AI safety to think about it to have discourse about it within the AI community and outside and grounded in the fact that Ex Machina is one of the main sources of information for the general public about AI so I think you're putting it right there's a big difference between the sort of discussion we ought to have within the AI community and the sort of discussion that really matters in the general public so I think the picture of Terminator and you know AI loose and killing people and superintelligence that's gonna destroy us whatever we try isn't really so useful for the public
discussion because for the public discussion the things I believe really matter are the short-term and medium-term very likely negative impacts of AI on society whether it's from security like you know Big Brother scenarios with face recognition or killer robots or the impact on the job market or concentration of power and discrimination all kinds of social issues some of which could actually really threaten democracy for example just to clarify when you said killer robots you mean autonomous weapons yes weapon systems yes I do not Terminator that's right so I think these short and medium-term concerns should be important parts of the public debate now existential risk for me is a very unlikely consideration but still worth academic investigation in the same way that you could say should we study what could happen if a meteorite you know came to earth and destroyed it so I think it's very unlikely that this is gonna happen or happen in a reasonable future the sort of scenario of an AI getting loose goes against my understanding of at least current machine learning and current neural nets and so on it's not plausible to me but of course I don't have a crystal ball and who knows what AI will be fifty years from now so I think it is worthwhile that scientists study those problems it's just not a pressing question as far as I'm concerned so before continuing down the line a few questions there but what do you like and not like about Ex Machina as a movie because I actually watched it for the second time and enjoyed it I hated it the first time and I enjoyed it quite a bit more the second time when I sort of learned to accept certain pieces of it and see it as a concept movie what was your experience what were your thoughts so the negative is the picture it paints of science is totally wrong science in general and AI in particular science is not happening in some hidden place by some you know really smart guy one
person this is totally unrealistic this is not how it happens even a team of people in some isolated place will not make it science moves by small steps thanks to the collaboration and community of a large number of people interacting and all the scientists who are expert in their field can know what is going on even in the industrial labs information flows and leaks and so on and the spirit of it is very different from the way science is painted in this movie yeah let me ask on that point it's been the case to this point that even if the research happens inside Google or Facebook inside companies it still kind of comes out yes absolutely do you think that will always be the case is it possible to bottle up ideas to the point where there's a set of breakthroughs that go completely undiscovered by the general research community do you think that's even possible it's possible but it's unlikely unlikely it's not how it is done now it's not how I can foresee it in the foreseeable future but of course I don't have a crystal ball and so who knows this is science fiction after all ominously the lights went off during that discussion so the problem again you know one thing is the movie and you could imagine all kinds of science fiction the problem for me maybe similar to the question about existential risk is that this kind of movie paints such a wrong picture of what is the actual science and how it's going on that it can have unfortunate effects on people's understanding of current science and so that's kind of sad there is an important principle in research which is diversity so in other words research is exploration in the space of ideas and different people will focus on different directions and this is not just good it's essential so I'm totally fine with people exploring directions that are contrary to mine or
look orthogonal to mine I am more than fine I think it's important I and my friends don't claim we have universal truth about what will happen especially about what will happen in the future now that being said we have our intuitions and then we act accordingly according to where we think we can be most useful and where society has the most to gain or to lose we should have those debates and not end up in a society where there's only one voice and one way of thinking and research money is spread out so disagreement is a sign of good research good science so yes the idea of bias in the human sense of bias yeah how do you think about instilling in machine learning something that's aligned with human values in terms of bias we intuitively as human beings have a concept of what bias means of what fundamental respect for other human beings means but how do we instill that into machine learning systems do you think so I think there are short-term things that are already happening and then there are long-term things that we need to do in the short term there are techniques that have been proposed and I think will continue to be improved and maybe alternatives will come up to take datasets in which we know there is bias we can measure it pretty much any data set where humans are you know being observed taking decisions will have some sort of bias discrimination against particular groups and so on and we can use machine learning techniques to try to build predictors classifiers that are going to be less biased we can do it for example using adversarial methods to make our systems less sensitive to these variables we should not be sensitive to so these are clear well-defined ways of trying to address the problem maybe they have weaknesses and you know more research is needed and so on but I think in fact they are sufficiently mature that governments should start regulating companies where it matters say like insurance companies so that they use those techniques
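The debiasing idea mentioned here can be sketched. Full adversarial debiasing trains the predictor against an adversary that tries to recover the protected attribute from the representation; as a simpler linear stand-in for the same idea, the sketch below projects out of the features the direction most predictive of a synthetic, invented protected attribute, then checks how much a linear probe can still recover. Everything here is illustrative, not a production method:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
protected = rng.integers(0, 2, n)       # hypothetical protected attribute
x = rng.normal(size=(n, 3))
x[:, 0] += 2.0 * protected              # feature 0 leaks the attribute

z = protected - protected.mean()        # centered attribute

# direction in feature space most correlated with the attribute
d = x.T @ z
d = d / np.linalg.norm(d)

# remove that component from every example
x_debiased = x - np.outer(x @ d, d)

def probe_accuracy(features):
    # how well a linear probe can still recover the protected attribute
    w, *_ = np.linalg.lstsq(features, z, rcond=None)
    pred = (features @ w > 0).astype(int)
    return float((pred == protected).mean())

print(probe_accuracy(x), probe_accuracy(x_debiased))
```

The probe recovers the attribute easily from the raw features and drops to near chance after the projection, which is the "less sensitive to these variables" property at the cost of discarding some predictive signal.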
because those techniques will reduce the bias but at a cost for example maybe their predictions will be less accurate and so companies will not do it until you force them all right so this is short term long term I'm really interested in thinking of how we can instill moral values into computers obviously this is not something we'll achieve in the next five or ten years how can we well you know there's already work in detecting emotions for example in images and sounds and texts and also studying how different agents interacting in different ways may correspond to patterns of say injustice which could trigger anger so these are things we can do in the medium term and eventually train computers to model for example how humans react emotionally I would say the simplest thing is unfair situations which trigger anger this is one of the most basic emotions that we share with other animals I think it's quite feasible within the next few years that we can build systems that can detect these kinds of things to the extent unfortunately that they understand enough about the world around us which is a long time away but maybe we can initially do this in virtual environments so you can imagine like a video game where agents interact in some ways and then some situations trigger an emotion I think we could train machines to detect those situations and predict that the particular emotion you know will likely be felt if a human was playing one of the characters you have shown excitement and done a lot of excellent work with unsupervised learning but you know there's been a lot of success on the supervised learning side yes yes and one of the things I'm really passionate about is how humans and robots work together and in the context of supervised learning that means the process of annotation do you think about the problem of annotation or put in a more interesting way humans teaching machines yes I think it's an important subject reducing it to annotation may
be useful for somebody building a system tomorrow but longer-term the process of teaching I think is something that deserves a lot more attention from the machine learning community so there are people who have coined the term machine teaching so what are good strategies for teaching a learning agent and can we design or train a system that is gonna be a good teacher so in my group we have a project called BabyAI or the BabyAI game where there is a game or scenario where there's a learning agent and a teaching agent presumably the teaching agent would eventually be a human but we're not there yet and the role of the teacher is to use its knowledge of the environment which it can acquire using whatever way brute force to help the learner learn as quickly as possible so the learner is going to try to learn by itself maybe using some exploration and whatever but the teacher can have an influence on the interaction with the learner so as to guide the learner maybe teach it the things that the learner has most trouble with or just at the boundary between what it knows and doesn't know and so on so there's a tradition of these kinds of ideas from other fields like tutoring systems for example in AI and of course people in the humanities have been thinking about these questions but I think it's time that machine learning people look at this because in the future we'll have more and more human machine interaction with a human in the loop and I think understanding how to make this work better all the problems around that are very interesting and not sufficiently addressed you've done a lot of work with language too what aspect of the traditionally formulated Turing test a test of natural language understanding and generation in your eyes is the hardest part of conversation to solve for machines so I would say it's everything having to do with the non-linguistic knowledge
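The teacher and learner roles described here can be sketched in a toy form. BabyAI is the real project, but everything in the code below is an invented illustration, not its actual implementation: a teacher who knows the target concept picks, at each round, the example the learner currently gets most wrong, which is one concrete version of "teach at the boundary of what it knows":

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])               # target concept, known to the teacher
pool = rng.normal(size=(500, 2))
labels = (pool @ true_w > 0).astype(float)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def teach(select, rounds=100, lr=1.0):
    # learner: logistic regression updated one teacher-chosen example at a time
    w = np.zeros(2)
    for _ in range(rounds):
        p = sigmoid(pool @ w)                # learner's current predictions
        i = select(p)                        # teacher picks the next example
        w = w + lr * (labels[i] - p[i]) * pool[i]
    return float(((pool @ w > 0) == (labels > 0.5)).mean())

# teacher strategy: show the example the learner is currently most wrong about
acc_taught = teach(lambda p: int(np.argmax(np.abs(p - labels))))
# baseline teacher: random examples
acc_random = teach(lambda p: int(rng.integers(len(pool))))
print(acc_taught, acc_random)
```

The interesting research questions are exactly which `select` strategies help a given learner most, and whether a teacher policy can itself be learned.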
which implicitly you need in order to make sense of sentences things like the Winograd schemas so these sentences that are semantically ambiguous in other words you need to understand enough about the world in order to really interpret properly those sentences I think these are interesting challenges for machine learning because they point in the direction of building systems that both understand how the world works and its causal relationships in the world and associate that knowledge with how to express it in language either for reading or writing you speak French yes it's my mother tongue it's one of the Romance languages do you think passing the Turing test and all the underlying challenges we just mentioned depend on language do you think it might be easier in French than it is in English or is it independent of language I think it's independent of language I would like to build systems that can use the same principles the same learning mechanisms to learn from human agents whatever their language well certainly us humans can talk more beautifully and smoothly in poetry I'm Russian originally I know poetry in Russian is maybe easier to convey complex ideas in than it is in English but maybe I'm showing my bias and some people could say that about French but of course the goal ultimately is our human brain is able to utilize any of those languages to use them as tools to convey meaning you know of course there are differences between languages and maybe some are slightly better at some things but in the grand scheme of things where we're trying to understand how the brain works and language and so on I think these differences are minute so you've lived perhaps through an AI winter of sorts yes how did you stay warm and continue your research stay warm with friends with friends okay so it's important to have friends and what have you learned from the experience listen to your inner voice don't you know be trying to just please
the crowds and the fashion and if you have a strong intuition about something that is not contradicted by actual evidence go for it I mean it could be contradicted by people but not by your own instincts based on everything you know of course you have to adapt your beliefs when your experiments contradict those beliefs but you have to stick to your beliefs otherwise it's what allowed me to go through those years it's what allowed me to persist in directions that you know took time whatever other people think took time to mature and bear fruit so the history of AI is marked of course with technical breakthroughs but it's also marked with these seminal events that capture the imagination of the community most recently I would say AlphaGo beating the world champion human Go player was one of those moments what do you think the next such moment might be so first of all I think that these so-called seminal events are overrated as I said science really moves by small steps now what happens is you make one more small step and it's like the drop that you know fills the bucket and then you have drastic consequences because now you're able to do something you were not able to do before or now say the cost of building some device or solving a problem becomes cheaper than what existed and you have a new market that opens up right so especially in the world of commerce and applications the impact of a small scientific progress could be huge but in the science itself I think it's very very gradual and where are these steps being taken now so if I look at one trend that I like in my community so for example at Mila my institute what are the two hottest topics GANs and reinforcement learning even though in Montreal in particular reinforcement learning was something pretty much absent just two or three years ago so there is really a big interest from students
and there's a big interest from people like me so I would say this is something where we're gonna see more progress even though it hasn't yet provided much in terms of actual industrial fallout like even though there's AlphaGo Google is not making money on this right now but I think over the long term this is really really important for many reasons so in other words I would say reinforcement learning or maybe more generally agent learning because it doesn't have to be with rewards it could be in all kinds of ways that an agent is learning about its environment now reinforcement learning you're excited about do you think GANs could provide some of those moments well GANs or other generative models I believe will be crucial ingredients in building agents that can understand the world a lot of the successes in reinforcement learning in the past have been with policy gradient where you just learn a policy you don't actually learn a model of the world but there are lots of issues with that and we don't know how to do model-based RL right now but I think this is where we have to go in order to build models that can generalize faster and better like to new distributions that capture to some extent at least the underlying causal mechanisms in the world last question what made you fall in love with artificial intelligence if you look back what was the first moment in your life when you were fascinated by either the human mind or the artificial mind you know when I was an adolescent I was reading a lot and then I started reading science fiction there you go that's where I got hooked and then you know I had one of the first personal computers and I got hooked on programming and so it just you know started with fiction and then you make it a reality that's right Yoshua thank you so much for talking my pleasure thank you
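The policy-gradient point made here, learning a policy directly without ever learning a model of the world, can be illustrated with REINFORCE on a two-armed bandit. The arm reward probabilities, learning rate, and baseline scheme are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
MEANS = [0.2, 0.8]                 # Bernoulli reward probability of each arm

theta = np.zeros(2)                # policy parameters: one logit per arm
lr = 0.1
baseline = 0.0                     # running-average reward baseline
for t in range(2000):
    logits = theta - theta.max()
    p = np.exp(logits) / np.exp(logits).sum()   # softmax policy
    a = rng.choice(2, p=p)                      # sample an action from the policy
    r = float(rng.random() < MEANS[a])          # environment: Bernoulli reward
    grad = -p                                    # gradient of log pi(a | theta)
    grad[a] += 1.0
    theta += lr * (r - baseline) * grad          # REINFORCE update
    baseline += 0.01 * (r - baseline)

final_p = np.exp(theta - theta.max()) / np.exp(theta - theta.max()).sum()
print(final_p)
```

Note what the agent never builds: any estimate of `MEANS` itself. It only adjusts the policy, which is exactly the contrast with the model-based RL direction being advocated.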
Steven Pinker: AI in the Age of Reason | Lex Fridman Podcast #3
you've studied the human mind cognition language vision evolution psychology from child to adult from the level of individual to the level of our entire civilization so I feel like I can start with a simple multiple-choice question what is the meaning of life is it A to attain knowledge as Plato said B to attain power as Nietzsche said C to escape death as Ernest Becker said D to propagate our genes as Darwin and others have said E there is no meaning as the nihilists have said F knowing the meaning of life is beyond our cognitive capabilities as Steven Pinker said based on my interpretation twenty years ago and G none of the above I'd say A comes closest but I would amend that to attaining not only knowledge but fulfillment more generally that is life health stimulation access to the living cultural and social world now this is our meaning of life it's not the meaning of life if you were to ask our genes their meaning is to propagate copies of themselves but that is distinct from the meaning that the brain that they lead to sets for itself so to you knowledge is a small subset or a large subset it's a large subset but it's not the entirety of human striving because we also want to interact with people we want to experience beauty we want to experience the richness of the natural world but understanding what makes the universe tick is way up there for some of us more than others certainly for me that's one of the top five so is that a fundamental aspect are you just describing your own preference or is this a fundamental aspect of human nature to seek knowledge in your latest book you talk about the power the usefulness of rationality and reason and so on is that the fundamental nature of human beings or is it something we should just strive for it's both we're capable of striving for it because it is one of the things that make us what we are Homo sapiens wise man we are unusual among animals in the degree to which we acquire
knowledge and use it to survive we make tools we strike agreements via language we extract poisons we predict the behavior of animals we try to get at the workings of plants and when I say we I don't just mean we in the modern West but we as a species everywhere which is how we've managed to occupy every niche on the planet and how we've managed to drive other animals to extinction and the refinement of reason in pursuit of human well-being of health happiness social richness cultural richness is our main challenge in the present that is using our intellect using our knowledge to figure out how the world works how we work in order to make discoveries and strike agreements that make us all better off in the long run right and you do that almost undeniably and in a data-driven way in your recent book but I'd like to focus on the artificial intelligence aspect of things and not just artificial intelligence but natural intelligence too so twenty years ago in the book you've written How the Mind Works you conjectured am I right to interpret things you can correct me if I'm wrong but you conjectured that human thought in the brain may be a result of a massive network of highly interconnected neurons so from this interconnectivity emerges thought compared to artificial neural networks which we use for machine learning today is there something fundamentally more complex mysterious even magical about the biological neural networks versus the ones we've been starting to use over the past 60 years and that have become a success in the past 10 there is something a little bit mysterious about the human neural networks which is that each one of us who is a neural network knows that we ourselves are conscious conscious not just in the sense of registering our surroundings or even registering our internal state but in having subjective first-person present-tense experience that is when I see red it's not just different from green but there's just there's a redness
to it I feel whether an artificial system would experience that or not I don't know and I don't think I can know that's why it's mysterious if we had a perfectly lifelike robot that was behaviorally indistinguishable from a human would we attribute consciousness to it or ought we to attribute consciousness to it and that's something that's very hard to know but putting aside that largely philosophical question the question is is there some difference between the human neural network and the ones that we're building in artificial intelligence that will mean that on the current trajectory we're not going to reach the point where we've got a lifelike robot indistinguishable from a human because the way their so-called neural networks are organized is different from the way ours are organized there's overlap but I think there are some big differences the current neural networks the current so-called deep learning systems are in reality not all that deep that is they are very good at extracting high-order statistical regularities but most of the systems don't have a semantic level a level of actual understanding of who did what to whom why where how things work what causes what do you think that kind of thing can emerge as it does artificial networks are you know so much smaller in the number of connections and so on than the current human biological networks but do you think sort of to get to consciousness or to get to this higher-level semantic reasoning about things do you think that can emerge with just a larger network with a more richly weirdly interconnected network separating out consciousness because consciousness is even a matter of complexity a really good question yeah you could sensibly ask the question of whether shrimp are conscious for example they're not terribly complex but maybe they feel pain so let's just put that part of it aside but I think sheer size of a neural network is not enough to
give it structure and knowledge but if it's suitably engineered then why not that is with our neural networks natural selection did a kind of equivalent of engineering of our brains so I don't think there's anything mysterious in the sense that no system made out of silicon could ever do what a human brain can do I think it's possible in principle whether it'll ever happen depends not only on how clever we are in engineering these systems but whether we even want to whether that's even a sensible goal that is you can ask the question is there any locomotion system that is as good as a human well we kind of want to do better than a human ultimately in terms of legged locomotion there's no reason that humans should be our benchmark there are tools that might be better in some ways it may just be not that we can't duplicate a natural system but that at some point it's so much cheaper to use a natural system that we're not going to invest more brainpower and resources so for example we don't really have an exact substitute for wood we still build houses out of wood we still make furniture out of wood we like the look we like the feel wood has certain properties that synthetics don't it's not that there's anything magical or mysterious about wood it's just that the extra steps of duplicating everything about wood is something we just haven't bothered with because we have wood likewise cotton I mean I'm wearing cotton clothing now it feels much better than polyester it's not that cotton has something magic in it and it's not that we couldn't ever synthesize something exactly like cotton but at some point it's just not worth it we've got cotton and likewise in the case of human intelligence the goal of making an artificial system that is exactly like the human brain is a goal that no one's gonna pursue to the bitter end I suspect because if you want tools that do things better than humans you're not going to
care whether it does something like humans so for example if you're diagnosing cancer in particular why set humans as your benchmark but in general I suspect you also believe that even if the human should not be a benchmark and we don't want to imitate humans in the system there's a lot to be learned about how to create an artificial intelligence system by studying the human yeah I think that's right in the same way that to build flying machines we want to understand the laws of aerodynamics including birds but not mimic the birds right but the same laws you have a view on AI artificial intelligence and safety that from my perspective is refreshingly rational or perhaps more importantly has elements of positivity to it which I think can be inspiring and empowering as opposed to paralyzing for many people including AI researchers the eventual existential threat of AI is obvious not only possible but obvious and for many others including AI researchers the threat is not obvious so Elon Musk is famously in the highly-concerned-about-AI camp saying things like AI is far more dangerous than nuclear weapons and that AI will likely destroy human civilization so in February you said that if Elon was really serious about the threat of AI he would stop building self-driving cars which he's doing very successfully as part of Tesla then he said wow if even Pinker doesn't understand the difference between narrow AI like a car and general AI when the latter literally has a million times more compute power and an open-ended utility function humanity is in deep trouble so first what did you mean by the statement that Elon Musk should stop building self-driving cars if he's deeply concerned well it's not the last time that Elon Musk has fired off an intemperate tweet well we live in a world where Twitter has power yes yeah I think that there are two kinds of existential threat that have been discussed in connection with artificial intelligence and I
think that they're both incoherent one of them is a vague fear of AI takeover that just as we subjugated animals and less technologically advanced peoples so if we build something that's more advanced than us it will inevitably turn us into pets or slaves or domesticated animal equivalents I think this confuses intelligence with a will to power it so happens that in the intelligent system we are most familiar with namely Homo sapiens we are products of natural selection which is a competitive process and so bundled together with our problem-solving capacity are a number of nasty traits like dominance and exploitation and maximization of power and glory and resources and influence there's no reason to think that sheer problem-solving capability will set that as one of its goals its goals will be whatever we set its goals to be and as long as someone isn't building a megalomaniacal artificial intelligence there's no reason to think that it would naturally evolve in that direction now you might say well what if we gave it the goal of maximizing its own power source well that's a pretty stupid goal to give an autonomous system you don't give it that goal I mean that's just self-evidently idiotic so if you look at the history of the world there's been a lot of opportunities where engineers could instill in a system destructive power and they choose not to because that's the natural process of engineering well weapons I mean if you're building a weapon its goal is to destroy people and so I think there are good reasons to not build certain kinds of weapons I think building nuclear weapons was a massive mistake probably do you think so maybe pause on that because that is one of the serious threats do you think that it was a mistake in the sense that it should have been stopped early on or do you think it's just an unfortunate event of invention that this was invented do you think it's possible to stop I guess is the question it's hard to rewind
the clock because of course it was invented in the context of World War II and the fear that the Nazis might develop one first then once it was initiated for that reason it was hard to turn off especially since winning the war against the Japanese and the Nazis was such an overwhelming goal of every responsible person that there's just nothing that people wouldn't have done then to ensure victory it's quite possible if World War II hadn't happened that nuclear weapons wouldn't have been invented we can't know but I don't think it was by any means a necessity any more than some of the other weapon systems that were envisioned but never implemented like planes that would disperse poison gas over cities like crop dusters or systems to try to create earthquakes and tsunamis in enemy countries to weaponize the weather weaponize solar flares all kinds of crazy schemes that we thought the better of I think analogies between nuclear weapons and artificial intelligence are fundamentally misguided because the whole point of nuclear weapons is to destroy things the point of artificial intelligence is not to destroy things so the analogy is misleading so there's two fears of artificial intelligence you mentioned the first one was the intelligent power-hungry takeover yeah the system that we design ourselves where we give it the goals goals are external to the means to attain the goals if we don't design an artificial intelligence system to maximize dominance then it won't maximize dominance it's just that we're so familiar with Homo sapiens where these two traits come bundled together particularly in men that we are apt to confuse high intelligence with a will to power but that's just an error the other fear is that we'll be collateral damage that we'll give artificial intelligence a goal like make paperclips and it will pursue that goal so brilliantly that before we can stop it it turns us into paperclips we'll give it the goal of curing cancer and it will turn us
into guinea pigs for lethal experiments or give it the goal of world peace and its conception of world peace is no people therefore no fighting and so it'll kill us all now I think these are utterly fanciful in fact I think they're actually self-defeating they first of all assume that we're going to be so brilliant that we can design an artificial intelligence that can cure cancer but so stupid that we don't specify what we mean by curing cancer in enough detail that it won't kill us in the process and it assumes that the system will be so smart that it can cure cancer but so idiotic that it can't figure out that what we mean by curing cancer is not killing everyone so I think that the collateral damage scenario the value alignment problem is also based on a misconception so one of the challenges of course is we don't know how to build either system currently or aren't even close to knowing of course those things can change overnight but at this time theorizing about it is very challenging in either direction so that's probably at the core of it the problem is without the ability to reason about the real engineering at hand your imagination runs away with things exactly but let me sort of ask what do you think was the motivation the thought process of Elon Musk so I build autonomous vehicles I study autonomous vehicles I studied Tesla Autopilot I think it is one of the greatest current large-scale applications of artificial intelligence in the world it has potentially a very positive impact on society so how does a person who's creating this very good quote unquote narrow AI system also seem to be so concerned about this other general AI what do you think is the motivation there what do you think is the thing really you probably have to ask him but he is notoriously flamboyant impulsive to the as we have just seen to the detriment of his own goals and the health of his company so I don't know what's going on in his
mind you probably have to ask him but I don't think the distinction between special-purpose AI and so-called general AI is relevant in the same way that special-purpose AI is not going to do anything conceivable in order to attain a goal all engineering systems are designed to trade off across multiple goals when we built cars in the first place we didn't forget to install brakes because the goal of a car is to go fast it occurred to people yes you want to go fast but not always so you build in brakes too likewise if a car is going to be autonomous and we program it to take the shortest route to the airport it's not going to take the diagonal and mow down people and trees and fences because that's the shortest route that's not what we mean by the shortest route when we program it and that's just what an intelligent system is by definition it takes into account multiple constraints the same is true in fact even more true of so-called general intelligence that is if it's genuinely intelligent it's not going to pursue some goal single-mindedly omitting every other consideration and collateral effect that's not artificial general intelligence that's artificial stupidity I agree with you by the way on the promise of autonomous vehicles for improving human welfare I think it's spectacular and I'm surprised at how little press coverage notes that in the United States alone something like 40,000 people die every year on the highways vastly more than are killed by terrorists and we spent a trillion dollars on a war to combat deaths by terrorism which kills about half a dozen a year whereas year in year out 40,000 people are massacred on the highways which could be brought down to very close to zero so I'm with you on the humanitarian benefit let me just mention that as a person who's building these cars it is a little bit offensive to me to say that engineers would be clueless enough not to engineer safety into
systems I often stay up at night thinking about those 40,000 people that are dying and everything I try to engineer is to save those people's lives so every new invention that I'm super excited about in all the deep learning literature and CVPR conferences and NIPS everything I'm super excited about is all grounded in making things safe and helping people so I just don't see how that trajectory can all of a sudden slip into a situation where intelligence will be highly negative you and I certainly agree on that and I think that's only the beginning of the potential humanitarian benefits of artificial intelligence there's been enormous attention to what are we going to do with the people whose jobs are made obsolete by artificial intelligence but very little attention given to the fact that the jobs that are being made obsolete are horrible jobs the fact that people aren't going to be picking crops and making beds and driving trucks and mining coal these are you know soul-deadening jobs and we have a whole literature sympathizing with the people stuck in these menial mind-deadening dangerous jobs if we can eliminate them this is a fantastic boon to humanity now granted you solve one problem and there's another one namely how do we get these people a decent income but if we're smart enough to invent machines that can make beds and put away dishes and handle hospital patients well I think we're smart enough to figure out how to redistribute income to apportion some of the vast economic savings to the human beings who will no longer be needed to make beds okay Sam Harris says that it's obvious that eventually AI will be an existential risk he's one of the people who says it's obvious we don't know when the claim goes but eventually it's obvious and because we don't know when we should worry about it now this is a very interesting argument in my eyes so how do we think about time scale how do we think about existential threats when we
know so little about the threat unlike nuclear weapons perhaps about this particular threat it could happen tomorrow right but very likely won't it's likely to be a hundred years away so do we ignore it how do we talk about it do we worry about it how do we think about those it's a threat that we can imagine it's within the limits of our imagination but not within our limits of understanding sufficient to accurately predict it but what is the existential threat here AI being the existential threat AI in what way like enslaving us or turning us into paperclips I think the most compelling one from the Sam Harris perspective would be the paperclip situation yeah I mean I just think it's totally fanciful I mean don't build such a system don't give it that goal first of all the code of engineering is you don't implement a system with massive control before testing it now perhaps the culture of engineering will radically change then I would worry but I don't see any signs that engineers will suddenly do idiotic things like put an electrical power plant in control of a system that they haven't tested first also all of these scenarios not only imagine an almost magically powered intelligence you know including things like cure cancer which is probably an incoherent goal because there are so many different kinds of cancer or bring about world peace I mean how do you even specify that as a goal but the scenarios also imagine some degree of control of every molecule in the universe which not only is itself unlikely but we would not start to connect these systems to infrastructure without testing as we would any kind of engineering system now maybe some engineers will be irresponsible and we need legal and regulatory responsibilities implemented so that engineers don't do things that are stupid by their own standards but I've never seen enough of a plausible scenario of existential threat to devote large
amounts of brain power to forestall it so you believe in the sort of power of engineering and of reason as you argue in this book of reason and science to be the very thing that guides the development of new technology so it's safe and also keeps us safe yeah you know granted the same culture of safety that currently is part of the engineering mindset for airplanes for example so yeah I don't think that should be thrown out the window and that an untested all-powerful system should be suddenly implemented but there's no reason to think they will be and in fact if you look at the progress of artificial intelligence it's been you know impressive especially in the last ten years or so but the idea that suddenly there'll be a step function that all of a sudden before we know it it will be all-powerful that there'll be some kind of recursive self-improvement some kind of foom is also fanciful certainly by the technology that now impresses us such as deep learning where you train something on hundreds of thousands or millions of examples there aren't hundreds of thousands of problems of which curing cancer is a typical example and so the kind of techniques that have allowed AI to improve in the last five years are not the kind that are going to lead to this fantasy of exponential sudden self-improvement so I think it's kind of magical thinking it's not based on our understanding of how AI actually works now give me a chance here so you said fanciful magical thinking in his TED talk Sam Harris says that thinking about AI killing all human civilization is somehow fun intellectually now I have to say as a scientist engineer I don't find it fun but when I'm having a beer with my non-AI friends there is indeed something fun and appealing about it like talking about an episode of Black Mirror considering if a large meteor is headed towards Earth like we were just told a large meteor is headed towards Earth
something like this and can you relate to this sense of fun and do you understand the psychology of it yeah that's a good question I personally don't find it fun I find it kind of actually a waste of time because there are genuine threats that we ought to be thinking about like pandemics like cybersecurity vulnerabilities like the possibility of nuclear war and certainly climate change this is enough to fill many conversations and I think Sam did put his finger on something namely that there is a community sometimes called the rationality community that delights in using its brain power to come up with scenarios that would not occur to mere mortals to less cerebral people so there is a kind of intellectual thrill in finding new things to worry about that no one has worried about yet I actually think though that not only is it a kind of fun that doesn't give me particular pleasure but I think there can be a pernicious side to it namely that you overwhelm people with such dread such fatalism that there are so many ways to die to annihilate our civilization that we may as well enjoy life while we can there's nothing we can do about it if climate change doesn't do us in then runaway robots will so let's enjoy ourselves now we've got to prioritize we have to look at threats that are close to certainty such as climate change and distinguish those from ones that are merely imaginable but with infinitesimal probabilities and we have to take into account people's worry budget you can't worry about everything and if you sow dread and fear and terror and fatalism it can lead to a kind of numbness well these problems are just overwhelming and the engineers are just gonna kill us all so let's either destroy the entire infrastructure of science and technology or let's just enjoy life while we can so there's a certain line of worry I'm worried about a lot of things in engineering there's a certain line of
worry when you cross it it becomes paralyzing fear as opposed to productive fear and that's kind of what you're highlighting there exactly right and we know that human effort is not well calibrated against risk because a basic tenet of cognitive psychology is that perception of risk and hence perception of fear is driven by imaginability not by data and so we misallocate vast amounts of resources to avoiding terrorism which kills on average about six Americans a year with the one exception of 9/11 we invade countries we invent entire new departments of government with massive expenditure of resources and lives to defend ourselves against a trivial risk whereas guaranteed risks and you mentioned traffic fatalities as one of them and even risks that are not here but are plausible enough to worry about like pandemics like nuclear war receive far too little attention in presidential debates there's no discussion of how to minimize the risk of nuclear war lots of discussion of terrorism for example and so I think it's essential to calibrate our budget of fear worry concern planning to the actual probability of harm yep so let me ask this question then so speaking of imaginability you said it's important to think about reason and one of my favorite people who likes to dip into the outskirts of reason through fascinating exploration of his imagination is Joe Rogan oh yes so who used to believe a lot of conspiracies and through reason has stripped away a lot of his beliefs in that way so it's fascinating actually to watch him through rationality kind of throw away the ideas of Bigfoot and 9/11 conspiracies I'm not sure exactly I don't know what he believes in yet but he no longer believes in those that's right he's become a real force for good yeah so you were on the Joe Rogan podcast in February and had a fascinating conversation but as far as I
remember didn't talk much about artificial intelligence I will be on his podcast in a couple weeks Joe is very much concerned about the existential threat of AI I'm not sure if you're aware of this this is why I was hoping that you would get into that topic and in this way he represents quite a lot of people who look at the topic of AI from a 10,000 foot level so as an exercise of communication you said it's important to be rational and reason about these things let me ask if you were to coach me as an AI researcher about how to speak to Joe and the general public about AI what would you advise well the short answer would be to read the sections that I wrote in Enlightenment Now about AI but a longer answer would be I think to emphasize and I think you're very well positioned as an engineer to remind people about the culture of engineering that it really is safety oriented that in another discussion in Enlightenment Now I plot rates of accidental death from various causes plane crashes car crashes occupational accidents even death by lightning strikes and they all plummet because the culture of engineering is how do you squeeze out the lethal risks death by fire death by drowning death by asphyxiation all of them drastically declined because of advances in engineering and I've got to say I did not appreciate that until I saw those graphs and it is exactly because of people like you who stay up at night thinking oh my god is what I'm inventing likely to hurt people and deploy ingenuity to prevent that from happening now I'm not an engineer although I spent 22 years at MIT so I know something about the culture of engineering my understanding is that this is the way you think if you're an engineer and it's essential that that culture not be suddenly switched off when it comes to artificial intelligence so in fact that could be a problem but is there any reason to think it would be switched off I don't think so and one problem is there's not enough engineers
speaking up for this for the excitement for the positive view of human nature for what you're trying to create the positivity like everything we try to invent is trying to do good for the world but let me ask you about the psychology of negativity it seems just objectively not considering the topic it seems that being negative about the future makes you sound smarter than being positive about the future regardless of topic am I correct in that observation and if so why do you think that is yeah I think there is that phenomenon that as Tom Lehrer the satirist said always predict the worst and you'll be hailed as a prophet it may be part of our overall negativity bias we are as a species more attuned to the negative than the positive we dread losses more than we enjoy gains and that might open up a space for prophets to remind us of harms and risks and losses that we may have overlooked so I think there is that asymmetry so you've written some of my favorite books all over the place so starting from Enlightenment Now to The Better Angels of Our Nature The Blank Slate How the Mind Works the one about language The Language Instinct Bill Gates big fan too said of your most recent book that it's my new favorite book of all time so for you as an author what was the book early on in your life that had a profound impact on the way you saw the world certainly this book Enlightenment Now is influenced by David Deutsch's The Beginning of Infinity a rather deep reflection on knowledge and the power of knowledge to improve the human condition with bits of wisdom such as that problems are inevitable but problems are solvable given the knowledge and that solutions create new problems that have to be solved in their turn that's I think a kind of wisdom about the human condition that influenced the writing of this book there are some books that are excellent but obscure some of which I have on a page of my website I read a book called
A History of Force self-published by a political scientist named James Payne on the historical decline of violence and that was one of the inspirations for The Better Angels of Our Nature what about early on if we look back when you were maybe a teenager I loved a book called One Two Three Infinity when I was a young adult I read that book by George Gamow the physicist very accessible and humorous explanations of relativity of number theory of dimensionality of higher-dimensional spaces in a way that I think is still delightful seventy years after it was published I liked the Time-Life Science series these were books that would arrive every month that my mother subscribed to each one on a different topic one would be on electricity one would be on forests one would be on evolution and then one was on the mind and I was just intrigued that there could be a science of mind and that book I would cite as an influence as well then later on you fell in love with the idea of studying the mind was that one of the things that grabbed you it was one of the things I would say I read as a college student the book Reflections on Language by Noam Chomsky who spent most of his career here at MIT Richard Dawkins' two books The Blind Watchmaker and The Selfish Gene were enormously influential mainly for the content but also for the writing style the ability to explain abstract concepts in lively prose Stephen Jay Gould's first collection Ever Since Darwin also an excellent example of lively writing George Miller a psychologist that most psychologists are familiar with came up with the idea that human memory has a capacity of seven plus or minus two chunks that's his biggest claim to fame but he wrote a couple of books on language and communication that I read as an undergraduate again beautifully written and intellectually deep wonderful Steven thank you so much for taking the time today my pleasure thanks a lot Lex
Christof Koch: Consciousness | Lex Fridman Podcast #2
as part of MIT course 6.S099 on artificial general intelligence I got a chance to sit down with Christof Koch who's one of the seminal figures in neurobiology in neuroscience and generally in the study of consciousness he is the president and chief scientific officer of the Allen Institute for Brain Science in Seattle from 1986 to 2013 he was a professor at Caltech and before that he was at MIT he is extremely well cited over a hundred thousand citations his research his writing his ideas have had big impact on the scientific community and the general public in the way we think about consciousness and the way we see ourselves as human beings he's the author of several books The Quest for Consciousness a neurobiological approach and a more recent book Consciousness confessions of a romantic reductionist if you enjoy this conversation and this course subscribe click the little bell icon to make sure you never miss a video and in the comments leave suggestions for any people you'd like to see be part of the course or any ideas that you would like us to explore thanks very much I hope you enjoy it okay before we delve into the beautiful mysteries of consciousness let's zoom out a little bit and let me ask do you think there's intelligent life out there in the universe yes I do believe so we have no evidence of it but I think the probabilities are overwhelmingly in favor of it given a universe where we have 10 to the 11 galaxies and each galaxy has between 10 to the 11 and 10 to the 12 stars and we know most stars have one or more planets so how does that make you feel it still makes me feel special because I have experiences I feel the world I experience the world and independent of whether there are other creatures out there I still feel the world and I have access to this world in this very strange compelling way and that's the core of human existence now you said human do you think if those intelligent creatures are out there do you think they experience their
world yes if they evolved if they are a product of natural evolution as they would have to be they will also experience their own world so consciousness isn't just a human thing it's much wider it may be spread across all of biology the only thing that we have that's special is we can talk about it of course not all people can talk about it babies and little children can't talk about it patients who have a stroke in let's say the left inferior frontal gyrus can't talk about it but most normal adult people can talk about it and so we think that makes us special compared to monkeys or dogs or cats or mice or all the other creatures that we share the planet with but all the evidence seems to suggest that they too experience the world and so it's overwhelmingly likely that aliens would also experience their world of course differently because they have a different sensorium they have different sensors and a very different environment but I would strongly suppose that they also have experiences of pain and pleasure and seeing in some sort of spectrum and hearing and all the other senses of course their language if they have one would be different so we might not be able to understand their poetry about the experiences that they have that's correct right so in a talk in a video I've heard you mention a dachshund that you grew up with as part of your family when you were young first of all you're technically a Midwestern boy yes technically yes after that you traveled around a bit hence a little bit of the accent you talked about the dachshund having these elements of humanness of consciousness that you discovered so I just wanted to ask can you look back at your childhood and remember what was the first time you realized you yourself sort of from a third-person perspective are a conscious being this idea of you know stepping outside yourself and seeing well
there's something special going on here in my brain I can't really actually it's a good question I'm not sure I recall a discrete moment I mean you take it for granted because that's the only world you know the only world I know is the world of seeing and hearing voices and touching and all the other things so it was only much later early in my undergrad days when I enrolled in physics and in philosophy that I really thought about it and thought well this is really fundamentally very very mysterious and there's nothing really in physics right now that explains this transition from the physics of the brain to feelings where do the feelings come in you can look at the foundational equations of quantum mechanics general relativity you can look at the periodic table of the elements you can look at the endless ATGC chat of our genes and nowhere is consciousness yet I wake up every morning to a world where I have experiences and so that's the heart of the ancient mind-body problem how do experiences get into the world so what is consciousness consciousness is any experience some people call it subjective feeling some people call it phenomenology some people call it qualia in philosophy they all denote the same thing it feels like something in the famous words of the philosopher Thomas Nagel it feels like something to be a bat or to be you know an American or to be angry or to be sad or to be in love or to have pain and that is what experience is any possible experience it could be as mundane as just sitting in a chair it could be as exalted as you know having a mystical moment in deep meditation those are just different forms of experience so if you were to sit down with maybe the next skip a couple generations of IBM Watson something that won Jeopardy what is the gap I guess the question is between Watson that might be much smarter than you than us than any human
alive but may not have experience what is the gap well so that's a big big question that's occupied people for certainly the last 50 years since you know the birth of computers that's a question Alan Turing tried to answer and of course he did it in this indirect way by proposing a test an operational test but that's not really you know he tried to get at what does it mean for a person to think and then he had this test you lock him away and then you have a communication with him and then you try to guess after a while whether that is a person or whether it's a computer system there's no question that now or very soon you know Alexa or Siri or Google Now will pass this test right you can game it but you know ultimately certainly in your generation there will be machines that will speak with complete poise that will remember everything you ever said they'll remember every email you ever had like Samantha remember in the movie Her yeah there's no question it's gonna happen but of course the key question is does it feel like anything to be Samantha in the movie Her does it feel like anything to be Watson and there one has to think very strongly there are two different concepts here that we commingle there is the concept of intelligence natural or artificial and there is the concept of consciousness of experience natural or artificial those are very very different things now historically we associate consciousness with intelligence why because we live in a world leaving aside computers of natural selection where we are surrounded by creatures either our own kin that are less or more intelligent or we go across species some are more adapted to a particular environment others are less adapted whether it's a whale or a dog or you talk about a paramecium or a little worm alright and we see the complexity of the nervous system goes from one cell to two specialized cells to a worm that has three
hundred nerve cells a worm that has 30% of its cells as nerve cells to a creature like a blue whale that has a hundred billion or even more nerve cells and so based on behavioral evidence and based on the underlying neuroscience we believe that as these creatures become more complex they are better adapted to their particular ecological niche and they become more conscious probably because their brains grow and we believe consciousness unlike what the ancients thought almost every culture thought that consciousness and intelligence has to do with your heart and you still see that today you say honey I love you with all my heart yes but what you should actually say is no honey I love you with all my lateral hypothalamus and for Valentine's Day you should give your sweetheart you know a hypothalamus-shaped piece of chocolate not a heart-shaped chocolate right and you know so we still have this language but now we believe it's the brain and so we see brains of different complexity and we think well they have different levels of consciousness they're capable of different experiences but now we confront a world where we're beginning to engineer intelligence and it's radically unclear whether the intelligence we're engineering has anything to do with consciousness and whether it can experience anything because fundamentally what's the difference intelligence is about function intelligence no matter exactly how you define it is sort of an adaptation to new environments being able to learn and quickly understand you know the setup of this and what's going on and who the actors are and what's gonna happen next it's all about function consciousness is not about function consciousness is about being it's in some sense much more fundamental you can see this in several cases you can see it for instance in the case of the clinic when you're dealing with patients who let's say had a stroke or were in a traffic accident
etc they're pretty much immobile Terri Schiavo you may have heard of her historically she was a person here in the 90s in Florida her heart stood still she was reanimated and for the next fourteen years she was in what's called a vegetative state there are thousands of people in a vegetative state so you know they're like this occasionally they open their eyes for two three four five six eight hours and then close their eyes they have sleep-wake cycles occasionally they have behaviors but there's no way you can establish a lawful relationship between what you see or what the doctor says or the mom says and what the patient does so there isn't any behavior yet in some of these people there is still experience you can design and build brain-machine interfaces where you can see they still experience something and of course there are these cases of locked-in state there's a famous book called The Diving Bell and the Butterfly written by a French editor who had a stroke in the brainstem unable to move except for vertical eye movements he could just move his eyes up and down and he dictated an entire book and some people even lose this at the end yet all the evidence seems to suggest that they're still in there so in this case you have no behavior yet consciousness second case tonight like all of us you're gonna go to sleep close your eyes you go to sleep you will wake up inside your sleeping body and you will have conscious experiences they are different from everyday experience you might fly you might not be surprised that you're flying you might meet a long-dead pet childhood dog and you're not surprised that you're meeting them you know but you have conscious experience of love of hate you know they can be very emotional your body during this stage typically the REM state sends an active signal to your motor neurons to paralyze you it's called atonia right because if
you don't have that like some patients you act out your dreams you get REM sleep behavior disorder which is bad juju to get okay third case is pure experience so I recently had what some people call a mystical experience I went to Singapore and went into a flotation tank alright so this is a big tub filled with water that's at body temperature with Epsom salt you float completely naked you lie inside of it you close the lid it's complete darkness soundproof so very quickly you become bodiless because you're floating and you're naked you have no rings no watch no nothing you don't feel your body anymore it's soundless there isn't a photon so sightless timeless because after a while early on you actually hear your heart but then you sort of adapt to that and then the passage of time ceases yeah and if you train yourself like in meditation not to think early on you think a lot it's a little bit spooky you feel somewhat uncomfortable or you think well I'm gonna get bored but if you try not to think actively you become mindless so there you are bodiless timeless soundless sightless mindless but you're in a conscious experience you're not asleep you are a being of pure being there isn't any function you aren't doing any computation you're not remembering you're not projecting you're not planning yet you're fully conscious there's something going on there it could be just a side effect so what is the you mean an epiphenomenon so what is the side effect meaning what is the function of you being able to lie in this sensory deprivation tank and still have a conscious experience evolutionarily I mean obviously we didn't evolve with flotation tanks in our environment I mean so biology is not very good at answering why questions teleological questions why do we have two eyes why
don't we have four eyes or three eyes or something well there is probably a function to that but we're not very good at answering those questions we can speculate endlessly whereas science is very good at mechanistic questions why is there charge in the universe right we find ourselves in a universe where there are positive and negative charges why why does quantum mechanics hold why doesn't some other theory hold quantum mechanics holds in our universe and it's very unclear why so teleological questions why questions are difficult to answer clearly there's some relationship between complexity brain processing power and consciousness however in these three examples I gave one is an everyday experience at night the other one is in the clinic and the third one in principle everybody can have these sorts of mystical experiences you have a dissociation of the function of intelligence from consciousness you caught me asking a why question let me ask a question that's not a why question you're giving a talk later today on the Turing test for intelligence and consciousness drawing lines between the two so is there a scientific way to say there's consciousness present in this entity or not and to anticipate your answer because there's a neurobiological answer we can test a human brain but if you take a machine brain that you don't have tests for yet how would you even begin to approach a test of whether there's consciousness present in this thing okay that's a really good question so let me take it in two steps as you point out for humans let's just stick with humans there's now a test called the zap and zip it's a procedure where you ping the brain using transcranial magnetic stimulation you look at the electrical reverberations essentially using EEG and then you can measure the complexity of this brain response and you can do this in awake people in asleep normal people you
can do it in awake people and then anesthetize them you can do it in patients and it has a hundred percent accuracy in all those cases where it's clear the patient or the person is either conscious or unconscious the complexity is either high or low and then you can adapt these techniques to similar creatures like monkeys and dogs and mice that have very similar brains now of course you point out that may not help you because a computer doesn't have a cortex you know and if I send a magnetic pulse into my iPhone or my computer it's probably gonna break something so we don't have that so what we ultimately need is a theory of consciousness we can't just rely on our intuition our intuition is well yeah if somebody talks they're conscious however then there are all these patients children babies who don't talk right but we believe that babies also have conscious experiences right and then there are these patients I mentioned and they don't talk when you dream you can't talk because you're paralyzed so what would we ultimately need we can't just rely on our intuition we need a theory of consciousness that tells us what is it about a piece of matter what is it about a piece of highly excitable matter like the brain or like a computer that gives rise to conscious experience none of us believe anymore in the old story of the soul but that used to be the most common explanation that most people accepted and still a lot of people today believe well there's God who endowed only us with a special thing that animals don't have Rene Descartes famously said a dog if you hit it with your carriage may yelp may cry but it doesn't have this special thing it doesn't have the magic the magic sauce it doesn't have res cogitans the soul now we believe that isn't the case anymore so what is the difference between brains and these guys silicon in particular once the behavior matches so if you have Siri or Alexa 20 years from
now that she can talk just as well as any possible human what grounds do you have to say she's not conscious in particular if she says well of course I'm conscious how dare you ask and she'll behave like a person now there are several differences one is so this relates to the hard problem why is consciousness a hard problem it's because it's subjective right only I have it only I have direct experience of my own consciousness I don't have experience of your consciousness now I assume as a sort of Bayesian person who believes in probability theory and all of that I can do an abduction to the best available facts I infer your brain is very similar to mine if I put you in a scanner your brain behaves the same way mine does if I give you this muesli and ask you how does it taste you tell me things that I would also say more or less so I infer based on all of that that you're conscious now with Siri I can't do that so there I really need a theory that tells me what is it about any system this or that that makes it conscious and we have such a theory yes the integrated information theory but let me first maybe as an introduction for people who are not familiar you talked a lot about panpsychism can you describe physicalism versus dualism you mentioned the soul what is the history of that idea of panpsychism or the debate really out of which panpsychism can emerge of dualism versus physicalism or do you not see panpsychism as fitting into that well okay so let's step back panpsychism is a very ancient belief that's been around I mean Plato and Aristotle talk about it modern philosophers talk about it of course in Buddhism the idea is very prevalent now there are different versions of it one
version says everything is ensouled everything rocks and stones and dogs and people and forests and iPhones all of it all matter is ensouled that's one version another version is that all biology all creatures small and large from a single cell to a giant sequoia tree feel like something that one I think is somewhat more realistic so all living creatures feel like something have feelings have some kind of experience it may well be possible that it feels like something to be a Paramecium I think it's pretty likely it feels like something to be a bee or a mouse or a dog sure so okay so you can see panpsychism is very broad and some people for example Bertrand Russell tried to advocate for this idea it's called Russellian monism the idea that panpsychism is really physics viewed from the inside so the idea is that physics is very good at describing relationships among objects like charges or like gravity alright you know it describes the relationship between curvature and mass distribution okay that's the relationship among things physics doesn't really describe the ultimate reality itself it's just relationships among you know quarks and all this other stuff from like a third-person observer yeah yes and consciousness is what physics feels like from the inside so my conscious experience is the way the physics of my brain in particular my cortex feels from the inside and so if you take a Paramecium you gotta remember you say Paramecium well that's a pretty dumb creature but it already has a billion different molecules probably you know five thousand different proteins assembled in a highly complex system that no single person no computer system so far on this planet has ever managed to accurately simulate its complexity vastly escapes us yes and it may well be that that little thing feels like a tiny bit now it doesn't have a voice in the head like me it doesn't have expectations you know it doesn't have
all these complex things but it may well feel like something yep so this is really interesting can we draw some lines and maybe try to understand the difference between life intelligence and consciousness how do you see all of those if you have to define what is a living thing what is a conscious thing and what is an intelligent thing do those intermix for you or are they totally separate okay so that's a question that we don't have a full answer to right a lot of the stuff we're talking about today is full of mysteries and fascinating ones right you can go back to Aristotle who's probably the most important scientist and philosopher who ever lived certainly in Western culture he had this idea it's called hylomorphism it's quite popular these days that there are different forms of soul the soul is really the form of something he says all biological creatures have a vegetative soul that's the life principle today we think we understand something about it it's biochemistry nonlinear thermodynamics alright then he says only animals and humans also have a sensitive soul or an appetitive soul they can see they can smell and they have drives they want to reproduce they want to eat etc and then only humans have what he called a rational soul okay right and that idea made it into Christendom and the rational soul is the one that lives forever now different readings of Aristotle give different answers did he believe the rational soul was immortal or not I think he probably didn't but then of course via Plato that made it into Christianity and the soul became immortal and became the connection after death to God now so you ask me essentially what is our modern conception of these three what Aristotle would have called different forms of soul life we think we know something about it at least life on this planet right although we don't understand how to
originate it but it's been difficult to rigorously pin down you see this in modern definitions of death in fact right now there's a conference ongoing that tries to define legally and medically what death is it used to be very simple death is you stop breathing your heart stops beating you're dead right yeah totally uncontroversial if you're unsure you wait another ten minutes if the patient doesn't breathe you know he's dead well now we have ventilators we have heart pacemakers so it's much more difficult to define what death is typically death is defined as the end of life and life as what comes before death yes so we don't really have very good definitions intelligence we don't have a rigorous definition of either we know something about how to measure it it's called IQ or the G factor right and we're beginning to build it in a narrow sense right like AlphaGo and Watson and you know Google cars and Uber cars and all of that that's still narrow AI and some people are thinking about artificial general intelligence but roughly as we said before it's something like the ability to learn and to adapt to new environments but that as I said is also radically different from experience and it's very unclear if you build a machine that has AGI it's not at all a priori clear that this machine will have consciousness it may or may not so let's ask it the other way do you think if you were to try to build an artificial general intelligence system do you think figuring out how to build artificial consciousness would help you get to an AGI or put another way do you think intelligence requires consciousness in humans it goes hand in hand in humans or I think in biology consciousness and intelligence go hand in hand quite possibly because the brain evolved to be highly complex and complexity via the integrated information theory is ultimately what is closely tied to consciousness ultimately it's causal power upon itself
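that notion of a system having causal power upon itself above its parts can be illustrated with a toy calculation this is not the real integrated information (phi) computation which works over cause-effect repertoires and a minimum-information partition it's just a minimal sketch contrasting the past-future mutual information of a whole two-node system against its nodes taken in isolation the two example update rules and all names here are illustrative assumptions

```python
from collections import defaultdict
from itertools import product
from math import log2

def mutual_info(joint):
    """Mutual information in bits from a dict {(past, future): probability}."""
    pa, fa = defaultdict(float), defaultdict(float)
    for (p, f), v in joint.items():
        pa[p] += v
        fa[f] += v
    return sum(v * log2(v / (pa[p] * fa[f])) for (p, f), v in joint.items() if v > 0)

def whole_mi(rule, n):
    """I(past; future) of the whole n-node binary system, uniform prior over states."""
    states = list(product((0, 1), repeat=n))
    joint = defaultdict(float)
    for s in states:
        joint[(s, rule(s))] += 1 / len(states)
    return mutual_info(joint)

def node_mi(rule, node, n):
    """I(past; future) of one node viewed in isolation, other nodes marginalized out."""
    states = list(product((0, 1), repeat=n))
    joint = defaultdict(float)
    for s in states:
        joint[(s[node], rule(s)[node])] += 1 / len(states)
    return mutual_info(joint)

def integration(rule, n):
    """Information the whole carries about its own next state beyond its parts."""
    return whole_mi(rule, n) - sum(node_mi(rule, i, n) for i in range(n))

# two nodes that copy each other: each node alone looks random,
# yet the whole system is perfectly predictable from itself
swap = lambda s: (s[1], s[0])
# two nodes that each copy themselves: the whole is just the sum of its parts
independent = lambda s: (s[0], s[1])

print(integration(swap, 2))         # 2.0 bits above the parts
print(integration(independent, 2))  # 0.0 the whole adds nothing
```

under this toy measure the coupled system carries information as a whole that none of its parts carry alone which is the flavor of the claim but only the flavor the published theory differs substantially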
and so in evolved systems they go together in artificial systems particularly in digital machines they do not go together and if you ask me point-blank is Alexa 20.0 in the year 2041 when she can easily pass every Turing test is she conscious no even if she claims she's conscious in fact you can do an even more radical version of this thought experiment we can build a computer simulation of the human brain you know what Henry Markram in the Blue Brain Project or the Human Brain Project in Switzerland is trying to do let's grant him all the success so in ten years we have this perfect simulation of the human brain every neuron is simulated it has a hippocampus it has motor neurons it has a Broca's area and of course it'll talk and it'll say hi I just woke up I feel great okay even that computer simulation will not be conscious why because it simulates there's a difference between the simulated and the real it simulates the behavior associated with consciousness if it's done properly it will have all the intelligence that the particular person it's simulating has but simulating intelligence is not the same as having conscious experiences and I'll give you a really nice metaphor that engineers and physicists typically get I can write down Einstein's field equations nine or ten equations that describe the link in general relativity between curvature and mass I can do that I can run this on my laptop to predict that the black hole at the center of our galaxy will be so massive that it will twist space-time around it so no light can escape it's a black hole right but funny have you ever wondered why doesn't this computer simulation suck me in it simulates gravity but it doesn't have the causal power of gravity it's a huge difference it's the difference between the real and the simulated just like it doesn't get wet inside the computer when the computer runs code that
simulates a weather storm and so in order to have artificial consciousness you have to give it the same causal power as a human brain you have to build a so-called neuromorphic machine that has hardware very similar to the human brain not a digital clocked von Neumann computer so just to clarify though you think that consciousness is not required to create human-level intelligence it seems to accompany it in the human brain but for a machine not of course so maybe just because this is AGI let's dig in a little bit about what we mean by intelligence so one thing is the G factor these kind of IQ tests of intelligence but maybe another way to say it so in 2040 2050 people will have a Siri that is just really impressive do you think people will say Siri is intelligent yes intelligence is this amorphous thing so to be intelligent it seems like you have to have some kind of connection with other human beings in the sense that you have to impress them with your intelligence and it feels like you have to somehow operate in this world full of humans and for that it feels like there has to be something like consciousness so do you think you can have just the world's best NLP system natural language understanding and generation and will that be it will we be happy and say you know what we've created an AGI I don't know about happy but yes I do believe we can get what we call high-level functional intelligence particularly this sort of fluid intelligence that we cherish particularly at places like MIT right in machines I see no reason why not and I see a lot of reasons to believe it's gonna happen you know over the next 30 or 50 years so for beneficial AI for creating an AI system you mentioned ethics that is exceptionally intelligent but also you know aligns its values with our values as humanity do you think it needs consciousness yes I think that is a very good
argument that if we're concerned about AI and the threat of AI that Nick Bostrom and others have talked about I think it helps to have an intelligence that has empathy right why do we find abusing a dog why do most of us find abusing any animal abhorrent why do we find that abhorrent because we have this thing called empathy which if you look at the Greek really means feeling with I have compassion empathy I have feeling with you I see somebody else suffer that isn't even my conspecific it's not a person it's not my wife or my kids it's a dog but most of us not all of us most of us will naturally feel empathy and so it may well be in the long-term interest of survival of Homo sapiens sapiens that if we do build AGI and it really becomes very powerful that it has an empathic response and doesn't just exterminate humanity so is it part of the full conscious experience to create a consciousness artificial or in our human consciousness do you think fear maybe we'll get into this later but do you think fear and suffering are essential to have consciousness do you have to have the full range of experience to have a system that has experience or can you have a system that only has very particular kinds of very positive experiences look in principle you can people have done this in the rat where you implant an electrode in the hypothalamus the pleasure center of the brain and the rat stimulates itself above and beyond anything else it doesn't care about food or natural sex or drink anymore it just stimulates itself because it's such a pleasurable feeling I guess it's like an orgasm you have all day long and so a priori I see no reason why you need a great variety now clearly to survive that wouldn't work but if it's engineered artificially I don't think you need a great variety of conscious experience you could have just pleasure or just fear it might
be a terrible existence but I think that's possible at least on conceptual logical grounds of course for any real creature whether evolved or artificially engineered you want to give it fear the fear of extinction that we all have and you also want to give it positive appetitive states states that it wants to reach that encourage the machine toward what you want because they give the machine positive feedback so you mentioned panpsychism to jump back a little bit you know everything having some kind of mental property how do you go from there to something like human consciousness so everything having some elements of consciousness to well is there something special about human consciousness so it's not everything like a spoon the form of panpsychism I think about doesn't ascribe consciousness to anything like this spoon or my liver however the theory the integrated information theory does say that systems even ones that look relatively simple from the outside if they have this internal causal power then it does feel like something to be them the theory a priori doesn't say anything special about humans biologically we know the one thing that's special about humans is we speak and we have an overblown sense of our own importance right we believe we're exceptional God's gift to the universe but behaviorally the main things we have are that we can plan over the long term and we have language and that gives us an enormous amount of power and that's why we are the current dominant species on the planet so you mentioned God you grew up in a devout Roman Catholic family so you know with consciousness you're exploring some really deeply fundamental human things that religion also touches on so where does religion fit into your thinking about consciousness you've grown throughout your life and changed your views on religion as far as I understand yeah I mean I'm now much
closer to so I'm not a Roman Catholic anymore I don't believe there's sort of this God the God I was educated to believe in who sits somewhere and in the fullness of time I'll be united with him in some sort of everlasting bliss I just don't see any evidence for that look the world is large and full of wonders right there are many things that I don't understand many things that we as a culture don't understand look we don't even understand more than four percent of the universe right dark matter dark energy we have no idea what it is maybe it's lost socks what do I know so all I can tell you is that my current religious or spiritual sentiment is much closer to some form of Buddhism but without the reincarnation unfortunately there's no evidence for reincarnation can you describe the way Buddhism sees the world a little bit well so you know I've had several meetings with the Dalai Lama and what always impressed me about him unlike for example the Pope or some cardinal is that he always emphasized minimizing the suffering of all creatures so from the early beginning they look at suffering in all creatures not just in people but in everybody it's universal and of course by degrees right an animal will be less capable of suffering than a well-developed normally developed human and they think consciousness pervades this universe and they have these techniques you can think of them like mindfulness etc and meditation that try to access what they claim is this more fundamental aspect of reality I'm not sure it's more fundamental the way I think about it there's the physical and then there's the inside view consciousness and those are the two aspects that's the only thing I have access to in my life and you gotta remember my conscious experience and your conscious experience come prior to anything you know about physics come prior to
knowledge about the universe and atoms and superstrings and molecules and all of that the only thing you're directly acquainted with is this world that's populated with things and images and sounds in your head and touches and all of that I actually have a question so it sounds like you kind of have a rich life you talk about rock climbing and it seems like you really love literature and consciousness is all about experiencing things so do you think that has helped your research on this topic yes particularly if you think about the various states so for example I used to do rock climbing and now I row and bike every day you can get into this thing called the zone and I've always wondered about it particularly with respect to consciousness because it's a strangely addictive state once people have it they want to keep going back to it and you wonder what is so addicting about it and I think it's the experience of something close to pure experience because in the zone you're not conscious of your inner voice anymore there's always this inner voice nagging you right you have to do this you have to do that you have to pay your taxes you had this fight with your ex and all of those things are always there but when you're in the zone all of that is gone and you're just in this wonderful state while you're fully out in the world you're climbing or you're rowing or biking or playing soccer or whatever you're doing and consciousness is all action or in the case of pure experience you're not acting at all but in both cases you experience some aspect of you touch some basic part of conscious existence that is so deeply satisfying I think you touch the root of being that's really what you're touching there you're getting close to the root of being and that's very different from intelligence so what do you think about the
simulation hypothesis simulation theory the idea that we all live in a computer simulation have you given it much thought I think it's as likely as the hypothesis that engaged hundreds of scholars for many centuries are we all just existing in the mind of God right right and this is just a modern version of it it's equally plausible people love talking about these sorts of things I know there are books written about the simulation hypothesis if that's what people want to do that's fine it seems rather esoteric it's never testable so it's not useful for you to think in those terms so maybe connecting to the question of free will which you've talked about I think I vaguely remember you saying that the idea that there's no free will makes you very uncomfortable so what do you think about free will from a physics perspective or a consciousness perspective what is it okay so from the physics perspective leaving aside quantum mechanics we believe we live in a fully deterministic world right but then comes of course quantum mechanics so now we know that certain things are in principle not predictable which as you said I prefer because the idea that the initial conditions of the universe were set and then everything else we're just acting out the initial conditions of the universe that doesn't that's not a romantic notion no certainly not right now when it comes to consciousness I think we do have certain freedom we are of course much constrained by physics and by our past and by our own conscious desires and what our parents told us and what our environment tells us we all know there are hundreds of experiments that show how we can be influenced but finally in the final analysis when you make a life-changing decision not a trivial one but a critical decision you really think about should I marry should I go to this school or that school should I take this job or that job should I cheat on my taxes or not these sorts of
these are things you really deliberate about and I think under those conditions you are as free as you can be when you bring your entire being your conscious being to that question and try to analyze it under all the various conditions and then you make a decision you are as free as you can ever be that is I think what free will is it's not a will that's totally free to do anything it wants that's not possible right so as Jack mentioned you actually wrote a blog about books you've read amazing books from the Russians from Bulgakov to Neil Gaiman Carl Sagan Murakami so what is a book that early in your life transformed the way you saw the world something that changed your life Nietzsche did that Thus Spoke Zarathustra because he talks about some of these problems you know he was one of the first discoverers of the unconscious this is a little bit before Freud when it was in the air and you know he makes all these claims that people under the guise or under the mask of charity are actually very non-charitable so he was really the first discoverer of the great land of the unconscious and that really struck me and what do you think about the unconscious what do you think about Freud what do you think about these ideas it's like dark matter in the universe what's over there in that unconscious a lot I mean much more than we think this is what the last hundred years of research has shown so I think he was a genius misguided towards the end but he started out as a neuroscientist he did the studies on the lamprey he contributed himself to the neuron hypothesis the idea that there are discrete units that we call nerve cells now and then he wrote you know about the unconscious and I think there's lots of stuff happening you feel this particularly when you're in a relationship and it breaks asunder alright and then you have
this terrible you can have love and hate and lust and anger all of it mixed in and when you try to analyze yourself why am I so upset it's very very difficult to penetrate to those basements those caverns in your mind because the prying eyes of consciousness don't have access to them but they're there in the amygdala or you know lots of other places and they make you upset or angry or sad or depressed and it's very difficult to actually uncover the reason you can go to a shrink you can talk with your friend endlessly you can finally construct a story of why this happened why you love her or don't love her or whatever but you don't really know whether that's what actually happened because you simply don't have access to those parts of the brain and they're very powerful do you think that's a feature or a bug of our brain the fact that we have this deep difficult-to-dive-into subconscious I think it's a feature because otherwise look we are like any other brain or nervous system or computer severely bandwidth-limited if everything I do every emotion I feel every movement I make if all of that had to be under the control of consciousness I couldn't function I wouldn't be here alright so what you do early on you have to be conscious when you learn things like typing or riding a bike but then what you do you train up I think that involves the basal ganglia and the striatum you train up different parts of your brain and then once you do it automatically like typing you can do it much faster without even thinking about it because you've got these highly specialized what Francis Crick and I called zombie agents that are taking care of that while your consciousness can worry about the abstract sense of the text you want to write and I think that's true for many many things but for things like all the fights you have with the ex-girlfriend things that
you would think are not useful to still linger somewhere in the subconscious so that seems like a bug that it would stay there you'd think it would be better if you could analyze it and then get it out of there or just forget it ever happened that seems like a very buggy kind of design well yeah but in general we don't have and that's probably functional we don't have that ability except in extreme cases clinical dissociation right when people are heavily abused and they completely repress the memory but that doesn't happen in you know in normal people we don't have an ability to remove traumatic memories and of course we suffer from that on the other hand probably if you had the ability to constantly wipe your memory you'd probably do it to an extent that isn't useful to you so yeah it's a good question where the balance is so on the topic of the books you mentioned correct me if I'm wrong but broadly speaking in academia in the different scientific disciplines certainly in engineering reading literature seems to be a rare pursuit perhaps I'm wrong in this but in my experience most people read much more technical texts and do not sort of escape or seek truth in literature it seems like you do so what do you think is the value what do you think literature adds to the pursuit of scientific truth do you think it's useful for giving access to a much wider array of human experiences how valuable do you think it is well if you want to understand human nature and nature in general then I think you have to better understand a wide variety of experiences not just sitting in a lab staring at a screen and having a face flashed on it one of a million times and pushing a button that's what I used to do that's what most psychologists do there's nothing wrong with that but you need to consider lots of other strange states you know and literature is a shortcut for this well yeah that's what literature is all about all sorts of
interesting experiences that people have the you know the contingency of it the fact that women experience the world differently black people experience the world differently and one way to understand that is reading all this different literature and trying to find out you see everything is so relative if you read books from a hundred years ago they thought about certain problems very very differently than us today we today like any culture think we know it all that's common to every culture every culture believes that they know it all and then you realize well there are other ways of viewing the universe and some of them may have lots of things in their favor so this is a question I wanted to ask about timescale or scale in general when you with IIT or in general try to think about consciousness try to think about these ideas we kind of naturally think in human timescales and also about entities that are sized close to humans do you think of things that are much larger or much smaller as containing consciousness and do you think of things that take you know ages eons to operate their conscious cause-effect structure it's a very good question so I think a lot about small creatures because experimentally you know a lot of people work on flies and bees right and most people just think they're automata they're just bugs for heaven's sake right but if you look at their behavior like bees they can recognize individual humans they have this very complicated way to communicate if you've ever been involved or you know your parents when they bought a house what sort of agonizing decision that is and bees have to do that once a year right when they swarm in the spring and then they have this very elaborate way they have these scouts that go to the individual sites they come back and they do this dance literally where they dance for several days and they try to recruit other bees it's a complicated decision-making process
when they finally want to make a decision the scouts warm up the entire swarm and the entire swarm then goes to one location they don't go to fifty locations they go to the one location that the scouts have agreed upon by themselves that's awesome and if you look at the circuit complexity it's ten times denser than anything we have in our brain they only have a million neurons but it's amazingly complex behavior very complicated circuitry so there's no question they experience something their life is very different they're tiny they only live you know workers live maybe for two months so I think IIT tells you this in principle the substrate of consciousness is the substrate that maximizes the cause-effect power over all possible spatiotemporal grains so when I think about for example do you know the science fiction story the black cloud okay it's a classic by Fred Hoyle the astronomer he has this cloud intervening between the earth and the sun and leading to some sort of global cooling this is written in the 50s and it turns out using a radio dish they can communicate with it it's actually an intelligent entity and they sort of convince it to move away so here you have a radically different entity and in principle IIT says well you can measure the integrated information in principle at least and if the maximum of that occurs at a timescale of a month rather than a fraction of a second then yes they would experience life where each moment is a month rather than a fraction of a second as in the human case and so there may be forms of consciousness that we simply don't recognize for what they are because they are so radically different from anything you and I are used to again that's why it's good to read or to watch science fiction to think about this like my friend you know Stanislaw Lem the Polish science fiction writer he wrote
Solaris that was turned into a Hollywood movie yes his best known novel it was written in the 60s and he has a very ingenious engineering background his most interesting novel is called the Invincible where humans have this mission to a planet and everything is destroyed they discover machines the humans there got killed and these machines took over there was a machine evolution a Darwinian evolution he talks about this very vividly and finally the dominant machine intelligence the organisms that survived are gigantic clouds of little hexagonal universal cellular automata so typically they're all lying on the ground individually by themselves but in times of crisis they can communicate and assemble into gigantic nets into clouds of trillions of these particles and then they become hyperintelligent and they can beat anything that humans can throw at them it's a very beautiful and compelling story where you have an intelligence where finally the humans leave the planet they're simply unable to understand and comprehend this creature and they say well either we can nuke the entire planet and destroy it or we just have to leave because fundamentally it's an alien it's so alien from us and our ideas that we cannot communicate with it yeah actually in a conversation Stephen Wolfram brought up the idea that there could already be these artificial general intelligences like super smart or maybe conscious beings in cellular automata and we just don't know how to talk to them so it's the language the communication because you don't know what to do with it so that's one sort of view is consciousness only something you can measure so it's not conscious if you can't measure it but so you're making an ontological and an epistemic statement one is that they are there it's just like saying there are multiverses that might be true but I can't communicate with them I
don't have any access that's an epistemic argument right so those are two different things it may well be possible look there's another case that's happening right now people are building these mini organoids do you know about this you can take stem cells from under your arm put them in a dish add four transcription factors and then you can induce them to grow into organoids they're a few millimeters they're like half a million neurons that look like cortical cells in a dish called mini organoids at Harvard at Stanford everywhere they're building them it may well be possible that they're beginning to feel like something but we can't really communicate with them right now so people are beginning to think about the ethics of this right so yes he may be perfectly right but it's one question are they conscious or not and a totally separate question how would I know those are two different things right if you could give advice to a young researcher sort of dreaming of understanding or creating human-level intelligence or consciousness what would you say follow your dreams read widely no I mean I suppose what discipline what is the pursuit that they should take on is it neuroscience is it computation cognitive science is it philosophy is it computer science or robotics no in a sense that okay the only known systems that have high-level intelligence are Homo sapiens so if you want to build it it's probably good to continue to study closely what humans do so cognitive neuroscience you know somewhere between cognitive neuroscience on the one hand and some philosophy of mind and then AI computer science if you look at all the original ideas in neural networks they all came from neuroscience right reinforcement learning whether it's Minsky building the SNARC or whether it's you know the early Hubel and Wiesel experiments that gave rise to networks and then multilayer networks so it may well be
possible in fact some people argue that to make the next big step in AI we'll realize the limits of deep convolutional networks they can do certain things but they can't really understand I can't really show them one image I can show you a single image of a pickpocket who steals a wallet from a purse and you immediately know that's a pickpocket right a computer system will just say well it's a man it's a woman it's a purse unless you train this machine by showing it a hundred thousand pickpockets right so it doesn't have this easy understanding that you have so some people argue in order to go to the next step if you really want to build machines that understand in the way you and I do we have to go to psychology we need to understand how we do it and how our brains enable us to do it and so being on the cusp is so exciting to try to understand better our own nature and then to take some of those insights and build them so I think the most exciting thing is somewhere in the interface between cognitive science neuroscience AI computer science and philosophy of mind beautiful yeah I'd say from the machine learning from the computer science computer vision perspective many of the researchers kind of ignore the way the human brain works or even psychology or literature or studying the brain Josh Tenenbaum talks about bringing that in more and more so you've worked on some amazing stuff throughout your life what's the thing that you're really excited about what's the mystery that you would love to uncover in the near term beyond all the mysteries we're already surrounded by well there's a structure called the claustrum this is a structure underneath our cortex it's yay big you have one on the left and one on the right underneath the insula it's very thin it's like one millimeter it's embedded in
wiring in white matter it's very difficult to image and it has connections to every cortical region and Francis Crick the last paper he ever wrote he dictated corrections the day he died in the hospital on this paper he and I hypothesized that because it has this unique anatomy it gets input from every cortical area and projects back to every cortical area the function of this structure is similar and it's just a metaphor to the role of a conductor in a symphony orchestra you have all the different cortical players you have some that do motion some that do theory of mind some that infer social interactions and color and hearing all the different modules in cortex but of course what consciousness does is put it all together into one package right the binding problem all of that and this could really be its function because it has relatively few neurons compared to cortex but it talks to all of them it receives input from all of them and it projects back to all of them and so we are testing that right now we've got this beautiful neuronal reconstruction in the mouse called the crown of thorns these neurons sit there in the claustrum and have the most widespread connections of any neuron I've ever seen you have individual neurons that sit in the claustrum they're tiny but then they have this huge axonal tree that covers both ipsilateral and contralateral cortex and using you know fancy tools like optogenetics we're trying to turn those neurons on or off and study what happens in the mouse so this thing is perhaps where the parts become the whole at the very interface it's one of the structures that's a very good way of putting it where the individual parts turn into the whole of the conscious experience well with that thank you very much for being here today thank you thank you so much
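The conductor metaphor in the episode above, a thin structure with relatively few neurons that receives from every cortical module and broadcasts one integrated package back to all of them, can be caricatured in a few lines of code. This is a toy of my own construction, with made-up module names and strings, not a model from the conversation; it only makes the receive-from-all, broadcast-to-all shape concrete:

```python
# hypothetical cortical "modules", each reporting its own fragment of a scene
modules = {
    "motion": "object moving left",
    "color": "red",
    "audition": "squeal of brakes",
    "theory_of_mind": "driver looks distracted",
}

def claustrum_bind(reports):
    """Caricature of the conductor role: pool every module's report into
    one integrated percept, then broadcast that same package back to all."""
    percept = "; ".join(f"{m}: {r}" for m, r in sorted(reports.items()))
    return {m: percept for m in reports}  # every module receives the whole

broadcast = claustrum_bind(modules)
# every module now holds the same unified scene: the parts become the whole
print(broadcast["motion"])
```

The point of the sketch is only structural: a hub wired to everything can hand each specialized part a copy of the integrated whole, which is the binding intuition behind the metaphor.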
Max Tegmark: Life 3.0 | Lex Fridman Podcast #1
as part of MIT course 6.S099 artificial general intelligence I've gotten the chance to sit down with Max Tegmark he is a professor here at MIT he's a physicist who spent a large part of his career studying the mysteries of our cosmological universe but he's also studied and delved into the beneficial possibilities and the existential risks of artificial intelligence amongst many other things he's the co-founder of the Future of Life Institute author of two books both of which I highly recommend first is Our Mathematical Universe second is Life 3.0 he's truly an out-of-the-box thinker and fun personality so I really enjoyed talking to him if you would like to see more of these videos in the future please subscribe and also click the little bell icon to make sure you don't miss any videos also Twitter LinkedIn agi.mit.edu if you want to watch other lectures or conversations like this one better yet go read Max's book Life 3.0 chapter 7 on goals is my favorite it's really where philosophy and engineering come together and it opens with a quote by Dostoevsky the mystery of human existence lies not in just staying alive but in finding something to live for lastly I believe that every failure rewards us with an opportunity to learn and in that sense I've been very fortunate to fail in so many new and exciting ways and this conversation was no different I've learned about something called radio frequency interference or RFI look it up apparently music and conversations from local radio stations can bleed into the audio that you're recording in such a way that almost completely ruins that audio it's an exceptionally difficult sound source to remove so I've gotten the opportunity to learn how to avoid RFI in the future during recording sessions I've also gotten the opportunity to learn how to use Adobe Audition and iZotope RX 6 to do some audio repair of course this is exceptionally difficult noise to remove I am an engineer I'm not an audio engineer neither
is anybody else in our group but we did our best nevertheless I thank you for your patience and I hope you're still able to enjoy this conversation do you think there's intelligent life out there in the universe let's open up with an easy question I have a minority view here actually when I give public lectures I often ask for a show of hands who thinks there's intelligent life out there somewhere else and almost everyone puts their hands up and when I ask why they'll be like oh there's so many galaxies out there there's gotta be but I'm a numbers nerd right so when you look more carefully at it it's not so clear at all when we talk about our universe first of all we don't mean all of space we actually mean you can see our universe if you want it's behind you there we simply mean the spherical region of space from which light has had time to reach us so far during the 13.8 billion years since our big bang there's more space here but this is what we call our universe because that's all we have access to so is there intelligent life here that's gotten to the point of building telescopes and computers my guess is no actually the probability of it happening on any given planet is some number we don't know what it is and what we do know is that the number can't be super high because there's over a billion earth-like planets in the Milky Way galaxy alone many of which are billions of years older than earth and aside from some UFO believers there isn't much evidence that any superadvanced civilization has come here at all and that's the famous Fermi paradox right and then if you work the numbers what you find is that if you have no clue what the probability is of getting life on a given planet it could be 10 to the minus 10 or 10 to the minus 20 or 10 to the minus 2 any power of ten is sort of equally likely if you want to be really open-minded and that translates into it being equally likely that our nearest neighbor
is 10 to the 16 meters away or 10 to the 17 meters away or 10 to the 18 now by the time you get much below 10 to the 16 we pretty much know there is nothing else that close because they would have discovered us or if they're really close we would have probably noticed some engineering projects that they're doing and if it's beyond 10 to the 26 meters that's already outside of our universe so my guess is actually that we are the only life in here that's gotten to the point of building advanced tech which I think puts a lot of responsibility on our shoulders not to screw up you know I think people who take for granted that it's okay for us to screw up have an accidental nuclear war or go extinct somehow because there's a Star Trek-like situation out there where some other life forms are gonna come and bail us out it doesn't matter so much I think they're lulling us into a false sense of security I think it's much more prudent to say you know let's be really grateful for this amazing opportunity we've had and make the best of it just in case it is down to us so from a physics perspective do you think intelligent life is unique not just from a sort of statistical view of the size of the universe but from the basic matter of the universe how difficult is it for intelligent life to come about the kind of advanced tech-building life implied in your statement that it's really difficult to create something like a human species well what I think we know is that going from no life to having life that can do our level of tech and going beyond that to actually settling our whole universe with life there is some major roadblock there the great filter as it's sometimes called which is tough to get through and that roadblock is either behind us or in front of us I'm hoping very much that it's behind us I'm super excited every time we
get a new report from NASA saying they failed to find any life on Mars it's like awesome because that suggests that the hard part maybe was getting the first ribosome or some very low-level kind of stepping stone and now we're home free because if that's true then the future is really only limited by our own imagination it would be much suckier if it turns out that this level of life is kind of a dime a dozen and there is some other problem like as soon as a civilization gets advanced technology within a hundred years they get into some stupid fight with themselves and poof yeah that would be a bummer yeah so you've explored the mysteries of the universe the cosmological universe the one that's sitting between us today and you've also begun to explore the other universe which is sort of the mysterious universe of the mind of intelligence of intelligent life so is there a common thread between your interests or the way you think about space and intelligence oh yeah when I was a teenager I was already very fascinated by the biggest questions and I felt that the two biggest mysteries of all in science were our universe out there and our universe in here so it's quite natural after having spent a quarter of a century of my career thinking a lot about this one that I'm now indulging in the luxury of doing research on this one it's just so cool I feel the time is right now for greatly deepening our understanding of this and to start exploring this one because I think a lot of people view intelligence as something mysterious that can only exist in biological organisms like us and therefore dismiss all talk about artificial general intelligence as science fiction but from my perspective as a physicist you know I am a blob of quarks and electrons moving around in a certain pattern and processing information in certain ways and this is also a blob of quarks and electrons I am not smarter than the
water bottle because I'm made of a different kind of quarks I'm made of up quarks and down quarks the exact same kind as this there's no secret sauce I think in me it's all about the pattern of the information processing and this means that there's no law of physics saying that we can't create technology which can help us by being incredibly intelligent and help us crack mysteries that we couldn't in other words I think we've really only seen the tip of the intelligence iceberg so far yeah so the perceptronium yeah you coined this amazing term it's a hypothetical state of matter sort of thinking from a physics perspective what is the kind of matter that can help as you're saying a subjective experience emerge consciousness emerge so how do you think about consciousness from this physics perspective very good question so again I think many people have underestimated our ability to make progress on this by convincing themselves it's hopeless because somehow we're missing some ingredient that we need some new consciousness particle or whatever I happen to think that we're not missing anything and that the interesting thing about consciousness that gives us this amazing subjective experience of colors and sounds and emotions and so on is rather something at the higher level about the patterns of information processing that's why I like to think about this idea of perceptronium what does it mean for an arbitrary physical system to be conscious in terms of what its particles are doing or its information is doing I don't buy carbon chauvinism you know this attitude that you have to be made of carbon atoms to be smart or conscious it's something about the information processing that this kind of matter performs yeah and you know I have my favorite equations here describing various fundamental aspects of the world I feel that one day maybe someone who's watching this will come up with the equations
that information processing has to satisfy to be conscious I'm quite convinced there is a big discovery to be made there because let's face it we know that some information processing is conscious because we are conscious but we also know that a lot of information processing is not conscious like most of the information processing happening in your brain right now is not conscious there's like 10 megabytes per second coming in just through your visual system you are not conscious of your heartbeat regulation or most things even like if I just ask you to read what it says here you look at it and then oh now you know what it said but you're not aware of how the computation actually happened your consciousness is like the CEO that got an email at the end with the final answer so what is it that makes the difference I think that's both a great science mystery we're actually studying it a little bit in my lab here at MIT but I also think it's just a really urgent question to answer for starters I mean if you're an emergency room doctor and you have an unresponsive patient coming in wouldn't it be great if in addition to having a CT scan you had a consciousness scanner that could figure out whether this person is actually having locked-in syndrome or is actually comatose and in the future imagine if we build robots or machines that we can have really good conversations with which I think is very likely to happen right wouldn't you want to know if your home helper robot is actually experiencing anything or is just like a zombie I mean what would you prefer would you prefer that it's actually unconscious so that you don't have to feel guilty about switching it off or giving it boring chores or would you prefer well certainly we would prefer I would prefer the appearance of consciousness but the question is whether the appearance of consciousness is
different than consciousness itself and sort of to ask that as a question do you think we need to understand what consciousness is solve the hard problem of consciousness in order to build something like an AGI system no I don't think that I think we will probably be able to build things even if we don't answer that question but if we want to make sure that what happens is a good thing we'd better solve it first so it's a wonderful controversy you're raising there where you have basically three points of view about the hard problem there are two different points of view that both conclude that the hard problem of consciousness is BS on one hand you have some people like Daniel Dennett who say consciousness is just BS because consciousness is the same thing as intelligence there's no difference so anything which acts conscious is conscious just like we are and then there are also a lot of people including many top AI researchers I know who say oh consciousness is just BS because of course machines can never be conscious they're always gonna be zombies and you never have to feel guilty about how you treat them and then there's a third group of people including Giulio Tononi for example and Christof Koch and a number of others I would put myself also in this middle camp who say that actually some information processing is conscious and some is not so let's find the equation which can be used to determine which it is and I think we've just been a little bit lazy kind of running away from this problem for a long time it's been almost taboo to even mention the c-word in a lot of circles but we should stop making excuses this is a science question and there are ways we can even test any theory that makes predictions for this and coming back to this helper robot I mean so you said you'd want your helper robot to certainly act conscious and treat you like to have
conversations with us I think so wouldn't you would you feel a little bit creeped out if you realized that it was just a glossed-up tape recorder you know that was just a zombie and was faking emotion would you prefer that it actually had an experience or would you prefer that it's actually not experiencing anything so you don't have to feel guilty about what you do to it it's such a difficult question because you know it's like when you're in a relationship and you say well I love you and the other person says I love you back it's like asking well do they really love you back or are they just saying they love you back don't you really want them to actually love you it's hard to really know the difference between everything seeming like there's consciousness present there's intelligence present there's affection passion love and it actually being there I'm not sure do you mind if I ask a question just to make it a bit more pointed so Mass General Hospital is right across the river right yes suppose you're going in for a medical procedure and they're like you know here's what we're gonna do we're gonna give you a muscle relaxant so you won't be able to move and you're gonna feel excruciating pain during the whole surgery but you won't be able to do anything about it but then we're gonna give you this drug that erases your memory of it would you be cool about that no what's the difference that you're conscious about it or not if there's no behavioral change right right that's a really clear way to put it yeah it feels like in that sense experiencing it is a valuable quality so actually being able to have subjective experiences at least in that case is valuable and I think we humans have a little bit of a bad track record of making these self-serving arguments that other entities aren't conscious you know people often say oh these animals can't feel pain it's okay to
boil lobsters because we asked them if it hurt and they didn't say anything and now there was just a paper out saying that lobsters do feel pain when you boil them and they're banning it in Switzerland and we did this with slaves too often and said oh they don't mind or they aren't conscious or women don't have souls or whatever so I'm a little bit nervous when I hear people just take as an axiom that machines can't have experience ever I think this is just a really fascinating science question that's what it is let's research it and try to figure out what it is that makes the difference between unconscious intelligent behavior and conscious intelligent behavior so in terms of so if you think of a Boston Dynamics humanoid robot being sort of pushed around with a broom it starts pushing on this consciousness question so let me ask do you think an AGI system like a few neuroscientists believe needs to have a physical embodiment needs to have a body or something like a body no I don't think so you mean to have a conscious experience to have consciousness I do think it helps a lot to have a physical embodiment to learn the kind of things about the world that are important to us humans for sure but I don't think embodiment is necessary after you've learned it to just have the experience think about when you're dreaming right your eyes are closed you're not getting any sensory input you're not behaving or moving in any way but there's still an experience there right and so clearly the experience that you have when you see something cool in your dreams isn't coming from your eyes it's just the information processing itself in your brain which is that experience right but if I put it another way I'll say because it comes from neuroscience is the reason you want to have a body and a physical something like a physical you know a physical system is because you want to be able to preserve something in order to have a self you
could argue you need to have some kind of embodiment of self to want to preserve it well now we're getting a little bit anthropomorphic anthropomorphizing things talking about things like self-preservation instincts I mean we are evolved organisms right so Darwinian evolution endowed us and other evolved organisms with a self-preservation instinct because those that didn't have those self-preservation genes got cleaned out of the gene pool right but if you build an artificial general intelligence the mind space that you can design is much much larger than just the specific subset of minds that can evolve so an AGI mind doesn't necessarily have to have any self-preservation instinct it also doesn't necessarily have to be so individualistic as us like imagine if you could just well first of all we're also very afraid of death suppose you could back yourself up every five minutes and then your airplane is about to crash and you're like shucks I'm gonna lose the last five minutes of experiences since my last cloud backup dang you know it's not such a big deal or if we could just copy experiences between our minds easily which we could easily do if we were silicon-based right then maybe we would feel a little bit more like a hive mind actually so I don't think we should take for granted at all that AGI will have to have any of those sort of competitive alpha male instincts on the other hand you know this is really interesting because I think some people go too far and say of course we don't have to have any concerns either that advanced AI will have those instincts because we can build anything we want there's a very nice set of arguments going back to Steve Omohundro and Nick Bostrom and others just pointing out that when we build machines we normally build them with some kind of goal you know win this chess game drive this
car safely, or whatever. And as soon as you put a goal into a machine, especially if it's kind of an open-ended goal and the machine is very intelligent, it'll break that down into a bunch of sub-goals, and one of those goals will almost always be self-preservation, because if it breaks or dies in the process, it's not going to accomplish the goal. Like, suppose you just build a little robot and you tell it to go down to the Star Market here and get you some food and cook you an Italian dinner, and then someone mugs it and tries to break it on the way. That robot has an incentive to not get destroyed, and to defend itself or run away, because otherwise it's going to fail at cooking your dinner. It's not afraid of death, but it really wants to complete the dinner-cooking goal, so it will have a self-preservation instinct — to continue being a functional agent. Yeah. And similarly, if you give almost any kind of ambitious goal to an AGI, it's very likely it will want to acquire more resources so it can do that better. And it's exactly from those sorts of sub-goals that we might not have intended that some of the concerns about AGI safety come: you give it some goal which seems completely harmless, and then, before you realize it, it's also trying to do these other things which you didn't want it to do, and it's maybe smarter than us. So let me pause, just because I, in a very kind of human-centric way, see fear of death as a valuable motivator. Do you think that's an artifact of evolution — that that's the kind of mind space evolution created, where we're sort of almost obsessed about self-preservation, kind of genetically? You don't think that's necessary — to be afraid of death? So not just as a kind of sub-goal of self-preservation, just so you can keep doing the thing, but more fundamentally, to have the finite thing — like, this ends for you at some point? Interesting. Do I think it's necessary for what, precisely? For intelligence, but
also for consciousness — so for both. Do you think, really, a finite death, and the fear of it, is important? So before I can answer — before we can agree on whether it's necessary for intelligence or for consciousness — we should be clear on how we define those two words, because a lot of really smart people define them in very different ways. I was on this panel with AI experts, and they couldn't agree on how to define intelligence even. So I define intelligence simply as the ability to accomplish complex goals. I like your broad definition, because, again, I don't want to be a carbon chauvinist. And in that case, no, it certainly doesn't require fear of death. I would say AlphaGo and AlphaZero are quite intelligent; I don't think AlphaZero has any fear of being turned off, because it doesn't even understand the concept of that. And similarly with consciousness — I mean, you can certainly imagine a very simple kind of experience. If certain plants have any kind of experience, I don't think they're afraid of dying, since there's nothing they can do about it anyway, so there wasn't much value in it. But more seriously, I think if you ask not just about being conscious, but about having what we might call an exciting life, where you feel passion and really appreciate the little things — maybe there, perhaps, it does help to have the backdrop that, hey, it's finite: let's make the most of this, let's live to the fullest. If you knew you were going to live forever, do you think you would change your — Yeah, in some perspective, it would be an incredibly boring life, living forever. So in the sort of loose, subjective terms that you said, of something exciting, something that other humans would understand — I think, yeah, it seems that the finiteness of it is important. Well, the good news I have for you, then, is that based on what we understand about cosmology, everything
in our universe is ultimately probably finite — Although — a Big Crunch, or a big... what's the expansion? Yeah, we could have a Big Chill, or a Big Crunch, or a Big Rip, or the Big Snap, or death bubbles; all of them are more than a billion years away. So we certainly have vastly more time than our ancestors thought, but it's still pretty hard to squeeze in an infinite number of compute cycles, even though there are some loopholes that just might be possible. But, you know, some people like to say that you should live as if you're going to die in five years or something, that that's sort of optimal. Maybe it's good that we should build our civilization as if it's all finite, to be on the safe side. Right, exactly. So you mentioned defining intelligence as the ability to solve complex goals. Where would you draw a line? How would you try to define human-level intelligence and superhuman-level intelligence? Is consciousness part of that definition? No, consciousness does not come into this definition. So I think of intelligence as a spectrum, and there are very many different kinds of goals you can have: a goal to be a good chess player, a good Go player, a good car driver, a good investor, a good poet, et cetera. So intelligence, by its very nature, isn't something you can measure as one number, some overall goodness. No — some people are better at this, some people are better at that. Right now we have machines that are much better than us at some very narrow tasks, like multiplying large numbers fast, memorizing large databases, playing chess, playing Go, and soon driving cars. But there's still no machine that can match a human child in general intelligence. But artificial general intelligence — AGI, the name of your course, of course — that is, by its very definition, the quest to build a machine that can do everything as well as we can. That was the old holy grail of AI from
back to its inception in the fifties and sixties. If that ever happens, of course, I think it's going to be the biggest transition in the history of life on Earth. But it doesn't necessarily have to wait — the big impact doesn't have to wait until machines are better than us at knitting. The really big change doesn't come exactly at the moment they're better than us at everything. There are two big changes: the first comes when they start becoming better than us at doing most of the jobs that we do, because that takes away much of the demand for human labor. And then the really whopping change comes when they become better than us at AI research. Because right now, the timescale of AI research is limited by the human research-and-development cycle — of years, typically. How long does it take from one release of some software, or iPhone, or whatever, to the next? But once Google can replace 40,000 engineers with 40,000 equivalent pieces of software, or whatever, then there's no reason that has to be years; it can be, in principle, much faster, and the timescale of future progress in AI — and also of all of science and technology — will be driven by machines, not humans. So it's this simple point which gives rise to this incredibly fun controversy about whether there can be an intelligence explosion — the so-called singularity, as Vernor Vinge called it. The idea was articulated by I. J. Good, obviously, way back, but you can see that Alan Turing and others thought about it even earlier. You asked me how exactly I would define human-level intelligence. The glib answer is to say something which is better than us at all cognitive tasks — better than any human at all cognitive tasks. But the really interesting bar, I think, goes a little bit lower than that, actually: it's when they're better than us at AI programming and general learning, so that they can, if they want to, get better than us at anything by just studying. So they're
better — 'better' is the keyword, and 'better' is relative to this kind of spectrum of the complexity of goals it's able to accomplish. Yeah, and that's certainly a very clear definition of human-level intelligence. It's almost like a sea that's rising: you can do more and more things. It's a graphic that you show — a really nice way to put it. So there are some peaks, and there's an ocean level elevating, and you solve more and more problems. But just to take a pause: we took a bunch of questions on a lot of social networks, and a bunch of people asked about a slightly different direction, on creativity and things like that, which perhaps aren't a peak. You know, human beings are flawed, and perhaps 'better' means being flawed in some way, having contradiction. So let me start easy, first of all. You have a lot of cool equations. Let me ask: what's your favorite equation, first of all? I know they're all like your children, but which one is it? The Schrödinger equation — it's the master key of quantum mechanics, of the micro-world. With this equation we can calculate everything to do with atoms, molecules, and all that. Yeah. So quantum mechanics is certainly a beautiful, mysterious formulation of our world. So I'd like to ask, just as an example — it perhaps doesn't have the same beauty as physics does, but in mathematics, abstractly: Andrew Wiles, who proved Fermat's Last Theorem. I just saw this recently, and it kind of caught my eye a little bit, because this was 358 years after it was conjectured. It's a very simple formulation; everybody tried to prove it, everybody failed. And then this guy comes along and eventually proves it — and then fails to prove it, and proves it again, in '94. And he described the moment when everything finally connected into place, the connecting piece of two conjectures. In an interview he
said: 'It was so indescribably beautiful. It was so simple and so elegant. I couldn't understand how I'd missed it, and I just stared at it in disbelief for twenty minutes. Then, during the day, I walked around the department, and I kept coming back to my desk, looking to see if it was still there. It was still there. I couldn't contain myself, I was so excited. It was the most important moment of my working life. Nothing I ever do again will mean as much.' So that particular moment — it kind of made me think of what it would take, and I think we have all been there at small levels. Let me ask: have you had a moment like that in your life, where you just had an idea and it's like, wow? Yes. I wouldn't mention myself in the same breath as Andrew Wiles, but I've certainly had a number of aha moments, when I realized something very cool about physics that just completely made my head explode. In fact, some of my favorite discoveries, I later realized, had been discovered earlier by someone who sometimes got quite famous for it — so it was too late for me to even publish it. But that doesn't diminish in any way the emotional experience you have when you realize it: like, wow. Yeah. So what would it take to have that moment — that wow that was yours in that moment? What do you think it takes for an intelligent system, an AGI system, an AI system, to have a moment like that? That's a tricky question, because there are actually two parts to it. One of them is: can it accomplish that proof? Can it prove that you can never write a^n + b^n = c^n for integers when n is bigger than 2? That is simply a question about intelligence: can you build machines that are that intelligent? And I think by the time we get a machine that can independently come up with that level of proof, we're probably quite close to AGI. The second question is a question about consciousness: how likely is it that such a machine
will actually have any experience at all, as opposed to just being like a zombie? And would we expect it to have some sort of emotional response to this, or anything at all akin to human emotion, where, when it accomplishes its machine goal, it views it as somehow very positive, and sublime, and deeply meaningful? I would certainly hope that if, in the future, we do create machines that are our peers, or even our descendants, that they do have this sort of sublime appreciation of life. In a way, my absolutely worst nightmare would be that at some point in the future, the distant future, maybe our cosmos is teeming with all this post-biological life, doing all this seemingly cool stuff, and maybe the last humans, by the time our species eventually fizzles out, will be like, 'Well, that's okay, because we're so proud of our descendants.' My worst nightmare is that we haven't solved the consciousness problem, and we haven't realized that these are all zombies — they're not aware of anything, any more than a tape recorder has any kind of experience. So the whole thing has just become a play for empty benches. That would be like the ultimate zombie apocalypse. I would much rather, in that case, have these beings which really appreciate how amazing it is. And in that picture, what would be the role of creativity? A few people asked about creativity, when you think about intelligence. I mean, certainly the story told at the beginning of your book involved, you know, creating movies and so on — making money. You can make a lot of money in our modern world with music and movies, so if you are an intelligent system, you may want to get good at that. But that's not necessarily what I mean by creativity. Is it important, on that picture of complex goals where the sea is rising, for there to be something creative? Or am I being
very human-centric in thinking creativity is somehow special relative to intelligence? My hunch is that we should think of creativity simply as an aspect of intelligence. And we have to be very careful with human vanity. We have this tendency, very often, as soon as machines can do something, to try to diminish it: 'Oh, but that's not like real intelligence, because it's not conscious,' or this or that or the other thing. Maybe if we ask ourselves to write down a definition of what we actually mean by being creative — what we mean by what Andrew Wiles did there, for example — don't we often mean that someone takes a very unexpected leap? It's not like taking 573 and multiplying it by 224 by just a step of straightforward, cookbook-like rules, right? Maybe you even make a connection between two things that people had never thought were connected — something very surprising. I think this is an aspect of intelligence, and actually one of the most important aspects of it. Maybe the reason we humans tend to be better at it than traditional computers is that it's something that comes more naturally if you're a neural network than if you're a traditional logic-gate-based computing machine. We physically have all these connections, and you activate here, activate here, activate here — ping! My hunch is that if we ever build a machine where you could just give it the task — 'Hey, I just realized I want to travel around the world instead this month; can you teach my AGI course for me?' — and it's like, 'Okay, I'll do it,' and it does everything that you would have done, and improvises — that would, in my mind, involve a lot of creativity. Yeah, that's such a beautiful way to put it. I think we do try to grasp at the definition of intelligence as everything we don't understand how
to build. So we as humans try to find things that we have that machines don't have, and maybe creativity is just one of the words we use to describe that. That's a really interesting way to put it. I don't think we need to be that defensive. I don't think anything good comes out of saying we're somehow special. Contrariwise, there are many examples in history where trying to pretend that we're somehow superior to all other intelligent beings has led to pretty bad results. Nazi Germany — they said that they were somehow superior to other people. Today, we still do a lot of cruelty to animals by saying that we're so superior somehow, that they can't feel pain. Slavery was justified by the same kind of weak arguments. And I don't think that, if we actually go ahead and build artificial general intelligence that can do things better than us, we should try to found our self-worth on some sort of bogus claims of superiority in terms of our intelligence. I think we should instead find our calling, and the meaning of life, in the experiences that we have. I can have very meaningful experiences even if there are other people who are smarter than me. When I go to a faculty meeting here, and I'm talking about something, and I suddenly realize, 'Oh boy, he has a Nobel Prize, he has a Nobel Prize, he has a Nobel Prize — I don't have one' — does that make me enjoy life any less, or enjoy talking to those people less? Of course not. Contrariwise, I feel very honored and privileged to get to interact with other very intelligent beings that are better than me at a lot of stuff. So I don't think there's any reason why we can't have the same approach with intelligent machines. That's really interesting — people don't often think about that. They think about machines that are more intelligent, and you naturally think that that's
not going to be a beneficial type of intelligence. You don't realize it could be, you know, like peers with Nobel Prizes that would be just fun to talk with, and they might be clever about certain topics, and you can have fun having a few drinks with them. Well, another example we can all relate to, of why it doesn't have to be a terrible thing to be in the presence of people who are even smarter than us all around: when you and I were both two years old, I mean, our parents were much more intelligent than us, right? And that worked out okay, because their goals were aligned with our goals. And that, I think, is really the number one issue we have to solve: value alignment — the value alignment problem. Exactly. Because people who see too many Hollywood movies with lousy science-fiction plot lines worry about the wrong thing, right? They worry about some machine suddenly turning evil. It's not malice that's the issue, it's competence. By definition, intelligence makes you very competent. If you have a more intelligent Go-playing
computer playing against a less intelligent one, and we define intelligence as the ability to accomplish Go-winning, it's going to be the more intelligent one that wins. And if you have a human, and then you have an AGI that's more intelligent in all ways, and they have different goals — guess who's going to get their way? So I was just reading about this particular rhinoceros species that was driven extinct just a few years ago — a bummer. I was looking at this cute picture of a mommy rhinoceros with its child. And why did we humans drive it to extinction? It wasn't because we were evil rhino-haters, as a whole; it was just because our goals weren't aligned with those of the rhinoceros, and it didn't work out so well for the rhinoceros, because we were more intelligent. So I think it's just so important that, if we ever do build AGI, before we do, we have to make sure that it learns to understand our goals, that it adopts our goals, and that it retains those goals. So the cool, interesting problem there is us as human beings trying to formulate our values. You could think of the United States Constitution as a way that people sat down — at the time, a bunch of white men — which is a good example, we should say: they formulated the goals for this country, and a lot of people agree that those goals actually held up pretty well. It's an interesting formulation of values, and it failed miserably in other ways. So for the value alignment problem, and a solution to it, we have to be able to put on paper, or in a program, human values. How difficult do you think that is? Very. But it's so important; we really have to give it our best. And it's difficult for two separate reasons. There's the technical value-alignment problem, of figuring out just how to make machines understand our goals, adopt them, and retain them. And then there's a separate part of it, the philosophical part: whose values, anyway? And since it's not like we
have any great consensus on this planet on values, what mechanism should we create to aggregate them and decide, okay, what's a good compromise? That second discussion can't just be left to tech nerds like myself, right? That's right. And if we refuse to talk about it, and then AGI gets built — who's going to be actually making the decision about whose values? It's going to be a bunch of dudes in some tech company. And are they necessarily so representative of all of humankind that we want to just entrust it to them? Are they even uniquely qualified to speak to future human happiness, just because they're good at programming AI? I'd much rather have this be a really inclusive conversation. But do you think it's possible? So you paint a beautiful vision that includes diversity — cultural diversity, and various perspectives on discussing rights, freedoms, human dignity. But how hard is it to come to that consensus? It's certainly a really important thing that we should all try to do, but do you think it's feasible? I think there's no better way to guarantee failure than to refuse to talk about it, or refuse to try. And I also think it's a really bad strategy to say, okay, let's first have a discussion for a long time, and then, once we've reached complete consensus, we'll try to load it into the machine. No — we shouldn't let perfect be the enemy of good. Instead, we should start with the kindergarten ethics that pretty much everybody agrees on, and put that into our machines now. We're not even doing that. Look — anyone who builds a passenger aircraft wants it to never, under any circumstances, fly into a building or a mountain, right? Yet the September 11 hijackers were able to do that. And even more embarrassingly, Andreas Lubitz, this depressed Germanwings pilot — when he flew his passenger jet into the Alps, killing over a hundred people, he just told the autopilot to do it. He told the freaking computer to change
the altitude — 200 meters — and even though it had the GPS maps and everything, the computer was like, okay. So we should take those very basic values, where the problem is not that we don't agree — the problem is just that we've been too lazy to try to put them into our machines — and make sure that, from now on, airplanes, which all have computers in them, will just refuse to do something like that: go into safe mode, maybe lock the cockpit door, and land at the nearest airport. And there's so much other technology in our world as well now where it's really becoming quite timely to put in some sort of very basic values like this. Even in cars: we've had enough vehicle terrorism attacks by now, where people drove different trucks and vans into pedestrians, that it's not at all a crazy idea to just have that hardwired into the car. Yeah, there are always going to be people who, for some reason, want to harm others, but most of those people don't have the technical expertise to figure out how to work around something like that. So if the car just won't do it, it helps. Let's start there. That's a great point — so, not chasing perfect. There are a lot of things that most of the world agrees on. Yeah — let's start there. And once we start there, we'll also get into the habit of having these kinds of conversations about, okay, what else should we put in here? And as we have these discussions, it can be a gradual process. Great. But that also means describing these things — describing them to a machine. So, one thing: I had a few conversations with Stephen Wolfram. I'm not sure if you're familiar with Stephen. Oh yeah, I know him quite well. So he's worked on a bunch of things, but, you know, cellular automata — these simple computable things, these computation systems. And he kind of mentioned that we probably already have, within these systems, already
something that's AGI — meaning we just don't know it, because we can't talk to it. So, if you'll give me this chance to try to form a question out of this: I think it's an interesting idea to think that we can have intelligent systems, but we don't know how to describe something to them, and they can't communicate with us. I know you're doing a little bit of work in explainable AI, trying to get AI to explain itself. So what are your thoughts on natural language processing, or some other kind of communication? How does the AI explain something to us? How do we explain something to it, to machines? Or do you think of it differently? So there are two separate parts to your question. One of them has to do with communication, which is super interesting, and I'll get to that in a sec. The other is whether we already have AGI but we just haven't noticed it. There I beg to differ. I don't think there's anything in any cellular automaton, or anything on the Internet itself, or whatever, that has artificial general intelligence, in that it can really do exactly everything we humans can do, better. I think the day that happens — when that happens — we will very soon notice. We'll probably notice even before, because it will arrive in a very, very big way. But for the second part, though — Sorry, so — because you have this beautiful way of formulating consciousness as information processing, and you can think of intelligence as information processing, and you can think of the entire universe as these particles and these systems roaming around that have this information-processing power — you don't think there is something out there, with the power to process information in the way that we human beings do, that needs to be sort of connected to? It seems a little bit philosophical, perhaps, but there's something compelling to the idea that the power is already there, and the focus should then be more on being able to communicate with
it. Well, I agree that, in a certain sense, the hardware processing power is already out there, because our universe itself can be thought of as being a computer already, right? It's constantly computing how to evolve the water waves in the Charles River, and how to move the air molecules around. Seth Lloyd, my colleague here, has pointed out that you can even, in a very rigorous way, think of our entire universe as being a quantum computer. It's pretty clear that our universe supports this amazing processing power, because within this physics computer that we live in, we can even build actual laptops and stuff. So clearly the power is there; it's just that most of the compute power that nature has, it's, in my opinion, kind of wasting on boring stuff, like simulating yet another ocean wave somewhere where no one is even looking. So, in a sense, what life does — what we are doing when we build computers — is channeling all this compute that nature is doing anyway into doing things that are more interesting than just yet another ocean wave: let's do something cool here. So the raw hardware power is there, for sure. And even just computing what's going to happen for the next five seconds in this water bottle takes a ridiculous amount of compute if you do it on a human-built computer. This water bottle just did it. But that does not mean this water bottle has AGI, because AGI means it should also be able to, like, have written my book, done this interview — and I don't think it's just a communication problem; as far as we know, I don't think it can do it. Although Buddhists say, when they watch the water, that there is some beauty, some depth there, that they can communicate with. Communication is also very important, though, because — I mean, look, part of my job is being a teacher, and I know some very intelligent professors, even, who just have a bit of a hard time communicating.
They come up with all these brilliant ideas, but to communicate with somebody else, you have to also be able to simulate their mind — to build, well enough, an understanding, a model of their mind — so that you can say things that they will understand. And that's quite difficult. And that's why today it's so frustrating if you have a computer that makes some cancer diagnosis, and you ask it, 'Well, why are you saying I should have this surgery?' and it can only reply, 'I was trained on five terabytes of data, and this is my diagnosis — boop boop, beep beep.' That doesn't really instill a lot of confidence, right? So I think we have a lot of work to do on communication there. So — I think you're doing a little bit of work in explainable AI — what do you think are the most promising avenues? Is it mostly about sort of the Alexa problem of natural language processing, of being able to actually use human-interpretable methods of communication — being able to talk to a system and have it talk back to you — or is there some more fundamental problem to be solved? I think it's all of the above. Natural language processing is obviously important, but there are also more nerdy, fundamental problems. Like, if you take — you play chess? Mmm, I have to. Говорите по-русски? (Do you speak Russian?) Да, я говорю по-русски. (Yes, I speak Russian.) When did you learn Russian? I taught myself. Wow — how many languages do you know? That's really impressive. I've had some contact with a few. But my point was: if you play chess, have you looked at the AlphaZero games? The actual games? No. Check out some of them — some of them are just mind-blowing, really beautiful. And if you ask, 'How did it do that?' and you talk to Demis Hassabis — I know him and others from DeepMind — all they'll ultimately be able to give you are big tables of numbers, the matrices that define the neural network. And you
can stare at these numbers till your face turns blue, and you're not going to understand much about why it made that move. Even if you have natural language processing that can tell you, in human language, 'oh, 5.7, 2.8' — it's still not going to really help. So I think there's a whole spectrum of fun challenges there, involved in taking a computation that does intelligent things and transforming it into something equally good, equally intelligent, but more understandable. And I think that's really valuable, because as we put machines in charge of ever more infrastructure in our world — the power grid, trading on the stock market, weapons systems, and so on — it's absolutely crucial that we can trust these AIs to do what we want. And trust really comes from understanding, in a very fundamental way. That's why I'm working on this: because I think if we're going to have some hope of ensuring that machines have adopted our goals, and that they're going to retain them, that kind of trust needs to be based on things you can actually understand — preferably even prove theorems about. Even with a self-driving car, right? If someone just tells you it's been trained on tons of data and it never crashed, that's less reassuring than if someone actually has a proof — maybe it's a computer-verified proof, but still — that says that under no circumstances is this car just going to swerve into oncoming traffic. And that kind of information helps to build trust, and to build the alignment of goals — at least the awareness that your goals, your values, are aligned. And I think, even in the very short term, if you look at, you know, today's absolutely pathetic state of cybersecurity — the three billion Yahoo accounts that were hacked, almost every American's credit card, and so on — why is this happening? It's ultimately happening because we have
software that nobody fully understood how it worked. That's why the bugs hadn't been found. I think AI can be used very effectively for offense, for hacking, but it can also be used for defense, hopefully automating verifiability and creating systems that are built in different ways, so you can actually prove things about them. And that's important.

So speaking of software that nobody understands how it works: a bunch of people asked about your paper, "Why does deep and cheap learning work so well?" What are your thoughts on deep learning, these kinds of simplified models of our own brains, which have been able to do some successful perception work, pattern recognition work, and now with AlphaZero and so on, some clever things? What are your thoughts about the promise and limitations of this approach?

Great question. I think there are a number of very important insights, very important lessons we can already draw from these kinds of successes. One of them is that when you look at the human brain, you see it's very complicated, 10 to the 11 neurons, and there are all these different kinds of neurons, and yada yada, and there's been a long debate about whether the fact that we have dozens of different kinds is actually necessary for intelligence. We can now, I think, quite convincingly answer that question: no. It's enough to have just one kind. If you look under the hood of AlphaZero, there's only one kind of neuron, and it's a ridiculously simple mathematical thing. It's just like in physics: if you have a gas with waves in it, it's not the detailed nature of the molecules that matters, it's the collective behavior. Similarly, it's this higher-level structure of the network that matters, not that you have twenty kinds of neurons. I think our brain is such a complicated mess because it wasn't evolved just to be intelligent; it was evolved to also be self-assembling and self-repairing, and evolutionarily attainable, and so on. So my hunch is that we're going to understand how to build AGI before we fully understand how our brains work, just like we understood how to build flying machines long before we were able to build a mechanical bird.

You've given that exact example, of mechanical birds and airplanes.

Yeah, airplanes do a pretty good job of flying without really mimicking bird flight. And even now, a hundred years later, did you see the TED talk with this mechanical bird? It's amazing. But even after that, we still don't fly in mechanical birds, because the thing we came up with is simpler, and it's better for our purposes. I think it might be the same there. That's one lesson.

Another lesson is what our paper was about. First, I, as a physicist, thought it was fascinating how there is a very close mathematical relationship between artificial neural networks and a lot of things that we've studied in physics, which go by nerdy names like the renormalization group equation and Hamiltonians and yada yada yada. And when you look a little more closely at this, at first you think, well, there's something crazy here that doesn't make sense. We know that you can build a super simple neural network that can tell apart cat pictures and dog pictures very well now. But if you think about it a little bit, you convince yourself it must be impossible, because if I have one megapixel, even if each pixel is just black or white, there are 2 to the power one million possible images, which is way more than there are atoms in our universe. And for each one of those I have to assign a number, which is the probability that it's a dog. So an arbitrary function of images is a list of more numbers than there are atoms in our universe.
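That counting argument is easy to check numerically. Here is a minimal sketch in Python; the figure of roughly 10^80 atoms in the observable universe used below is the usual rough estimate, not something computed here:

```python
import math

# Distinct 1-megapixel black-and-white images: 2 to the power 10^6.
pixels = 10**6
log10_images = pixels * math.log10(2)  # base-10 exponent of 2**1_000_000
print(f"about 10^{log10_images:.0f} possible images")  # about 10^301030

# A common rough estimate for atoms in the observable universe: ~10^80.
log10_atoms = 80
print(log10_images > log10_atoms)  # True: the image count dwarfs the atom count
```

A lookup table with one probability per image is therefore hopeless, which is exactly why it matters that the physically relevant corner of this function space turns out to be so much smaller.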
Clearly I can't store that under the hood of my GPU or my computer, yet somehow it works. So what does that mean? Well, it means that out of all of the problems you could try to solve with a neural network, almost all of them are impossible to solve with a reasonably sized one. But what we showed in our paper was that the fraction of all the problems you could possibly pose that we actually care about, given the laws of physics, is also an infinitesimally tiny part. And amazingly, they're basically the same part.

It's almost as if our world was created for... I mean, they kind of come together.

Yeah, you could say maybe the world was created for us, but I have a more modest interpretation, which is that evolution endowed us with neural networks precisely for that reason: this particular architecture, as opposed to the one in your laptop, is very well adapted to solving the kinds of problems that nature kept presenting our ancestors with. It makes sense: why do we have a brain in the first place? To be able to make predictions about the future and so on. So if we had a sucky system that could never solve those problems, it wouldn't have evolved.

This is, I think, a very beautiful fact.

Yeah. We also realized that there has been earlier work on why deeper networks are good, but we were able to show an additional cool fact, which is that even incredibly simple problems benefit. Suppose I give you a thousand numbers and ask you to multiply them together. You can write a few lines of code, boom, done, trivial. If you try to do that with a neural network that has only one single hidden layer, you can do it, but you're going to need 2 to the power one thousand neurons to multiply a thousand numbers, which is again more neurons than
there are atoms in our universe. But if you allow yourself to make it a deep network with many layers, you only need about four thousand neurons. It's perfectly feasible.

That's really interesting. So, on another architecture type: you mentioned Schrödinger's equation. What are your thoughts about quantum computing and the role of this kind of computational unit in creating an intelligent system?

In some Hollywood movies, which I will not mention by name because I don't want to spoil them, the AI is built with a quantum computer, because the word quantum sounds cool. First of all, I think we don't need quantum computers to build AGI. I suspect your brain is not a quantum computer in any profound sense. I even wrote a paper about that many years ago, where I calculated the so-called decoherence time, how long it takes until the quantum computerness of what your neurons are doing gets erased by just random noise from the environment. And it's about 10 to the minus 21 seconds. So as cool as it would be to have a quantum computer in my head, I don't think we have one.

On the other hand, there are very cool things you could do with quantum computers, or I think we'll be able to do soon when we get bigger ones, that might actually help machine learning do even better than the brain. For example, and this is a moonshot: learning is very much the same thing as search. If you're trying to train a neural network to get really good at something, you have some loss function, you have a bunch of knobs you can turn, represented by a bunch of numbers, and you're trying to tweak them so that it becomes as good as possible at this thing. So if you think of a landscape with some valley, where each dimension of the landscape corresponds to some number you can change, you're trying to find the minimum of that landscape.
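The get-stuck-in-a-valley problem is easy to see in a toy sketch: plain gradient descent on a one-dimensional double-well loss settles into whichever valley it starts in, with no classical analogue of tunneling to escape a shallow one. The particular loss function below is made up purely for illustration:

```python
def loss(x):
    # A double-well "landscape": two valleys, the left one deeper.
    return (x * x - 1.0) ** 2 + 0.3 * x

def grad(x):
    # Derivative of the loss above.
    return 4.0 * x * (x * x - 1.0) + 0.3

def descend(x, lr=0.01, steps=2000):
    # Plain gradient descent: always rolls downhill, never tunnels.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x_right = descend(1.5)   # starts above the shallow valley, stays there
x_left = descend(-1.5)   # starts above the deep valley, finds it
print(loss(x_right) > loss(x_left))  # True: the right-hand start ends up worse
```

Started on the right, descent converges near x ≈ 0.96 in the shallow valley; started on the left, it finds the deeper valley near x ≈ -1.04. Random restarts, momentum, or annealing are the classical workarounds for exactly this, and tunneling would be the quantum one.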
And it's well known that for a very high-dimensional, complicated landscape, it's super hard to find the minimum. Quantum mechanics is amazingly good at this. If I want to know the lowest-energy state this water can possibly have, that's incredibly hard to compute, but nature will happily figure it out for you if you just cool it down, make it very, very cold. If you put a ball somewhere, it'll roll down to its minimum, and this happens, metaphorically, in the energy landscape too. And quantum mechanics even uses some clever tricks which today's machine learning systems don't: if you're trying to find the minimum and you get stuck in a little local minimum here, in quantum mechanics you can actually tunnel through the barrier and get unstuck again.

That's really interesting.

Yeah, so maybe, for example, we'll one day use quantum computers to help train neural networks better.

That's really interesting. Okay, so as a component of the learning process, for example. Let me ask, sort of wrapping up here a little bit, and return to the questions of our human nature and love, as I mentioned. You mentioned a helper robot, but you can also think of personal robots. Do you think the way we human beings fall in love and get connected to each other is possible to achieve in an AI system, in a human-level AI intelligence system? Do you think we would ever see that kind of connection? Or, in all this discussion about solving complex goals, is this kind of human social connection one of the goals, among the peaks and valleys, with the rising sea levels, that we'll be able to achieve? Or is that something that's ultimately, or at least in the short term relative to other goals, not achievable?

I think it's all possible. There's a very wide range of guesses, as you know, among AI researchers about when we're going to get A
GI. Some people, you know, like your friend Rodney Brooks, say it's going to be hundreds of years at least, and then there are many others who think it's going to happen much sooner. In recent polls, maybe half or so of AI researchers think we're going to get AGI within decades. So if that happens, then of course I think these things are all possible. But in terms of whether it will happen: I think we shouldn't spend so much time asking what we think will happen in the future, as if we were just some sort of pathetic, passive bystanders waiting for the future to happen to us. Hey, we're the ones creating this future, right? So we should be proactive about it, ask ourselves what sort of future we would like to have, and then try to make it like that. What would I prefer? Some sort of incredibly boring, zombie-like future where there's all this mechanical stuff happening and there's no passion, no emotion, no experience, maybe even no consciousness? I would of course much rather have a future where all the things we value the most about humanity, our subjective experience, passion, inspiration, love, do exist. I think ultimately it's not our universe giving meaning to us, it's us giving meaning to our universe. And if we build more advanced intelligence, let's make sure we build it in such a way that meaning is part of it.

A lot of people that seriously study this problem and think of it from different angles have trouble, because in the majority of cases, when they think through what might happen, the scenarios are the ones that are not beneficial to humanity. So what are your thoughts? I really don't like people to be terrified. What's a way for people to think about this such that we can solve it and make it better?

Yeah, I don't think panicking is going to help in any way. It's not going to increase the
chances of things going well either. Even if you are in a situation where there is a real threat, does it help if everybody just freaks out? No, of course not. There are of course ways in which things can go horribly wrong. But first of all, it's important, when we think about the problems and risks, to also remember how huge the upsides can be if we get it right. Everything we love about society and civilization is a product of intelligence. So if we can amplify our intelligence with machine intelligence, and no longer lose our loved ones to what we're told is an incurable disease, and things like this, of course we should aspire to that. That can be a motivator: reminding ourselves that the reason we try to solve problems is not just that we're trying to avoid gloom, but that we're trying to do something great. But then, in terms of the risks, the important question is to ask: what can we do today that will actually help make the outcome good? And dismissing the risks is not one of them. I find it quite funny, often, when I'm on discussion panels about these things, how the people who work for companies will always say, oh, nothing to worry about, nothing to worry about, and it's only academics who sometimes express concerns. That's not surprising at all if you think about it. Upton Sinclair quipped that it's hard to make a man believe in something when his income depends on not believing in it. And frankly, we know a lot of these people in companies, and they're just as concerned as anyone else. But if you're the CEO of a company, that's not something you want to go on record saying when you have silly journalists who are going to put a picture of a Terminator robot next to your quote. So the issues are real, and the way I think about the issue is basically that
the real choice we have is, first of all: are we going to dismiss the risks and say, well, let's just go ahead and build machines that can do everything we can do, better and cheaper; let's just make ourselves obsolete as fast as possible; what could possibly go wrong? That's one attitude. The opposite attitude, which I prefer, is to say: there's incredible potential here, so let's think about what kind of future we're really, really excited about, what are the shared goals we can really aspire towards, and then let's think really hard about how we can actually get there. So start with the goals: don't start by thinking about the risks, start by thinking about the goals.

The goals, yeah.

And once you do that, then you can think about the obstacles you want to avoid. I often get students coming in right here into my office for career advice, and I always ask them this very question: where do you want to be in the future? If all a student can say is, oh, maybe I'll get cancer, maybe I'll get run over by a truck...

Obstacles instead of the vision.

Yeah, she's just going to end up a hypochondriac, paranoid. Whereas if she comes in with fire in her eyes, and says, I want to be there, then we can talk about the obstacles and see how we can circumvent them. That's, I think, a much healthier attitude.

That's really well put.

And I feel it's very challenging to come up with a vision for the future which we are unequivocally excited about. I'm not just talking in vague terms, like, yeah, let's cure cancer, fine. I'm talking about: what kind of society do we want to create? What do we want it to mean to be human in the age of AI, in the age of AGI? If we can have this broad, inclusive conversation, and gradually start converging towards some future, some direction at least, that we want to steer towards, then we'll be much more motivated to constructively take on the obstacles. And I think
if I try to wrap this up in a more succinct way: I think we can all agree already now that we should aspire to build AGI that doesn't overpower us, but that empowers us.

And think of the many various ways it can do that. From my side of the world, autonomous vehicles: I'm personally actually from the camp that believes human-level intelligence is required to achieve something like vehicles that we would actually enjoy using and being part of. So that's one example, and certainly there are a lot of other types of robots, in medicine and so on. So focusing on those, and then coming up with the obstacles, the ways that things can go wrong, and solving those one at a time.

And just because you can build an autonomous vehicle, even if you could build one that would drive just fine without you, maybe there are some things in life that we would actually want to do ourselves.

That's right. For example, if you think of our society as a whole, there are some things that we find very meaningful to do, and that doesn't mean we have to stop doing them just because machines can do them better. I'm not going to stop playing tennis just the day someone builds a tennis robot that can beat me. People are still playing chess, and even Go. And in the very near term, some people are advocating basic income to replace jobs. But if the government is going to be willing to just hand out cash to people for doing nothing, then one should also seriously consider whether the government should also just hire a lot more teachers and nurses, the kinds of jobs which people often find great fulfillment in doing. I get very tired of hearing politicians saying, oh, we can't afford hiring more teachers, but we're going to maybe have basic income. If we can have more serious research and thought into what gives meaning to our lives... the jobs give so much more than income, right? And
then think about, in the future, what are the roles we want people to have, feeling empowered by machines.

And I think, sort of... I come from Russia, from the Soviet Union, and I think for a lot of people in the 20th century, going to the moon, going to space, was an inspiring thing. I feel like the universe of the mind, so AI, understanding and creating intelligence, is that for the 21st century. So it's really surprising to me, and I've heard you mention this, both on the research funding side, that it's not funded as greatly as it could be, but most importantly on the politicians' side, that it's not part of the public discourse, except in the killer robots, Terminator kind of view. People are not yet, I think, excited by the possible positive future that we can build together.

They certainly should be. Politicians usually just focus on the next election cycle, right? The single most important thing I feel we humans have learned in the entire history of science is that we are the masters of underestimation. We underestimated the size of our cosmos again and again, realizing that everything we thought existed was just a small part of something grander: planet, solar system, galaxy, clusters of galaxies, universe. And we now know that the future has just so much more potential than our ancestors could ever have dreamt of. Imagine if all of Earth were completely devoid of life except for Cambridge, Massachusetts. Wouldn't it be kind of lame if all we ever aspired to was to stay in Cambridge, Massachusetts forever, and then go extinct in one week, even though Earth was going to continue on for much longer? That's the sort of attitude I think we have now, on the cosmic scale. Life can flourish on Earth, not for four years, but for billions of years. I can even tell you how to move it out of harm's way when our sun gets too hot. And then we have so much more
resources out here in space. Today, maybe there are a lot of other planets with bacteria, or cow-like life, on them, but most of this opportunity seems, as far as we can tell, to be largely dead, like the Sahara Desert. Yet we have the opportunity to help life flourish around the cosmos for billions of years. So let's quit squabbling about whether some little border should be drawn one mile to the left or to the right, and realize: hey, we can do such incredible things.

Yeah, and that's, I think, why it's really exciting that you and others are connected with some of the work Elon Musk is doing, because he's literally going out into that space, really exploring our universe. And it's wonderful.

That is exactly why Elon Musk is so misunderstood. People misconstrue him as some kind of pessimistic doomsayer. The reason he cares so much about AI safety is that he, more than almost anyone else, appreciates these amazing opportunities we'll squander if we wipe ourselves out here on Earth. We're not just going to wipe out the next generation, but all generations, and this incredible opportunity that's out there, and that would really be a waste. And for people who think it would be better to do without technology: let me just mention that if we don't improve our technology, the question isn't whether humanity is going to go extinct. The question is just whether we're going to get taken out by the next big asteroid, or the next supervolcano, or something else dumb that we could easily prevent with more tech. If we want life to flourish throughout the cosmos, AI is the key to it, as I describe in a lot of detail in my book. Even many of the most inspired sci-fi writers, I feel, have totally underestimated the opportunities for space travel, especially to other galaxies, because they weren't thinking about the possibility of AGI, which just makes it so much easier.

Right, yeah. So that goes to your view of AGI that enables our progress, that enables a better
life.

So that's a beautiful way to put it, and something to strive for. So, Max, thank you so much. Thank you for your time today. It's been awesome.

Thank you so much. Thanks.