GALA 2022 Objectivism & Technological Trends Panel

February 20, 2023

Original Video: https://www.youtube.com/watch?v=4zswRE-iM6E

The accelerating advance of technological breakthroughs in automation, robotics, and artificial intelligence has prompted both wildly techno-optimistic visions of endless abundance and dark techno-pessimistic fears of unemployment and eroding privacy. How can we best objectively evaluate the opportunities and challenges presented by these transformative technologies? Atlas Society Senior Scholar Stephen Hicks, Ph.D., and Senior Fellow Robert Tracinski lead the way in this 45-minute panel discussion with our CEO Jennifer Grossman during our 6th Annual Gala. We invite you to listen HERE or read the transcript below.

Line-up of Speakers (panelists): AF-Ana Freund; JAG-Jennifer Anju Grossman; SH-Stephen Hicks; RT-Robert Tracinski

AF:    Up next, we have Objectivism and Technological Trends. Today's breakthrough technologies, from artificial intelligence to robotics, blockchain to virtual reality, are all converging to drive exponential change, sparking both hope and fear. Our scholars today will explore which reaction is most rational, how to prepare today, and how to thrive in the world of tomorrow. JAG, our CEO, moderates this panel and is joined by faculty members Stephen Hicks, professor of philosophy at Rockford University, author of six books, and our country's peerless expert on postmodernism, and Robert Tracinski of the famous Tracinski Letter and author of So Who Is John Galt, Anyway? Welcome to the stage.

JAG:    Alright, so, we got some audience feedback: they would like to have a bit of a reason to hope; they want to be motivated. And I think that this panel is built for that. You know, we are going to be hearing also from Peter Diamandis tonight, a man who's written not just Abundance: The Future Is Better Than You Think, but also The Future Is Faster Than You Think. And, of course, Michael Saylor, who with his book The Mobile Wave anticipated many of these changes, and who is advocating and evangelizing a technology that has the potential to return rationality to our financial system and return freedom and property rights to the human race. So, Stephen, where do you see us on the pessimism-optimism scale with regard to technology?

SH:    Let me warm up to that judgment call, which I want to come back to. When I was in university and reading Rand for the first time, probably the outstanding thing that appealed to me in the novels was that theme of creative innovation. The character of Howard Roark blew me away, you know, with dramatic changes in engineering combined with his creative vision for a new form of architecture. And in Atlas Shrugged: Rearden Metal, the new smelters that d'Anconia is working on, John Galt's motor. The possibilities of the new technologies were astounding to me. Now, of course, in both novels there's a surrounding culture with many dark challenges: second-handers, cronies of various sorts, terrible politicians, and so forth. We know all of those types. But nonetheless, the overarching theme that spoke to me as a young person was to channel your inner Howard Roark or Dagny Taggart and be the most creative, innovative person you can be in your life, for your own benefit and, more broadly, socially.

SH:    Now, next year it will be almost 80 years, I think, since The Fountainhead was published, and over half a century, yes, half a century, since Atlas was published. And that kind of Enlightenment, American-dream promise that Rand was writing about is still largely with us. And just think, in my lifetime, our lifetime, how many more technological changes have come along, despite all of the darkness and the adversaries that people had to fight along the way. So what is animating us on this panel is to try to anticipate what some of those on-the-verge technological innovations are. We can list some of them. I've got a list here, but we can also think about which ones are likely to be transformative. So, just in the space of 250, almost 300 years: steam engines and the Industrial Revolution, right?

SH:    We have electricity, revolutions in communication, revolutions with automobiles and flying machines, and so forth. Then transistors, personal computers, the internet. It's been just astounding. And I still want my flying car to come along pretty, pretty soon. But here's a list just for our generation. Digital currencies, right? Artificial intelligence in various forms: machine learning, discrete intelligences, and then work on more general intelligences. Pharmaceuticals that are customized to each person's DNA, eliminating the side effects of more generic types of pharmaceuticals. Biotech: stem cell-grown replacement parts, combining with robotics, of course. Robotics across the board, but especially robotics with respect to surgeries. Nanotechnologies, with multiple uses, also with respect to biotech. Then surveillance, which of course can be positive or negative, but which can enhance our personal security, giving us tools to be aware of our environments and the potential threats and opportunities that are out there.

SH:    Energy developments: biofuels, developments with respect to fusion, whether fission is going to make a comeback in better form. Transportation technologies: we're on the cusp, actually; there already are self-driving vehicles all over the place on a fairly regular basis. Will that be transformative? Human enhancements, genetic and digital, things that work ever more intimately with us. And then second-generation social media, right? Social media has already transformed our culture, and that's just in 25 or 30 years, but from what I can tell it's still an infant technology. Where will it be in another 25 years? And combining with various forms of virtual reality, which of course have the power to transform the personal entertainment space, how we do surgeries, and so on. Now, that's a list.

SH:    And I'm just a philosopher; I read about these things and I get excited and pessimistic in various ways, right, about all of them. But I do have some questions, three questions that will hopefully inform our panel here. One is that I'm now old enough, and well-read enough in the history of this, to notice that every time a new technology comes along, there's an immediate polarization of opinion. There's one set of people who say: this is amazing, this is so cool, imagine all of the things that we can possibly do with it. And they're largely oblivious to the downside, at least in their public pronouncements. But immediately also, there are people who say: this is terrible, this is awful, this spells doom for humankind. And they're largely oblivious to the upside. And of course, there's a large number of people who have trained themselves to say, well, we need to do cost/benefit and risk analysis, and so on.

SH:    But that's a more sophisticated position. The question, though, is what is behind the immediate optimism or the immediate pessimism, and why so many people fall into the pie-in-the-sky, techno-optimist camp or the down-in-the-dumps, techno-pessimist camp. Is this something universal in the human condition? Is it something we can learn from, so that we can have healthier, more efficient discussions about new technologies when they come along, not just outright doomsaying and so on? So that's one. The second question I have is about the crystal ball-gazing that I've noticed people in my field engage in. I'm a philosopher by training, and our job is to provide a certain kind of perspective and, hopefully, wisdom, particularly about the value issues that are involved in new technologies, and so on.

SH:    But, typically, people trained in the humanities and, more broadly, in the social sciences and policy are not very up to speed on technology. And they're very reactive, right? A new technology comes along, and it takes them 10 or 15 years to get up to speed on just what this thing is before they're in a position to provide any sort of perspective on it. So my question for us in Objectivism is: can we, in the next 10 years or so, predict which of these technologies are likely to be transformative? I think all of them will make incremental improvements, but are there some that are likely to be transformative, such that we can identify them and start to get ourselves up to speed, so that when the public policy discussions, the public value discussions, emerge, we're in a position to have something to say about them? And then, closely related to that question, since this is an Objectivist group, or an Objectivist-friendly group: what is the uniquely Objectivist value added to that discussion going to be? So with that, I wanted to ask Rob for his thoughts. Do you want to take the crystal ball-gazing question first?

RT:    I like the crystal ball-gazing question, because some of my background in this, and part of the reason I'm on this panel, is that around 2016, to save my sanity during the 2016 election, I pitched and started up a website called Real Clear Future, which was the idea of looking at emerging technology: what's coming in 5 years, 10 years, 15 years, et cetera. And so for a while I was working on that. And the great thing about writing about technology is it gives you a much greater sense of optimism about the future. Because when you're writing about politics, it's always, you know, everything is going bad, and how come people never learn? And when you're writing about technology, it's: isn't this an amazing new thing they've done, and here's another amazing new thing, and here's this other thing that's in the pipeline that's probably coming along.

RT:    But at the same time, you also see the problem with crystal ball-gazing, which is that it's inherently unpredictable, because it's something that is new, something that hasn't been done yet. So at the time, you know, circa 2015, 2016, they were saying, oh well, we're going to have self-driving cars by 2018. Well, it's 2022. They have some little demonstration projects here and there, but it hasn't taken over. And that's on the optimistic side. People get it wrong. And on the pessimistic side, there was a guy about that time who wrote an article saying, you know, between two and 10 years from now, all the truck drivers in the country will be unemployed because we'll have self-driving trucks. Well, that also hasn't happened, or gotten close to happening.

RT:    So it's inherently unpredictable, because you're talking about technology that hasn't been created yet, and sometimes something that's not on your radar screen becomes hugely important and takes over, and sometimes something you think is going to take over next year never happens. I was having a discussion last night with someone who works on fusion energy, and, you know, there's the old line about fusion: it's 15 years in the future and always will be (laugh). And there's a lot of talk now about how it's finally happening, and maybe it is, but it's such a hugely difficult and complex thing, involving so many different specialties, that it is very hard to predict exactly when it's going to happen. So, I think, gazing into the crystal ball from a philosophical perspective, the unpredictability comes from the fact that innovation and progress come from the efforts of many individuals working together; it's not something that's centrally planned from the top down.

RT:    Now, talking about fusion, one of my favorite examples of that is when Bill Clinton was elected in 1992, he brought into his administration a guy named Ira Magaziner, who was a big advocate of industrial policy. And this was the idea that, no, don't call it “central planning,” but we're going to have the government come in and decide who the winners and losers are going to be and which technologies to support. Well, a year earlier, this same Ira Magaziner, who had been working as a lobbyist, gave testimony to Congress on how we needed massive funding for cold fusion. And that was because he was working for the University of Utah, I think it was, where, as some people may remember, they had this big excitement that, oh, we've got this tabletop setup operating at room temperature that can produce fusion.

RT:    And it turned out to be a completely non-replicable experiment; it was a bust, and it never happened. But that gives you an idea: if you're going to plan out or predict what the great new technology is going to be, you have this cautionary tale of “we all thought cold fusion was going to happen, and it was a total bust.” So the philosophical aspect of this: all of it comes from the action of individuals going out there solving problems, confronting reality, and taking on all this complexity. And that's why you never know what's going to work and where it's going to come from. Now, what I would say, though, is that I'm going to take your first point, which is techno-pessimism.

RT:    And that, I think, is really rampant today. I did an interview a couple weeks back on a Substack newsletter called Symposium with a guy named Louis Anslow, who runs a great website and Twitter feed called The Pessimist Archive. And if you've encountered it, it's very amusing, because he goes back and finds that every techno-panic we have today about how technology is ruining the world is not new. It's a retread of something they said a hundred or 150 years ago about the bicycle, the paperback novel (laugh), everything. He had a great item about when the New York Times bought Wordle, which is this word game online. The New York Times is famous for its crossword puzzle. Well, he went back and found that a hundred years ago, the New York Times was publishing an article about the terrible scourge of crossword puzzles -- how they're ruining people's minds (laugh).

RT:    So there's literally no technological panic about how technology is ruining the world that isn't a retread of something earlier. And Louis Anslow talks about how a new technology goes from being the scary thing that's going to destroy the world to being the normal, everyday thing that is just taken for granted. And then I would add a third stage, which is: then it becomes our cherished historical tradition that's being threatened by the new technology coming up, right? (laugh) And I think, philosophically, it comes from a bias against reason, you know -- Adam and Eve eating the apple, eating from the fruit of the tree of knowledge -- that longstanding notion that the mind is going to get us into trouble.

RT:    And also, when you think about what technology does: it's disruptive, it changes things. And, you know, it's kind of a natural reaction, because if you look at most of human history, change happened, but it happened at a slow pace, where through a lifetime of 70 years, or probably more realistically 50 years (laugh), given longevity changes, not much in your life would change. Not much in the way people lived would change. It's only in the last 200 years, really, that we've had that geometric takeoff and live in an era where things dramatically change from the time you're a kid to the time you grow up. And I think people have that fearful reaction if they're not attuned to having the self-confidence of using their mind to understand the world, adapt to change, and figure out how things are changing and how to react to it.

RT:    And so, you're asking what the proper response to technology is. I think the thing we have to inculcate is this sense of taking ownership and control of your own life, and viewing technology not as this thing that's passively washing over you, changing your world, and making you frightened and confused, but instead as something that gives you lots of great tools that you, as a human being, can use to take charge of your life -- not just adapting to the new technology, but using it to do all sorts of exciting things. And that's the mentality we need. And, to finish off with what I liked about what Stephen said about Ayn Rand's novels: I think one of the most telling things is that when she wrote The Fountainhead, her working title for a long time was Second-Hand Lives.

RT:    And it was referring to this idea of the second-hander -- the second-hand lives of people like Ellsworth Toohey and Peter Keating. Well, she changed the title of the novel to The Fountainhead. Why did she do that? Because the novel wasn't about Ellsworth Toohey and Peter Keating. It wasn't about the bad guys, it wasn't about the second-handers, it wasn't about all the terrible things that were happening. It was about Howard Roark, the guy going out there doing new and innovative things and thinking thoughts nobody had thought before. That was the center of the novel. Atlas Shrugged: I fell in love with Atlas Shrugged because of Dagny Taggart, this woman who has to build a railroad line on a deadline, against all these obstacles, and she's always out there solving problems and overcoming them. That's what I call a culture of achievement. And I think that's the real essence of what Ayn Rand, her novels, and her philosophy have to add to the world: the idea of a culture of achievement, where your orientation is always toward the world that is out there -- “I have the ability to understand it, to do new things, to create things” -- and that should be the center and focus of life. So, that's where I want to stop with that.

SH:    My answer to the question about crystal ball-gazing: I'm going to start with artificial intelligence and robotics, and the boundary between those two, which strikes me as a very clear example of all of these issues on the table, but one that can give us some specificity. Already for 200 years we have, in effect, had robotics transforming our culture, and increasingly intelligent machines, from handheld calculators to embedded chips, and so on. So now we're on the cusp of another revolution in this area. And some of the progress is just amazing, you know: from defeating chess players, to taking mathematical problems that used to require something on the order of 100,000 algorithms to be calculated separately and putting them through a machine learning process, where the machine reduces it to four algorithms that need to be calculated.

SH:    So think of all the gains in knowledge and productivity that are going to come from that. But still, we have the same issues that come up with respect to AI and robotics. And one of the issues is whether the reaction really is different this time. So one of my questions is: is it really going to be different this time, or is it just going to be incremental, the process we've seen for 200 years? Is it going to be disruptive not only qualitatively, in how we need to think about what it is to be a human being, or about our work lives, politics, and so on, but also in that the rate of change is going to be faster? And this is what you were alluding to.

SH:    We had 70 years to get used to one new technology, and then we had 40 years to get used to the next, and then 15 years, but now we've got four months to get up to speed on whatever the new thing is. And maybe we're not cognitively prepared for that. So this time there are different issues. Certainly from the public policy perspective, the economic impacts are important, and people immediately fear for, say, their jobs. And it's not just the traditional blue-collar jobs, the people flipping fries and cutting potatoes at McDonald's, right? It's not just that we can have robots do that; it's also the traditional white-collar jobs: all of the secretarial jobs, the translators, the medical transcriptionists, the people who file and process insurance claims, with all of the machine parsing, and the hundreds of thousands of new lawyers who read all of the cases in their early legal training to find all of the precedents, and so forth.

SH:    So all of those jobs will go away, and what happens then? So that's another category. Then there are the political implications: all of these new AI and robotics technologies can be used domestically by bad governments to threaten rights that are important to each of us. But also, in a foreign policy context, all of these technologies can be used by foreign bad guys to undermine our way of life and be a threat in various forms. And then, this one is still a little bit science fiction-y: if we get past the stage of all of the discrete intelligences -- being able to talk to my phone, or talk to Alexa, or work my way through the phone tree in a hopefully smarter fashion (laugh) when I'm trying to get to someone who can solve my problem -- past all of those discrete, task-specific intelligences, to developing machines that can in fact conceptualize, and then do serious semantics and syntax and think in terms of narratives and theories, will there then be some change to human status in this new AI type of world? Now, I have thoughts on each of those, but let me kick it back to you for your thoughts on AI and robotics.

RT:    AI is something I'm very interested in, and excited by, because somebody came up with this idea of a general purpose technology. They were looking at the history of economic growth and technological change, and they said that the big increases in productivity, the big leaps forward, come from something that's a general purpose technology. And what that means is a technological advance that doesn't just affect one field; it goes across the economy and across the whole society, and it changes all sorts of different things. So it has this broad, general effect. And to give you an idea of how few and far between the genuine ones are: in the source I was looking at, the last one identified was electrification. You know (laugh), it's about a hundred years ago that you had this process, begun in the 1880s and completing by the 1920s, of electrical power becoming widely available, and the way that transformed everything from home appliances to how factories were run; and radio and communications were changed because you had something you could plug your radio into, et cetera.

RT:    So, when looking into what the new general purpose technology might be, one of the theories, which I think has a lot to it, is that AI could be the general purpose technology, because it's a leap forward in automation. Beginning with the Industrial Revolution, we start having automation in the form of a machine designed so that, when powered by a steam engine, it can run a loom and weave cloth, right? This is one of the big innovations of the Industrial Revolution. So you design a machine specially to do a certain task, and then you power it with steam power or, later, other kinds of power. Then the next level of automation, which is much more recent, is a machine that can do a number of different things: you program digital code, and you can update the code, and you can have a CNC machine, a cutting machine that can make any shape you like. You just have to program in what the shape is, and the bit will cut that shape out, and you can customize things, and you're much more flexible.

RT:    So it's not one machine custom-developed to create one particular part; it's one machine that can do many different things. The next stage after that, which I think AI promises, is an even higher level of automation: a machine that programs itself to do new tasks, that figures out how, that basically writes its own code and learns how to perform a task. And I think that's really interesting, really fascinating. It has that general purpose quality, the possibility of creating vast changes across many different fields. So, things like self-driving cars, right, which are behind schedule, but still in process.
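To make that third stage concrete, here is a minimal sketch in Python of what "a machine that writes its own rule" means at the smallest possible scale. Instead of a programmer hand-coding the relationship, the program fits its own parameters from examples; the task, data, and numbers here are invented purely for illustration.

```python
# Minimal machine learning sketch: the program is shown examples of a
# task (inputs and desired outputs) and fits its own rule, rather than
# having a programmer write the rule in by hand. Data are invented.

examples = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # secretly y = 2x + 1

w, b = 0.0, 0.0        # the parameters the machine "writes" for itself
learning_rate = 0.05

for _ in range(2000):  # gradient descent on squared error
    for x, y in examples:
        error = (w * x + b) - y
        w -= learning_rate * error * x
        b -= learning_rate * error

print(f"learned rule: y = {w:.2f}x + {b:.2f}")  # approx. y = 2.00x + 1.00
```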

RT:    And the idea is that instead of me having to get in a car and drive around, spending hours of my life going from one place to another, I can sit in the back and read something or, you know, take a nap or whatever it is that I want to do while the car drives itself. Now, as for the scary scenario: well, what does that do to employment? You know, is it different this time? Because people still have that sort of Luddite thing: well, when the machines are weaving the cloth, then what? We'll all lose our jobs. That was Ned Ludd, who supposedly, according to legend, smashed one of those power looms because it was taking away the jobs of the weavers. And of course, what's happened over the last 200 years is that it hasn't taken away people's jobs.

RT:    It's actually created greater productivity, and instead of weaving one cloth at a time, you have a guy managing the machines who's weaving hundreds of bolts of cloth at a time. So it doesn't take away people's jobs; it makes their jobs more productive. Will AI be different? Here's the way I see it, just to give a quick answer: I think what is going to happen is, yes, it's definitely going to eat into a bunch of blue-collar and white-collar jobs. So think of LegalZoom as a great example of this, where you have this website on which you can do all sorts of legal things that were just paperwork, filling out paperwork that you normally would've had to hire a lawyer to do.

RT:    You can just go on this website and do it yourself, and it helps you select the right things to do. It's not even using sophisticated technology, but that's the sort of thing that's going to happen. What I think will happen, though, is that AI will augment humans; it will fit the same pattern we've had since the Industrial Revolution. It will augment human activity: the AI will perform tasks for you that normally would've required a lot of drudge work, a lot of rote effort, which can then be automated by the AI, while human beings will still have to do the conceptual thinking behind it -- deciding what the AI should be doing, why it should be doing it, and what the overall strategy and plan are. But I think the implication is that it's going to require more of the humans: you have to be sure you're not just the guy shuffling papers, because shuffling papers can be done by the machines. You have to be the guy who is actually doing the higher-level conceptual thinking. So I think it's not going to take away people's jobs; it's going to give them more productive jobs, but at the same time, jobs that are more demanding, in that you actually have to exercise your uniquely human faculties of conceptual thought and creativity, rather than shuffling papers as in a lot of jobs today.

JAG:    All right, we have about 15 more minutes. I'm hopeful that we might be able to get to a question or two, but if not, we'll be breaking for lunch after this, and you'll have the opportunity to buttonhole one of these guys and get your questions answered.

SH:    I want to jump in with a couple of comments of my own on the artificial intelligence and robotics question. I'm hugely gung-ho optimistic about the future on all of these things, but I do think there are going to be challenges along the way: not just dealing with the obstruction of policy makers and techno-pessimists of various sorts, but genuine problems that are built into transformative leaps in technology, and ones where I think there's a role for a good philosophy like Objectivism to add some value. So take this issue of the new kinds of jobs that are being created. I was looking at a McKinsey study a month ago. They tracked technologies going back -- I think it only went back to 1880 or so -- and found that for every job that is creatively destroyed by new technologies coming along, 2.6 new jobs are created.

SH:    And they didn't go so far as to say that's a universal constant, but that was the average over the course of the last century and a half. So that's pretty impressive. Now, there's a public relations issue in getting statistics like that out there, but there is also the issue of the kinds of jobs that are created, because the kinds of jobs that get creatively destroyed are the kinds of jobs that robots can do, right? The kinds of jobs that machines can do. The boring, repetitive types of jobs are the first to be replaced. On the one hand, people are protective: well, I have this job and I don't want to lose it. At the same time, as human beings, we don't like doing those kinds of repetitive, dehumanizing jobs. But we do have a huge amount of cultural inertia, where people will stay in dehumanizing jobs, recognizing that their job is dehumanizing, but not do anything about it.

SH:    That's a values issue. Where does this lack of ambition for one's personal life come from? Is it a matter of not having developed the kind of cognitive habits that let you believe you can go out into an uncertain world, face new challenges, and find a way to solve the problem? You don't have that confidence because you haven't had the right kind of education. Now, if we think these are going to be transformative technologies, general purpose technologies in the way Rob is defining them here, then there is going to be a social cognitive problem, because there will be a significant minority of people who will make that transition happily, not effortlessly, but smoothly, while a huge number of people will not make that transition.

SH:    And they are going to be a cultural force. They will be a political force, and we need to be in a position to say something about that. Now, it might be that we then say, well, we have to talk about education again, because the education system should be preparing people to enter the adult environment as we think it's going to be for them 10 or 12 years from now, and the current education establishment is failing abysmally at doing so. So really this reverts to being an education problem, and then we need to make the better philosophical arguments in education. The other thing I was thinking about is that there are genuine issues that come up with these new technologies in the values sphere; that first one seems like a more cognitive, epistemological kind of issue. Take self-driving cars as an example. Now, my understanding is that the technology is pretty far advanced. I know there are self-driving trucks delivering stuff, you know, important beer runs between Colorado Springs, where Coors is made, up to Denver, on a fairly regular basis. And it's all worked out. And the Google cars have been all over everywhere, right? Mapping everything. So there's a significant…

RT:    John Deere just announced a self-driving tractor.

SH:    Yeah, that's right. So all of that stuff is there, but there is a real philosophical problem. My understanding is that a big part of the slowdown has to do with insurance, because once you have self-driving vehicles on the road, accidents are going to happen. But who is liable when an accident occurs? And for self-driving vehicles to be out on the road, they are going to have to have built into their algorithms decisions for the various scenarios that will arise. So, you know, they're just driving down the road -- I'm just going to make one up right now -- and, I don't know, fireworks go off over here, and that startles one little kid who runs into the street, while three little old ladies over there also run into the street. Does the car then go and hit the little old ladies?

SH:    Well, there are three of them, but they're old, right? (laugh) Or does the car swerve this way and hit the little kid? There's only one, but it's young. So we're immediately into the trolley problems that are all over the internet, right? That, of course, is a fun set of abstract exercises, but it has real-world applications, because all of those value issues need to be sorted out by the philosophers so that the software engineers can program them into the algorithms, and then the insurance companies and the regulators can be happy with it. And then we can have self-driving vehicles out on the road. And the philosophers who are now in ethics are not doing a very good job, right? We might say there's a value proposition for us: if we have a good ethical system, we can make some traction in that area.
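To see what "programming value issues into the algorithms" can look like in the simplest possible terms, here is a minimal, purely hypothetical sketch in Python. The Outcome type, the choose_maneuver function, and every weight in it are invented for illustration; real autonomous-driving systems are vastly more complex and do not reduce ethics to a single scoring rule.

```python
# Hypothetical sketch: a value framework encoded as an explicit cost
# function that ranks emergency maneuvers. All names and weights are
# invented for illustration only.
from dataclasses import dataclass

@dataclass
class Outcome:
    maneuver: str          # e.g., "brake_hard", "swerve_left"
    expected_harm: float   # estimated severity of harm, 0.0 to 1.0
    people_at_risk: int    # how many people the maneuver endangers

def choose_maneuver(outcomes: list[Outcome]) -> Outcome:
    # The "ethics" lives entirely in this scoring rule; change the rule
    # and the car makes different trade-offs. That is exactly the value
    # question the panel says philosophers would have to settle first.
    def cost(o: Outcome) -> float:
        return o.expected_harm * o.people_at_risk
    return min(outcomes, key=cost)

if __name__ == "__main__":
    options = [
        Outcome("brake_hard", expected_harm=0.6, people_at_risk=1),
        Outcome("swerve_left", expected_harm=0.3, people_at_risk=3),
    ]
    print(choose_maneuver(options).maneuver)  # "brake_hard" (0.6 < 0.9)
```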

JAG:    Yeah. Seconded. If the CDC is programming the car, then we know who it's going to be swerving for, so…

RT:    I just want to endorse that point about the psycho-epistemological issue, but I think we should go to questions with the time we have.

JAG:    Yes, let's do that. All right. Yay. We've got eight minutes. So, rapid-fire round. Anybody have a question?

Question:    First again. So, what you just touched on, the ethics piece: this is what fascinates me and concerns me. We talk about artificial intelligence; everybody talks about the intelligence part, not the artificial part. I'm very concerned about the thinking on ethics because, I mean, are we going to have programmers versed in Objectivist ethics? Not likely. And if these things are able to learn like humans learn, and we look at our ethical systems and how we behave, what's to say we aren't going to have a learning thing that learns how to destroy? And why would we think for a moment it would develop some sort of wonderful, pleasant ethical system that wouldn't be harmful to others and just advantageous to itself? Because, as I like to say, Hobbes was right.

SH:    So, let me take that one first, if you don't mind. Yes, I do think the value component of artificial intelligence and robotics is foundational. I mean, there are the cognitive issues and the technical issues, but a lot of what's behind the fears are value fears. So suppose we have computing machines that are a lot smarter than we are right now. Is that a good thing or a bad thing? It depends on what your value framework is. And if we go back to human cases: suppose we say, okay, maybe Albert Einstein is, like, 2.3 times as intelligent as I am. Is that a good thing or a bad thing? It depends on whether Albert Einstein is a good guy or a bad guy, right?

SH:    So it's his intelligence, which is a power that can be used for good or bad; it's not the power, per se. And the same thing is going to happen in the case of machines that might be multiples more intelligent than we are, so long as the machines have in them the right kind of value framework. What's that value framework? Or, if we go to the robotics side -- here I always go to Terminator, right? Arnold Schwarzenegger, that sort of thing. So then we say: there are going to be these robots, and they are going to be faster and more powerful than we are, and they don't get tired, right? Is that a good thing or a bad thing? It's the exact same question in parallel, again. Someone like Arnold Schwarzenegger, in his prime, was, what, four times as strong as I am (laugh)?

SH:    Is that a good thing or a bad thing? Well, it's a good thing if Arnold Schwarzenegger is a good guy, and it's a bad thing if he's in some sense a value threat to me. So the robot technology, per se, is not good or bad; it depends on the value framework that gets coded into the robot. And, of course, we can combine them: machines that are not only smarter than we are but also physically more capable than we are. And then it's just a Venn diagram of the exact same problem. So figuring out the ethics and values piece, and making that part and parcel of the technological development: I do think you're right, that's exactly where the action is.

JAG:    One, one more question.

RT:    Hold on. I just want to say I disagree with some of that, because I think too much of this has been poisoned by Hollywood, right? Art, and movies especially, help us create models for what a new technology is going to look like. Well, what's the model we've gotten for AI? There's HAL 9000: “I'm sorry, Dave, I'm afraid I can't do that.” Yes, people of a certain age will recognize that. There's the Terminator; there's, I guess, Ultron for the kids who watch the Marvel movies these days. All right? So, you know, the model we have is that AI is going to take over and it's going to be malevolent. I think the big thing, philosophically, is that AI doesn't have a will of its own. That's the whole purpose for which we're developing it: so it can do stuff for us, right?

RT:    It doesn't have a will of its own; it doesn't have motivation. What does a robot need, right? My robot doesn't have needs; it doesn't have motivation. I think that's why the robots won't become smarter than us in the relevant sense: they won't be capable of conceptual consciousness in the way we are. But it's also the reason why this idea that the robots are going to take over, control us, and compete with us somehow is mistaken. I think they're going to be tools for us. So the question is about the people who are developing and using the technology and their motives, just as with all existing technology: you hope they'll be good people, and fear that they might be bad people, but the machines themselves are not the problem.

JAG:    All right, Ahmer, I see you nodding in the back. Do you want to weigh in, Ahmer?

Question:    I agree with you. My background is in robotics and AI, so I completely agree; I was nodding because I concur with you. A lot of the conversation about AI and robotics is owned right now by Hollywood, and a lot of fear is created about these technologies. These technologies are there to create an abundant future for us, and we have to embrace them and shape them. That's all I wanted to add.

Question:    I can be super brief.

JAG:    All right, we've got three minutes.

Question:    My name's Farsam. I run a technology incubator up in Silicon Valley; I was just up there two days ago and came down for this event. And I'm very, very concerned with the broad dumbing down of society that we're seeing. I do believe, unfortunately, that a lot of technology has been hijacked inappropriately. To your points about value schema: arguably, we're the only explicitly Objectivist group that is actually using applied Objectivist epistemology in some of the design work we do, including artificial intelligence and self-navigation. But it gets very tricky. I don't think there's a bigger anti-concept, to use Rand's notion, out there right now than the culturally casual notion of AI, which is, to the Gen Z generation, just the entitlement thing that's going to be fed to them, given the way Silicon Valley is already, in a very premeditated way, calculating how everything is going to be dealt with commercially.

Question:    So there's going to be a kind of AI woke-ism, which sounds weird, but this is going to morph and merge with statist cultural practices, and I think we're just seeing the early chapters of it. Most people here have probably seen The Social Dilemma. I'm working with people who are principled about that, though they're inaccurate in their appeals to legislative statism. There is a real concern right now about the conflation of “artificial” and “intelligence,” given the extent to which we cherish and make central to our philosophy the notion of free will, how we form concepts, and how we utilize our will. What we are observing is the gradual chipping away at the average person's intelligence and use of their faculty of free will, and there's a lot of psychological evidence for that right now. It does take an intergenerational analysis to identify what's being worn away, as a kind of subjectivism and entitlement comes in vis-a-vis this cultural apparatus. So I do want to put it to you: to me, artificial intelligence and machine learning are one thing, but an artificial intelligence system is never going to have values. It'll never have a moral compass, and it is 100% incumbent upon the designers to determine how it's actually going to perform.

RT:    Yes. One thing I want to mention on that is that there's a flip side to the techno-pessimism of “the robots are going to rise up and destroy us.” There's a techno-utopianism that says, “Oh, the robots are going to do all of our jobs for us, so we can all live on a universal basic income.” The government will give us a check and we'll all live in opulence. Somebody called it fully automated luxury communism: that's what we're going to have, because the machines are going to be doing all the work. Now, by the way, I think the first advocate of that really was Karl Marx, because he thought, oh, these factories, this new factory system, this new industrial production, is so great that you won't really have to have a regular job. You can just sort of hang out and do what you want, and be a farmer in the morning and a music critic in the afternoon.

RT:    And the factories basically will produce everything for you. So that utopian dream has always existed. But I think the answer, and I think Stephen's on the right track with this, is that the approach to education has to change for the age of automation: you have to become a lifelong learner. You have to be dedicated to being out there using your head. You're not going to stuff your head full of skills up to the age of 18 and then go out and work the rest of your life using only what you learned before you were an adult. You're going to have to be constantly learning new things, ingesting new ways of working and new technologies, and becoming fully committed to conceptual-level thinking and doing the things that the machines can't do.

AF:    We can continue this conversation during our 30-minute lunch break. 

JAG:    And I'd also like to say, for those of you who aren't on Clubhouse: we are continuing these kinds of conversations, both the one that we had earlier, and this one, and the one that we're going to have this afternoon. So please see one of the Atlas Society staff if you're not on Clubhouse, to figure out how we can get you an invitation, because our scholars are on there, and they shine. David and Stephen just did a really powerful 90-minute session on philosophy last night. So that's another place where we can keep this conversation going.
