Gill Verdon is the founder of Extropic, a startup AI hardware company building chips to meet the demanding power and computation requirements of generative AI. A physicist, applied mathematician, and researcher in quantum machine learning, Gill is also known by his online persona, @BasedBeffJezos, and as the creator of effective accelerationism (e/acc), a movement that advocates rapid technological progress as an ethically preferred path for humanity, emphasizing optimism and proactive efforts to shape a better future.
He joined Atlas Society CEO Jennifer Grossman for a special interview before his flight to Austin, Texas, to attend our annual student conference, Galt’s Gulch 2025. Watch the interview HERE or read the transcript below.
JAG: Jennifer Anju Grossman
GV: Gill Verdon
JAG: Hey everyone and welcome to the 255th episode of Objectively Speaking. I'm JAG. I'm the CEO of The Atlas Society. I'm super excited to have Gill Verdon join us today. He is the founder of Extropic, a startup tech company working on a new kind of computer chip to make artificial intelligence processing faster and more energy efficient; he is also @BasedBeffJezos on X. Gill is also the founder of the effective accelerationism, or e/acc, movement, which promotes rapid, unregulated technological progress, particularly in AI, to maximize human potential and economic growth. I'm also very thrilled to announce that he is the keynote speaker at next week's Galt's Gulch in Austin. Gill, thanks for joining us.
GV: Yes, thanks for having me. It's great to be here.
JAG: You grew up in Canada and historically ex-Canadians are over-represented among the ranks of Objectivists. Do you feel that growing up in a place that was infused with an egalitarian ethic and an embrace of a bigger role for government influenced your later views on regulation and bureaucracy or did those views evolve over time?
GV: Certainly I would say growing up—I grew up in Montreal, Quebec. It has, I guess, one of the highest densities of language laws in the world. For example, you're not supposed to learn English too early on. They want to keep you around paying taxes for life. So, they have all these extra laws to regulate literally how you speak or how you write, what you can put on your menu, on your packaging, and so on. You know, that's just the tip of the iceberg in terms of how you feel the weight of all this government overreach and this bureaucracy hanging over your head growing up. That creates a sort of back reaction eventually and I guess . . .
JAG: . . . Shrugging, Atlas shrugging.
GV: Yes, that's right, that's right. So, in my case I was really feeling my whole life like I belonged more in the United States ideologically, yearning to go south. Here I am, having ultimately escaped the trap there in Canada. I would also say that it's not just top-down and forced. Eventually, once people become demoralized because there's no freedom to capture the infinite upside, a sort of tall poppy syndrome seeps in where, if you try to be too ambitious, people ask: aren't you happy? Why don't you just have a median job? Why are you so ambitious? Don't rock the boat. That sort of mentality was definitely not a fit for me. So, it's sort of self-selecting. The Canadians that are okay with that stay there, and the ones that are more ambitious and want maximal freedom and the ability to capture the upside of what they create, the value they create for the world, they tend to move to the United States, which is the flagship when it comes to freedom, and we should keep it that way.
JAG: Yes, agreed. Actually Peter Copses, one of our trustees here at The Atlas Society and a co-founder of Apollo Global Management, he and his wife were very ambitious. They wanted not to live mediocre lives, and after reading Atlas Shrugged, that is when they decided to also shrug and come to America. Let's talk a little bit about quantum computing. How did you get into it, and even move beyond it, before most of us even knew it was a thing?
GV: Right. Yes, it's been quite the journey.
Originally I was trying to understand the universe. In Quebec there are echoes of Catholic authoritarianism in general. They tell you how you're supposed to speak and what you're supposed to think. For me, there was a big back reaction. I didn't trust authorities for answers, and I had to rethink everything from first principles. I wanted to learn the first principles and reconstruct how the universe works for myself, so that I could trust my own first-principles inference.
So, again, I would say due to where I grew up and my back reaction to authoritarian thinking, I wanted to understand how the universe works. I became a theoretical physicist. I was working on black holes and quantum information, quantum cosmology. How did the universe begin? How may it end? What are the limits of physics? What are the limits of space and time? Naturally, I started trying to understand how nature computes, viewing the universe as a computer. To me this is the most promising path to understanding all of physics through a single lens; it's the school of thought now called It from Qubit. From there, naturally, there was a bridge to trying to reverse-engineer nature and work with computers that compute in a way that is physics-based. A quantum computer is a computer that leverages quantum mechanics to do certain operations and essentially allows us to understand pieces of nature that are operating quantum mechanically. They're not deterministic, and they're not merely in a probabilistic state; they're in a superposition. You could think of it as parallel universes. So, more precisely, I was a specialist in quantum AI, or quantum machine learning, arguably one of the pioneers of the field.
Some of my first algorithms later got me noticed by Google. There the idea was to take inspiration from how black holes compress information; they are the most efficient compressors in the universe. Essentially, could we take inspiration from them for algorithms we could run on a quantum mechanical computer to get an optimal compression algorithm? Another name for AI is machine learning, and you could phrase most of machine learning as learning compressions, compressed representations of the world. If you installed the ChatGPT or Grok model, maybe it would fill up your hard drive, but essentially you'd have an approximate backup of the whole Internet.
So, that led me down the path of pioneering AI on quantum computers, eventually getting approached by Google, going there to build the product now known as TensorFlow Quantum, and later working for Sergey Brin on all sorts of special projects, including quantum sensors and quantum Internet. Over time I realized that quantum technology is similar to nuclear fusion: there's a sort of break-even point we call fault tolerance where what you get out is more than what you put in, in terms of computation. I see a path to that, but that path was on a far too long timescale for my impatient self. I ended up detecting an opportunity in something akin to nuclear fission versus nuclear fusion: something we can do right now that arguably is more scalable immediately and gets us an energy efficiency and density gain similar to going from TNT to the nuclear bomb. So, that is what I set out to do approximately three years ago. Now we are having the first results and are scaling our approach, what we call thermodynamic computing, with my company, Extropic, and I'm sure we'll go into more detail.
JAG: We're going to get into that in a minute. But I don't want to leave Google. You arrived at Google in 2019, a couple of years after James Damore became one of the first very high-profile victims of cancel culture when he was fired for writing the fateful Google memo, officially titled “Google's Ideological Echo Chamber.” Are you familiar with that episode, and how did it square with your experience at Google?
GV: Yes, you know, Google is a great organization. I don't think it's ideologically homogeneous, but certainly having an engineer try to point out something in statistics and voice an opinion, and then get completely canceled, was a warning shot for anybody else who would try to voice an opinion that wasn't the median or the mode of the population within the walls. I would say that just created a shadow network of engineers who were thinking some things don't make sense, some things that are prescribed top-down don't make sense. It's not just Google; I think it's across big tech players. There's a kind of ideological capture that we saw across the board in the mid-2010s, and it's just always suboptimal whenever one culture captures everything, is self-reinforcing, and there's no ability to discuss ideas. I personally was more interested in the discussion, internally and externally, of: “Should big tech and the top AI institutions in the world work with the government and defense sector to put the right technology in the right hands so that we have national security?” That was also something on which there was inhomogeneity within the organization. There was Project Maven 1.0, where Google tried to work on AI for defense, and the camp that cancels people tried to cancel Google itself and walked out.
I thought it was very unfortunate at the premier organization in the world for AI research. I would say the mid-to-late 2010s was the golden era for Google. Everybody who mattered was there for research, and that's what brought me there. I had a great time research-wise. They invented the transformer, which led to the prosperity we see today, amongst many other things. But there was an ideological echo chamber: it converged on the idea that we couldn't discuss various social things or various national security interests, and we weren't mature enough to have an open discussion. There was not a free market of ideas internally. I thought that was suboptimal for the growth of the company. And that's just things on the outside. On the inside, in general, this happens a lot: opinions crystallized around the allowed opinions, and it could be across technical opinions as well.
I would say big orgs tend to suppress ideas at scale, because if they went with every smart engineer's idea (they have a hundred thousand plus), they'd go in all sorts of directions. But sometimes they overdo it and even miss big opportunities, like AI for coding. Google invented the transformer. They could have captured the market. They didn't. That was an artifact of a bubble of opinion. It's not just a social downside; there's actual shareholder value impact to having echo chambers. To me, it was a lesson that having a free market of ideas, being very open, and encouraging a variance of ideas to flourish and be considered is paramount to the functioning of any society or large organization. Google itself is a giant org. It's a sort of microcosm, a bubble in itself. But for it to function well, it needs a free marketplace internally.
Certainly that episode and my experiences, again, more on the defense side, opened my eyes to the importance of culture. That's kind of what led me to start voicing opinions and ideas, prototyping them anonymously, and eventually starting this movement for a free marketplace of ideas, freedom of speech, and freedom of thought, and we'll get into the AI part as well. That was the e/acc movement, which I started pretty much after leaving Alphabet.
JAG: You and I met last fall at the XPRIZE Visioneering Summit. Peter Diamandis, of course, has been a guest on this podcast and an honoree at our previous gala as someone who has read Atlas Shrugged five times and considers it his bible. Now, at the Summit, you were pitching a prize to solve the challenge of making computing more efficient in order to meet the coming energy demands of AI. Tell us a little bit about your vision for that and how it ended up.
GV: Yes, again, I think even for technological progress, getting caught in a local mode of thinking prescribed by authorities or establishments is a bad idea. We've built all of this technology on silicon operating deterministically for many decades now, so there's a lot of inertia and skepticism that any massive disruption could surprise us and be right around the corner. For me it's been an exercise in having an extremely contrarian thesis, but one derived from first principles of physics, then taking a lot of heat for having such a contrarian thesis, and now starting to show we're correct. It's been very validating.
You know, the benefits to society are going to be massive as long as we keep scaling. In our case, we looked at essentially: What is a computation? At the end of the day, it's a thermodynamic process. You have some distribution over inputs and you have a distribution over outputs. You could phrase many, many algorithms that run the Internet this way, including AI algorithms, because the process runs between probability distributions rather than between a deterministic input and a deterministic output. You can run computations probabilistically. If you look at the physics of electrons and of matter when you go small enough, things are sort of jiggly and non-deterministic. You don't know where each particle is, right? Usually that's a problem for a deterministic computer: we like to filter out that noise, and that costs us a lot of energy. Instead, we decided to start using that noise to run the algorithms more naturally. The energy efficiency gains you get, and the density gains, even spatially, are over 10,000x, right? That's pretty massive.
Again, that's on the order of going from TNT to the nuclear bomb. I'd like to say we're sort of a Manhattan Project for AI. But again, I didn't think a big lab or a government lab would move fast enough to execute on this technology, so I decided to do it in the private sector. I can show our progress so far. It's been a few chips now, but we have a chip that is in silicon, operates at room temperature, and is between 1,000x and 10,000x more energy efficient for these probabilistic algorithms that underlie so much of what we do. This is the most important thing we could be working on, because it's one thing to produce more energy, which is what I talk about with the e/acc movement and climbing the Kardashev scale; we'll get into that. But it's another thing to make how you use the energy, how you turn energy into intelligence or into value, more efficient. That's what we're solving.
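To make the idea of "using the noise" concrete, here is a minimal software sketch of probabilistic computing in the abstract: Gibbs sampling over a tiny Ising-style energy model, where randomness is the computational resource rather than an error to filter out. This is an illustrative caricature only; the model, couplings, and parameters are invented for the example, and this is not Extropic's hardware design or actual algorithm.

```python
import math
import random

# A tiny Ising-style energy model: each "probabilistic bit" (p-bit) is +1 or -1,
# and couplings J encourage or discourage pairs of bits agreeing.
J = {(0, 1): 1.0, (1, 2): -0.5, (0, 2): 0.25}  # illustrative couplings
N = 3  # number of p-bits

def energy(state):
    """Energy of a configuration; lower energy means more probable."""
    return -sum(Jij * state[i] * state[j] for (i, j), Jij in J.items())

def gibbs_step(state, temperature=1.0):
    """Flip one p-bit according to its conditional Boltzmann probability.
    The random draw IS the computation here, not noise to be suppressed."""
    i = random.randrange(N)
    up, down = state.copy(), state.copy()
    up[i], down[i] = +1, -1
    # Probability that bit i is +1 given all the others: a sigmoid in the
    # energy difference, as in standard Gibbs sampling.
    p_up = 1.0 / (1.0 + math.exp((energy(up) - energy(down)) / temperature))
    state[i] = +1 if random.random() < p_up else -1
    return state

# Draw samples: long-run visit frequencies approximate the Boltzmann distribution.
state = [random.choice([-1, +1]) for _ in range(N)]
counts = {}
for step in range(20000):
    state = gibbs_step(state)
    if step > 1000:  # discard burn-in
        key = tuple(state)
        counts[key] = counts.get(key, 0) + 1

for config, count in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(config, count)
```

On a conventional deterministic chip, these random draws are emulated with pseudo-random number generators at real energy cost; the pitch of thermodynamic computing, as described above, is that physical noise performs the draws natively.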
JAG: And if we don’t solve this problem . . . I remember there was a bit of debate, since it's a competition: somebody says we need to solve ovarian cancer, somebody says we need to save the whales. You were saying that basically unless we get this done right, we're going to be limited in how we're able to solve America’s and the world's other great challenges to humanity.
GV: Really, it's like we're solving the problem that solves other problems, right? If you have really potent artificial intelligence and a lot of intelligence per watt, then you can just apply that intelligence to solve your other problems, right? So from a point of maximal leverage, this is the problem we should focus on in order to solve the other ones. That was my thesis, and, you know, I think I was very correct when pitching at XPRIZE that the market pull is so strong for artificial intelligence that we were going to start building massive nuclear reactor-based buildouts for these AI clouds. That's been the case. You just keep hearing news that more clouds with tons of reactors are getting built. It's good that we're creating more energy, but at the same time it's not the most energy efficient way to do it. Again, it's not a question of "for the environment." It's also a question of return on invested capital. It just makes a lot more sense to invest in buildouts that last: if you're spending a trillion dollars, you want it to last. If five years from now there's a technology that makes your current technology somewhat irrelevant, then you know that was a risky investment.
So I think both from a philanthropic standpoint, making sure that intelligence is abundant and cheap and everyone has access, and in order to avoid an over-centralization of AI, where AI is controlled by very few parties that can then prescribe what we think and what we say, a densification of intelligence in terms of energy efficiency and spatial density is necessary to maintain individuality. In the era of AI, the era where we augment our own cognition with external AI agents, we need personalized AIs that we own and control personally, an extension of oneself, rather than one centralized cloud that runs all the AI for the world, where whoever is in the background can tilt the distribution of our thinking in much more subtle ways that are almost untraceable. To me, that leads to top-down authoritarian thinking, and, from a pure power-seeking standpoint, whoever is in control of such a system will convince people to converge onto a collectivist mindset. Then, while everyone's fighting for scraps, those in control would have a lot of power and wealth, and they'd suppress the free market of ideas, which is really the error-correction mechanism against tyranny. All that would be gone. I think this is something that also resonates with Elon. It's why he started OpenAI, and now he has started xAI. We can get into that. But I really believe that we're solving the hardest part of the decentralization of AI, which is the densification of intelligence at the hardware layer.
JAG: Yes. Elon Musk interacts with your @BasedBeffJezos account regularly. I don't know if yesterday was a record, but it was pretty close. Marc Andreessen, my neighbor here in Malibu, is another account that regularly interacts with you. He's also a big fan of Atlas Shrugged. He described this chilling meeting that he and his colleagues had with the Biden administration's AI advisors, in which the latter shared a vision of having only a few big AI players in coordination with government controlling the future of the industry, one in which competing AI startups would be severely curtailed. He left that meeting and immediately decided to endorse Donald Trump. Do you share Andreessen's perspective on the dangers of this overly intrusive, centralized government approach when it comes to a flourishing AI environment?
GV: Yes, absolutely. You know, I would say that the previous administration's stance towards AI was my personal key issue that drove me to lean towards the current admin. That is one of the key issues I've been fighting against with e/acc: we were seeing a convergence towards centralization of AI, with some corporations wanting to merge with government. To me, that was very risky because, again, if the only people allowed to have AI were the government and a few corporations incorporated into it, then that leads to tyranny. It just does. That's too great an opportunity for power-seeking folks. The solution is to keep AI power decentralized, and for the power of the individual, how much AI an individual can purchase and control, to be sufficient. It's a deterrent similar to the Second Amendment.
You know, there's no absolute monopoly on violence. There is a backup: if people own their own weapons, they can defend themselves against tyranny. But I would say that violence in our age has been largely intellectualized or virtualized. It's more about how you can control people. If you have an ability to predict people's behavior, you can steer them; you can engineer control signals, things you tell them, so they steer themselves towards a certain outcome. If individuals didn't have AI to augment their cognition and their individual skepticism, and you just had the augmented capabilities of centralized agencies and governments to subvert people, if those capabilities were jacked up to 11 while individual capability to resist cognitively was not, then that would lead to really bad outcomes. I think Marc Andreessen was aware of this. I would say Elon was aware of this as well. He instantly created a competitor to make sure that no one player runs away with the whole pie.
I think, hopefully, that chapter of some of the AI companies trying to be crowned kings too early in the game is over, and now it's just a free market, which honestly has been great so far. Essentially our thesis won. It's not only a few companies that are allowed to do AI; many companies are allowed to do so, again partly because of our efforts. Elon and Marc are the ones that spearheaded things in government; I've been more on the grassroots side. But essentially we've had free market competition, and you have all sorts of AIs with different cultural biases. The technology has just been accelerating and getting cheaper for everyone to access. Everybody benefits, instead of only a few using the technology and centralizing it to consolidate their power.
JAG: So, let's get back to e/acc and effective accelerationism. To what extent did you see this as a reaction to the effective altruism movement that Sam Bankman-Fried and others had started promoting?
GV: Yes, so effective altruism is a sort of funny movement. They're essentially hedonic utilitarians: they try to maximize hedonism, how good people feel or not. Well, not just people; sometimes shrimp as well, for some reason. But essentially they have this weird moral framework that yields really suboptimal optima. If you try to maximize hedonism, you can converge onto wireheading: you're just in a VR world or in a simulation, in this near-Nirvana forever, and you're not a productive member of society. Overall, I think EA was finding ways to capture capital in any way, shape, or form from the free market, deforming the free market and then reallocating capital to these nonprofits. It could be for mosquito nets or shrimp farms or weird stuff. But eventually they concentrated most of their portfolio, 95% plus, don't quote me on that figure, on AI safety and a sort of regulatory capture of AI, which is really just trying to put themselves in power. Their whole thing was that AI is dangerous, we need responsible adults in the room to control it, we're going to be the responsible adults, put us in charge. And they would fund all these think tank organizations that would then become arms of the big labs to continue this fearmongering and spread what is called AI doomerism, which is a spin-off of EA, effective altruism.
So we saw this whole really well-funded complex that was converging us onto a very bad outcome, and we had to start the resistance to that movement. That's originally how e/acc started. It was antithetical to EA in some ways, but really that was just the first battle. This sort of enemy, if you will, creeps back up under a different name all the time. But essentially e/acc was there to fight for freedom, freedom of speech, individuality, individual agency, celebrating the individual, and to make sure that there's no massively oppressive government or weird complex that restricts your ability to be productive and over-centralizes power.
JAG: Yes. How did you do it? It's not just you; it's a crew of people and allies, some of whom have pseudonyms. I don't know if others have been outed as you were. You might want to talk about how that felt at the time and how you decided to lean into it.
GV: Yes. The movement gained quite a bit of influence in Silicon Valley, and it was starting, in the shadows, to get some influence in Washington. For example, I think we were opposing the previous executive order on AI that was going to really kill open-source AI. But for some reason or another, I got doxxed by reporters, and really, I think, that was their goal, because I was becoming a problem: I was getting people to rally around this cause, and that was an impediment to the over-concentration of power. I got doxxed, and essentially the traditional media pile-on would fabricate things and try to associate me with all sorts of movements I'm not associated with, or deform the message, or try to clickbait with "this man is building Skynet" or something ridiculous. That was a lot. That was a big change in my life. Going from being a scientist in a room, advising all sorts of important people but usually as the asset in the background, to being front-and-center, having my face plastered all over the timeline and getting 100 million views a month, that was a big change.
But now it's been a year-and-a-half since the doxxing, and I guess I've gotten used to it, but essentially I wanted to turn this attack against them. If you have an anti-fragile mindset, you can turn any sort of adversity into some upside. So, I used it as an opportunity to go on podcasts, spread the message further than it could have ever gone, and also leverage it to acquire talent for my company and raise awareness about this important challenge we're pursuing with Extropic. That helped us hire some of the best, and the fact that we've achieved these results on such a short timescale since then is a testament that sometimes getting more attention can be useful for getting the best talent and moving the ball forward for civilization technologically.
JAG: Here at The Atlas Society we are huge fans of Andreessen's “Techno-Optimist Manifesto.” Curious whether you had any role in that, or was it maybe just indirect? What can you tell us?
GV: Yes, Marc was a big fan of e/acc from the beginning, or at least from pretty early on. Essentially we were corresponding for quite a few months, very actively fusing our views on the world. That was the time during which he was writing the manifesto; I am one of the first cited influences there, more in the background. I would say the manifesto is very much a version of e/acc that's maybe less cosmic-scale, as I tend to think as a physicist, and more practical and immediate, with more policy prescriptions. I fully endorse it. I consider myself a techno-optimist. It's what I saw in terms of how ideological capture happens: you have a meme or an idea that spreads, then it keeps mutating and comes up under different names, and that's what we're fighting. The intent with e/acc was always to spread a sort of central meme, or complex of ideas, for it to mutate into several forks, and then for those to have influence. It's much harder to take down something with several heads than just one. Even though they did try to take me down and take down the central branch, by that time it had already forked. So it's a way of compartmentalizing memetic brand risk. Techno-optimism is an example of something very akin to e/acc that's maybe more professional, less from some weird corner of the Internet, and can be marketed in Washington. Now the vice president has literally said he's a techno-optimist. I think in terms of influence, it's been really great. I would say that this administration, at least from what they say, has been very supportive of our requests for policies that maintain American competitiveness in AI and maintain openness.
JAG: So, speaking of a cosmic scale, explain in layman's terms what the Kardashev scale is and how we can climb it.
GV: Yes, the Kardashev scale is a set of milestones on a log scale, so they're exponentially hard: milestones that track how big our civilization is and how much energy we are producing and/or consuming. There are three big markers on that scale, and then you can interpolate between them on the log scale, a sort of linear interpolation. Kardashev type one is essentially that we produce and leverage as much energy as is impinging upon the Earth from the sun. That is a certain amount of watts that is pretty massive; I think we're at barely 1% of Kardashev type one, and that's not on the log scale. Kardashev type two would be leveraging all the energy, or the equivalent of all the energy, being radiated by the sun. Again, it's not energy but wattage, so power.
Then type three would be leveraging all the power in the entire galaxy.
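For reference, a commonly used way to interpolate between these milestones is Carl Sagan's formula (Sagan's normalization, not something Verdon states here):

$$K = \frac{\log_{10} P - 6}{10}$$

where $P$ is the civilization's power in watts. On that normalization, type one sits at $10^{16}$ W, type two at $10^{26}$ W, and type three at $10^{36}$ W; humanity's roughly $2 \times 10^{13}$ W corresponds to $K \approx 0.73$. Note that Verdon pegs type one to the full solar flux intercepted by Earth, around $1.7 \times 10^{17}$ W, a somewhat higher bar than Sagan's marker.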
So, essentially, in my studies of thermodynamics, and more precisely stochastic thermodynamics, which is the physics underlying life itself, I saw that there is a sort of Darwinian natural selection over all of matter, I call it thermodynamic selection, where the fitness function is whether or not the system has dissipated more heat, which is really weird. But essentially it tells us that the odds that you fluctuate back to zero, that the system completely dies, get much smaller as the system gets bigger. That makes sense. I saw this as fundamental to life: if we get bigger as a civilization as measured by thermodynamics, by our consumption of energy, that will ensure a lower likelihood of the destruction of this phase of matter, the extinction of civilization. I felt like we had a responsibility to climb up the Kardashev scale. That's the key issue and the one metric we should strive to improve for our civilization, because unfortunately GDP and capital are hard to track. They're imperfect. Money sometimes gets inflated away or printed. It's not an objective. . .
JAG: . . . metric.
GV: Whereas energy you can't inflate away. A joule is a joule wherever you go in the universe, and so is a watt. That was just a better thing to optimize than hedons, or hedonism, which is completely subjective and leads to weird optima. Now, this has been kind of Elon's whole thing, but I guess we kind of merged memetic complexes, and now he's very focused on climbing the Kardashev scale as the key issue. For me, what will accelerate our ascent is creating a way to convert energy into value as efficiently as possible. You get more value per unit of energy, and that's going to increase the demand for energy and thus create a sort of positive pressure to scale up civilization.
Creating this technology, the most energy efficient way to convert energy into intelligence, a sort of steam engine for intelligence operating at the limits of thermodynamic efficiency, was to me the way to create that pressure to climb, along with creating the social movement to raise awareness that this is the key issue we should all be aligned towards. It's naturally something that free markets optimize for, right? Free markets select for organizations that utilize their resources in a way that maximizes growth. It's literally a fundamental algorithm that leads to self-assembly of complex systems with emergent properties that are optimal. Our bodies are kind of a free market of cells: they all have some coupling with one another, some chemical interchange, a chemical and energetic economy. But the emergent property is this functioning organism that is you.
I view capitalism itself as an AI-like or physics-like algorithm that is far more efficient at capital allocation and growth than any sort of top-down prescription or top-down control. Imagine a human trying to design every cell in your body. We wouldn't be able to do it; we wouldn't be able to design ourselves. It's a lesson in humility. I don't think any one committee can design a whole complex system, but a complex system can design itself through self-assembly. It does so by constantly competing, having the freedom to explore, and optimizing for growth.
JAG: I'm going to get a rebellion on my hands if I don't get to some of our audience questions. Let's try to take a few. MyModernGalt, always great to see you, is asking Gill, “What are your thoughts on the risk of disinformation/misinformation online today? Do you think AI opens up new risks that we haven't yet accounted for, like AI-generated audio or video?”
GV: Yes, I would say AI-generated audio and video are already here. Again, you want a sort of symmetry between the side generating and the side discerning in terms of capabilities. If you had your own AI assistant that you trust, that you own and control, that tells you whether something is real or fake and can detect it, augmenting your own cognition instead of putting more cognitive load on you, then that's fine. You just want symmetry in terms of capabilities. I would say trying to suppress these capabilities is not the way forward. There's a lot of upside left on the table, and really everything is an acceleration, a race in terms of capabilities. Now, if it were just the government that had access to these tools and could generate propaganda so good you couldn't even discern whether it's real, because you didn't have access to AI you control, that would be really bad. That would be the main risk. But peer-to-peer, we have the ability both to generate and to discern, just like humans. A smart human can tell you something that's completely false and updates your world model, and either you have the cognitive security to catch it or you augment your cognition with a group of people you can talk to. It's like: Is this real? Is this correct?
There are peer-to-peer ways to validate information. I think if everybody has access to more intelligence, then we'll be able to collaboratively filter things, either between us and our AIs or among collections of AIs for subgroups of people you trust and whose values you feel aligned with. I think that's the future; that's how it's been since the dawn of civilization. There's been an increase in capabilities for generation, but there are also increased capabilities for discernment of truth.
JAG: Okay. Anne M asks, “When do you estimate launching or shipping the first commercial version of your product?”
GV: Yes, this chip that I just showed, we're packaging it into a development kit for enterprises, innovative startups, and maybe a handful of individuals. It is a small batch. It is our test chip that we're aiming, by the end of summer, to put in the hands of the first customers, which is very exciting in terms of timelines. To go from concept to a prototype delivered to customers' desks in three years is pretty great. Next year is when our million-probabilistic-bit chip launches and should be widely accessible. That chip is a proper product, not just an experimental development kit for those trying to get ahead. But depending on the org, already starting to experiment with thermodynamic computing is essential, because the disruption is coming next year and they need to get ready in terms of how this affects their algorithmic stack. Whether you're in finance or defense, obviously it's mission critical for you to have the most cutting-edge capabilities. Then if you're in general AI, obviously there's a free market competition there that's heating up, and any advantage you can get, you should take. So yes, if anybody wants to use the dev kit early, we have a sign-up form, and we're going to put out some of the software in the open first and then the hardware, because we just didn't do a very large run of these first chips. You know, it's between 200 and 1,000 early customers, but you can apply on the Extropic AI website; there's a sign-up form for those interested.
JAG: Great. I have another comment here from Kingfisher. He says he's bullish on AI, but he thinks people are overhyping what AI is currently capable of. Too many think it's the be-all and end-all. How would you respond to that?
GV: Yes, I would say the current AI capabilities are not the end game. I think calling human-like or human-level AI "AGI" is very short-sighted. I compare it to geocentrism, but in the space of intelligence. I think human intelligence is a mile marker on our ability to understand the complexity of the world, predict it, and steer it. I worked on AI for physics, which is much harder than emulating a human: understanding biology, matter. I was working on generating quantum matter and superconductors and esoteric materials using AI. I think it's going to keep going. Current systems are not human level. They can emulate what a human would respond, but they don't have agency yet; they don't have the curiosity to seek out new information, to decide whether to explore or exploit. They don't have agency yet.
Right now we just have raw intelligence, raw compression ability to predict the next token in a sequence, but we don't have that sort of agency. Right now, whoever leverages AI and becomes the source of agency for a fleet of AIs can create products that generate a lot of revenue and really impact the world in a positive way, and I encourage people to do that. Everybody has the ability. Even if you literally don't know how to code, you can just ask the AI to code for you nowadays. Really, human agency plus artificial intelligence right now is a golden period. Eventually, will we figure out agency for AIs? Yes, probably. I think we're going to need way more computing than we have right now; that's what I'm trying to bring forth. But really, my goal with this form of computing is not an anthropocentric goal. I'm not just obsessed with trying to automate humans. That's not my goal.
My goal is to understand the physical world in order to increase our ability to expand to the stars. I'm really targeting whether we can use AI to understand our biology and control it, to simulate it, and eventually to help us with problems of materials science and all sorts of scientific breakthroughs which I think would be beyond the reach, in terms of cognitive difficulty, of any human that's ever lived. So I think the symbiosis between AI and humanity is going to be really important. Those that are closed-minded about leveraging AI are going to be left behind, and those that are open-minded and leverage it are going to do very well. That's why I feel responsible to spread this message that you should try to embrace AI, because there's a lot of upside in it for you and your descendants.
JAG: We are super excited to have you as our keynote speaker next week, actually a week from tomorrow, at Galt's Gulch in Austin. I know you've been busy changing the world and as we're hearing right now, getting your product ready to ship by the end of the summer. Have you had a chance to read any of Ayn Rand's literature? Because a lot of what you're describing seems pretty in line with some of her ideas.
GV: Yes, I really should finish what I started there. I think I started some of the audiobooks but haven't quite had time to finish them. But there's something there, I think, that's very validating. I came at it from first principles, from my own path. Of course, through culture I've obviously been exposed directly or indirectly to Ayn Rand's ideas; maybe that seeded some of the ideas that then became e/acc. It's hard to trace. Again, it goes back to what I was talking about with memetic complexes. But, you know, I think the fact that I converged onto similar principles from my own journey is really validating for Objectivism and that set of principles.
Again, for me it came from a journey of trying to understand the physics of the world, the physics of complex systems, the physics of capitalism, the physics of society at large. This seems frankly optimal. I guess there are two schools of thought in academia. One is that you do a literature search first, and then you feel like everything's been done and maybe get dissuaded from exploring a set of ideas. The other is that you go forth and build the idea out, and then see after the fact whether there's existing literature that strongly overlaps. I think there's something about our creative spurts: if you feel like it's an original idea, then you're going to be more excited. I would say there's probably a lot of overlap between what I've been a proponent of and Ayn Rand. But yes, I definitely need to connect the dots, probably looking backwards. I will work on the reading.
JAG: Atlas Shrugged definitely, or even just Anthem, because that was all about . . . She had a very unique vision. She actually published Anthem, her dystopian post-apocalyptic novella, more than a decade before George Orwell's 1984. While a lot of the dystopian writers of the time saw that this totalitarian future was going to be technologically advanced, in Rand's telling it actually became more medieval, more primitive, because they did not understand the value of individualism and freedom. They started with, as you experienced in Quebec, control of the language. So the word “I” has been abolished and lost, and that's to control people's thoughts. Well, we're also going to be doing mentoring roundtables at the conference. I'm not sure how much time you'll have with us, but maybe you could just tell us now what kind of advice you would have for young people who want to live a life of achievement and productivity and meaning, with all of the changes that are coming at us as a society at warp speed.
GV: Yes, I would say you can learn from people that have done things, but no one is a central authority for everything. You should pick and choose advice from several parties, but you should ideally derive your own worldview from scratch, derive your own set of values. Obviously, you can take inspiration from Objectivism and from what we've been saying, but you should converge onto a set of values that you convince yourself are your set of values from scratch, ideally, because that's very robust to other people trying to influence you.
Whereas I think those that converge onto collectivism tend to defer their cognition to the group. What they don't realize is that they're giving up a lot of power and control and agency, and also cognitive security, by doing so. If you just believe what you're prescribed to believe, then you will likely not have a great life, and you don't know the life you're missing out on. I would also say: don't take no for an answer. We have this saying on Twitter, "You can just do things," and it's really true. Some people will tell you you can't do it, but you can be like, really? Why not? Then you can keep going, right?
I was in Quebec, and I was like, hey, I want to be a theoretical physicist at the best schools in the US, and after that I want to be a quantum computer scientist. The physicists were asking me, what do you mean? What are you doing? Then when I was a quantum computer scientist, I said, I want to start a new paradigm of computing. They were like, you're crazy. You have a great thing going on. So, whenever you have really high agency, people will tell you you're crazy, or that you're taking too much risk. But that's usually the direction you want to go, because people that want to keep you in their prisons, keep you constrained, will usually indicate the gradient of lower risk, and you should take more risk. I think the highest risk is to take no risk. This is common advice, but I really try to live by it. Anyway, thanks so much for having me.
JAG: Yes, absolutely. It dovetails very much with the kind of ethos here at The Atlas Society of our open Objectivist approach, in which I remind people no one can think for you, not even Ayn Rand. Thank you. I'll see you next week. Very much looking forward to it.
GV: Looking forward to being in Austin.
JAG: Being back in person again. Thanks everyone, for joining us today. Be sure to join us next week when we will be in Austin. I'm going to be interviewing author Jimmy Soni to talk about his book, The Founders: The Story of PayPal and the Entrepreneurs Who Shaped Silicon Valley. We'll see you then.