Expert Spotlight: Professor Karl Van Bibber | UC Berkeley Department of Nuclear Engineering

 

On Nuclear Infrastructure

in the Global Age of AI

 

Professor Karl van Bibber
UC Berkeley Department of Nuclear Engineering





 

“You have now the need due to AI data centers, which, my understanding is, are projected in very few years to drive up the demand for electricity worldwide by 30%.

That's a scary number.


If one's going to do that, one has to do that in an environmentally friendly way, which is going to be nuclear.”






Introduction

As artificial intelligence systems scale and reshape global infrastructure, the question of how to power this transformation has become both a technical and geopolitical imperative. Few experts sit at the intersection of energy systems, national security, and advanced research as comprehensively as Professor Karl Van Bibber. With decades of experience spanning national laboratories, academic leadership, and high-energy physics, Van Bibber offers a rare systems-level perspective on how nuclear energy, AI, and global competition are converging.

In this conversation with the Society of Technology, Business & Law (STBL), Professor Van Bibber reflects on the accelerating demand for energy driven by AI, the resurgence of nuclear innovation, and the ethical and strategic tensions emerging as advanced technologies reshape both civilian infrastructure and defense paradigms. His insights situate nuclear power not merely as an energy solution, but as a foundational layer of the AI-driven economy—one that raises profound questions about access, governance, and global equity (Van Bibber, 2026).

Professor Karl Van Bibber serves as Professor and Chairman of Nuclear Engineering at the University of California, Berkeley, where he leads one of the nation’s premier programs at the intersection of energy, physics, and national security. He previously held the role of Executive Associate Dean of the College of Engineering at UC Berkeley, contributing to strategic academic and research initiatives across disciplines.

His leadership in national security and nuclear science includes serving as Executive Director of the Nuclear Science & Security Consortium (NSSC), an NNSA-supported multi-institutional initiative advancing research and workforce development in nuclear security. Prior to his time at Berkeley, Van Bibber was Vice President and Dean of Research at the Naval Postgraduate School, where he oversaw research programs aligned with defense and advanced technology priorities.

Professor Van Bibber’s career is deeply rooted in the U.S. national laboratory system. At Lawrence Livermore National Laboratory (LLNL), he served as a Senior Physicist in the E-Division and Group Leader for High Energy Physics and Accelerator Technology. From 1993 to 1999, he was Project Leader for the SLAC-LBNL-LLNL PEP-II B Factory, a major high-energy physics collaboration. He later served as Chief Scientist for Physics & Advanced Technologies (2001–2002) and Deputy Director of the Laboratory Science and Technology Office (2002–2007).

Earlier in his career, Van Bibber held academic positions as Assistant Professor of Physics at Stanford University (1980–1985) and Instructor in the Department of Physics at MIT (1976–1977), establishing a foundation in experimental physics that continues to inform his work today, including his leading research in dark matter and quantum sensing.

Across academia, government, and national laboratories, Professor Van Bibber’s career reflects a sustained engagement with large-scale scientific systems, positioning him as a key voice on the future of nuclear infrastructure, advanced computation, and the evolving relationship between technology, policy, and society.



“There is a growing realization, even among your leading environmental groups,

that without nuclear, which can provide base power, rain or shine, wind or no wind,

the idea of decarbonizing the atmosphere is going to be a fool's errand.

It's just not going to happen unless at least in the early decades where we know that nuclear works,

we actually deploy nuclear as fast as we can.”

 



MARCH 18, 2026


STBL: We wanted to hear from you today as someone who has a range of experience in nuclear engineering research, industry, and infrastructure. How are things changing with the advent of a broader, privatized nuclear industry, especially one supporting AI?

Professor van Bibber: Well, the planets are aligning in many ways, which, from the point of view of nuclear power, is all favorable.
We now have an administration that is obviously very pro-nuclear. There has been pretty strong bipartisan support for nuclear power, even before this administration, particularly in recent years.

There is a growing realization, even among your leading environmental groups, that without nuclear, which can provide base power, rain or shine, wind or no wind, the idea of decarbonizing the atmosphere is going to be a fool's errand. It's just not going to happen unless, at least in the early decades where we know that nuclear works, we actually deploy nuclear as fast as we can.

On top of that, there have been very significant breakthroughs in recent years, particularly with the renaissance of high-temperature, molten-salt-cooled reactors, of which Per Peterson in my department has been one of the pioneers. Like all things in the nuclear business, there's nothing new under the sun: all the ideas about specific designs and geometries for reactors were foreseen in the '50s and '60s, but the technology was not there to really make them work. Now there's a sufficient scientific and technical base that a lot of these ideas are, I think, going to be economically viable, particularly these high-temperature molten-salt pool reactors, which are much safer and require much less of a footprint and a much smaller staff. They're walkaway safe. You can even have a complete loss of electrical power on site; you can walk away from it, and the thing goes into a stable state.

They're also built in a way that's modular, such that municipalities can, kind of, plug and play. You may have a municipality that says, “Okay, right now there are 175,000 people living here; by 2040, that will be 310,000; by 2070, we're gonna be 650,000 people.” So you can actually buy these things and put them in module by module, at 100 or 200 megawatts, as the municipality grows, once you've built the basic footprint for the site.
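As a back-of-envelope sketch of that plug-and-play arithmetic: the module size and per-capita demand below are illustrative assumptions, not figures from the interview or from any vendor's specification.

```python
import math

# Illustrative assumptions only -- not interview or vendor figures.
MODULE_MW = 100        # assumed electrical output of one reactor module (MW)
KW_PER_PERSON = 2.0    # assumed average per-capita electricity demand (kW)

def modules_needed(population: int) -> int:
    """Modules required to cover a city's projected load, rounded up."""
    demand_mw = population * KW_PER_PERSON / 1000.0   # kW -> MW
    return math.ceil(demand_mw / MODULE_MW)

# The growth path quoted above: add modules as the city grows.
for pop in (175_000, 310_000, 650_000):
    print(f"{pop:>7,} people -> {modules_needed(pop):2d} modules")
```

Under these assumed numbers, the municipality would grow from 4 modules to 13, added incrementally on the same prepared site footprint.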


You also have a new regulatory environment that, again, is very favorable to rapid deployment of nuclear infrastructure.
In fact, some people would say that the pendulum may have gone too far, and that perhaps some of the strictures should be tightened up again.

But for the moment now, it's actually a level playing field for new designs and new projects to be licensed. So this is all very good. On top of that, you have now the need due to AI data centers, which, my understanding is, are projected in very few years to drive up the demand for electricity worldwide by 30%. That's a scary number.
If one's going to do that, one has to do that in an environmentally friendly way, which is going to be nuclear. 

There are layers of ethical issues about AI in the first place. The second point is big tech, who are buying up all the early and future capacity for these new reactors. There are many ethical issues that need to be thought of, but the fact is, no advanced society could afford, at this point, not to get on the bandwagon with AI, because the whole future economy is AI driven. Whether you like it or not, that's a fact of life, and therefore, you have to find a way of powering it.

“There's a lot of parts of the world that you don't want to see left behind in this as well.

So can they have data centers? Can they have, you know, nuclear reactors and so forth?”

STBL: What is the range of ethical questions that come up, that you mentioned, and how do those play into some of your classes where you discuss engineering, ethics, and society? If you are a professional in one of these sectors, what kind of ethical questions should you be thinking about, and how might those actually play out in specific technical questions that you're facing?


Professor van Bibber: I think the broader ethical questions exist more at the meta-level, not at the level of people who are working for, for example, Kairos Power, TerraPower, or NuScale, out of Oregon State. What they're doing is fantastic work, and I think they can be very proud and very happy about it.

The larger ethical question, which is more of a societal, political question, is (and here's where I disagree with my colleague, Raluca Scarlat) about the new nuclear coming online. This is an opportunity, you know, not that we're gonna get ahead of the curve, but at least catch up to the curve in terms of supplying carbon-free power for people. But what you find is that the future capacity is being bought up. Google (this was more than a year ago, maybe two years ago; it was in the Wall Street Journal) announced a deal with Kairos, based here in Alameda, to buy their first 500 megawatts. I think that was the right number. One can say,

Wait. This was supposed to be power to the people, and it's being bought up by the rich and famous: Amazon, Facebook, et cetera. I have some misgivings about that.

And Raluca, who knows more about this than I do, takes a quite different point of view. She suggests,

No, quite to the contrary. We should be glad that there is a sector which is actually gonna make the first offer, because if you don't start selling your early capacity, the field will languish. So, in fact, we ought to be glad that the tech companies are buying up the early capacity.

STBL: That would anchor the broader infrastructure and economies.

Professor van Bibber: Right. And it actually puts money into the pockets of people who made a lot of deep investments early on. Then, of course, there'll be the usual thing of the haves and have-nots: this is all well and good for the U.S., Britain, China, France, or Germany, but there are a lot of parts of the world that you don't want to see left behind in this as well. So can they have data centers? Can they have, you know, nuclear reactors and so forth?



“I always joked that I've never worked

an honest day in my life;

I've always been in academia or national labs,

you know.”




STBL: What do you think are some interesting aspects of what you work on, dark matter, in terms of how you approach the questions? What are the frameworks of the questions that you're asking that are sort of unique? Dark matter as a subject is very different from pursuing a more “known” quantity.


Professor van Bibber: Right. 


STBL: What do you think people can learn from that state of mind and that kind of state of inquiry, in some other adjacent field? For instance, people who are working in autonomous systems, what could they learn from the experience of research that you've been performing? 


Professor van Bibber: I would have to think about that, because what we do is so specialized, and as you know, I'm an outlier in my department. 


STBL: In which way are you an outlier?


Professor van Bibber: Well, I was recruited here because of my administrative experience, but I'm not a nuclear engineer. I have taught some of the elementary classes in nuclear engineering, the survey courses and so forth, but in terms of the work I do, well, we don't know what the dark matter is. We have a couple of leading candidates.
We're looking for a very light one called the axion, which might be a trillionth of the mass of an electron, but super dense: 100 trillion per cubic centimeter, and more weakly interacting than any other force except gravity. We build these very specialized experiments, with very strong magnetic fields and extraordinarily precise, highly engineered microwave cavities, and then our other colleagues in this collaboration are building state-of-the-art quantum amplifiers.
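Those quoted numbers hang together as a quick order-of-magnitude check. The local dark-matter halo density of roughly 0.45 GeV per cubic centimeter below is a standard textbook value, my assumption rather than a figure stated in the interview.

```python
# Order-of-magnitude check of the axion numbers quoted above.
ELECTRON_MASS_EV = 511e3        # electron rest mass, in eV
RHO_DM_EV_PER_CM3 = 0.45e9      # assumed local dark-matter density, eV per cm^3

# "a trillionth of the mass of an electron"
axion_mass_ev = ELECTRON_MASS_EV * 1e-12            # ~5e-7 eV

# If axions make up all the local dark matter, their number density is
# the local energy density divided by the mass-energy of one axion.
number_density = RHO_DM_EV_PER_CM3 / axion_mass_ev  # axions per cm^3

print(f"axion mass ~{axion_mass_ev:.1e} eV, "
      f"number density ~{number_density:.0e} per cm^3")
```

This gives on the order of 10^15 per cubic centimeter, i.e. hundreds of trillions; given the order-of-magnitude uncertainty in the axion mass, that is consistent with the "100 trillion per cubic centimeter" figure quoted above.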

So there it does intersect with the whole world of quantum information and quantum computing. Quantum sensing, or quantum metrology, uses and develops the same tools, such as SQUIDs (superconducting quantum interference devices) and devices built from SQUIDs, like Josephson parametric amplifiers, and so forth. My colleague Konrad Lehnert, of JILA at the University of Colorado, has developed these things, which have then been tremendously advantageous for quantum computing. So it's very symbiotic with that.
I wouldn't call it a practical thing yet, but I think people are now predicting that quantum computing is no longer 20 years off. It's not 10 years off. It's probably within five.
It actually starts becoming useful. No one believed that for a long time, but I think we're getting close. AI is mind-boggling, and quantum is mind-boggling; when the two get together, we're gonna live in a very scary world.


STBL: Why is the combination of AI and quantum computing a risk?


Professor van Bibber: Simply because all the power of AI, which is getting better and better and better by the month, now coupled with computers which are astronomically faster than ordinary digital computers—who knows what happens then?


STBL: What do you think are the concerns on the horizon for AI and autonomous systems?


Professor van Bibber: It's interesting. Just the other day, we had the annual Nimitz lecture here. It's been going on since, I think, the '70s. Often it's someone very distinguished, such as a former head of the Joint Chiefs of Staff. This time it was Victoria Coleman, who's actually now one of our own faculty members; she had been the head of DARPA and was the Chief Scientist for the Air Force. She's now the head of the Berkeley Space Center, which has been granted 36 acres at Moffett Field at NASA Ames.


STBL: We were hoping to hear from you about that.




Professor van Bibber: They're building, effectively, a billion-dollar satellite campus for joint research, education, and interaction with business: tech companies occupying our buildings, us working with them, and so forth. Victoria is the director of that.
In her lecture (and I had the privilege of sitting next to her at the dinner afterwards) she talked a lot about autonomous systems. I asked her about an issue there's a lot of discussion about: human in the loop versus human not in the loop. I said, this is an ethical issue, and she actually said in her lecture that the U.S. does scrupulously adhere to the laws of war. She said, we will never take the human completely out of the loop. The question I posed to her was: well, we may not, but our peer adversaries certainly have no qualms about that; do you worry about China, for example? And, actually, she said, no, that's the last thing I worry about. She said, I worry much more about hypersonic technology as being more dangerous.
We have to catch up there, and fast. 


STBL: More dangerous, in terms of what?


Professor van Bibber: Inability to intercept.




“We learn a lot through wars, and it's a shame that we're fighting so many wars.”



STBL: In terms of your experience administratively and at the level of program management, leading different institutions and research centers, and also with the Naval Postgraduate School, what is your perspective on how to build the strongest and most effective types of partnerships between government, the influence of regulatory bodies, the presence of academic work, and then also business influences, to build those partnerships? How do we establish strong frameworks for that? 




Professor van Bibber: This is something I'm not going to have a very well-informed opinion on, because I've never brought anything to market. I always joked that I've never worked an honest day in my life; I've always been in academia or national labs, you know.



STBL: Not to ask you to comment necessarily on corporate practice, but I feel like you've had a very strong perspective on different pieces of the board game, and you have experience with the Naval Postgraduate School as well as Berkeley, as well as Lawrence Livermore, and some other security groups.



 

Professor van Bibber: I would say, well, the number one thing that I'm acutely aware of, and that really smart people have been working on, is how the DOD drives innovation, and how it partners with Silicon Valley. The acquisition process in the DOD is as ossified as a Tyrannosaurus rex, you know, or a Stegosaurus. From the time you have a new concept to getting the thing deployed, it can be many, many, many years.
There are now some glimmers of hope that we're making a little bit of progress. There, I think, it falls on the DOD (and maybe the Trump administration can kind of put its boot in their behind) to streamline that process and be less risk averse.

One of the things we're learning, you know, we learn a lot through wars, and it's a shame that we're fighting so many wars. The whole discussion you've read in the paper recently is that we have these incredible Patriot missile defense systems, drone intercepts, but they're incredibly expensive. If even an adversary without your technical capability starts throwing swarms and swarms of these things at you, what are you going to do?
You're gonna run out of money and run out of parts. Well, we're learning from Iran and others. They deploy a $50,000 drone.

It turns out now, God bless them, apparently, the DOD now has created and ordered drones, which are, like, $35,000. So we're actually learning that sometimes low tech is the best tech.

We have to work that into the spectrum as well. But I think progress is finally being made on speeding up the whole acquisition process. Although the administration may not like the politics of some of the Ivy Leagues and so forth, I think it is actually a valuable thing to have our smartest officers working at our best schools, because that's exactly where you're going to get insights on how to crack this problem: how to accelerate innovation in the DOD.




STBL: How do you see the strategies that are beneficial, but also the ones that are, maybe despite ourselves, necessary for the U.S. to pursue within the next 10-15 years of developments around global tensions and possible warfare, and the new space race and competition over orbital and lunar space? What kind of things will we face? How do we position ourselves effectively, but also responsibly?




Professor van Bibber: That's a very difficult question. There's obviously a huge amount of attention being paid to space. We now have, of course, the Space Command; simply setting up something administratively doesn't solve the problem, but I think it's at least a start in the right direction. We're aware of things that our adversaries are doing. There are strong suspicions that the Russians have already nuclearized space, or plan to, either for anti-satellite purposes or even first-launch type scenarios.
We don't know. China has developed an anti-satellite capability. To be sure, the actual business of doing things in space is always harder than it first seems. The first counter to that is, of course, if people start taking out your very high-end, billion-dollar assets there, the rejoinder has to be being able to launch quickly, within 24 hours, 36 hours, flotillas of little CubeSats which you can throw up like popcorn.


STBL: Are those to re-establish the infrastructure?



Professor van Bibber: Yes. In other words, you'll never have anything as high-fidelity, in terms of resolution and so forth, as one of our fantastic satellites, but if they get taken out, you can actually have a large flotilla of these things, which may be networked, and give you better resolution through multiple simultaneous angles, whose data can be fused, or even combined in phase. Then it would tilt the table in the opposite way.
There's no way that an adversary could basically take out thousands of CubeSats. They'll come down, you put them up again.

So it is always a cat-and-mouse game. Much of this has been going on for many, many years at the deeply classified level. So neither you nor I are going to be fully read into that.



STBL: Well, it's interesting that you present it that way: the cat-and-mouse game isn't “someone took down the satellite, we're going to strike back.”
It's more about building competing capacities.



Professor van Bibber: Exactly. 



STBL: That's almost a hopeful vision for what international competition would be, versus the more violent version.







“There are many ethical issues that need to be thought of.



But the fact is, no advanced society could afford, at this point, to not get on the bandwagon with AI,

because the whole future economy is AI driven.



Whether you like it or not, that's a fact of life, and therefore, you have to find a way of powering it.”