
An interview with Moshe Vardi, recipient of the 2025 Computer Pioneer Award
For Moshe Y. Vardi, University Professor and the George Distinguished Service Professor in Computational Engineering at Rice University, Houston, Texas, U.S., computer science has always been rooted in the precision of mathematical equations. From the age of ten, he knew that within that precision lay answers and a unique kind of beauty. In fact, when Vardi’s fourth-grade teacher started a math club to teach linear algebra, he jumped at the chance to participate, and throughout his career, his mother would remind him of his immediate reaction to it.
“When she asked me how the math club was, I said it was very beautiful,” he recalled. “And she said, ‘You mean interesting.’ But I was adamant; it was beautiful! There is an aesthetic in mathematics. Not everybody appreciates it, but math has a core beauty, and apparently even as a fourth grader, I saw it.”
What started as a fascination with the beauty of math grew into groundbreaking innovations that shaped computational engineering. Vardi has made numerous contributions to the fields of computer science and engineering. This year, the IEEE Computer Society celebrates Vardi’s achievements with the 2025 IEEE Computer Society Computer Pioneer Award in Honor of the Women of ENIAC, given for his contributions to the development of logic as a unifying foundational framework and a tool for modeling computational systems. The IEEE Computer Society recently interviewed Vardi to discuss his career, how the field has evolved, and what that work means today. The following summary dives into Vardi’s thoughtful consideration of the field and his advice for the future.
You have had an illustrious career, shaping the foundation of computational systems. How did this work become an area of passion and focus?
The computer is the idiot savant. Now with AI, things may change, but for a long time, it was the idiot savant, and you had to give it directions in a language without ambiguity. This is what programming languages are for. My career has been spent using mathematical logic to describe computational systems.
In some of my most noted work, I was able to propose, together with a colleague, Pierre Wolper, a different algorithm for model checking. When we arrived at the better algorithm, the first thing that struck us was that the math was just beautiful, very elegant.
Later, a colleague approached me and said, “I think this is going to be practical. Let’s experiment with it.” So, I engaged with him and then with some of my Ph.D. students to investigate different algorithms and explore their usefulness experimentally.
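The automata-theoretic approach Vardi describes reduces verification to a question about automata: build an automaton for the *negation* of the desired property, take its product with the system, and check that product for emptiness; for Büchi automata, non-emptiness means some accepting state is reachable and lies on a cycle. The sketch below illustrates only that emptiness test, not Vardi and Wolper’s actual algorithm, and the tiny product automaton at the bottom is entirely made up for illustration.

```python
def buechi_nonempty(states, edges, initial, accepting):
    """Return True if some accepting state is reachable from `initial`
    and sits on a cycle, i.e., the automaton accepts some infinite word."""
    # Step 1: compute the states reachable from the initial state.
    reachable, stack = {initial}, [initial]
    while stack:
        s = stack.pop()
        for t in edges.get(s, []):
            if t not in reachable:
                reachable.add(t)
                stack.append(t)
    # Step 2: for each reachable accepting state, check whether it can
    # reach itself in one or more steps (an accepting cycle).
    for a in accepting & reachable:
        seen, stack = set(), list(edges.get(a, []))
        while stack:
            s = stack.pop()
            if s == a:
                return True  # accepting cycle found: a counterexample run
            if s not in seen:
                seen.add(s)
                stack.extend(edges.get(s, []))
    return False

# A made-up three-state product automaton: q1 is accepting and loops on itself,
# so the automaton is non-empty (the property under check would be violated).
edges = {"q0": ["q1"], "q1": ["q1", "q2"], "q2": []}
print(buechi_nonempty({"q0", "q1", "q2"}, edges, "q0", {"q1"}))  # True
```

In a real model checker the product automaton is far too large to build up front, which is why the experimental work Vardi mentions mattered: practical algorithms such as nested depth-first search explore the product on the fly.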
What challenges did you face early on in this work? How did you overcome them?
The paper we published in 1986, “An Automata-Theoretic Approach to Automatic Program Verification,” was originally submitted to a conference in 1985. It was rejected because people said it looked very simple and wasn’t significant enough for the conference. So, my collaborator, Pierre, and I added another section that was kind of complicated, and then the paper was accepted to the IEEE Symposium on Logic in Computer Science (LICS) in 1986.
But the paper’s merit lies in its simplicity. We showed that something complicated can become very simple when viewed from the right angle. The clever idea was that we knew how to make it simple. Today, nobody looks at the complicated section of the paper; the simple section remains alive and well. Simplicity is a feature, not a bug.
That work led to your time consulting at Intel. What was that experience like?
I spent many years consulting for Intel, based on the initial work we did in the seminal paper. Intel said, “Well, your logic is very nice, but it’s an academic language. We need something that will really address the needs of engineers.”
I took it very seriously. When I went there, I said, “I’m not going to come and lecture you. You need to tell me what your problems are. I’m going to listen to you.” The work that we did was really a joint project, based partly on my more theoretical ideas, and the more practical pieces came from Intel.
So, we ended up designing an industrial language inside Intel, which led to a process of standardization, and it became an IEEE standard. When we started, the idea was very theoretical, but it ended up being an actual working tool for the industry. It went all the way from theory to real life.
As you consider the profound impact you have had on the field, what do you see as your legacy?
As one gets a bit older, one thinks, “Okay, what will I be remembered for?” Most scientific papers are forgotten in 50 years; only a very small number are remembered, and that’s because they’ve been very significant.
When you look at my Google Scholar, you’ll see I have some papers that are barely cited. This is the science lottery. We don’t know in advance which paper will have an impact and which paper will not. I’ve written about 800 papers, and there is a long tail of papers that nobody will remember, but I hope to be remembered from the paper series at the top of my Google Scholar.
I hope the original theoretical paper we wrote in 1986 will have lasting value. To think that something you did decades ago still has value today is, indeed, pretty amazing.
Who has inspired you over the course of your career?
I have to go back to my parents. Both of my parents are Holocaust survivors. My mother was in Auschwitz and managed to survive; my father managed not to go to Auschwitz. They were both teenagers; my father was 16, and my mother was 15 when that happened. And when the war was finally over, they met in a displaced-persons camp in Germany and decided to go to Israel.
In Israel, there was what we call the War of Independence, and my parents joined a group that was going to start a kibbutz, a kind of collective settlement. Not long after joining, they decided to have a family. My older sister was born just a few years after the war, in 1950, and I grew up on the kibbutz.
To me, that’s an amazing story of human resilience. My parents came from literally the seventh circle of hell and emerged to say, “Okay, we have lost so much, but it’s time to start a family.” That story of resilience, to me, is just mind-blowing.
It’s why I’ve always been inspired by the Winston Churchill quote, “If you’re going through hell, keep going.”
In your role, you have served as an advisor to many. What has struck you as an important lesson for the next generation of computer scientists and engineers?
For computer scientists and engineers, life wisdom is just as important as technical knowledge. The students who do the best are those with life wisdom, which we don’t often focus on in academia. We focus on the technical stuff; we think that’s the hard part. But I have students who are very, very bright technically, but do not do very well because they also have to manage a career. You have to interact with other people. You have to work in an organization. All of these take skills outside of technical ones. And I’ve also seen it the other way around: I have had students who were not always technically the strongest, but they knew how to bring their best and did well.
Also, you have to see people for who they are. The most important principle of my advising philosophy is that when I’m an advisor or mentor for a junior colleague, I work for them. I’m their career coach, and I need to meet them where they are. Different people bring different talents, and we must help them bring out the best in themselves.
With your illustrious career as a backdrop, what advice do you have for future leaders in the field?
They should consider all applications for their work.
Early on in computing, we thought we were just playing. It was like one big video game where programmers were playing games with computers. We were having fun, you know? It was like solving puzzles.
But suddenly, we realized that computing is running the world, and when we look at the world we created, it’s not always a pretty picture. I didn’t see it coming. I thought that we were doing good things for the world. And, of course, many aspects of computing are good for the world; you can think of many, many things that have benefited from it. For example, without this technology, we would have been creamed by COVID.
But you also have to look at the adverse effects of technology. The majority of our students now seek counseling. It’s very common for my students to request extensions on projects or assignments and tell me it’s because they are having a hard time. I don’t even ask questions because I know they are. Social scientists debate the causes, but the younger generations all spend an enormous amount of time on screens, and I feel remorse when I think, “How did we not see this coming?”
This means that I’m now trying to do a better job of talking to students about the fact that technology is a double-edged sword. There is a good part, but also a negative part. And I try to tell them they need to do better than we did in my generation. We did not even think about the negative impact. This generation must think about the negative impact.
IEEE’s tagline is, “Advancing Technology for Humanity.” Humanity is the goal; it’s about technology to support humanity and the public good. That’s the purpose of technology.
I’m not saying that we have to look at our jobs every day and say, “What am I doing for humanity?” It takes time before you can see it, but we should never forget that we’re here to serve something bigger than ourselves.
Working to support a better world makes me think of my father. I came from probably 10 generations of rabbis, and as the oldest son, he wanted me to become a rabbi. I broke the chain when I said, “I’m going to be a scientist.” My father was a little bit disappointed, and I know that it gave him some grief. But now, if he could hear me talk about computer science and engineering for the betterment of the world, I’d like to think that he’s happy with me.
Moshe Y. Vardi is University Professor and the George Distinguished Service Professor in Computational Engineering at Rice University.