One of NASA’s top advisors on artificial intelligence and innovation, Omar Hatamleh, joined InnoLead on stage at the Impact conference last fall. In an interview with Joyce Sidopoulos, Co-Founder of the robotics incubator MassRobotics, Hatamleh said that while we’re still in the “baby stages” of artificial intelligence, “anybody using these [new AI] tools, they’re gonna make the job much better, they’re gonna have an advantage, they’re gonna give them different insights.”
Hatamleh shared how NASA is using the technology today, and predicted big changes ahead that will require humans, who evolved to think in a linear way, to deal with exponential changes and disruption.
This session was recorded at Impact 2023 on October 27, 2023. To watch highlights, click “play” above. Below is a transcript of the complete session.
• • •
Joyce Sidopoulos: In your bio, it says you have four engineering degrees. One was enough for me. You speak four languages. Did you always know you wanted to work for NASA? Or was it something in your childhood that influenced you?
Omar Hatamleh: Well, as a kid, I always wanted to work on something that had to do with science. My special [interest] was aeronautics or space-related fields. I knew that since I was a kid; there was no ambiguity around that. Then I started studying. I wasn’t crazy about university or studying, but I ended up spending 18 years in college…. But the more you learn, the more you read, the more you understand that you don’t know much. It’s so big, things are so complex, and there are multi-dimensional elements to everything.
You have to set a challenge for yourself: who was I last year, and who am I going to be next year? The person next year has to be substantially better than the person today. There’s always a race and a challenge to improve yourself. There’s so much competition, and so much knowledge… How do we have a small impact that makes a difference?
Joyce Sidopoulos: Where did you grow up?
Omar Hatamleh: I was born in Spain and I moved here when I was 22 or 23.
Joyce Sidopoulos: Did you get your degrees all at the same school?
Omar Hatamleh: No, different ones. Probably six or seven.
Joyce Sidopoulos: Couldn’t just pick one? So, what is your definition of AI?
Omar Hatamleh: In my opinion, it’s a field of computer science that uses complex algorithms to try to imitate some elements of human intelligence, to be able to do tasks that usually only humans can do. AI is nothing new. It’s been used and talked about since the 1950s. Why has it become so important recently? Because we now have three different tenets: we have the data, we have the computational power, and we have the algorithms. Until recently, we [had] very basic algorithms. Neural networks are very, very complex algorithms, but the more complex they are, the more computational power you need.
So, actually, NVIDIA, with their GPUs, played a big, big role in providing the computational power to expedite this. As you know, we have a lot of data as well, so we can put everything together. But what made it really explode exponentially is that, before, you needed expertise in programming to be able to leverage and use artificial intelligence. Since November of [2022], any person can take advantage of the capacity and the capabilities of what AI can offer. And that’s changed the game completely.
Joyce Sidopoulos: But AI itself is, and we like to redefine it because AI is not the intelligence piece, right? AI is what humans program: if/then/else, right? We have to rename it “algorithmic insights.” So, it’s not really intelligence. But it’s the generative AI that makes it more intelligent. We’re getting new content, right?
You have to understand — whatever we’re doing today is just the baby stages.
Omar Hatamleh: You have to understand — whatever we’re doing today is just the baby stages. We are so primitive compared to what’s coming. In the next five to seven years, we’ll reach something we refer to as artificial general intelligence, and then there will be the point where we reach artificial superintelligence. That’s where things will change drastically, and it will be completely disruptive. We’re using these tools now. Anybody using these tools, they’re gonna make the job much better, they’re gonna have an advantage, they’re gonna give them different insights. But I don’t see them, right now, completely disrupting the market in terms of removing jobs. When we get more advanced, then much more [will be] impacted.
Joyce Sidopoulos: So you’ve been at NASA now, I said 25 years, but Scott said 26.
Omar Hatamleh: 27 in June.
Joyce Sidopoulos: So really your whole working career. How has being there shaped your views on how AI and technology are used in space?
Omar Hatamleh: NASA [has] been using artificial intelligence mainly for analyzing astronomical data. One of those recent advances is that we were able to discover a solar system with multiple suns in it. That would have been so complex to understand and interpret without the advances in algorithmic complexity that we use for that.
What I’m trying to do in my role is not just use data to analyze results. There are a lot of things I’m trying to start from the beginning. A scientist, for example, asks, “How do we need to do this research?” …I’m trying to develop systems that can go through thousands of papers every day and say, “What gaps do we have?” Once those gaps are analyzed, the system can develop a literature review for you. Instead of spending two or three months, the system will give it to you in maybe a minute. You can just tweak it with your personal touch.
Then I want to create a system that trains people writing proposals on some of the attributes of winning proposals, to give them a much better shot at winning…. Then, we’d like to use artificial intelligence to disrupt the way we do engineering designs as well. There’s a class of systems called GANs, generative adversarial networks, where two systems compete against each other. I want to see how we [can] use that to create very complex engineering designs with multiple modalities — structural, thermal, and optical — in ways no human could, in a fraction of the time, so people can spend more time doing things that need the human capacity.
Joyce Sidopoulos: So that’s what you do now, right? What do you see as the goals or the mission of NASA in the next three to five years? What are your priorities?
Omar Hatamleh: NASA is a big organization. We have 10 centers across the nation. We look, for example, at climate change, we look at astronomy, we look at sending humans to different planets. We have now worked on a complete infrastructure, and missions and analysis, for how to put a human on the moon in the next few years. This time it’s going to be a female astronaut landing there. Beyond that, we’re looking at how to develop systems on the surface of the moon to create oxygen, water…. How do we deal with long-term radiation exposure? How do you create new propulsion systems and new materials? How do you create 3D-printed structures that are actually [designed] by artificial intelligence?
Joyce Sidopoulos: Can you talk about some of the applications that you use in space and with AI? How can that translate into what everyday people are doing here at home? You use the word exaptation, which is taking knowledge or IP from one area and then using it in another area. Do you have any examples?
[Exaptation is about] how can I use technology, concepts, ideas, or solutions that were intended for one function, and then completely capitalize on that and amplify the capacity to be able to help somewhere else?
Omar Hatamleh: Absolutely, exaptation is a term I’ve been trying to push recently as much as possible. It’s derived from a biological term and refers to something that was created or invented for one function and then co-opted for a whole different function. The classic example is feathers on birds. Feathers were never intended for flying; they were created for thermal regulation, and over time they evolved into flight. That concept is: how can I use technology, concepts, ideas, or solutions that were intended for [one] function, and then completely capitalize on that and amplify the capacity to be able to help somewhere else?
For example, the digital image processing we developed for the moon missions is a foundation of what we use for MRIs and CT scans today. We have algorithmic systems that we use with the Hubble telescope to remove noise and stitch images together much more clearly. In studies now, they’re using these kinds of technologies for pathology. It typically takes a few days to analyze your samples and biopsies. Using technology like this, you can do it almost instantaneously.
Joyce Sidopoulos: That’s crazy. I want to turn to your books. You have a couple of books — an earlier one, and then your new book, which we’ll talk about second. In your book, you explore the impact of AI and tech on future jobs. You talk about ethics, technologies, and economics, and you say AI is more than a technology wave — it’s a core source of power that fuels politics, business, and society, our minds, our work, our homes. There are challenges and opportunities with AI. Can you talk a little bit about whether we should be worried in the next five to 10 years? What should we be looking out for as normal people in society, not people like you?
Every single industry will be impacted, will be built upon artificial intelligence.
Omar Hatamleh: …I don’t see [artificial intelligence] as a technology. I see it as the foundation. Every single industry will be impacted, will be built upon artificial intelligence. Obviously, like any other technology, it comes with advantages and disadvantages and challenges. The challenge with artificial intelligence is that the advantages and disadvantages will be substantially amplified. I’m writing a section [of the book] called, “The Good, the Bad, and the Ugly,” of artificial intelligence.
Joyce Sidopoulos: Is that in your new book, or was that in this book?
Omar Hatamleh: That’s the new one… [Talking] about the good: you can’t imagine the incredible progress we’re making in the medical field. We’re creating new classes of medicine. Protein folding — determining the structure of a protein — typically takes a PhD student their whole career, just for a single protein. With AlphaFold, we can do millions and millions of these proteins and understand their structures. That’s going to help us create new vaccines and new classes of medicine.
Joyce Sidopoulos: And you mentioned people living to 120?
Omar Hatamleh: Completely. There’s a whole field of longevity right now. I also lay out four elements of how that’s going to change. The first one is the biology of aging. For the first time, we’re understanding the hallmarks of aging — whether we’re talking about telomere shortening, senescent cells, epigenetic noise, or stem cell depletion — we know what they are, and we’ve started to learn exactly how to impact them so that we can have a much better, much longer life.
Artificial intelligence is going to be able to do individualized medication for you, based on your genetic composition. The cool thing will be the predictive analytics: I’m going to tell you that you have a disease before you even have the disease. After that, we’re going to be able to do 3D-printed organs as well — again, matched to your genetic composition. Nanobots will [bolster] your immunity. With all these elements, a person born today will probably have an average lifespan 20 or 30 years [longer] than ours. If that’s the case, then the whole retirement system is disrupted substantially, and the whole vehicle of the economy is going to be disrupted. So we need, collectively as a community, to start thinking about these foundational questions and start tackling them.
Joyce Sidopoulos: Living that long, aren’t there things like bone structures that are going to break down?
Omar Hatamleh: Cells can actually only divide a certain number of times, then they stop; and then you have genetic mutations from the environment — so we’re tackling all these elements. The challenge is, if I know how to extend somebody’s life until they’re 130 or 140, but it costs a few million dollars, then we’re creating a new class of society. What we would like is for anybody to have a chance to use these technologies — to [level the playing field]. As for the bad and the ugly: I’m very concerned about human rights, privacy, and democracy — how those can be impacted by visual recognition cameras and all that.
And a big part goes beyond that. There have been studies showing that, using functional MRI, I can actually read your thoughts and understand what you’re thinking about. If you see those results, it’s really impressive how close they are to that reality.
Joyce Sidopoulos: Wow. That could be bad. I don’t want people reading my thoughts.
Omar Hatamleh: So, a group at Meta came out with a research paper just, I think, a week or two ago. Now, instead of going through a functional MRI, you can just put a cap on. It’s called MEG — magnetoencephalography. The system reads magnetic signals to understand what’s happening in your thoughts. They showed a person a picture, and the artificial intelligence was able to reconstruct that picture without ever seeing it — to a whole different capacity. I feel that maybe new religions will be created in the future, and people will follow them blindly.
For example, today you use a search engine and search for something, and you have pages and pages of options. That’s fast, but with artificial intelligence, it just gives you an answer. It’s already guided you to think that way, as opposed to showing you options so you can make your own decisions, right? So, how’s that going to evolve in the future? Who’s going to control these systems? Someone could have the capacity to influence millions of people — to sell them the products they want, or certain philosophies and methodologies. So, obviously, I’m concerned about how we can tackle that in the future.
Joyce Sidopoulos: This brings me to my next question, which is about ethics. The book mentions the need for responsible AI development, right? Who’s going to govern that? What type of ethical considerations do you think we can mandate and can we mandate it completely?
Omar Hatamleh: These systems have to be fair, they have to be transparent, they have to be accountable, safe, and secure.
Joyce Sidopoulos: Who is governing them?
Omar Hatamleh: So, two things. You’re finding corporations trying to be proactive and saying, “Okay, this is going to be a big problem, so let’s try controlling it on our own.” I know the federal government is also coming up with more initiatives at the federal level that impact everybody else. These elements are very challenging. The problem is that innovation and regulation have to be balanced: if you put in too much regulation, you stifle creativity and stop things from evolving, but at the same time, it cannot just be the Wild West where people do whatever they want.
There were about seven companies that started coming up with ideas. They started looking at giving generative AI output electronic watermarks, to try to control things like deepfake videos and other content. But there are always ways around it. That’s a problem, right? That’s the difficulty of regulating something like this. The way we regulate assumes the thing being regulated is static and simple, but this is evolving — by the time you finish, the regulation is already obsolete.
There was a concept in the ’90s called concurrent engineering. Before that, a person would design a product, and it would go to the engineer, and all the way down the path. Once it got to manufacturing, they would tell you, “Our machines are not capable of producing that.”
I’m thinking of creating a sandbox where we have regulators, ethicists, philosophers, policymakers, engineers, and scientists working together from day one to develop products. By the time a product is finished, it’s already gone through the rigors of all these multiple, creative, diverse perspectives.
And just quickly, to give you an example about bias: I did some research on one of these generative AI tools, giving it a bunch of parameters to create an image. I said, for example, “Create a nurse,” and ten out of ten times it was a female. I said, “Okay, draw me a picture of good competency,” and ten out of ten times it was a man. I said, “Draw me an unemployed person,” and it was a minority woman. The systems are not good. The systems are not bad. The systems learn from what we have in the data. As humans, we tend to be biased; we have a lot of issues. How do we create a new future where we feed and teach these systems unbiased and ethical data that can actually produce something along those lines?
Joyce Sidopoulos: So you just need more sampling, right?
Omar Hatamleh: Potentially, we could do something called synthetic data. We can teach them synthetic data that’s clean from all these inherited biases.
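The synthetic-data idea Hatamleh describes can be sketched in a few lines. This is a hypothetical illustration, not any real NASA or vendor system: it generates training records whose role and gender attributes are drawn independently, so no role-gender correlation like “nurse → female” is baked into the data.

```python
import random

def synthetic_records(n, roles, genders, seed=0):
    """Generate n synthetic training records. Role and gender are drawn
    independently and uniformly, so the data carries no role-gender bias."""
    rng = random.Random(seed)
    return [{"role": rng.choice(roles), "gender": rng.choice(genders)}
            for _ in range(n)]

records = synthetic_records(10_000, ["nurse", "engineer", "CEO"], ["female", "male"])

# Because the draws are independent, each role is split roughly evenly by gender.
nurses = [r for r in records if r["role"] == "nurse"]
female_share = sum(r["gender"] == "female" for r in nurses) / len(nurses)
```

A real debiasing pipeline is harder than this, of course — the sketch only shows the core property synthetic data can guarantee by construction: statistical independence between a sensitive attribute and everything else.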
Joyce Sidopoulos: That leads back to ethics, though. Who’s making that kind of data, right?
Omar Hatamleh: Exactly. It’s both elements, and companies are doing it. And for the first time, the federal government is pushing for a substantial impact, with the US creating guidance and so on. So they’re working hand in hand now — you see meetings between the Biden administration and these companies to tackle this issue, because it will affect everybody. It’s a challenge, and it’s not an easy challenge to solve. It has to be worked on collectively.
Joyce Sidopoulos: For this audience, because a lot of them are in business, can you share some key takeaways or actionable advice that you have for them? What should they be thinking about and how should they prepare for this AI joining the world?
We as humans evolved to think in a linear fashion. We were farmers, we were hunters — things didn’t change that fast. The challenge now is, we live in an exponential world.
Omar Hatamleh: The main thing is, we as humans evolved to think in a linear fashion. We were farmers, we were hunters — things didn’t change that fast. The challenge now is, we live in an exponential world. How do we fill the gap between linear thinking to exponential thinking? I think that’s going to make or break what we have. We need to find ways to change our way of thinking, and to adapt and embrace a completely new way of thinking about things.
Another thing is, the advantage these artificial intelligence systems have, in addition to everything else, is an element called collective learning. If you have millions of driverless cars in the streets, whatever one car learns, all the other ones learn. As humans, we are individuals. Even though as a society we pass knowledge collectively from generation to generation, as individuals we learn individually. How do we tackle what’s happening in the future, while also embracing things like creative thinking, orthogonal thinking, and empathic ways of looking at things?
Joyce Sidopoulos: Do you think this has to be put into our educational system? Someone like me, I don’t think I could ever do that.
Omar Hatamleh: The educational system, if you think about it, is so obsolete. We have one person conveying a message and trying to teach, and a bunch of people absorbing the same message. But people are different. …Some people like to learn by example, some like to repeat, and others like to learn by themselves. But we give the message to everybody [at the same time]. For the most part, it’s not encouraging innovation and creativity — just cookbook-recipe kinds of elements.
Who are the most creative people ever? Kindergarteners. If you know the spaghetti challenge — they give you spaghetti, and they’ve run it with a whole array of different people, from CEOs to executives, all across the board — the class of humans that actually won the competition was kindergarteners.
The problem is, because of the educational system we have, we start diminishing creativity and innovation. You start thinking, “You can’t do that,” “That’s too difficult,” “That’s impossible,” or “That’s too hard for you.” We are limiting ourselves. What’s actually ironic is that we then spend millions of hours trying to make people more creative in the work environment.
Joyce Sidopoulos: I want to get to some of the questions from you guys. There’s a lot. There’s a big number next to that first one. I can’t read it very well, Scott.
Scott Kirsner: It has 10 votes, and it says, “You say AI is aimed at mimicking human tasks and behaviors. When NASA discovers aliens, will you design AI systems to mimic extraterrestrial intelligence?” And I don’t know if you want to tackle the “when” in that question.
Omar Hatamleh: So, alien life doesn’t necessarily mean life like us humans. Aliens could be bacteria; it could be something minute. It could be anything, right? We’re using artificial intelligence now, by the way, in missions like Kepler to discover what we call habitable planets. As of today, I think we have about 5,500 or so of these planets. A habitable planet is one that’s close to the size of Earth and sits at a distance from its star where it could potentially support life. The way we find them: if you look at a star, there is no way you can see its planets directly — it’s impossible. Instead, as the planet transits across the star, you see a very, very slight dimming of the star’s brightness. The depth of that dimming tells you the size of the planet, and its frequency tells you the orbit. Once you find that, you start looking at other things, too, because these are candidate planets, and you have to confirm them with a different method.
Eventually, you confirm that you have a habitable planet. If you look at our solar system — our galaxy, I mean — we have almost trillions of planets. Statistically speaking, the potential of having some kind of element of life is very high. What kind of life? We don’t know yet. But from a physics perspective, it’s very likely that there could be life on other planets.
Now, what life is that? Does it have intelligence or doesn’t have intelligence? That’s a whole different story.
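The transit method Hatamleh describes comes down to a simple ratio: the fractional dimming of the star (the transit depth) is approximately the square of the planet-to-star radius ratio, so measuring the dip lets you recover the planet’s size. A minimal sketch, ignoring real-world effects like limb darkening and grazing transits:

```python
def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    """Fractional drop in starlight as the planet crosses the stellar disk:
    depth ~ (R_planet / R_star) ** 2."""
    return (r_planet_km / r_star_km) ** 2

def planet_radius(depth: float, r_star_km: float) -> float:
    """Invert the relation: recover the planet's radius from the observed dip."""
    return r_star_km * depth ** 0.5

# An Earth-sized planet crossing a Sun-sized star dims it by only ~0.008 percent,
# which is why detecting these dips takes space telescopes like Kepler.
depth = transit_depth(6_371, 695_700)   # Earth and Sun radii in km
```

The time between successive dips gives the orbital period, which (with the star’s mass) places the planet’s distance from the star — the other half of the “habitable” test Hatamleh mentions.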
Joyce Sidopoulos: Then you get back into: what is intelligence? Is it just living there? Do you think fish have intelligence, or is it all just biology — that’s just how they act, right? So, how will you measure that?
Omar Hatamleh: That’s an excellent question. Intelligence has multiple levels. There are about 11 dimensions of intelligence. We think that if somebody gets an A in class, they’re smart and a genius, when in fact that’s only one element; there are so many elements to it.
So obviously, if you’re a fish, you need certain elements of intelligence to understand your environment and everything else. When you get to humans, it’s much more advanced: we can understand complex topics and things that animals cannot. There’s a whole transformative scale of intelligence, and multiple dimensions of intelligence as well.
Joyce Sidopoulos: So you’ll have that built into the AI that’s going out to space to analyze these?
Omar Hatamleh: This is something we’re also pushing a lot, because the further we go from Earth, the longer a signal takes — it could take almost 40 minutes to go back and forth. If you’re sending a probe to analyze something, and the analysis needs to happen instantaneously, we can’t wait for the signal to travel back and forth; the probe has to be self-sufficient.
It has to have an advanced autonomous system, empowered by artificial intelligence, to be able to make these decisions for us, right? So it’s very, very important that all these things have intelligence elements in them. Our interest in humanoid robots is also because the infrastructure we use has to be interchangeable between humans and robots. It’s going to help — I’ve seen the capacity of [these machines] to do multiple things and go into environments that could be hazardous for humans. There’s a sort of synergy between all of that.
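The round-trip delay Hatamleh cites is just distance divided by the speed of light, doubled. A quick sketch, using Mars as the example (the distances are approximate, and Mars is my choice of illustration, not one named in the conversation):

```python
SPEED_OF_LIGHT_KM_S = 299_792.458  # speed of light in vacuum, km/s

def round_trip_minutes(distance_km: float) -> float:
    """Round-trip light time for a radio signal, in minutes."""
    return 2 * distance_km / SPEED_OF_LIGHT_KM_S / 60

# Earth-Mars distance ranges from roughly 55 million km (closest approach)
# to about 400 million km (opposite sides of the Sun).
near = round_trip_minutes(55e6)
far = round_trip_minutes(400e6)
```

At the far end of that range, a command-and-response cycle takes on the order of 40-plus minutes — which is why a probe that must react in real time has to decide on its own, exactly as the transcript argues.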
Scott Kirsner: The second question is, “Could you talk a bit more about the disruptive threat to engineering design?”
Omar Hatamleh: Like I said, right now we have amazing design tools compared to what we had in the ’70s, when we used to do so many hand calculations. Now we have very complex finite element models and tools for very advanced fluid mechanics. These things help us enormously to create better designs. The next generation is creating designs that would have taken enormous amounts of time. Let me give you an example: you have a mission, and in the mission you have to analyze for structures, for thermal, for radiation, for optics.
Sometimes, when you optimize one of these elements, the others lose optimization. It can become so complex to get the best overall design. By using a system like advanced GANs or advanced artificial intelligence, it would create the best-optimized design you could have. Another thing is adding elements of creativity that can complement your design. As humans, I can have 30 engineers try to solve a problem or create a design, but believe me, it’s going to be very limited. If we capitalize on the power of generative AI, it’s going to give us ideas, I guarantee you, [that] there is no way I would have thought about.
It complements even the concepts of how to design something from scratch, without any inherited ideas… It’s going to complement them from a design perspective, it is going to complement them from an analysis perspective to create missions on a different scale. Even designing new classes of materials, possibly as well for certain missions, analyzing the trajectories and navigation. It goes into every single element of a design from conceptualizing an idea to delivering and designing it, and eventually having a mission.
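The trade-off Hatamleh describes — improving one discipline degrades another — is the core of multidisciplinary design optimization. Here is a deliberately toy sketch, not a NASA tool: a single design variable (wall thickness) drives two competing objectives, stress and mass, and a weighted-sum grid search finds the compromise for a given set of priorities. Both objective formulas are invented for illustration.

```python
def stress(t: float) -> float:
    """Structural objective: stress falls as the wall gets thicker (toy model)."""
    return 100.0 / t

def mass(t: float) -> float:
    """Mass objective: mass grows linearly with thickness (toy model)."""
    return 10.0 * t

def best_thickness(weight_stress: float, weight_mass: float) -> float:
    """Grid-search the weighted sum of the two objectives over a thickness range."""
    candidates = [0.5 + 0.01 * i for i in range(951)]  # 0.5 .. 10.0
    return min(candidates,
               key=lambda t: weight_stress * stress(t) + weight_mass * mass(t))

# Favoring structure picks a thick wall; favoring mass picks a thin one —
# the same tension, at vastly larger scale, that generative design tools resolve.
thick = best_thickness(weight_stress=0.9, weight_mass=0.1)
thin = best_thickness(weight_stress=0.1, weight_mass=0.9)
```

Real mission design couples thousands of variables across structural, thermal, radiation, and optical models, which is why Hatamleh argues AI-driven optimization finds combinations no team of 30 engineers would reach by hand.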
Scott Kirsner: The next question on the screen is, “As we consume more and more insular worlds and experiences powered by AI, do you think our natural curiosity of the unexplored will be diminished?”
Omar Hatamleh: That’s something I actually worry about a lot. There’s a study showing that when people stop driving after a certain age, their intellectual capacity diminishes substantially. What’s coming with driverless cars? In the next generations, most of us are going to use these autonomous cars and not even use our intellect to drive. Driving is a lot of mental exercise; it’s very, very good for longevity and for maintaining our intellect.
By depending more and more on these kinds of things for making decisions and analyzing things, there is the potential to lose some of the intellect that we have as humans over the long term. It’s biology; it’s evolution. And it goes beyond mental capacity to physical capacity: if you spend hours finishing something on a laptop, your vision is impacted. I see the potential for humans to be impacted in everything from intellectual and mental capacity to even our physical form.
Joyce Sidopoulos: You’re gonna have us living until 120, but we’re not gonna have mental capacity. When people stop working, that’s when they usually go downhill fast, because they don’t have that interaction.
Audience member: Going back to the engineering design piece — I was at PTC in Boston, at the Seaport, a couple of weeks ago, and saw there’s a partnership with NASA to design, with a new kind of AI-powered CAD, a life support backpack, as I understand it. It was fascinating to see some of the prototypes there, because where a human would design it [with] right angles, the AI-powered assistant designed it squiggly, and it was really cool to see.
I’m not an engineer, but I’m sure they were probably 100,000 times more effective. I’m curious how NASA, at the forefront of AI and innovation — you talked about ethics — views that from a risk perspective. Think of self-driving cars: I think they’re way more effective, way better. But the moment one crashes, there’s a hype cycle and the news goes crazy. I’m curious about something like a life support backpack, and how NASA views that, because although it’s probably 1,000 times better, the moment something happens or malfunctions, people will jump on it, because it’s made by AI.
Omar Hatamleh: …When you go for human missions, there’s so much rigor, so much risk analysis, and the factors of safety are amplified substantially. It’s very, very rigorous when there are humans in the loop. We’re not just going to depend on what the system gives us and go with it. Let me give you an example. We’ve been using something called generative structural design… Let’s say you have plates under certain stresses. The stress is distributed differently depending on how you’re pulling on the plate, the material, and the thickness.
…We created one of these with generative design. It looks like an alien structure — so interesting. It optimizes the material: where there wasn’t a lot of stress, it took material away; where high stress was expected — compressive, tensile, or whatever — it beefed the structure up completely. We use this as an initial idea, and then, obviously, it has to go through rigorous analysis. We have to go through experimental or analytical [validation] — a whole process that’s very, very rigorous.
What you’re talking about here is very essential, because it gives us ideas we potentially didn’t think about. So it complements, but doesn’t replace. It complements our capacity to create something more innovative and more creative, while at the same time we use our corporate knowledge and processes to ensure the risk is diminished and the design does the job it’s supposed to do.
Scott Kirsner: I’m curious, Omar, how NASA views SpaceX. Are there things you’ve learned from their successes and failures? Is there somewhat of a competition between NASA and SpaceX?
Omar Hatamleh: Definitely. So, SpaceX is not a competitor — it’s helping us, and we also helped them become successful. After the space shuttle [retired], there was no way for us to send astronauts to the space station; we were leveraging foreign nations to give us a ride at $100 million or $120 million per person. It was a gap that we had. Our interest is in creating corporate knowledge that enables local industries to create jobs, build capacity, and fill a gap that we needed filled. That was the initial piece; now, with Starship, they’re building more robust capacity to go to different planets and so on.
We’re working with them collectively. But what’s interesting to see is how industry does things differently than the government. In industry, there is a lot less bureaucracy, so you can have a little more of an element of risk. You can see the evolution of their designs and concepts compared to what we’re doing. Eventually, it’s about how we work together as a community. I think that’s what’s going to make it a success for all of us. We help each other, and we’re not going to make it without all of us working together at the same time.
Audience member: Coming back to the engineering design question, one school of thought says, "We have lots of institutional knowledge, and when people retire, you want to grab all of that and train the AI." It becomes rule-based design, where you take the principles of whatever you have learned over 50 to 60 years of practicing a discipline and then teach the AI with it. Then you have these examples where the AI comes up with a non-traditional design because it doesn't know the constraints. So, how do we navigate that so you can let AI do its own thing? At the same time, you have certain principles that have come either out of practice, or that people have developed and honed. If we go too hard on rule-based design, are we really constraining the power of AI?
Omar Hatamleh: It's always a balance. It's always definitely a balance, and it depends from organization to organization, what you're trying to do. You cannot just put something across the board for everybody, because everybody's gonna be different. The thing is, what is the delicate balance between both of them? With corporate knowledge, that's something I've been looking at a lot lately as well. You have a person, you spend so much time training them in understanding the knowledge and becoming an expert, and then this person leaves. Certain organizations keep more of this corporate knowledge, but some of them don't keep it at all.
I was thinking of two things with artificial intelligence. You start giving corporate knowledge to artificial intelligence, and then you create the collective learning of all that artificial intelligence, something called a system of systems. It goes across the whole organization, and it doesn't matter who comes and leaves; the knowledge stays in the system and keeps being built upon. You create something where you keep your corporate knowledge to a degree that has never happened before in the history of the organization. How do you leverage these kinds of things, plus the human element, and complement them to create the perfect balance between both?
Joyce Sidopoulos: One of these questions kind of also talks about the impact on the purpose of life, as AI and generative AI takes off…[For example,] driving, you’re not going to need your mind as much, are you going to use your senses as much? You said the second book is going to be a little more philosophical.
Omar Hatamleh: Definitely. I wonder, also, about the evolution of our species, our social fabric, and how it's going to be impacted. In my generation, when I grew up, if you wanted to talk to a person, you went and talked to them in person or on the phone; that's it. The newer generation spends about 60 percent of the time communicating via social media, and the next one after that is going to be impacted substantially by AI.
If we're talking about software systems, they are gonna have 100 percent compatibility with your personality. …We could have a humanoid robot in the next 30, 40, to 50 years. With this humanoid robot, I bet you, you're not going to be able to tell the difference. Whether you are male or female, you design one to be your companion. And you can describe it and say, "Okay, I want it to be a three-star Michelin chef, I want it to be a musician like Beethoven, I want it to win a Nobel Prize when it comes to physics and a multiplicity of things, I want it to be a medical doctor in every single specialty, and I want the personality to exactly match my personality." You want it to be everything you want, right? You want the physical attributes to be extremely attractive as well. How is that going to affect the future evolution and interaction between humans and relationships and everything like that?
It sounds like science fiction, but if you think about it, we’re headed to somewhere along these lines in the future, and maybe less comprehensive, but there’s gonna be some elements for sure. So my challenge is, what does the future of society look like? Who’s going to evolve?
Audience member: I’m curious about this alien intelligence question… and I was realizing there’s a lot of alien intelligence on Earth — so dolphins, octopuses, and pets. We already communicate with our animals; we have some sense of theory of mind. Just take the pet sort of example, to what extent is AI going to change our definition of what counts as intelligence or create more fluid communications with our animal neighbors, with maybe plants or whatever, and also to what extent is NASA considering things like octopuses as models for what we might encounter for alien intelligence?
Omar Hatamleh: So the octopus, for example, is an interesting one. Seventy percent of its neurons are not in the brain, like in humans, but distributed across its tentacles. That tells you something about centralized versus decentralized. Research has shown that decentralized governance models are substantially more advantageous in a lot of ways. An octopus lives for about three years, then dies, and its offspring come out; that tells you how a business model has to evolve continuously. I mean, because things are evolving, can I just use the same business models I have used for a long time? It teaches you how to merge with your environment. It teaches you how to take things that seem ineffective, or trash, and look for ways they can create new value for you.
There are multiple things that you can learn from nature. That's where biomimicry and exaptation and all these things come in to complement the ways you do things. In terms of artificial intelligence, hopefully one day we'll be able to understand more about how to communicate with these creatures. That could come, for example, from functional MRIs that help us understand thoughts and so on. Maybe we'll find ways to understand what these animals want, what their communication means to them, and so on.
Joyce Sidopoulos: What advice would you give people who want to get into this kind of field like space exploration or AI? I mean, should every student have to take computer science?
Omar Hatamleh: You have to be flexible, you have to embrace risk, you have to be able to change all the time. You have to have some emotional intelligence. Don't look at the specific fields of study; look at the core foundation that's gonna give you an advantage compared to an artificial intelligence entity. [That] is the creative element, the exponential ways of looking at things. Degrees will change. Right now, if you go to college and get a degree, by the time you graduate, a lot of the stuff you've learned is obsolete. It doesn't matter what you're studying. What matters is how to build a foundation where, whatever happens, you have the core to be able to adapt, to change, to think differently, to differentiate yourself from anything else. This would be my advice for the future.
Joyce Sidopoulos: Thank you. That was amazing.