
Podcast: A Beginner’s Guide to Artificial Intelligence

By Tyler Smith |  February 13, 2023

For many years, artificial intelligence was a fuzzy, far-off, futuristic concept for most people.

But in recent months, that changed suddenly, with the release of ChatGPT in late November 2022; announcements from Google, Baidu, Microsoft, and Alibaba about chatbots and generative AI; and increased awareness of art-focused AI platforms like DALL-E and Midjourney.

For this “beginner’s guide” podcast, we wanted to ask some really basic questions about what AI is, how non-experts can get up to speed, how it may create competitive advantage for some companies, and how it might impact our jobs. So we rang up Matt Baker, SVP of Corporate Strategy at Dell Technologies.

You can subscribe to our podcast, “Innovation Answered,” on Spotify, iTunes, Stitcher, or Google Podcasts.


Tyler Smith: 

Welcome! You’re listening to the Innovation Answered podcast. Innovation Answered is the podcast from InnoLead, the web’s most useful resource for corporate innovators.

I’m Tyler Smith, and I’ll be your host for this bonus episode, called A Beginner’s Guide to AI. We’re taking on this topic at the 101 level, without jargon or complexity, assuming that you may be an AI newbie. We want to make things crystal clear, so you can get down to business.

Artificial intelligence is not a new concept. Computer scientists first started using the term and shaping the field in the mid-1950s, and in the 1980s, there was a boom in funding for companies working at the cutting edge of artificial intelligence hardware and software at the time. Even the Steven Spielberg movie “AI,” about a realistic robot boy abandoned by his family, is more than two decades old.

But now, with demonstrations of how AI can create images or craft poetry with a new level of sophistication and polish, the possibilities of AI are once again center stage.

We wanted to ask some really, really basic questions about what AI is, how non-experts can get up to speed, how it may create competitive advantage for some companies, and how it might impact our jobs. So we rang up Matt Baker in Austin, Texas. 

Matt Baker, Head of Corporate Strategy, Dell Technologies

Matt Baker:

I’m the Head of Corporate Strategy for Dell Technologies, so generally speaking, it’s my job to look over the horizon and see what’s coming. AI is clearly something you’d have to be buried under 10 feet of concrete to not hear about. It’s definitely an important topic for us; it’s an important topic for the industry; and I think it’s an important topic for society. 

Tyler Smith:

Matt has been at Dell since 2005, and before that he was at Intel for a decade.

Matt Baker:

I think we’re really just at the very, very beginning of the impact of AI on our lives, our businesses, etc. But I will say that AI is a difficult term, and that’s probably something we should cover. Generally speaking, my job is to sort of look after the business impacts of technology. I have a CTO partner, John Roese, who looks at the hard technology; I’m more on the philosophical side of what it means and frankly, the business side of how it could impact, help, hurt, etc.

Tyler Smith:

When we poll people in large companies right now about technologies [they are] most active[ly] exploring in 2023, AI is at the top of the list. How would you define AI so that it is distinct from just incrementally smarter technology or better algorithms?

Matt Baker:  

The range of solutions that get thrown into the AI bucket is pretty massive. And some of them, frankly, are just incrementally smarter, more capable things. I would put robotic process automation, or RPA, into that category — it’s not really new, but it is actually being made smarter by the infusion of AI. So while RPA itself is not AI, it can be impacted and improved by it.

I think the best way to think about AI is as a living, learning capability that can evolve and optimize and improve with diminishing amounts of direct human involvement. I like to think of AI technology through the lens of the life of a child. There’s a period of complete dependence on the parents, and the early life of a child is, in essence, training — we get potty trained; we get trained to walk; we learn to talk. But ultimately, after a period of time, the child — or the model, in this case, the AI model — becomes increasingly capable and independent, and it begins to learn on its own. The amount of intervention required by the parent, or the developer, diminishes over time, and the capability increases over time. And that is in contrast to traditional software, where every single act requires specific coding.

[Traditional software] is hard coded; it’s like a state machine — it’s designed to respond to a finite set of stimuli with a finite set of responses. Whereas with AI, there is no finite set; it consumes large amounts of data and improves over time. In the beginning, it requires a lot of hands-on human involvement, but through time, it just builds and snowballs. That’s the way I think of AI as different from, not just better than, algorithms or traditional software.
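
To make that contrast concrete, here is a toy Python sketch (not from the interview) that compares a hard-coded, state-machine-style responder with a model that learns its behavior from example data. The triggers, labels, and examples are invented for illustration.

```python
# Hard-coded software: a finite set of triggers mapped to a finite set of responses.
RULES = {
    "order status": "Your order ships in 2-3 business days.",
    "return policy": "Returns are accepted within 30 days.",
}

def rule_based_reply(message: str) -> str:
    # Every behavior here was written by hand; nothing is learned.
    for trigger, response in RULES.items():
        if trigger in message.lower():
            return response
    return "Sorry, I don't understand."

# A learned model: behavior comes from example data rather than hand-written branches,
# and it can improve as more labeled examples are added.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

examples = [
    ("where is my package", "order_status"),
    ("has my order shipped yet", "order_status"),
    ("can I send this back", "returns"),
    ("what is your refund policy", "returns"),
]
texts, labels = zip(*examples)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(list(texts), list(labels))

print(rule_based_reply("What's your return policy?"))  # needs the exact keyword
print(model.predict(["has it shipped"]))               # classifies from learned patterns
```

The point of the toy is just the shape of the difference: the dictionary never gets better on its own, while the model’s behavior comes from data you can keep adding to.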


Tyler Smith: 

How would you suggest a non-AI expert get up to speed on AI and gradually become more comfortable with it?

Matt Baker: 

The first thing to do is get familiar with the history of AI. There’s this unfortunate history of AI as this coming super intelligence. AI, generally speaking, back in the day, was this notion of general artificial intelligence, the notion of a human-like machine that can rival us in terms of its capabilities and its intelligence. 

I would say what we talk about today as AI is way, way, way far removed from that general artificial intelligence concept that has been discussed since the 50s, into the 70s, so on and so forth. Skynet is not going to become alive and take over anytime soon. In fact, I think [given] how little we know about human cognition, I don’t believe it’s even possible to get to that level. What we have today are very advanced algorithms that are not designed to replace human beings. They’re largely designed to augment and build on our capabilities. 

I think the way to think about it is, let’s go get a brief history. Before we got on, I looked up… a great blog from [Harvard University’s Science in the News blog]. I think it’s from 2017, and it’s a great primer on the history of AI. It offers up a lot of really good links. I mean, that’s 2017 — it’s a little dated, but what’s in there is really interesting. 

I would take the time to read about the current crop of large language models, which is one class of foundational AI; computer vision; and these other foundational models. No one’s going to develop their own large language model unless they’re in a very sophisticated space. Instead, understand what these foundational AI capabilities are, how they were developed, who has developed them — OpenAI is an example that’s in the news a lot these days. At that point, you can understand [and] you have a real base [for] it. 

What we’re seeing today, and why I think there’s so much focus on it, is that what we’re building and seeing now is really approachable AI. Like I said, no one’s going to go develop a large language model if they’re in the marketing department at a company. But if they’re in the marketing department at a company, they’re probably going to be really interested in ChatGPT — which has been in the news a ton, and which is really a large language model built by OpenAI with a really approachable user interface: a chat-like function where you can ask it anything, and it spits out copy that could be leveraged for all sorts of stuff.

So that’s what I would do: get a brief history, and understand the base technologies around these foundational models: language models, computer vision, etc. I’d also urge folks to stay away from the hype — don’t think that the machines are coming to replace us; it couldn’t be further from the truth. [Lastly], I think the most interesting thing — and the words you’ll hear thrown around — is this notion of generative AI solutions. And again, they’re based on these foundational AI models to do all sorts of useful things. I mentioned ChatGPT; you can chat with it, prompt it to come up with a story, or whatever. There’s also generative art — Stable Diffusion is what I was thinking of.

All of these are examples that we see in the news because they’re very obvious and visible. But behind the scenes, if, for example, you’re a developer, there are tools that leverage the same language models and machine vision to do all sorts of things, [like] generate code which you can then go modify and optimize. These [tools] become human-machine partnerships, like cobotics, etc. And I think the way to get up to speed is just to work your way through the history; understand the foundational tech; understand what we’re seeing in the news today. It’s all available to play with — you can mess around with it right now, if you can get through, because it’s kind of overloaded at the moment. And then understand this notion of generative AI, which I think of as the more approachable age of artificial intelligence.


I hate to say it, but I was on a trip with a friend and his son, and his son was using ChatGPT to help him with schoolwork. Now, some people would say that’s a bad thing. But actually, as I watched him work with it, I was like, ‘Wow, this is really interesting.’ It was really driving productivity for him. It wasn’t replacing his thinking; it was more of a productivity enhancement.

Tyler Smith:

What industries do you see most being affected by AI in the next two to three years? And why are those industries going to be the first?

Matt Baker: 

I think most industries are already being impacted by AI, but they’re being impacted in the deep technical side of the business, inside of the research and development sides of things. 

I mean, think about it — the automotive sector has been impacted by AI. Short of self-driving cars, which I think are a long way away, driver aids and all of the stuff that you see in a car today with LiDAR (light detection and ranging sensors), cameras, etc., are all feeding into machine learning algorithms that keep us safe and reduce the load required for driving on the highway.

Agriculture [is] a place that is so often overlooked, but is insanely infused with machine learning algorithms and automation. I think what we’re seeing now, again, I keep using this term, ‘approachable AI.’ 

In your R&D department, I’d be surprised if there’s any industry that’s not being impacted, and has been impacted over the last decade, by machine learning, deep learning, etc. However, I think why it’s so interesting now is that it’s approachable. It’s approachable by the marketing department; it’s approachable by the sales teams; it’s approachable by the support organizations to infuse it into how they do support. So it’s this real approachable, almost pedestrian use of AI that I think is most promising, and why I think there’s no industry that you can point to. 

I think it’s probably more like horizontal swaths through the organization. A good example: marketing departments write a ton of copy, right? Anyone within the firm could use something — we keep coming back to ChatGPT, just because it’s been in the news. You could prompt it, [and] it’s going to spit out some really good copy, which you then end up focusing more on tuning, optimizing, etc., versus clackety, clackety, clackety for an hour or two. I think it’s going to be more like the horizontal things, as we see these new capabilities come online: the developer tooling that we’re seeing, using the same large language models to generate chunks of code, which can then, again, be optimized. That’s why I say I don’t think it’s an industry; I think it’s going to take over these horizontals throughout the broad industry.

And it will come about as we start to see these tools that leverage foundational AI technology, and then are twisted into doing tasks that are helpful to different elements of the business. I don’t think anyone’s going to be immune from the impact of generative AI technologies and these new, approachable AI technologies that again, my friend’s son is using like any other digital technology. I just think it’s going to be pretty pervasive.
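
As a rough illustration of the marketing-copy workflow described above (prompt a large language model, then spend your time tuning the output), here is a minimal sketch against OpenAI’s text-completion API as it existed in early 2023, using the pre-1.0 openai Python package. The prompt, model name, and parameters are examples, not recommendations.

```python
import os
import openai

# Assumes an API key is available in the environment.
openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "Write three short, upbeat taglines for a rugged laptop "
    "aimed at field engineers. Keep each under ten words."
)

# One call to a hosted large language model; the "work" shifts from
# drafting copy to reviewing and tuning what comes back.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=150,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```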

Tyler Smith: 

How do you see AI creating a competitive advantage for companies that do leverage it relative to those that don’t? Or is it going to be a wide gap? How do you expect that to play out?

Matt Baker: 

I think it will be a wide gap. I come back to that notion of partnerships — human-machine partnerships. The thing that I see [as] most interesting about this current crop of generative AI functions — that seems to be the term that sticks best — that are super approachable [is that] they just drive a massive amount of productivity. So if you’re a company that’s not embracing these AI technologies, you’re going to be left in the dust, because it is such a productivity enhancer to take these menial tasks away and allow the rest of your brain space to think more deeply and spend more time on the critical thinking side of [things], versus the raw production side of it.

I think that, you know, getting rid of those boring tasks and repetitive tasks and base functions is going to drive a massive amount of productivity. If you’re not embracing this technology, you’re likely going to be left behind as your general productivity diminishes. 

The last thing you want to think, though, is that this is an opportunity to get rid of OpEx, to replace people. It’s not going to work that way in the short term — and I don’t even think in the long term. It’s going to be this continuous evolution of human-machine partnership that drives human productivity. Part of why I’m so excited about it is that AI technology to date has been pretty fussy and difficult to deal with. You had to be a very advanced computer scientist to approach it. Now you have all these tools that are super approachable, and therefore I think we’re about to see this Cambrian explosion of cobotics and human-machine partnership capabilities that drive a massive amount of productivity.


Tyler Smith: 

This conversation with Matt Baker of Dell has been great. I’ve learned a lot; I hope you have, too; and the conversation is going to continue. But first…

Kristen Krasinskas: 

I’m Kristen Krasinskas from InnoLead, and I’d love to invite you to join our community of change-makers and innovators this October in Boston. That’s when we’re hosting Impact — October 25th through the 27th. It’s the only event designed exclusively for people working in big organizations, in roles like R&D, digital, innovation, and new product development. You’ll learn from peers at Johnson & Johnson, Goodyear, Cisco, Fidelity Investments, the Boston Celtics, and more. You can find out more about Impact 2023, or grab your early bird ticket, at innolead.com/impact. And now, back to A Beginner’s Guide to AI, with Matt Baker of Dell Technologies.

Tyler Smith: 

Do you divide AI into any categories or buckets?

Matt Baker: 

It’s almost a set of concentric circles. At the center, you have the architectural things that people probably talked about five years ago — and I’ve mentioned them throughout — the evolution of machine learning, deep learning, neural networks. Those are the foundational architectural enablers. On top of that, [in] another concentric circle, are these foundational AI models that I mentioned: large language models, computer vision — those are the ones that easily come to mind, but there are many others. That circle is the basis of what we’re seeing now, which is yet another concentric circle of practical applications built on those complex models. So you see something like GPT-3, which is a large language model, and then around that GPT-3 ecosystem, you see things getting built like ChatGPT, or their version of generative art. You then see others out there — they’re all over the place. There are some that, for example, have a large language model that can replicate the voice of a historical or current figure and can leverage that to have a conversation that sort of mimics their personality and their intelligence. That’s the next concentric circle, and I think what we’re going to see is just bigger and bigger concentric circles. So the categories, I think, are more about those classes, building up from core foundational technology out to things that are more useful.

I will say that there is this divide between AI and general artificial intelligence — the ‘scary robot that takes over the world.’ Don’t waste time on generalized AI — that’s just not going to happen anytime soon. What matters today is machine learning, deep learning, neural networks. Those are the words you hear people throwing about, but it’s these foundational models that I think are more important to understand, because they’re the foundation of the capabilities that are being built today.

I don’t think there’s a good way to categorize it. I did a quick search before this on what categories are available, and a lot of them are stale; a lot of them don’t comprehend what we’re seeing today, which is why I think we’re in this [stage] of concentric rings of capability being built around foundational technology. It’s probably not worth a lot of time understanding the ins and outs of machine learning, neural networks, and all of these complex computer science architectural models that most of us will never have to fully understand.

Tyler Smith: 

We keep talking about OpenAI, how these approachable models are ready [for] the public right now. But they’re not perfect. You see the [generative art] ones, they can’t figure out human fingers yet, so they don’t do that very well. It’s…

Matt Baker: 

There’s like seven fingers. 

Tyler Smith: 

Yeah, like seven fingers, four rows of teeth. They can’t figure out little things right now. And even ChatGPT — sometimes you tell it to end a sentence with the word ‘apple’ [and it] can’t do it. It just physically won’t do it. What’s the timeline on those models being basically perfected?

Matt Baker: 

Remember that what you’re seeing right now is a concept car that has been trained on a limited set of data that is probably imperfect. I asked it all sorts of things about my own job, and I’m like, well, it’s about five years behind in terms of being able to talk about what Dell Technologies is doing, for example. But don’t be fooled by its imperfections when you ask it things — you can fool it very easily.

Think about it, though, if you took that foundational model, and then started training it on your proprietary data, you would get a much different outcome, and you could apply it more directly.

So think of them as imperfect concept cars. The generative art ones are really complicated, because I think the visual side of generative AI has, as you say, a long way to go, and you end up with, in some cases, rather concerning images. Like I said, though, I think as it relates to language, we’ve reached a point where the language models are advanced enough to then be trained on more proprietary data, say, within your organization, and put to better use.

So I don’t think it’s the fact that they’re imperfect — they’re imperfect because they’re trained on imperfect data. And in fact, there have been all sorts of really unfortunate and/or nefarious outcomes because things have been trained on bad data. So there’s a whole side of AI ethics and AI transparency that I’m not satisfied with today. 

But when applied to your own organization, you can better govern what it’s fed, and therefore test the outcomes coming back. I wouldn’t worry too much about what you see as the capability of the UIs that they’ve put out there for us to play around with. Instead, I would say, ‘Well, how do I apply that to my own datasets, and then become really intelligent on the latest and greatest, and do that securely?’ So I don’t think it’s that AI is imperfect today; I think it’s been trained on imperfect data. And frankly, I think, in a lot of ways, the full power of what some of these models can do today is probably governed purposefully by companies like OpenAI and others. This is a free capability; there’s also a pro capability, which you can then do other things with.

What I’ve seen when people apply the model to other tasks is far more interesting than what you get with just the ChatGPT function that OpenAI has made available for all of us to play with free.
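
For readers wondering what “training a foundational model on your proprietary data” might look like in practice, here is a hedged sketch using OpenAI’s fine-tuning endpoints as they existed around this time (pre-1.0 openai Python package). The training file and base model are hypothetical; many teams would instead adapt an open-source model or supply their own documents at query time.

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# 1. Upload a JSONL file of prompt/completion pairs drawn from your own data,
#    e.g. past support tickets paired with the resolutions that worked.
training_file = openai.File.create(
    file=open("support_examples.jsonl", "rb"),  # hypothetical file
    purpose="fine-tune",
)

# 2. Kick off a fine-tune of a base model on that file. The resulting model
#    behaves like the general-purpose one, but reflects your organization's data.
job = openai.FineTune.create(
    training_file=training_file["id"],
    model="davinci",  # example base model
)

print(job["id"], job["status"])
```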

Tyler Smith: 

I was curious: in what ways do you believe AI will impact your job at Dell?

Matt Baker: 

It’s an interesting question. I could create the Matt Baker bot that replaces me and could interact with you and answer all your questions, although I don’t think that’s going to happen.

For me, how it’s impacting my job is that I think there are a lot of ways in which we can apply this technology to augment the capabilities we have today. Dell has been known as a company for a lot of innovation in terms of online business, our direct model, the interactions that we have with our customers every day. There’s a huge amount of opportunity around building new ways for our customers to interact with us, creating much more advanced interactions online that help you find your answers much more quickly, in a way that’s super approachable.


I think they also can help, for example, our support folks. They have a huge amount of data at their disposal that’s difficult to get through. And I think what we’re seeing is people talking about these new models as sort of a new way of executing search. So imagine you’re a support agent, and you’re dealing with a lot of complex technology, and you can input a set of symptoms that you’re seeing, and you can find the most relevant set of responses to help address that issue. I think that search is probably going to be massively impacted by what we’re seeing today because the results will be so much more targeted than what we see today, which is why, I think, in the press, you see, ‘This is Google’s code red.’ It’s like ‘Oh, my gosh, oh my gosh, oh, my gosh.’ 
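
The “new way of executing search” described here is often implemented as semantic search: embed the knowledge-base articles and the agent’s query as vectors, then rank by similarity. Below is a minimal sketch assuming OpenAI’s embeddings endpoint (pre-1.0 Python package); the documents and query are invented.

```python
import os
import numpy as np
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]
EMBED_MODEL = "text-embedding-ada-002"

# Invented knowledge-base snippets standing in for real support articles.
docs = [
    "Laptop does not power on after a firmware update: reset the embedded controller.",
    "Docking station not detected: reseat the cable and update the dock firmware.",
    "Battery drains quickly: check for runaway background processes and recalibrate.",
]

resp = openai.Embedding.create(model=EMBED_MODEL, input=docs)
doc_vecs = np.array([item["embedding"] for item in resp["data"]])

query = "machine won't turn on since the last update"
q_resp = openai.Embedding.create(model=EMBED_MODEL, input=[query])
q_vec = np.array(q_resp["data"][0]["embedding"])

# Cosine similarity between the query and each document.
scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
print(docs[int(np.argmax(scores))])  # the most relevant article for the symptoms
```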

So, for my job specifically, I’d have to think about that a little bit more. But as for how it can help augment the day-to-day jobs of everyone at Dell Technologies, I can’t think of a role that wouldn’t be impacted in some meaningful way over the next five years with generative AI. I think it’s going to be something that sort of drives a massive amount of innovation, and, frankly, a massive amount of productivity. That’s what gets me excited.

I never look at this pessimistically as an opportunity to replace individuals or do any of that. I see it more as an opportunity to really augment us as human beings and really bring this power to drive greater productivity. It just makes me really excited. I guess I’m an optimist, but that’s how I see it.

Tyler Smith: 

Absolutely. And AI is going to impact a whole lot of jobs. But one [area] where it could really make a big impact is call centers or customer service lines, whether it’s at Dell or another company. Will AI be the agent or augment the agent? You see it already at the beginning of those calls — you’re putting in your information with an AI, and you’ve been doing that for years. So do you think that once you get connected to one of those support members, is that going to be an AI, too?

Matt Baker: 

We’ve had chatbots for a long, long time, and I think all of us have been frustrated by chatbots. I’m not talking about Dell Technologies’ chatbots; I’m just talking [about] chatbots in general. They tend to be, at times, pretty annoying.

However, imagine if they were infinitely more accurate and productive. I’d rather be interacting with them than waiting to talk to a human being. There’s a finite number of people to talk to, and if I can get to the information more quickly, then I think I’d prefer that. And given how we live our daily lives, I would welcome interacting with an intelligent bot, if you will, that’s fueled by some of these more modern AI technologies. 

I don’t see it replacing large swaths of customer support folks, because there are the pedestrian questions that are easy for the chatbot to answer, and then at some point, you’re going to have to escalate. And so the way I think about it is there’s already an escalation tree in every call center. I started my life in a call center. There’s the frontline, there’s level two, level three, level four. Maybe level one becomes level two, based on the improving intelligence of these capabilities. But again, I think it’s more [about] productivity, because I don’t think anybody’s happy with the state of customer support broadly in the world. 


I think what’s exciting is that if you are in a business that has customer support as an integral part of delighting your customers, you probably ought to be thinking about leveraging some of these capabilities in pursuit of delighting your customers, because it will create competitive advantage. 

I don’t like to buy into the ‘Will it replace?’ or ‘Will it impact…?’ because that’s not the history of technology. There are definitely displacements and disruptions, but ultimately, most of these lead to more jobs, more productivity — more, more, more, not less, less, less.

Tyler Smith: 

I’d like to say thank you to Matt Baker of Dell Technologies for coming on the show, and to my colleagues Meghan Hall and Scott Kirsner for their help on this episode.

If you enjoyed it, we’d love for you to rate or review Innovation Answered on any podcast platform.

Our next Beginner’s Guide episode will be out in March — it focuses on design thinking. But if you’d like to learn more about making change, leveraging emerging technologies, and driving growth in big organizations, check out the rest of our feed, and don’t forget to subscribe to Innovation Answered on your podcast platform of choice. This is Tyler Smith — thanks for listening!
