As the first-ever Chief Artificial Intelligence Officer at Entergy Corporation, Andy Quick is working to drive AI usage that can create quantifiable value — from using LIDAR and computer vision to assess storm damage faster, to deploying predictive models that can prevent transformer failures and reduce power outages. Quick, who has spent nearly three decades at the company, rising through the ranks of IT and serving as a business unit CIO, now reports to Jason Chapman, Entergy’s Senior Vice President of Technology and Business Services.
Headquartered in New Orleans, Louisiana, the Fortune 500 energy company has annual revenue of nearly $12 billion. Entergy provides electricity to three million utility customers in Arkansas, Louisiana, Mississippi, and Texas. The company owns power plants that have a combined generating capacity of about 24,500 megawatts, including five nuclear reactors and several large-scale solar facilities.
We spoke to Quick earlier this month about why his role was created; the AI tech stack at Entergy; his prediction about AI’s impact on jobs; and some early case studies.
• • •
What prompted Entergy to create the role of Chief Artificial Intelligence Officer, and how does it align with your broader innovation strategy?
I think senior leadership believes there is the promise of creating value with AI very broadly across the company, in support of our mission, and so the decision was to investigate what AI might look like. Before we formed the role, we formed an AI task force [that was] very cross-functional, where we learned a lot about the emergence of generative AI and AI in general and developed a strategy for how we might embrace AI at scale at the company. Once we developed that strategy and senior leadership supported its execution, we felt that a dedicated role at an officer level to lead that execution made sense. It gives us the right level of focus and a senior role that’s going to support the change management required for AI adoption.
How do you prioritize AI initiatives across the enterprise, and what criteria do you use to decide where to deploy AI?
Well, first and foremost, our mission is to create value for the company for all four of our stakeholders — our customers, our communities, our employees, and our owners. In terms of how we prioritize specific AI initiatives, we really look at three dimensions: viability, feasibility, and desirability. Viability is about value — what is the level of value that a particular AI initiative can create? Feasibility — is it technically feasible to do? Do we have the data? Do we have the technical expertise? Do tools exist, and can we technically pull it off with AI? And third is desirability. For an initiative that is going to change how people work, what is the desirability among the people who would actually have to embrace and use AI to create that value? Some people are very excited about it, while others who don’t necessarily understand AI may have some trepidation. That’s why desirability is one of the dimensions we use to prioritize AI initiatives.
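The three-dimension screen Quick describes can be captured as a simple weighted-scoring exercise. The sketch below is purely illustrative: the weights, 1-5 scale, and candidate initiatives are invented for the example, not Entergy's actual criteria.

```python
# Illustrative sketch of a viability/feasibility/desirability screen.
# Weights, scale, and candidates are hypothetical, not Entergy's process.

def prioritize(initiatives, weights=(0.4, 0.3, 0.3)):
    """Rank initiatives by a weighted composite of the three dimensions.

    Each initiative is (name, viability, feasibility, desirability),
    with each dimension scored 1-5.
    """
    wv, wf, wd = weights
    scored = [
        (name, round(wv * v + wf * f + wd * d, 2))
        for name, v, f, d in initiatives
    ]
    return sorted(scored, key=lambda item: item[1], reverse=True)

candidates = [
    ("Storm damage assessment", 5, 4, 4),
    ("Transformer outage prediction", 4, 5, 5),
    ("Back-office document search", 3, 5, 3),
]
ranking = prioritize(candidates)
```

In practice the scores for each dimension would come from stakeholder interviews and technical assessments rather than a fixed table.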
The Society for Human Resource Management recently released a report that found 19.2 million jobs are at a high or very high risk of displacement due to automation. Many employees fear AI automation is going to take their jobs away. How do you address that concern among your employees?
Well, it’s a common question I get. I spend a lot of time with employees talking about artificial intelligence and how it can be helpful. The question of job displacement is one that I get all the time. The way to address that is to encourage employees to think about history and how new technologies have shown up over time. An analogy I use a lot is the introduction of the automobile. Before the car was invented, the primary form of transportation was riding a horse, and when a car came onto the scene and quickly was adopted by society, if you were in the business of making horseshoes or making horse saddles or making horse feed, etc., obviously the demand for what you do is going to go down, and that’s going to have an impact on that particular type of job. But think about all the new jobs that were created with the introduction of the car. You need mechanics, you need manufacturing, you need roads, you need infrastructure, motels, diners, fuel — we can go on and on and on. Look at any disruptive technology that introduces automation. Initially, there could be some types of jobs that may be impacted, but in every case, there are net new jobs that tend to be higher-level jobs that require more skills.
Back to AI… Sitting here today, it’s very difficult to forecast or crystal-ball what those new jobs will be. Today, there is a job called the prompt engineer that did not exist a few years ago. That’s a higher-skilled, well-paying job. I see a future where there will be roles for AI supervisors — human supervisors who make sure that AI solutions and AI tools are doing what they are supposed to do, just like managing employees. I also see technical jobs for creating and maintaining AI solutions. So there will be jobs, even if it’s difficult to forecast or guess what they might be. That’s where I try to steer the conversation when people ask about job disruption.
Look at any disruptive technology that introduces automation. Initially, there could be some types of jobs that may be impacted, but in every case, there are net new jobs that tend to be higher-level jobs that require more skills.
Have you had instances since you’ve been using AI where you have had to lay off workers because AI automation has replaced their job tasks?
No. We have not.
However, you would say that you have had AI automation change their jobs?
Yes, that is fair. There are instances where maybe somebody’s job changed, or automation was introduced, and the individual would shift in terms of what they do. Typically, it sort of elevates the job into something of a higher level.
Can you give me an example?
Generically, there are roles where people gather, manipulate, and analyze data that is then used for decision-making. In some of those jobs — an analyst’s role, say — instead of compiling information and data, through automation the job shifts toward analyzing the outcomes for management decision-making, as opposed to the internal preparation, staging, and manual manipulation of the data. The time is spent analyzing what’s been produced.
Can you walk us through a high-impact AI use case at Entergy?
Yes. We are located in New Orleans, Louisiana, and our market is Louisiana, Texas, Mississippi, and Arkansas, so we are frequently in the path of hurricanes. When a hurricane rolls through, it creates a lot of damage. The power is out for many customers, and repairing that damage is very costly in labor and materials. So we are working on a project that uses LIDAR technology and aircraft to fly over a damaged area, and then uses AI software on that LIDAR data to quickly assess the actual damage. Today, after a storm passes, we send people out to assess the damage, and that can take three to five days. The idea is that, in addition to or instead of that, a plane takes off when it’s safe, flies over damaged areas, and collects LIDAR data that gets fed into an AI computer vision model that can very quickly assess the damage from the sky rather than waiting for people to do so on the ground. Our hope is that this restores power quicker so customers can get their lights on, reduces cost, and improves the customer experience.
You have another project in which you are using artificial intelligence to reduce the frequency and duration of power outages associated with maintenance outages.
Yes. Another example is our transformer outage prediction. If you know what a transformer is — it’s the mechanical device on a utility pole — they break. We have built an AI tool that can take data from the network and predict whether an overhead distribution transformer has a problem and is at risk of failing. From a maintenance perspective, we can remediate the problem before the transformer breaks, as opposed to a customer waiting for it to break, calling our contact center, and then waiting until we have somebody available to repair or replace the transformer and restore power.
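At a high level, a predictor of this kind ingests per-device telemetry and emits a failure-risk score, and units above a threshold get a proactive work order. The sketch below is a minimal illustration of that shape: the feature names, thresholds, and scoring rule are assumptions for the example, not Entergy's actual model.

```python
# Illustrative risk scoring for overhead distribution transformers.
# Feature names and thresholds are hypothetical placeholders; a real
# system would use a trained model on network telemetry.

def risk_score(telemetry):
    """Combine simple telemetry signals into a 0-1 failure risk."""
    score = 0.0
    if telemetry["overload_hours"] > 10:        # sustained overload
        score += 0.4
    if telemetry["top_oil_temp_c"] > 95:        # running hot
        score += 0.3
    if telemetry["voltage_deviation_pct"] > 5:  # unstable secondary voltage
        score += 0.3
    return score

def flag_for_maintenance(fleet, threshold=0.5):
    """Return transformer IDs whose risk meets or exceeds the threshold."""
    return [tid for tid, t in fleet.items() if risk_score(t) >= threshold]

fleet = {
    "XFMR-001": {"overload_hours": 14, "top_oil_temp_c": 98,
                 "voltage_deviation_pct": 2},
    "XFMR-002": {"overload_hours": 3, "top_oil_temp_c": 70,
                 "voltage_deviation_pct": 1},
}
to_repair = flag_for_maintenance(fleet)  # feeds a proactive work queue
```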
Employees outside of the AI team are coming up with innovative ways to use AI on their own… I would say it’s centralized, but also democratized.
And what sort of results have you had from that?
Positive results. In just one year, this initiative has prevented 536 unplanned outages and avoided over 48,000 outage minutes.
The recent surge in commercial applications of generative AI began in the fall of 2022. Have you been using generative AI, and how are you using it?
We have quite a number of generative AI tools. There are really two classes of generative AI tools that we use. First, off-the-shelf tools — like OpenAI’s ChatGPT, Anthropic’s Claude, and Microsoft’s Copilot. Then we create specialized GenAI tools that use internal information. We will create a tool and train it on very specific information to give employees a more specialized experience or answer specialized questions. For example, we have a tool that our contact centers are using — it’s only used internally within the company. While a contact center agent is on the phone, they can use a generative AI tool that has been trained on our internal contact center procedures to help them answer customers’ questions.
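The contact-center tool Quick describes follows a common pattern: retrieve the internal procedure most relevant to the agent's question, then have a generative model answer from that context (retrieval-augmented generation). The sketch below shows only a toy retrieval half using naive word overlap; the procedure snippets and the stubbed model call are invented for illustration and stand in for an internal, firewalled system.

```python
# Toy illustration of the retrieval step in a contact-center assistant.
# Procedure snippets and the stubbed model call are hypothetical.

PROCEDURES = {
    "outage_report": "To report an outage, verify the account, "
                     "then open a trouble ticket.",
    "billing_dispute": "For billing disputes, pull the last three "
                       "statements and escalate to billing.",
    "new_service": "New service requests require a service address "
                   "and a connection date.",
}

def retrieve(question, k=1):
    """Rank procedures by word overlap with the question (naive retrieval)."""
    q_words = set(question.lower().split())
    ranked = sorted(
        PROCEDURES.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in ranked[:k]]

def answer(question):
    """Pair the retrieved procedure with the question for a (stubbed) model."""
    context = retrieve(question)[0]
    # In production this step would call a firewalled GenAI model
    # with the retrieved procedure as grounding context.
    return f"Per procedure: {context}"

reply = answer("How do I report an outage for a customer?")
```

A production system would replace the word-overlap ranking with embedding-based search and add access controls, but the retrieve-then-generate shape is the same.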
How is the AI function structured within the company? Is it centralized, is it federated, or is it hybrid?
I would say it’s mostly centralized. We have a centralized group, but we also have an AI community where there are a lot of people across the company who are really AI enthusiasts. And so we provide AI tools to employees to use, and they are able to use AI daily. Employees outside of the AI team are coming up with innovative ways to use AI on their own. So to that extent, I would not say it’s federated. I would say it’s centralized but also democratized. We are trying to get AI in the hands of everybody in the company.
When generative AI was introduced, there were a number of companies that started to put restrictions on employees using it because of security reasons. Have you been monitoring and managing employees and their use of GenAI?
Oh yes, very much. AI governance is one of the pillars of our AI strategy, and your question falls into that dimension of it. ChatGPT came out in November 2022, but for us at Entergy, I would say the interest and excitement began in April of 2023. From the get-go, we blocked access to all general-purpose, publicly available generative AI tools — ChatGPT as an example — and today that remains the same. Employees cannot access any public GenAI tool through their Entergy computer. All the tools that are available — AI tools, GenAI and otherwise — sit inside our firewall and have a lot of controls and monitoring on them. We make AI available, but with a lot of guardrails and protections in place.
We use our community as a platform to help with the upskilling of the non-technical employees.
I guess this is all a part of training the workforce. How are you upskilling non-technical employees to effectively work with or understand AI?
A couple of different ways. First, we formed an internal AI community chaired by my team, and it’s a very big tent. Everybody is welcome, especially people who don’t have experience with AI. That community has grown to be very large — I think we are over 700 people in the company — so it’s a pretty good group of folks. It’s community-driven, and it is a place where they get together and talk about special topics, but they also do training. For example, right off the bat, a lot of people who had no exposure to GenAI tools wanted to learn how to use them, so we held a working session on prompt engineering to show them how. Simply using a GenAI tool goes a long way toward teaching people what it is and what it can do. So we use our community as a platform to help with the upskilling of non-technical employees. We also have a learning platform that we call AI University. The majority of AIU courses are curated from Coursera; AIU has a catalog of very specific, curated courses on AI that are available to people who want to learn more about how AI tools work. The AI community and AI University are our two current platforms for upskilling.
Which platforms or tools have become essential to your AI stack, and why?
Most of our AI solutions run in AWS, in their cloud, and we use a lot of their tools within AWS. There are a couple of reasons why. One, the success of AI tools is highly dependent on the quality of the data that is actually used. Prior to AI being a function of the company, we had already established an enterprise data warehouse that we call the Datacore — that’s our brand name for it. We have over a petabyte of enterprise data in AWS’s cloud. We were already using the AWS stack for data, data engineering, and data analytics, so it just made sense to expand the use of the AWS platform to support AI solutions as well.
How do you get along with vendors when building AI solutions that you want to roll out? How do you work with them? How do you thrash out ideas? And what’s important to you in that part of the relationship?
If we are developing something new together, we make sure of two things: first, that the vendor has the technical capability and prowess to actually deliver what is expected; and second, that the way we work together is very congruent in terms of following an agile approach to product development. If we are just purchasing an off-the-shelf product, then it’s more of a traditional software evaluation process, where we evaluate the tool against some criteria and see if it works.
With regard to buying the product, building the product, or partnering with a vendor, what is your approach to that? What would make you determine if you are going to buy, if you’re going to build it yourself, or if you’re going to partner with an AI company?
We take an all-of-the-above approach. We are agnostic. If there’s an AI use case, we don’t have a preconceived notion of whether we build it, buy it, configure it, or co-develop it with a tech company. We look at whatever makes sense. In terms of what goes into the decision, sometimes the work is very specialized and outside our capabilities — something like specialized data processing — where it would not make sense for us to create it, and it would make sense to use an off-the-shelf tool. In other cases, we do have the internal capabilities, and we’ve got the data. What really makes these tools useful is the data used to train them, and nobody else is going to have our data, so more than half the battle is just having really good data. Once you have that, there are a lot of off-the-shelf tools that do the AI piece of it. And then there are time and economics — how long it would take to do something internally versus integrating an off-the-shelf product, and what is fiscally responsible.
…More than half the battle is just having really good data. Once you have that, there are a lot of off-the-shelf tools that do the AI piece of it.
Are you using AI agents, and if you are, what are you using them for?
We are not using AI agents, although using AI agents is part of our strategy.
Why?
Well, I think they are continuing to evolve. Agentic AI — we call them AI assistants — is really just combining data, data analytics, generative AI, and automation into a digital AI solution. Agents are just AI solutions with automation added. You may start out with a chatbot that can only do Q&A, and eventually we might add automation to that chatbot so it can do things for us. So it is in the plan, it is on our radar screen, it’s on our roadmap. But we are evolving, and we will continue to evaluate the use of agentic AI.
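The progression Quick sketches, from a Q&A chatbot to an agent that acts, amounts to letting the assistant dispatch to registered actions instead of only returning text. A toy sketch of that idea, where the intent matching and the `ACTIONS` registry are invented for illustration:

```python
# Toy sketch: a Q&A bot that gains agency by dispatching to actions.
# The phrase matching and ACTIONS registry are invented placeholders.

def create_ticket(issue):
    """Stand-in for real automation, e.g. opening a trouble ticket."""
    return f"ticket opened for: {issue}"

# Automation layered on top of plain Q&A.
ACTIONS = {"open a ticket": create_ticket}

def respond(message):
    """Answer with text (Q&A), or run an action if one matches (agent)."""
    for phrase, action in ACTIONS.items():
        if phrase in message.lower():
            return action(message)   # the bot now *does* something
    # Plain Q&A fallback: in a real system, a generative model's answer.
    return "Here is some information about that..."

qa_reply = respond("What are your storm safety tips?")
agent_reply = respond("Please open a ticket for my flickering lights")
```

Growing the `ACTIONS` registry is, in this simplified picture, what turns a chatbot into an agent.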
Featured image courtesy of Entergy.