Podcast

Purpose-Built AI and Workplace Safety with Jade Hendrix

In this episode of The Safety Meeting, KPA's Senior Director of Product Management, Jade Hendrix, shares her expertise on the integration of AI technology with workplace safety programs. We discuss the distinction between generic chatbots and purpose-built AI, exploring how purpose-built AI can help proactively correlate safety data to predict future risks.

Today we’re speaking with Jade Hendrix, KPA’s Senior Director of Product Management. With 17 years of experience in B2B SaaS technology, Jade has dedicated her career to solving complex business challenges through innovative software solutions. As a hands-on leader who focuses on aligning product strategies with real-world needs, Jade brings a unique perspective on the intersection of AI technology and workplace safety.

She’s here today to help us understand what purpose-built AI actually means for safety professionals and how it differs from the generic chatbot solutions that are currently flooding the EHS and safety software market. Thanks for being with us today, Jade. 

Thanks, Kat. Good to be here.

[00:00:55]
Alright, let’s jump in. So there’s a lot of talk about AI in safety software, but much of it seems to be more generic chatbots. Can you talk a little bit about what purpose-built AI can look like for EHS? 

Yeah, certainly. So, of course, this is an ever-evolving landscape in terms of AI and how it supports EHS professionals and further integrates into safety software.

What we really mean when we talk about purpose-built AI is that it’s not just a generic chatbot. At this point, all of us are probably pretty familiar with the chatbots that are out there for public consumption. But when we talk about purpose-built AI for EHS, it’s not just a tool for answering questions.

It’s more of an engine that’s integrated into an EHS solution. It’s built into the workflows that EHS managers actually use. And more than that, it’s an engine that really understands safety-specific data models. So it understands what incident data looks like, what audit data looks like, and what safety training data looks like, instead of a more generic tool that answers questions like, “Tell me about our slips, trips, and falls,” for example.

Purpose-built AI can proactively correlate safety information: it can tie incident reports to floor inspections and even weather data, and start to predict where the next slip might occur. So it’s really something that’s integrated, not just window dressing or a purely reactive tool.
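
To make that concrete, here’s a minimal, hypothetical sketch of the kind of correlation Jade describes. The data, column names, and weights are all invented for illustration; a purpose-built engine would learn its weighting from historical outcomes rather than hard-coding it.

```python
# Toy sketch: correlating three safety data streams by location and week.
# All column names, values, and weights are hypothetical.
import pandas as pd

incidents = pd.DataFrame({
    "location": ["Dock A", "Dock A", "Lobby"],
    "week": [1, 2, 2],
    "slip_incidents": [1, 2, 0],
})
inspections = pd.DataFrame({
    "location": ["Dock A", "Lobby"],
    "week": [2, 2],
    "failed_floor_checks": [3, 1],
})
weather = pd.DataFrame({
    "location": ["Dock A", "Lobby"],
    "week": [2, 2],
    "rain_days": [4, 4],
})

# Join signals that would otherwise sit in separate silos.
risk = (incidents
        .merge(inspections, on=["location", "week"], how="outer")
        .merge(weather, on=["location", "week"], how="outer")
        .fillna(0))

# Hypothetical weighting; a real engine would learn weights from history.
risk["slip_risk_score"] = (2.0 * risk["failed_floor_checks"]
                           + 1.5 * risk["rain_days"]
                           + 1.0 * risk["slip_incidents"])

print(risk.sort_values("slip_risk_score", ascending=False))
```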

[00:02:42]
Okay, that makes a lot of sense. So when we say that AI can help those safety teams do things like see risks earlier, what does that mean in practical terms? Like, what kind of patterns can AI spot that a human might miss?

Yeah, so though AI is a helpful toolkit, it’s never going to replace the human element of EHS and safety-specific areas. But what AI can do is find weak signals before they become major headlines in an organization. Because what AI can do that we humans can’t is sift through thousands and thousands of data points: thousands of near misses, maintenance logs, exposure readings, and safety observations. It’s really about processing that volume of data to surface clearer patterns, like spikes in small equipment issues that may indicate a looming failure.

Another example may be repeated PPE noncompliance on certain crews or certain projects. That sort of pattern may not be glaringly apparent to the human eye as we log in and look at our dashboard KPIs. AI can really help surface those under-the-covers risks that aren’t visible to EHS managers from just the data they’re looking at.
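
As a toy illustration of that kind of weak-signal detection, the sketch below flags crews with a run of missed PPE checks. The field names and the three-week threshold are made up for the example.

```python
# Toy sketch: surfacing a PPE noncompliance streak per crew.
# Field names and the threshold are hypothetical.
from collections import defaultdict

observations = [
    {"week": 1, "crew": "Crew 7", "ppe_compliant": False},
    {"week": 2, "crew": "Crew 7", "ppe_compliant": False},
    {"week": 3, "crew": "Crew 7", "ppe_compliant": False},
    {"week": 3, "crew": "Crew 2", "ppe_compliant": True},
]

STREAK_THRESHOLD = 3  # hypothetical: three straight weeks warrants a look

streaks = defaultdict(int)
for obs in sorted(observations, key=lambda o: o["week"]):
    crew = obs["crew"]
    # A compliant week resets the streak; a miss extends it.
    streaks[crew] = 0 if obs["ppe_compliant"] else streaks[crew] + 1
    if streaks[crew] >= STREAK_THRESHOLD:
        print(f"{crew}: PPE missed {streaks[crew]} weeks running -- review")
```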

[00:04:17]
That makes sense. In the example that you gave, I could see how someone not wearing their PPE, or it being forgotten, might play out: you log in one day and see, oh, it’s been forgotten. You log in the next week, ah, it’s forgotten again. In your mind, that just looks like one instance one week and one instance the next, whereas if the AI was looking at it, it could say, “Actually, this person or this team has forgotten this PPE for an extended period of time, so you should take a look at this.”

When you’re doing your day-to-day stuff, like you said, that may not be as obvious. So I can see how AI can help in those ways.

But I know one thing that we hear a lot is that people are concerned about the accountability of the AI and the recommendation engine with the AI. Because maybe there’s going to be a wrong recommendation. If that happens, who’s responsible?

Do you have any tips for how organizations should approach AI as a decision-support tool? 

You actually hit the nail on the head with the last thing you mentioned there. AI is a decision-support tool, not a decision-making tool. So, AI does not absolve organizations or managers of responsibility. It doesn’t take on that responsibility.

What it does is provide a tool to give better information, so more informed decisions can be made. Safety managers really don’t need more data! They’re flooded with data, especially if they have good participation in their safety program and are using the tools out there on the market.

They don’t need more data. What they need are tools that help them identify what patterns actually matter and where they should focus. AI is a great tool for that, and organizations should really frame AI as augmenting their expertise but never replacing it.

[00:06:06]
I feel like when we stepped into this new world of AI and thought about the possibilities, we said, “Oh, it’s going to replace all of these things that we do and all of these workflows that we spend time on.” But we are so new to this world that you have to be strategic about it and about how you are implementing this.

One thing that might be good to move to is talking about the data. AI needs good data to work well. So, what should safety professionals consider about their data quality before they start implementing these AI tools? 

Yeah, so data quality is paramount. As it has always been when we talk about accident reporting, general KPIs, and the EHS program overall, the data we have to review is only as good as the data being input into the system.

So it’s still that garbage in, garbage out sort of scenario. This applies to AI even more than before. The best AI in the world still can’t fix your sloppy incident reporting. So if you’re looking at your incident reporting, and incident reports are partially complete or only complete sometimes, or we’re still not capturing proactive reports like safety observations and near misses, AI can only do so much with what it’s given.

The tools, the AI solutions, can do a lot with data that’s structured well. But if your reports are short strings of text without much description, or without structure from selection dropdowns and pre-configured elements that capture consistent information like location, risk category, or injury nature, the AI has much less to work with. The more structured your data is, the more effective your AI insights are going to be.
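
Here’s a small, hypothetical sketch of what that structure buys you. The enum values and field names are invented stand-ins for the dropdowns and pre-configured elements Jade mentions, contrasted with a free-text report an engine can’t easily group or trend on.

```python
# Toy sketch: contrasting a free-text report with a structured one.
# Categories and fields are invented examples of pre-configured elements.
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    SLIP_TRIP_FALL = "slip_trip_fall"
    STRUCK_BY = "struck_by"
    ERGONOMIC = "ergonomic"

class InjuryNature(Enum):
    NONE = "none"
    SPRAIN = "sprain"
    LACERATION = "laceration"

# Hard for an engine to trend on: everything buried in one string.
free_text_report = "guy slipped near dock, wet floor, no injury"

@dataclass
class IncidentReport:
    location: str                # chosen from a dropdown, not typed freehand
    risk_category: RiskCategory  # consistent category an engine can group on
    injury_nature: InjuryNature
    description: str             # free text still has its place

structured_report = IncidentReport(
    location="Dock A",
    risk_category=RiskCategory.SLIP_TRIP_FALL,
    injury_nature=InjuryNature.NONE,
    description="Employee slipped on wet floor near receiving dock.",
)
print(structured_report.risk_category.value, "@", structured_report.location)
```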

[00:07:57]
It’s almost the same as what we talked about in terms of it being purpose-built. If the AI doesn’t have context for what you’re giving it, there’s really not much it can do.

And so when we talk about the types of data that can go in: we were just at the NSC Safety Congress & Expo, put on by the National Safety Council. We were looking at a lot of what these safety companies were saying about AI and the different things it could do, and it seems like photo hazard recognition is becoming more common and something that AI is starting to tackle. So, can we talk about how reliable this technology is, where it works well or even outperforms human review, and where human judgment is still very critical?

For sure, it is becoming more and more common. There’s a lot of off-the-shelf technology with these capabilities, and it’s getting better every day.

But safety managers need to think about this technology as a second set of eyes. It’s not a substitute inspector. Again, back to the point that AI is never going to replace the human element.

Hazard recognition is great for more narrow use cases but weak in areas where there’s nuance. It’s great at spotting things that are pretty obvious anyway, like missing hard hats or blocked emergency exits, but it’s never going to replace human judgment.

An example of something more nuanced: maybe it sees in a photo that there’s a ladder, but is that ladder unsafe? Is it secured? Is it in good condition? With those context-heavy risks, it’s not going to uncover that layer of nuance the way an inspector, or even a supervisor or someone doing that sort of audit, would be able to.
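
A minimal sketch of that “second set of eyes” workflow might look like the following. The labels, confidences, and routing rules are all hypothetical, standing in for the output a real vision model would produce.

```python
# Toy sketch: photo hazard detection as a second set of eyes, not a
# substitute inspector. Detections are mocked; routing rules are invented.
detections = [
    {"label": "missing_hard_hat", "confidence": 0.94},
    {"label": "blocked_exit", "confidence": 0.88},
    {"label": "ladder_present", "confidence": 0.97},  # detected != unsafe
]

NEEDS_HUMAN_JUDGMENT = {"ladder_present"}  # nuance: is it secured? stable?
AUTO_FLAG_THRESHOLD = 0.90

for det in detections:
    if det["label"] in NEEDS_HUMAN_JUDGMENT:
        # Detection alone can't judge context; route to an inspector.
        print(f"{det['label']}: route to inspector for context")
    elif det["confidence"] >= AUTO_FLAG_THRESHOLD:
        print(f"{det['label']}: auto-flag for follow-up")
    else:
        print(f"{det['label']}: low confidence, queue for human review")
```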

[00:09:46]
That makes sense. And no matter what you do in terms of photo recognition, I think at least in the era that we’re in now, there are just going to be things that are not caught, or even possibly made up. I’ve had scenarios where you give a picture to AI and it says something, and you ask, “Where’d you get that?” and it says, “I’m sorry, I made that up.” So that’s still always a possibility, at least in the current landscape that we’re living in.

So I think what’s important to think about, too, is that this era of large language models and generative AI is really just the newest step in AI. It’s not that this is the only AI that we’ve seen. There has been AI in software for years that does things like automating tasks, but maybe people don’t think about it as AI in the way that they understand it today.

Can we discuss the difference between an AI that just automates tasks and one that provides strategic insights, and how safety managers can tell the difference when they’re looking to implement AI into their current safety workflows?

Absolutely. And I love this question because there’s a stark difference. And both are useful tools, but it’s a good way to evaluate what sort of AI solution would be most beneficial in your safety program today.

So if the AI solution is something that just helps check the boxes faster, it’s not strategic; it’s a clerical function. So the difference is that automation saves time. Insights save lives.

To give an example, automation might schedule training for you or help you complete forms quickly, like providing recommendations on how to fill out a form. That’s helpful, and it saves time. It creates some efficiencies.

But insights? That’s AI that can analyze, for example, two years of audit data and tell you things like which facilities are trending toward higher risk and why. So it’s got more of a strategic element to it. It really becomes more of an assistant, a kind of second set of eyes, but one able to process large volumes of data and recognize things that would take a human months to recognize. Instead, you’re getting that sort of insight on a regular basis.
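
As a toy version of that audit-trend insight, the sketch below fits a simple least-squares slope to eight quarters of findings per facility. The facility names and scores are invented, and a real engine would use far richer features than a single trend line.

```python
# Toy sketch: an "insight" that trends two years of quarterly audit
# findings per facility. Names and numbers are invented.
import statistics

audit_findings = {  # findings per quarterly audit, 8 quarters
    "Plant North": [2, 2, 3, 4, 4, 5, 6, 7],   # worsening
    "Plant South": [5, 4, 4, 3, 3, 2, 2, 2],   # improving
}

def slope(ys):
    """Least-squares slope of findings over evenly spaced quarters."""
    xs = list(range(len(ys)))
    x_bar, y_bar = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

for facility, findings in audit_findings.items():
    trend = slope(findings)
    if trend > 0:
        print(f"{facility}: findings rising ({trend:+.2f}/quarter), investigate why")
    else:
        print(f"{facility}: findings trending down ({trend:+.2f}/quarter)")
```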

[00:12:14]
The way you talk about that, it almost makes me think: like you called it an assistant, but I would almost think of an AI in this area as like an intern. You hire an intern, you give them all this data, the intern says, “Here’s what I found in the data.” And then you, as the human, actually go in and say, “All right, intern, let me see what things you have said. You have some portion of understanding of our business, but you’re new and you’re still learning, so let’s see what you came up with and we can, you know, adjust it.”

I think the more that you treat AI as a part of the company that’s learning how to best protect people and recognize risks, the more you’ll eventually build out something that’s effective for you.

Since we were talking about that kind of decision process: for organizations that are considering AI-powered software for their safety programs, what questions should they be asking vendors to separate the real capabilities of an AI integration from marketing hype?

Yeah, so first off, pointing back to the question you asked earlier about the difference between automation and insights, I think the first thing is for EHS managers to question themselves: What version of AI would be most helpful to them in their EHS program today? Are they looking for AI to really just support automation and efficiency, populating forms, scheduling training, and things like that? Or are they looking for something that is a little bit deeper? Are they looking for something that’s going to generate insights, understand their data better, and really help guide the strategic direction of their safety program?

That’s really question one. But second to that, it’s understanding the behind-the-scenes of that AI model. How does that AI model make its decisions? Is it trained on EHS-specific data? Does it already have the contextual awareness to understand what incidents are and how to evaluate them? Can it review training data, correlate it with proactive reporting, and serve up the insights that matter?

So, ask not just what the AI can do, but how it was trained. Where does it get its data, and how does it prove its results? Those would be some questions EHS managers should be thinking about when they’re considering AI solutions and before they adopt something.

So if the answer sounds like marketing fluff, it probably is.

[00:14:51]
That’s a great framework, and the way you talked about it feeds even more into the intern analogy: the same sort of questions you would ask of an entry-level person coming in to assist the safety team are the ones you should ask of this AI. If it’s not going to be able to do anything that’s actually helpful for your team, like you said, you set up what you think is going to be helpful, and then you go from there.

Then, you may have to reevaluate what you’re looking for and start over. So, looking ahead, what do you think the next evolution of AI in safety will be, and what will always require human expertise and will never be replaced by AI?

Yeah, so the next wave is not just about answering questions. I think that’s where we’re at today with AI, and we’re all pretty happy with that as a tool; it really creates some efficiencies for all of us. But that’s a reactive approach. It saves some time, but the future is more about predicting answers to the questions you haven’t even thought of yet.

That’s really the future of AI. What am I not asking? What am I not considering about risks prevalent to our organization now, because there’s not a data point on our chart for me to look at, and I don’t have time to read every single report and document that comes across my desk?

I have my general KPIs based on the questions I know to be asking, like injury severity or frequency, and I’m tracking those things. But what are the nuances under the covers? That’s really where AI becomes very powerful: not just being a second set of eyes, but finding patterns that humans simply don’t have the capacity to find.

But the human element is never going away. There are judgment calls in complex, high-stakes scenarios, like deciding whether to shut down a plant or a processing facility based on risk and exposure. And AI can never replace culture: communicating with empathy across our organization, and convincing people not only to participate in the safety program, but also to act safely.

So AI will never replace that human element of, who are we going to be as a company? How seriously do we take safety? AI is a great toolkit for an EHS manager to help them have more foresight into where their risks and exposures lie.

[00:17:33]
Absolutely. And I think you wrapped up this thought earlier, too, when we talked about AI being a decision-support tool. No matter what, there’s always going to be some part of the job that you’re doing for your team to keep them safe that you’re responsible for, and you’re the ultimate decision maker.

So even if you have AI making a lot of these calls, you are still the one who’s going to be responsible for what’s happening and responsible for, like you said, creating that culture of safety, being able to share empathy, and having your team be comfortable with letting you know what’s going on. Because if you don’t have that level of trust, it doesn’t matter what sort of technology you have on your side.

I think you’ve done a really great job of explaining how AI fits into this new space of EHS, what people should be asking when they want to implement it, and how they can think about it as a support mechanism. So, is there anything that you want to say before we wrap up, to drive home your points about AI for our listeners?

No, I mean, I really appreciate this conversation. I think it’s an interesting era in EHS, as things evolve and EHS is always considering how to adopt new technology.

I don’t think AI is a fad; it’s here to stay. It is going to shape the future of how we manage our EHS programs, but it’s up to each of us to proceed with caution, ask the right questions, and ensure that we’re still taking full accountability for the program and incorporating tools when they make sense.

[00:19:08]
I love that. Thank you so much, Jade. Our conversations are always fascinating, and your insight is absolutely invaluable. You really gave us a good look into what’s going on in this space and how things are evolving. So, as always, thank you so much for being here.

Thanks, Kat. Appreciate it.

Subscribe to the Safety Meeting

If you like what you’re hearing, please consider subscribing and leaving us a rating or review – it helps other listeners like you find us. 
