On Feb. 19, 2026, the ASSP Artificial Intelligence (AI) Task Force released its white paper, “AI and the Evolving Role of EHS Professionals.” To expand on the paper’s key themes, task force members are sharing their unique perspectives in this new weekly series in ASSP News.
The Six Keys to Successful AI Piloting
By Arianna Howard, managing partner/co-founder at Syncra Group
With so many organizations starting their AI journeys right now, I keep getting asked: How do you run a successful AI pilot? I run a boutique consulting firm focused on environmental health and safety (EHS) systems — digital strategy, technology selection, pilot support and analytics — right at the intersection of EHS and digital transformation. And right now, AI enablement is the hottest topic in that space.
What’s driving it is simple: Everyone wants to use AI. But when you start digging in, you realize data maturity across companies is wildly different. Some are sitting on 15 years of captured data inside enterprise-level EHS systems like Origami, Velocity, Cority, Enablon or Benchmark (or a combination of them). Many others are still managing their safety programs in Excel. So, a lot of what I do is help people balance the excitement of “Let’s use AI!” with the reality: What problem are we actually trying to solve, and are we ready for it?
Step 1: Get your data house in order
“Garbage in, garbage out.” AI doesn’t know the difference. You can pull your incident data from those enterprise systems, but if half your fields are empty or each dropdown contains “N/A” or “other,” what is that AI learning from? AI needs context and meaningful data to be impactful.
When we run data quality assessments, we are looking at things like: Are root causes consistently documented? Are corrective actions tracked? If most of the data are blank, the system can only give you a basic trend analysis; it can’t tell you a full story.
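The kind of completeness check described here can be sketched in a few lines. This is a minimal, illustrative example, not any vendor's actual assessment tool; the field names ("root_cause", "corrective_action") and the list of low-signal values are assumptions for demonstration.

```python
# Hypothetical sketch of a field-completeness check on incident records.
# Values like "N/A" and "Other" count as empty, per the point above.

LOW_SIGNAL = {"", "n/a", "other", "none", "unknown"}

def field_completeness(records, fields):
    """Return the share of records with a meaningful value for each field."""
    scores = {}
    for field in fields:
        usable = sum(
            1 for r in records
            if str(r.get(field, "")).strip().lower() not in LOW_SIGNAL
        )
        scores[field] = usable / len(records) if records else 0.0
    return scores

incidents = [
    {"root_cause": "Missing guard", "corrective_action": "Install guard"},
    {"root_cause": "N/A", "corrective_action": ""},
    {"root_cause": "Other", "corrective_action": "Retrain crew"},
    {"root_cause": "", "corrective_action": "N/A"},
]

print(field_completeness(incidents, ["root_cause", "corrective_action"]))
# → {'root_cause': 0.25, 'corrective_action': 0.5}
```

A report like this makes the "garbage in" problem concrete: if only a quarter of your root-cause fields carry real information, that is the ceiling on what any AI can learn from them.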
That’s why we coach our clients on data lineage and context. Your baseline data tells the story of “what’s happened” in your organization. But when you connect it to other data sources — training records, work orders, sensor readings, production data and even policy documents — you start to see a much more complete picture. AI can help connect those dots, but only if you feed it good information.
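Connecting those dots can be as simple as joining records on a shared key. The sketch below links incidents to training records by a hypothetical employee ID; the record shapes and field names are illustrative assumptions, not a real system's schema.

```python
# Hypothetical sketch: enrich incident records with training history so
# downstream analysis (or an AI model) sees more than the incident alone.

incidents = [
    {"employee_id": "E1", "type": "Laceration"},
    {"employee_id": "E2", "type": "Slip"},
]

# Training data keyed by employee ID; E2 has no record on file.
training = {
    "E1": {"last_safety_training": "2023-01-15"},
}

enriched = [
    {**inc, **training.get(inc["employee_id"], {"last_safety_training": None})}
    for inc in incidents
]

for row in enriched:
    print(row)
```

Even this toy join surfaces a useful question a bare incident log cannot: did the injured worker have current training, or is that field missing too?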
Step 2: Keep the human in the loop (HITL)
Even the best AI will hallucinate or get things wrong. Especially in EHS, where mistakes have real-world consequences, we can’t afford blind trust in a tool. Use AI as a thought partner, not as a brain replacement.
When we do lunch-and-learns with clients, we don’t start with EHS use cases; we start with enablement. We help users discover what tools their organization has already made available and run workshops on practical tasks, like using AI to write weekly updates faster. Once they see a few small, low-risk wins like that, the fear starts to fade, and curiosity tends to take over. Ethan Mollick, a Wharton professor and AI researcher, has one piece of advice for people new to AI: Just use it for things you already do for work or fun, for about 10 hours, and you will figure out a remarkable amount. Start by asking questions and seeing the potential.

But the real lesson is this: AI is not infallible, and it will make mistakes. As the expert, you have to check facts, build confidence thresholds, and always, always keep a human in the loop (HITL). That’s where EHS professionals can lead — by being the voice of practicality and accountability in this new tech wave.
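One way to operationalize a confidence threshold with a human in the loop is a simple routing gate: high-confidence AI outputs flow through, and everything else lands in a review queue. The threshold value and record shapes below are illustrative assumptions, not a prescribed standard.

```python
# Minimal HITL sketch: outputs below the confidence threshold are routed
# to a human reviewer instead of being auto-accepted. The 0.85 cutoff is
# an arbitrary example; in practice, you would tune it to your risk tolerance.

REVIEW_THRESHOLD = 0.85

def route(prediction, confidence):
    """Auto-accept high-confidence results; queue the rest for a human."""
    if confidence >= REVIEW_THRESHOLD:
        return ("auto_accept", prediction)
    return ("human_review", prediction)

print(route("ergonomic hazard", 0.92))   # accepted automatically
print(route("chemical exposure", 0.40))  # escalated to a person
```

The design choice worth noting: the default path is human review, and automation has to earn its way past the gate, not the other way around.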
Step 3: Pilot with purpose
We’re working with a company that is creating a safety innovation tech hub its operating units can turn to for support when introducing new systems or technology into their operations. As a result, dozens of pilots of various safety technologies — from lone-worker devices to computer vision and wearable sensors — are taking place across the units, all leveraging AI in some way. The learning curve for all of them has been knowing how to pilot technology effectively.
A pilot isn’t a lifetime commitment; it’s a chance to learn. And it’s better to fail during the pilot than to roll something out enterprise-wide that doesn’t work in the field. So, when we design pilots, we start by defining a clear hypothesis: What problem are we solving? How will we measure success?
If your company’s goal is to reduce workers’ compensation spend, start with your data. What are your musculoskeletal disorder trends? Where are the ergonomic risks? To make sure you get your stakeholders involved, create a cross-functional workgroup. Once you’ve identified those problem statements, then you are in a position to research technology that could potentially help solve that problem. That’s how you can link AI or any safety technology to actual business objectives and return on investment.
“Shiny new object” syndrome is a real challenge. Technology is evolving rapidly, and it’s easy to get excited about what’s out there. Identify your problem first and the tech second. Then experiment, document lessons learned, share them with your peers and learn from theirs.
Step 4: Share and scale what works
One of the things we’ve been talking to ASSP and other professional groups about is creating a shared library of case studies. Imagine if you could go to ASSP’s site, filter by “wearable tech” or “utilities industry,” and see which companies have piloted what. Then you could reach out directly to your peers — “Hey, I saw you tried this; how did it go?” That’s how we build a real community of learning in EHS.
It doesn’t have to be overly formal or academic. Publish honest stories about what worked, what didn’t, and how teams navigated it. Because right now, so many people are doing this work in isolation, and we’re all solving the same problems in parallel.
This is why I’m so excited for ASSP’s standards-based user groups (SBUGs), which will offer a knowledge-sharing forum that reflects the best practices and benchmarking of the world’s best safety professionals.
Step 5: Build governance early
If your company doesn’t yet have an AI governance policy, start that conversation now. At this point, I’d be surprised if leadership isn’t at least talking about AI. Whether your company uses Microsoft Copilot, Gemini or homegrown tools, everyone needs to understand what is allowed, what is not and how data will be used.
Governance doesn’t have to be complicated, but it does need to be documented. Build it right into your EHS management system and align it with company strategies and objectives, so that everyone is trained and working from the same playbook.
Step 6: Be the power user
If no one on your team is talking about AI and safety, be the one who starts. Host a lunch-and-learn. Try out the tools your company already provides. Get comfortable with prompting and interpreting responses. Most of these tools can even explain why they gave an answer — click the little caret next to the response and expand it.
Once you start using it, the mystery fades. You realize it’s not magic; it’s advanced pattern recognition on top of existing data — the internet at scale. And that makes it a lot less intimidating.
At the end of the day, a successful AI pilot in EHS isn’t about the tech; it’s about alignment. You need the right data foundation, the right governance, and the right mindset to test, learn and scale. And maybe most importantly, you need to give yourself and your team permission to fail small. Because that’s how we’ll all learn, together.