In this episode of Reboot IT, host Dave Coriale sits down with David Cade, CEO of the American Health Law Association, to explore how AHLA moved from talking about AI to actually using it. David shares how the organization developed policies, balanced innovation with risk, and empowered employees to experiment responsibly. He also breaks down the “triangle” of business units, IT, and leadership, and why leaders can’t afford to guide their organizations from a place of fear.
Dave and David Discuss:
Moving From Fear to Experimentation
- David emphasizes that leaders “can’t lead from a position of fear” when it comes to AI.
- Fear of getting it wrong is keeping some organizations in the 0–3 range of the adoption scale, which risks their relevance to members.
- Instead, leaders should test, learn, monitor impact, and be willing to expand or abandon initiatives based on results.
The AI Governance Triangle: Business Units, IT, and Leadership
- AI adoption at AHLA was driven roughly 70% by business units and 30% by IT.
- Dave and David describe a triangle model:
  - Business units: define needs and use cases.
  - IT: handles security, integration, licensing, and scalability.
  - Leadership/CEO: sets culture, protects IP, and champions smart change.
- When one side dominates (IT control, business “run amok,” or absent leadership), AI efforts become lopsided and less effective.
Real AI Use Cases at AHLA
- AHLA uses AI to generate a weekly podcast from existing content—no human host needed—creating new value from existing assets.
- AI tools support marketing content creation, improving speed and clarity without eliminating positions.
- AHLA explored using AI to unlock a decades‑deep archive for members, learning where cost, scale, and accuracy become limiting factors for smaller associations.
Staff Enablement, Training, and Culture
- AHLA discovered both extremes: some staff moving too fast with unvetted tools, others refusing to use AI at all.
- They created a tool inventory, embraced specific platforms, and pulled hesitant staff in with the clear message: these tools are part of doing your job well.
- Internal lunch‑and‑learns and “each one teach one” sessions help reduce fear, demystify tools, and showcase how AI can accelerate everyday work.
Legal, Ethical, and Accuracy Considerations
- In the legal community, bars and courts are requiring disclosure when AI is used in client work or court filings.
- David draws a line between tools like spellcheck (enhancing what you wrote) and having AI generate entire arguments or articles that aren’t truly “yours.”
- He stresses that AI outputs must be reviewed, adopted, and owned by humans, especially given risks like hallucinated cases, outdated standards, and embedded bias.
Where Associations Are on the AI Adoption Curve
- David sees most associations in the 6–8 range of AI adoption, with a few early trailblazers in the 9–10 space.
- A minority remain in 0–3, often due to fear or misunderstanding, which he believes will make them less relevant to members over time.
- Members are already using AI themselves—associations that don’t keep up with how their communities work risk falling behind their own audiences.