Zoom Recording
Recap, Takeaways, & Resources
As artificial intelligence (AI) tools like ChatGPT and Microsoft Copilot become more sophisticated and widely used, associations are looking to their IT staff or managed service provider (MSP) to determine what their organization should be doing with regard to AI. DelCor President Dave Coriale hosted a conversation about generative AI with DelCor subject matter experts (SMEs) and client technology leaders to discuss how associations should approach AI from the perspectives of operations, strategy, and governance.
Key Themes
IT and Leadership
Participants expressed that even though association boards and executives often look to IT to provide direction on AI, that responsibility should not fall entirely on technology departments. Several noted that while an organization’s CIO or IT Director can present recommendations and facilitate a conversation about AI, leadership should take ownership of the decision-making because those decisions affect the business as a whole.
Dave Coriale emphasized the importance of this distinction and described an ideal partnership between the business unit owning the requirements and goals for using the tool and IT owning the technical aspects of the implementation (e.g., security, integration, scalability). He also recommended that IT departments educate themselves on the operational capabilities, integration opportunities, and security implications of implementing AI tools so they can provide guidance to leadership during these critical conversations around requirements and governance.
As Mike Guerrieri, DelCor Senior Strategic Consultant, explained, “A master craftsman knows which tool is right to use when. And I think that's what we in IT need to be doing as IT leaders… [be] aware of what those tools are to say this would be an appropriate tool to use in this particular circumstance.”
AI Policies and Governance
Tony Yu, Director of IT at NIGP: The Institute for Public Procurement, shared that his organization is considering an AI chatbot to enhance search across its extensive digital library. The organization is working to determine how to adjust its data governance policies to allow for generative AI, either by creating a standalone policy or by incorporating guidelines into the existing data governance policy.
Stephen Gabriel, Director of Technology at the American Health Law Association, shared that a similar project is underway at his organization. His team is working with their publications department on a proof of concept for a chatbot to assist with searching archived documents.
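Neither organization described its implementation, but a common pattern for this kind of archive-search chatbot is retrieval-augmented generation: rank the archived documents against a user's question, then pass only the top matches to a generative model as context. The sketch below illustrates just the retrieval step, using TF-IDF from scikit-learn; the sample documents and the `search_archive` function are hypothetical, not details from either project.

```python
# Minimal retrieval sketch for an archive-search chatbot (hypothetical
# illustration; not how NIGP or AHLA actually built their proofs of concept).
# Requires: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in for an archived document library (titles and text are invented).
documents = [
    "Procurement policy for technology purchases over $10,000.",
    "Guidelines for evaluating vendor proposals and contracts.",
    "Data governance policy covering member records and retention.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)

def search_archive(question: str, top_k: int = 2) -> list[str]:
    """Return the top_k archived documents most similar to the question.

    In a full chatbot, these passages would be handed to a generative
    model as context rather than shown to the user directly.
    """
    query_vec = vectorizer.transform([question])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [documents[i] for i in ranked]

print(search_archive("How do we review vendor contracts?"))
```

In a production proof of concept, the TF-IDF step would likely be replaced with embeddings and a vector store; the retrieve-then-generate structure is the part that matters here, because it keeps the generative model grounded in the organization's own documents.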
As they’ve worked through these projects, both IT leaders have recognized the need for guardrails to govern the use of AI. Just like with any other technology tool, staff need to understand what is and is not acceptable.
Tom Jelen, DelCor’s digital workplace SME, emphasized the need to start with simple, straightforward guidelines. Staff are likely already using these tools, so organizations don’t yet have the time (or, in many cases, the understanding) to draft detailed governance policies. He recommended starting with education and a simple directive: staff should not trust AI to be completely secure or correct.
In addition to setting simple guidelines to start, Tobin Conley, DelCor Senior Strategic Consultant, explained that IT needs to use a certain amount of finesse when communicating those guidelines. If IT comes down too hard, staff using AI may go underground and ignore organizational policies governing its use. Instead, IT should focus on helping staff understand the unintended consequences of ignoring best practices for using generative AI.
Security and Privacy
It was clear throughout the conversation that security and privacy are serious areas of concern for organizations using AI tools. There is a push to incorporate AI into day-to-day business, but it’s also important for IT to fully investigate the implications of integrating AI with each system. This includes talking to vendor partners to see how they are handling security with regard to their integrations with AI tools.
Beyond these technical concerns, participants were keenly aware of the risk of allowing staff to use generative AI without any guidelines. They agreed that organizations should immediately instruct staff not to put any member data into any AI tools.
Participants were also concerned about the implications of tools like Microsoft Copilot or Microsoft Teams tracking and summarizing information. This could not only pose a privacy concern but could also discourage staff from speaking up about sensitive issues once they know that they are being recorded and that their contributions will be summarized—and potentially misrepresented—by AI. Tom Jelen noted that this is an area that could benefit from governance. Organizations need to decide how and when they will allow recording and transcription.
Education and Training
As the group discussed the security implications of integrating AI with data management systems, the topic of training came up. Several participants have been inundated with ads for new AI courses and certifications but found it unclear which certifications are worthwhile or which providers are reputable. Still, they agreed that IT staff will need training on how to use these tools effectively.
In addition to training technical staff on how to use AI tools, the group discussed the importance of training non-technical staff on when to use AI (and when not to). This ties into the security and privacy conversation: training and awareness go hand in hand with policy development to prevent issues before they occur.
Non-technical staff will also need training on how to get value out of these tools. Tobin Conley predicts that organizations will need to start training staff on effective searching, saying that “prompt engineering is going to be one of the skills to future-proof anybody's career.”
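To make Conley's point concrete, the contrast below shows the same request written as a vague prompt and as a structured one that supplies role, audience, format, and constraints. Both prompts are invented for illustration and were not examples from the discussion.

```python
# Hypothetical illustration of prompt engineering: the same request,
# written vaguely and then with role, context, format, and constraints.
vague_prompt = "Write something about our conference."

structured_prompt = """You are a communications writer for a professional association.
Draft a 150-word email inviting members to our annual conference.
Audience: mid-career procurement professionals.
Tone: professional but warm.
Include: dates (placeholder), one sentence on keynote topics, and a
registration call to action.
Do not invent speaker names or prices; leave placeholders like [SPEAKER]."""
```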
Key Takeaways
- Decisions about AI should be made by business leaders in consultation with IT.
- While IT leaders should not stand in the way of innovation, they should raise a red flag if the business units propose a generative AI tool that is not appropriate or could be harmful to the organization.
- Organizations should set simple, straightforward guidelines for using AI. Staff are likely already using AI tools, so it’s important to set up guardrails as soon as possible.
- Organizations should remind staff not to put member data into ChatGPT or similar tools. Once the information is in a chatbot, it is vulnerable to attacks or exposure. (A simple screening sketch follows this list.)
- Organizations cannot trust generative AI tools to produce perfect results. The organization’s business units need to ensure that staff are checking the work of the AI tool for misinformation, misunderstandings, and copyright infringement.
- In addition to training IT staff on how to use AI, it’s important for organizations to teach non-technical staff about the potential unintended consequences of using AI without guardrails.
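As a lightweight guardrail supporting the member-data takeaway above, an organization could screen text for obvious member data before staff paste it into an external AI tool. The sketch below is a hypothetical illustration, not a complete PII detector; the patterns (including the assumed member ID format) would need to be adapted to the organization's actual systems.

```python
# Hypothetical pre-flight check for obvious member data before text is
# sent to an external AI tool. Regex screening is not a complete PII
# detector; it only catches easily recognizable patterns.
import re

PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone number": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    # Assumed member ID format (e.g., M-123456); adjust to your AMS.
    "member ID": re.compile(r"\bM-\d{6}\b"),
}

def flag_member_data(text: str) -> list[str]:
    """Return the names of any member-data patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

issues = flag_member_data("Contact Jane at jane@example.org or M-123456.")
if issues:
    print("Remove before using an AI tool:", ", ".join(issues))
```

A check like this only catches easily recognizable patterns; it supplements, rather than replaces, the policies and training discussed above.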
Resources
AI Tools
Nonprofit-Specific Guidelines
Policy Development
Reboot IT Podcast
Miscellaneous
Contact: Dave Coriale | Tobin Conley