approval and blessing, I asked the AI to create a receipt for a particular expense that appeared on my report. (To be clear, the expense was legitimate, and I had the original receipt.) The AI produced a believable receipt, which was submitted. The "fake" receipt passed with flying colors. This is a critical trend that deserves attention!

What Leaders Should Do Now
• Add a "verification pause" to money movement: no urgent payment changes without confirmation through a second channel (a phone call to a known number, not a reply in the email thread).
• Tighten vendor change controls.
• Teach teams the new reality: polished language is no longer a sign of legitimacy.

TREND 3: AI AGENTS WILL MOVE FROM "CHAT" TO "DOING" WORK INSIDE BUSINESS SYSTEMS.
In 2024-2025, many organizations experimented with AI as a chatbot: ask a question, get an answer. Quick example: In 2025, Verizon moved toward AI chatbots answering more than half of its customer service inquiries, and a major funeral provider began using chatbots to answer incoming calls for service details.

In 2026, the major shift is toward agentic AI, tools that don't just respond but can take actions across workflows: create tickets, update records, draft customer responses, pull reports, route approvals, and orchestrate tasks across systems. We're already seeing major enterprise platforms partner to embed "AI agents" into business software and customer service workflows.2

This is where productivity can explode and where risk can multiply. Because the moment AI is allowed to "do," not just "suggest," you need clarity on:
• permissions
• audit trails
• accountability
• error correction
• safety checks (what the system should never do)

What Leaders Should Do Now
• Start with "human-in-the-loop" designs: AI drafts and actions require human approval.
• Require logging: decisions, data sources, and changes must be traceable.
• Define ownership: someone is accountable for outcomes, even when AI touches the process.

TREND 4: THE "AI OPERATING GUIDE" WILL BECOME A LEGAL AND REPUTATIONAL NECESSITY.
Let me say this plainly: the absence of an AI operating guide is becoming a liability.

Many organizations are already using AI informally: employees are experimenting, departments are adopting tools, and vendors are slipping AI into platforms. And when something goes wrong, the organization can't credibly say, "We had reasonable controls."

In the U.S., one of the strongest practical anchors for responsible AI governance is the NIST AI Risk Management Framework and its Generative AI Profile, which provides structured guidance for identifying and managing AI risks.3

Globally, regulation is also tightening. The EU AI Act is rolling out progressively, with a full implementation timeline extending into 2027. Even if you're not based in Europe, many vendors and partners will align to it, and its logic will influence expectations around transparency, risk classification, and oversight.4

A better way to say it: If you don't define how AI is used in your organization, you're letting risk define it for you.

What Leaders Should Do Now (The Practical Minimum)
Create a simple, usable AI Operating Guide that covers:
• approved tools (and prohibited tools)
• what data cannot be entered (client/member data, confidential information, regulated content)
• disclosure rules (when AI assistance must be acknowledged)
• human review requirements (what must be checked before sending or publishing)
• decision ownership (AI advises; humans decide)
• incident response (what to do if an AI output causes harm or a privacy issue)

This is not bureaucracy. This is protection.
If you need a sample to go by or would like help with a starting point for your organization, reach out to chuck@chuckgallagher.com; happy to help.

TREND 5: PROOF, AUTHENTICITY, AND TRUST WILL BECOME THE NEW CURRENCY.
In the old world, trust was built on familiarity: a known email address, a familiar voice, a recognizable logo. In 2026, trust will increasingly be built on verification:
• Is this message authentic?
• Is this image real?
• Did this person actually say this?
• Is this policy accurate, or AI-generated guesswork?

The stakes are especially high for associations, credentialing bodies, safety-sensitive industries, and professional services, because your value is credibility. And credibility can be damaged by one misattributed quote, one fake "announcement," or one AI-generated mistake presented as fact.

CONTINUED ON PAGE 30