Every week, another vendor walks into a utility boardroom with a pitch deck full of AI promises. Predictive maintenance that will save millions. Customer service chatbots that will cut call volumes in half. Automated rate analysis that will transform how you do business.
The technology is real, and the potential is genuine. But here's the uncomfortable truth: most utility executives and board members don't yet have the vocabulary to evaluate these claims. Not because they aren't smart — they are — but because AI has its own language, and nobody has bothered to translate it for the people who actually make the buying decisions.
This guide is that translation. We're not going to teach you how to build a neural network. We're going to give you the conceptual toolkit you need to walk into any AI conversation and hold your own — to ask the questions that separate a serious proposal from a glossy slide deck.
What AI Actually Is (and Isn't)
The AI Family Tree
People use "AI" to mean everything from a simple spreadsheet formula to a science fiction robot. In reality, there's a clear hierarchy, and understanding it matters because each layer does fundamentally different things.
Artificial Intelligence: the broadest category. It's any system that performs tasks normally requiring human intelligence. Your spam filter is AI. Your utility's demand forecasting model is AI. The term covers a huge range of sophistication.
Machine Learning: a subset of AI where systems learn from data rather than following hand-coded rules. Instead of a programmer writing "if temperature exceeds X, flag this pipe," a machine learning model looks at thousands of pipe failures and figures out the patterns on its own. Think of it like the difference between giving someone a recipe versus letting them taste a hundred dishes and figure out how to cook.
Deep Learning: a further subset of machine learning that uses layered mathematical structures called neural networks. These models can find incredibly complex patterns — like recognizing a water main break from acoustic sensor data, or detecting anomalies in SCADA readings that no human would catch. The "deep" refers to the many layers of processing, not any philosophical depth.
Generative AI: the newest branch, and the one making all the headlines. While traditional AI analyzes and classifies data, generative AI creates new content: text, images, code, analysis. When you're chatting with an AI assistant about your rate case strategy, that's generative AI at work.
What a Large Language Model (LLM) Is
A large language model is the engine behind tools like Claude, ChatGPT, and Gemini. Think of it as a system that has read an enormous portion of the text ever written — books, articles, technical papers, code, regulatory filings — and learned the statistical patterns of how language works. When you ask it a question, it's not looking up the answer in a database. It's generating a response based on the patterns it learned during training.
An analogy for the utility world: imagine if you could take every rate case filing, every AWWA manual, every PUC order ever published, and distill all of that knowledge into a single analyst. That analyst wouldn't have perfect memory of every document, but they'd have an extraordinarily good sense of how these things work, what language regulators use, and what arguments tend to appear. That's roughly what an LLM does — at scale, and imperfectly.
What "Training" Means
Training is the process of feeding massive amounts of data into a model so it can learn patterns. This happens before you ever interact with the model. The training data for major LLMs includes publicly available text from the internet, books, academic papers, and more. This is an important distinction: the model was trained once (or periodically updated) on general data. It's not learning from your specific conversations in real time.
This matters because of a common fear: "If I put my utility's financial data into an AI, will it learn from it and share it with others?" On enterprise plans from reputable providers, the answer is definitively no. Your conversations and data are not used to train the model. We'll cover this more in the security section, but the short version is: enterprise AI agreements explicitly prohibit using your data for model training.
What "Hallucination" Means
This is perhaps the most important concept for any utility leader to understand. AI hallucination is when a model generates information that sounds completely plausible but is factually wrong. It's not lying — it doesn't have the concept of truth. It's producing text that statistically fits the pattern of a correct answer.
For utility work, this is a serious concern. An AI might confidently cite a regulatory precedent that doesn't exist, produce a financial calculation with a subtle error, or reference an AWWA standard number that's close to real but slightly off. The output reads like it was written by an expert, but the facts may be invented.
Understanding these foundational concepts protects you from two common mistakes: dismissing AI as hype (it's not — the technology is substantive) or trusting it blindly (it has real limitations). When a vendor tells you their "AI-powered platform" will revolutionize your operations, you now know to ask: what kind of AI? Is it a rules-based system, a machine learning model trained on relevant data, or a generative AI producing text? Each has very different reliability characteristics, and the answer determines whether their claim is credible.
How You Interact With AI
Prompts and Prompt Engineering
A prompt is simply the input you give to an AI — the question you ask, the instruction you write, the context you provide. Prompt engineering is the practice of crafting those inputs to get better results.
Think of it like writing a work order for a contractor. "Fix the leak" will get you very different results than "Repair the 6-inch ductile iron main break at the intersection of Oak and Elm, isolate the section, notify affected customers in the 400 block, and document the repair for the asset management system." Both are valid instructions, but the second one produces dramatically better outcomes.
The same principle applies to AI. A vague prompt produces a vague answer. A specific prompt with context, constraints, and clear expectations produces output that's actually useful. This is a skill your staff can learn in hours, not months — and it has an outsized impact on the value you get from AI tools.
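The work-order analogy can be made concrete. Below is a minimal sketch of a structured prompt builder; the helper function and the specific role, task, and constraints are illustrative inventions, not any vendor's recommended format:

```python
def build_prompt(role, context, task, constraints):
    """Assemble a structured prompt from its parts (hypothetical helper)."""
    return (
        f"You are {role}.\n\n"
        f"Context:\n{context}\n\n"
        f"Task:\n{task}\n\n"
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints)
    )

# The vague version vs. the structured version of the same request:
vague = "Summarize this rate case."
specific = build_prompt(
    role="a utility rate analyst",
    context="Attached is a 2024 cost-of-service study for a mid-size water utility.",
    task="Summarize the three largest drivers of the proposed revenue requirement.",
    constraints=["Cite page numbers", "Keep it under 300 words",
                 "Flag any assumptions you make"],
)
print(specific)
```

The structured version carries the same ingredients as the detailed work order: who is doing the job, what they're working from, what "done" looks like, and the boundaries to respect.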
Context Windows
A context window is the amount of information an AI can "hold in its head" during a single conversation. Think of it like the working memory of your analyst. If you hand someone a 300-page rate case filing and ask them to discuss it, they can reference any part of it because it's all in front of them. But if you hand them a 5,000-page document, they can only focus on so much at once.
Today's leading models have context windows ranging from about 100,000 to over 1 million tokens (roughly 75,000 to 750,000 words). This means you can feed an AI an entire cost-of-service study, a full set of tariff schedules, or years of board meeting minutes — and it can reference all of it in a single conversation. This is a transformative capability for utility work, where decisions often require synthesizing information scattered across dozens of documents.
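Whether a given document fits is a back-of-envelope calculation. The sketch below uses rough rules of thumb (about 500 words per page, about 1.33 tokens per word); these are assumptions for illustration, not provider specifications:

```python
# Back-of-envelope check: does a document fit in a model's context window?
# Assumed rules of thumb: ~500 words per page, ~1.33 tokens per word.
WORDS_PER_PAGE = 500
TOKENS_PER_WORD = 4 / 3

def estimated_tokens(pages):
    return int(pages * WORDS_PER_PAGE * TOKENS_PER_WORD)

def fits(pages, context_window_tokens):
    return estimated_tokens(pages) <= context_window_tokens

# A 300-page rate case filing against a 200,000-token context window:
print(estimated_tokens(300), fits(300, 200_000))  # 200000 True -> just fits
```

By the same arithmetic, a 5,000-page archive would need millions of tokens, which is why very large document sets still get broken into pieces or searched selectively rather than loaded whole.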
Tokens
Tokens are the unit of measurement for AI. Everything the model reads and writes gets broken into tokens; a token is roughly three-quarters of a word, so 100 tokens is about 75 words. "Water utility rate case" is about five tokens.
Tokens matter for two practical reasons. First, they determine cost: API access to AI models is priced per token. More tokens in and out means higher bills. Second, they define the limits of a conversation: that context window we just discussed is measured in tokens. Understanding tokens helps you evaluate vendor pricing and understand why some AI requests cost more than others.
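Per-token pricing makes cost estimates simple arithmetic. The prices below are illustrative placeholders (actual rates vary by provider and model), so treat this as a sketch of the calculation, not a quote:

```python
# Rough API cost estimate. The $3 per million input tokens and $15 per million
# output tokens used here are illustrative assumptions, not any vendor's rates.
def api_cost(input_tokens, output_tokens,
             in_price_per_m=3.00, out_price_per_m=15.00):
    return (input_tokens / 1e6) * in_price_per_m \
         + (output_tokens / 1e6) * out_price_per_m

# Feeding in a ~100,000-token cost-of-service study and getting back
# a ~2,000-token summary:
cost = api_cost(100_000, 2_000)
print(f"${cost:.2f}")  # $0.33
```

Note the asymmetry: output tokens are typically priced several times higher than input tokens, which is why long documents in are cheap relative to long reports out.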
Temperature
Temperature is a setting that controls how creative or predictable an AI's responses are. At low temperature (close to zero), the model gives you the most statistically likely response — safe, predictable, consistent. At high temperature, it takes more creative risks, producing more varied and sometimes surprising output.
For utility work, this has practical implications. If you're using AI to draft regulatory language or produce financial calculations, you want low temperature — consistency and accuracy matter. If you're brainstorming new approaches to customer engagement or exploring innovative rate structures, higher temperature can generate ideas you wouldn't have considered.
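Under the hood, temperature rescales the model's probability distribution over candidate next tokens before one is sampled. A small self-contained illustration with made-up scores for three candidate words:

```python
import math

# Temperature rescales next-token probabilities before sampling. Low temperature
# sharpens the distribution toward the most likely choice; high temperature
# flattens it, letting less likely choices through. Scores here are invented.
def softmax_with_temperature(scores, temperature):
    scaled = [s / temperature for s in scores]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.1]                  # model's raw preference for 3 words
low = softmax_with_temperature(scores, 0.2)   # nearly all weight on top choice
high = softmax_with_temperature(scores, 2.0)  # weight spread across choices
print([round(p, 3) for p in low])
print([round(p, 3) for p in high])
```

At temperature 0.2 the top candidate gets over 99% of the probability; at 2.0 the second and third candidates together get nearly half. That spread is what reads as "creativity" in the output.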
These interaction concepts aren't just technical trivia — they directly affect the quality and cost of AI in your organization. A team that understands prompt engineering will get dramatically more value from the same AI tools than one that doesn't. Understanding context windows tells you what's feasible: can you feed AI your entire rate case, or do you need to break it into pieces? And knowing about temperature helps you configure AI appropriately for different tasks.
How AI Connects to Your Systems
APIs: How Software Talks to AI
An API (Application Programming Interface) is the mechanism that lets one piece of software communicate with another. When your CIS system pulls data from your billing database, it's using an API. When a mobile app displays your utility's outage map, it's using an API.
In the AI context, APIs are how your internal systems can send data to an AI model and receive analysis back — without anyone manually copying and pasting information. This is the difference between having an employee ask an AI chatbot a question and having your work order management system automatically analyze incoming service requests and route them intelligently.
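In practice, "sending data to an AI model" means the system constructs a structured request and posts it to the provider's endpoint. The sketch below builds such a request body; the field names follow a common chat-API shape, but every provider's schema differs, so treat the names and the model ID as illustrative assumptions:

```python
import json

# Sketch of the request body a work order system might send to a hosted AI
# model to triage an incoming service request. The field names ("model",
# "messages", "max_tokens") follow a common chat-API shape, but the exact
# schema is provider-specific; the model ID is a placeholder.
def build_routing_request(service_request_text):
    return {
        "model": "example-model-name",   # placeholder, not a real model ID
        "max_tokens": 500,
        "messages": [
            {"role": "user",
             "content": "Classify this service request and suggest a crew type:\n"
                        + service_request_text},
        ],
    }

payload = build_routing_request("Low water pressure reported at 412 Oak St.")
print(json.dumps(payload, indent=2))
```

The point is that no human touches this exchange: the work order system assembles the request, the model returns a classification, and the routing happens automatically.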
MCP: Model Context Protocol
Model Context Protocol is an emerging standard that lets AI models securely connect to your databases, documents, and internal systems. Think of it like giving the AI a read-only library card to your organization's information. Instead of you manually uploading a document and asking the AI about it, MCP allows the AI to directly access the data it needs — with proper permissions and security controls.
For utilities, this means an AI assistant could potentially pull real-time data from your SCADA system, reference your current tariff schedules, and access your capital improvement plan — all within a single conversation — without you having to gather and upload each document manually.
AI Agents
Most AI today is reactive: you ask a question, it gives an answer. An AI agent goes further — it can take actions. An agent can break a complex task into steps, use tools, access systems, and execute multi-step workflows with minimal human direction.
Imagine telling an AI: "Analyze last quarter's water loss data, compare it to the three-year trend, flag any distribution zones where non-revenue water increased by more than 5%, and draft a summary memo for the operations team." A basic AI chatbot would give you a generic explanation of how to do that. An AI agent would actually go do it — pulling the data, running the analysis, and producing the memo.
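The difference is concrete: the agent executes each step rather than describing it. A toy sketch of that workflow on invented data (the zone names and figures are made up for illustration):

```python
# Toy sketch of the multi-step workflow an agent would execute. All data is
# invented. Non-revenue water (NRW) percentage by distribution zone: last
# quarter vs. the three-year average.
nrw = {
    "Zone A": {"quarter": 18.2, "three_year_avg": 12.5},
    "Zone B": {"quarter": 9.1,  "three_year_avg": 9.4},
    "Zone C": {"quarter": 15.0, "three_year_avg": 9.2},
}

# Step 1: compare each zone to trend and flag increases over 5 percentage points.
flagged = {zone: d["quarter"] - d["three_year_avg"]
           for zone, d in nrw.items()
           if d["quarter"] - d["three_year_avg"] > 5.0}

# Step 2: draft the summary memo from the flagged results.
lines = ["MEMO: Non-revenue water review, last quarter", ""]
for zone, increase in sorted(flagged.items()):
    lines.append(f"- {zone}: NRW up {increase:.1f} points vs. three-year trend")
memo = "\n".join(lines)
print(memo)
```

A real agent would do the same thing, except Step 1 would pull live data from your systems via APIs or MCP, and the model itself would decide the sequence of steps and write the prose.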
RAG: Retrieval-Augmented Generation
RAG is a technique that dramatically reduces hallucinations by connecting an AI model to a specific, trusted set of documents. Instead of relying solely on what it learned during training, the model first searches your documents for relevant information and then generates its response based on what it actually found.
This is particularly powerful for utility applications. A RAG-enabled system connected to your regulatory filings, engineering standards, and policy documents can answer questions with specific citations — pulling exact passages from your own documents rather than generating plausible-sounding but potentially inaccurate responses.
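The retrieval step is the heart of the technique. The sketch below scores a tiny document set by keyword overlap with the question and grounds the prompt in the best match; production RAG systems use semantic (embedding-based) search rather than raw word overlap, and the documents here are invented:

```python
# Minimal sketch of the retrieval step in RAG: score a small document set by
# keyword overlap with the question, then build a prompt grounded in the best
# match. Real systems use semantic (embedding-based) search; document contents
# here are invented placeholders.
documents = {
    "tariff_schedule.txt": "Residential water rates are billed in three tiers ...",
    "board_minutes.txt": "The board approved the capital improvement plan ...",
    "design_standards.txt": "All new mains shall be ductile iron, minimum 8-inch ...",
}

def retrieve(question, docs):
    q_words = set(question.lower().split())
    return max(docs, key=lambda name: len(q_words & set(docs[name].lower().split())))

question = "How many tiers are in the residential water rates?"
best = retrieve(question, documents)
prompt = (f"Answer using only this source ({best}):\n"
          f"{documents[best]}\n\nQuestion: {question}")
print(best)  # tariff_schedule.txt
```

Because the generation step sees only the retrieved passage, the answer can cite a specific document instead of relying on whatever the model half-remembers from training.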
Fine-Tuning
Fine-tuning is the process of taking a general-purpose AI model and training it further on your specific data to improve its performance in your domain. Think of it like hiring a generalist consultant and then immersing them in your industry for six months before putting them to work.
For most utilities, fine-tuning is not the first step. It's expensive, requires significant technical expertise, and the general-purpose models are already remarkably capable for most utility tasks. RAG is usually a better starting point because it achieves domain-specific accuracy without the cost and complexity of model customization. Fine-tuning makes sense when you need the model to adopt very specific patterns — like matching your utility's writing style for customer communications or learning the nuances of your state's regulatory language.
This section separates the buzzwords from the real capabilities. When a vendor says their product "uses AI," you should be asking: how does it connect to our data? Is it using an API to a hosted model, or does it run on-premise? Does it use RAG to ground responses in our actual documents, or is it relying on general training data? Is it an agent that can take actions in our systems, or a chatbot that just answers questions? The answers to these questions determine whether the tool will actually work in your environment.
The AI Landscape for Utilities
The Major Models
The AI market moves fast, but as of early 2026, here are the major players your organization is most likely to encounter:
Claude (by Anthropic): known for strong reasoning, long context windows, and a focus on safety and accuracy. Popular in professional and enterprise environments where reliability matters more than novelty.
GPT (by OpenAI): the model behind ChatGPT, the product that brought AI into the mainstream. Widely used across consumer and business applications, with a broad ecosystem of integrations and plugins.
Gemini (by Google): Google's flagship model, tightly integrated with Google Workspace. If your utility runs on Google's ecosystem, Gemini has a natural integration advantage.
Llama (by Meta): an open-source model that can be downloaded and run on your own servers. This matters for utilities with strict data sovereignty requirements who want AI capability without sending any data to an external provider.
Copilot (by Microsoft): Microsoft's AI assistant, embedded directly into Office 365, Teams, and other Microsoft products. For utilities heavily invested in the Microsoft stack, Copilot offers AI capabilities without requiring a separate tool.
Deployment Options
Understanding how AI gets deployed is as important as understanding the technology itself, because it determines your security posture, your cost structure, and your operational flexibility.
Team and Enterprise Plans: subscription-based access through a provider's platform. Your staff logs in through a web interface or desktop application and uses AI directly. This is the simplest option and is appropriate for most utility use cases. Enterprise plans include contractual guarantees about data privacy, security certifications, and administrative controls.
API Access: programmatic access that lets your developers or IT team build AI into your existing systems. Instead of staff switching to a separate AI application, AI capabilities get embedded into the tools they already use — your work order system, your CIS, your GIS platform. This requires technical expertise to implement but produces a much more seamless experience.
Self-Hosted / On-Premise: running an AI model on your own servers. This gives you complete control over your data and eliminates any dependency on external providers, but it requires significant hardware investment and technical staff to maintain. Currently, only open-source models like Llama support this deployment option.
What "Open Source" Means in AI
In software, open source means the code is publicly available for anyone to inspect, modify, and use. In AI, it means the model's architecture and weights (the mathematical parameters that define its behavior) are publicly released.
For utilities making security decisions, open-source AI has a specific advantage: you can run it entirely within your own network. No data ever leaves your infrastructure. The tradeoff is that open-source models are generally less capable than the leading commercial models, and running them requires your own computing infrastructure and technical expertise to maintain.
Understanding the landscape helps you avoid vendor lock-in and make strategic procurement decisions. When a software vendor says "our product uses AI," ask which model. When they propose a deployment architecture, understand the security implications. And when you're evaluating costs, know the difference between a per-seat subscription and per-token API pricing — because the right choice depends on how your organization will actually use the technology.
Security and Trust
Enterprise Agreements and Data Privacy
The single most important security concept to understand is this: on enterprise plans from major AI providers, your data is not used to train the model. This is a contractual guarantee, not just a marketing promise. It means that when your staff pastes a financial model into an AI conversation, that data is processed to generate a response and then effectively discarded. It does not become part of the model's knowledge. It cannot surface in another customer's conversation.
This is where the distinction between consumer and enterprise AI matters enormously. The free version of ChatGPT, for example, may use your conversations to improve the model (unless you opt out in settings). An enterprise agreement with Anthropic, OpenAI, or Google includes explicit contractual language prohibiting this. Read the agreement. Confirm the terms. This single distinction is the foundation of safe AI use in a utility environment.
Security Certifications
SOC 2: an audit framework that verifies a company's controls for data security, availability, processing integrity, confidentiality, and privacy. This is table stakes for any enterprise software vendor and the minimum certification you should expect from an AI provider.
FedRAMP: the Federal Risk and Authorization Management Program. If your utility has any federal reporting requirements, receives federal funding, or handles data subject to federal regulations, FedRAMP certification ensures the AI provider meets federal security standards.
StateRAMP: similar to FedRAMP but designed specifically for state and local government use. This is increasingly relevant for municipal utilities and state-regulated entities that need to demonstrate compliance with government security standards.
Data Residency
Data residency refers to where your data is physically stored and processed. When you use a cloud-based AI service, your prompts and the AI's responses travel through data centers. For most utilities, the key question is whether that data stays within the United States. Major AI providers offer data residency guarantees that keep processing within specific geographic regions.
This matters particularly for utilities handling sensitive infrastructure data, customer personally identifiable information (PII), or data subject to state-specific privacy regulations. Ask your AI provider: where are the data centers that will process our queries? Is there a guaranteed data residency option?
Consumer AI vs. Enterprise AI
This distinction cannot be overstated. When an employee signs up for the free version of an AI tool using their personal email and starts pasting utility data into it, that is fundamentally different from the same employee using an enterprise-licensed AI tool with proper security controls.
Consumer AI typically has weaker (or no) contractual data protections, may use your inputs for model training, lacks administrative oversight tools, and provides no audit trail. Enterprise AI includes contractual data privacy guarantees, administrative controls for user management, usage monitoring and audit capabilities, single sign-on integration, and compliance with industry security standards. If your utility is using AI in any meaningful capacity, it should be on an enterprise plan.
Air-Gapped Deployments
For utilities with the most stringent security requirements — particularly those operating critical infrastructure — air-gapped deployment means running AI on servers that have no connection to the internet whatsoever. Data goes in, analysis comes out, and nothing ever touches an external network.
This is currently only feasible with open-source models and requires significant on-premise computing infrastructure. It's the most secure option available, but also the most expensive and technically demanding. Most utilities don't need this level of isolation, but it's important to know it exists for sensitive operational technology environments.
Security isn't a checkbox — it's a governance responsibility. Every board member and executive should understand the difference between consumer and enterprise AI, should ask about data handling practices, and should verify that appropriate certifications are in place. The worst-case scenario isn't that AI doesn't work well. It's that sensitive customer data, infrastructure plans, or financial information ends up somewhere it shouldn't because nobody asked the right security questions.
The Vocabulary of AI Strategy
Technology is only half the equation. The other half is organizational strategy — how you plan for, govern, and adopt AI across your utility. Here are the terms you'll encounter when building that strategy.
Digital Transformation vs. AI Transformation
Digital transformation is the broader journey of moving from paper-based, manual processes to digital systems — implementing a modern CIS, deploying AMI meters, adopting GIS platforms. AI transformation is a specific subset: using artificial intelligence to enhance or automate decision-making, analysis, and operations. Many utilities are still in the digital transformation phase, and that's okay. AI transformation generally requires a solid digital foundation to be effective — AI is most powerful when it has good data to work with.
AI Readiness Assessment
Before deploying AI, smart organizations assess their readiness. This includes evaluating data quality and availability, technical infrastructure, staff skills and culture, governance frameworks, and specific use cases where AI can deliver measurable value. An AI readiness assessment isn't a pass/fail test — it's a roadmap that tells you where you are, where you want to go, and what gaps need to be filled along the way.
AI Governance Committee
An AI governance committee is a cross-functional group responsible for overseeing your utility's AI strategy, policies, and risk management. Think of it like your capital planning committee, but for technology decisions. The committee typically includes representatives from IT, legal, operations, finance, and executive leadership. Their job is to evaluate AI use cases, approve tools and vendors, set policies for acceptable use, and monitor for risks.
Crawl-Walk-Run Adoption
This is the framework most successful utility AI adoptions follow. Crawl means starting with low-risk, internal-facing use cases — drafting internal memos, summarizing meeting notes, researching regulatory questions. Walk means expanding to higher-value applications with appropriate oversight — analyzing rate structures, preparing first drafts of regulatory filings, building financial models. Run means deploying AI in operational and customer-facing applications where it can drive significant efficiency and quality improvements.
The mistake most organizations make is trying to run before they've learned to crawl. Start small, learn what works, build institutional knowledge and trust, and then expand.
AI Policy
Every utility needs a formal AI policy — a written document that defines how AI tools can and cannot be used within the organization. At minimum, this policy should address which AI tools are approved for use, what types of data can and cannot be shared with AI systems, review and approval requirements for AI-generated work product, roles and responsibilities for AI governance, and how AI decisions will be documented and audited.
An AI policy doesn't have to be long or complicated. But it does need to exist. Without one, you're relying on individual judgment calls about data security, output quality, and appropriate use — and that's a risk no utility should accept.
Change Management for AI
The biggest obstacle to successful AI adoption isn't the technology. It's the people. Staff members may fear that AI will replace their jobs, distrust AI-generated output, or simply resist changing workflows they've used for years.
Effective change management addresses these concerns head-on through clear communication about what AI will and won't replace (hint: it replaces tasks, not jobs), training that builds confidence and competence, early wins that demonstrate tangible value, and channels for feedback and continuous improvement. The utilities that succeed with AI are the ones that invest as much in change management as they do in the technology itself.
AI Champion
An AI champion is the internal advocate who drives adoption within your organization. This isn't necessarily a technologist — it's someone with credibility, enthusiasm, and enough understanding of both the technology and the business to bridge the gap between IT and operations. The champion identifies use cases, builds consensus, runs pilot projects, and evangelizes successes. Without a champion, AI initiatives tend to stall in the pilot phase.
Technology without strategy is just an expensive experiment. Every concept in this section represents a decision your utility will face — or should be facing — in the near future. Whether you're a board member evaluating a strategic plan, an executive building a business case, or a department head wondering where to start, this vocabulary gives you the foundation to participate meaningfully in those conversations.
What Comes Next
You've just built a working vocabulary for the most important technology shift your utility will face in the coming decade. You don't need to remember every term — that's what this guide is for. What matters is that you now have the conceptual framework to engage with AI conversations from a position of informed confidence rather than uncertainty.
The utilities that will thrive in the AI era won't be the ones with the fanciest technology. They'll be the ones whose leaders understood the concepts well enough to make smart, strategic decisions about adoption — to know when a vendor's promise is credible, when a security concern is real, and when the right answer is "let's start with a pilot."
At NewGen Strategies & Solutions, we help utilities navigate the intersection of technology and strategy. Whether you need an AI readiness assessment, help developing an AI policy, or a trusted advisor to evaluate vendor proposals, we'd love to hear from you.