Everything utility leaders need to know about AI — from foundational concepts to full implementation. Built by a team that knows your industry and has walked this path ourselves.
Artificial intelligence didn't appear overnight in 2022 — but that's when it became accessible to everyone. Understanding the arc of AI development helps utility leaders see where we are, how fast things are moving, and why 2026 is the year that demands action.
The technology we now call "AI" has been developing for decades. Early systems were rule-based — rigid programs that followed predetermined logic trees. Machine learning introduced the ability to learn from data rather than explicit rules. Deep learning made neural networks practical. And then, in rapid succession, language models went from research curiosities to tools that could write, analyze, and reason at a level that changed everything.
The key insight: AI didn't suddenly appear. What changed was accessibility. When ChatGPT launched in November 2022, it brought AI from research labs to everyone's browser. That single event compressed years of gradual adoption into months of explosive growth — 100 million users in just two months.
The takeaway: the technology had been building for decades. What changed in 2022 was the interface: suddenly anyone could use AI without a computer science degree.
Click any milestone to learn more about its significance for the utility industry.
Utilities are adopting AI, but trailing other industries. The gap represents both risk and opportunity — there's still time to get it right, but the window is narrowing.
Percentage of organizations with formal AI initiatives underway
This isn't a trend you can watch from the sidelines anymore. By 2026, AI has moved from "interesting experiment" to "operational necessity." Competitors, regulators, and customers are all moving. The question isn't whether to adopt AI — it's how quickly and how well.
The workforce is already using AI — with or without a policy. Studies show that over 60% of knowledge workers use AI tools informally. Without organizational strategy, you get shadow AI, inconsistent quality, and security risk. The genie is out of the bottle.
The worst position isn't being behind on AI. It's having no plan at all. Your staff is already using it — the question is whether you're guiding that adoption or ignoring it.
Next: Now that we understand where AI came from and why it matters, let's build a solid foundation of what AI actually is and how it works.
Before you can lead an AI initiative, you need to understand how AI works — not at the engineering level, but at the level that informs good decisions. This module builds that foundation.
Think of a large language model (LLM) as an incredibly well-read assistant who has studied millions of documents. It doesn't "know" things the way you do — it predicts what the most helpful next response would be, drawing on patterns from everything it's read. This distinction matters: AI is a prediction engine, not a thinking machine.
When you send a prompt, the model processes your text as tokens (roughly word-sized chunks), considers the full context of the conversation, and generates a response token by token. Each token is a probabilistic prediction of what should come next. This is why AI can be remarkably insightful — and occasionally confidently wrong.
AI doesn't think. It predicts. Understanding this distinction is the key to using it well — and knowing when to trust it. Great results come from giving AI clear context, specific instructions, and verifying its outputs.
A context window is how much information AI can "hold in mind" at once. Think of it like a desk — a bigger desk lets you spread out more documents and work with more information simultaneously. Here's what different token counts mean in practice:
~300–500 tokens. Even the smallest context window handles emails easily.
~4,000–6,000 tokens. Standard documents fit within any modern model's window.
~50,000–100,000 tokens. This requires a large context window — but modern enterprise models handle it.
Context window size determines how much information AI can hold in mind at once. Enterprise models with large context windows can analyze entire rate studies, regulatory filings, and financial documents — not just snippets. This is why enterprise-grade models matter for real utility work.
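For the technically inclined, the desk analogy can be made concrete with a quick sketch. The ~4-characters-per-token heuristic and the window sizes below are illustrative assumptions, not exact figures for any specific model:

```python
# Rough token estimate: ~4 characters per token for English text.
# This heuristic and the example window sizes are illustrative
# assumptions, not exact figures for any specific model.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fits_in_window(text: str, window_tokens: int, reply_budget: int = 4_000) -> bool:
    """Leave room for the model's reply, not just the input."""
    return estimate_tokens(text) + reply_budget <= window_tokens

email = "word " * 400          # roughly a long email
rate_study = "word " * 75_000  # roughly a full rate study with exhibits

print(estimate_tokens(email))               # ~500 tokens
print(fits_in_window(email, 8_000))         # True: fits even a small window
print(fits_in_window(rate_study, 8_000))    # False: needs a large window
print(fits_in_window(rate_study, 200_000))  # True: enterprise-scale window
```

The practical point: before assuming AI can "read the whole document," estimate whether the document plus the expected reply actually fits.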
AI interactions come in three tiers. Understanding these tiers helps you match the right AI approach to each task:
You ask, it answers. One exchange at a time. Great for quick questions, summarization, and brainstorming. Example: "Summarize this regulatory filing."
AI follows a predefined sequence of steps. Structured but rigid. Example: "Every Monday, pull meter data, flag anomalies, and email the operations team."
AI autonomously plans, uses tools, takes actions, and adjusts. The frontier. Example: "Analyze our rate structure, compare to peers, and draft a memo with recommendations."
Agents are where AI gets transformative. An AI agent doesn't just answer questions — it can use tools, access data, write documents, and complete multi-step tasks autonomously. This is the capability that will change how utilities operate.
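The agent tier is easier to reason about once you see the loop underneath it: the model plans, calls a tool, observes the result, and repeats until done. Here's a stripped-down sketch of that loop; the "model" is a stub standing in for an LLM API, and the rate figures are hypothetical:

```python
# A stripped-down sketch of the agent loop: plan, act via tools, observe,
# repeat. fake_model() is a stub standing in for an LLM API call, and the
# rate figures below are hypothetical.
def fake_model(history):
    # Stub policy: first ask for data, then finish with a memo.
    if not any(step["type"] == "tool_result" for step in history):
        return {"type": "tool_call", "tool": "get_rate_data", "args": {}}
    return {"type": "final", "text": "Draft memo: our rates trail the peer median."}

def get_rate_data():
    return {"our_avg_rate": 0.112, "peer_median": 0.117}  # hypothetical figures

TOOLS = {"get_rate_data": get_rate_data}

def run_agent(task: str, max_steps: int = 5):
    history = [{"type": "task", "text": task}]
    for _ in range(max_steps):
        action = fake_model(history)
        if action["type"] == "final":
            return action["text"]
        result = TOOLS[action["tool"]](**action["args"])  # act, then observe
        history.append({"type": "tool_result", "tool": action["tool"], "result": result})
    return "stopped: step limit reached"

print(run_agent("Analyze our rate structure and draft a memo"))
```

Note the step limit: real agent frameworks bound the loop the same way, so a confused agent stops instead of running forever.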
The quality of AI output depends heavily on the quality of your input. Compare these three approaches to the same task:
Vague instruction produces vague output. AI doesn't know the audience, format, length, or focus area. You'll get a generic summary that probably needs heavy editing.
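One practical habit: supply the context a vague request omits, every time, with a simple template. A sketch of that idea follows; the field names and example values are illustrative, not a standard:

```python
# A reusable prompt template that supplies the context a vague request
# omits: audience, format, length, and focus. Field names and example
# values are illustrative, not a standard.
def build_prompt(task: str, audience: str, fmt: str, length: str, focus: str) -> str:
    return (
        f"Task: {task}\n"
        f"Audience: {audience}\n"
        f"Format: {fmt}\n"
        f"Length: {length}\n"
        f"Focus on: {focus}\n"
    )

vague = "Summarize this regulatory filing."  # what NOT to send

specific = build_prompt(
    task="Summarize the attached regulatory filing.",
    audience="Utility board members without a legal background",
    fmt="Bulleted executive summary with a one-paragraph overview",
    length="Under 400 words",
    focus="Rate impacts, compliance deadlines, and required board actions",
)
print(specific)
```

The vague version leaves the model guessing on four dimensions; the template forces you to decide them before you ask.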
AI access comes in several forms. Here's how they compare for utility organizations:
| Feature | Team (~$25/mo) | API Access | Enterprise (Custom) |
|---|---|---|---|
| Users | Per-seat | Usage-based | Organization-wide |
| Usage Limits | Moderate | Pay-as-you-go | Custom / Unlimited |
| Admin Controls | Basic | Full (developer) | Full (SSO, SCIM, audit logs) |
| Data Governance | Standard enterprise terms | Custom via agreement | Custom DPA, data residency |
| Best For | General staff, getting started | Custom integrations | Large orgs with compliance needs |
| Examples | Claude Team, ChatGPT Team | Anthropic API, OpenAI API | Claude Enterprise, Azure OpenAI |
Most utilities should start with Team plans. They're affordable, secure, and get results fast. You can always scale up to Enterprise or API as your needs grow.
Next: Now that you understand what AI is and how it works, the next step is defining what you want it to do for your organization. That starts with a vision.
Too many organizations jump straight to "which AI tool should we buy?" The right first question is "what do we want AI to do for our organization?" A vision document creates alignment, accountability, and a clear path forward.
A tool without a vision is an expense. A vision with the right tools is a transformation. Start here — before you compare vendors, evaluate platforms, or set up accounts.
A successful AI strategy follows four sequential steps. Each one builds on the previous, creating a solid foundation for everything that follows.
Where are you today? What manual processes consume the most time? Where are the bottlenecks? What data do you already have, and where does it live? This honest inventory is your starting point.
Where do you want to be in 12–24 months? What does "AI-enabled" look like for your specific utility? Don't think about technology — think about outcomes. Faster rate studies? Better customer service? More efficient operations?
Which use cases deliver the highest value with the lowest risk? Start there. You don't need to transform everything at once — you need 3–5 wins that build confidence and momentum.
Write it down. A formal AI vision document creates alignment across leadership, IT, operations, and finance. It becomes the North Star that guides every decision that follows.
Before you evaluate a single tool, these four areas need clarity:
Who owns the AI strategy? Who approves new use cases? How do you handle data classification? Establish accountability from day one.
What's the initial investment? What are the ongoing costs? How will you measure ROI? Start small and scale based on demonstrated value.
Quick wins in 30 days? Broader deployment in 6 months? Full integration in 12? Set realistic milestones that build momentum.
Who are your internal advocates? Every department needs at least one AI champion — someone excited to experiment and share results.
Your AI vision document doesn't need to be 50 pages. It needs to be clear, specific, and actionable. Structure it around these eight elements:
| # | Section | What It Covers |
|---|---|---|
| 1 | Executive Summary | Why AI, why now — the business case in 2–3 paragraphs |
| 2 | Current State Assessment | Where the organization is today — tools, processes, pain points |
| 3 | Target State Vision | 12-month and 24-month goals with measurable outcomes |
| 4 | Priority Use Cases | Top 5–10 use cases, ranked by value and feasibility |
| 5 | Governance Framework | Decision rights, approval processes, data policies |
| 6 | Budget & Resources | Costs, team allocation, training requirements |
| 7 | Success Metrics | How you'll measure progress and ROI |
| 8 | Risk Mitigation | Security, compliance, change management considerations |
NewGen created its own AI vision document before recommending the practice to clients. We've seen firsthand how much clearer decisions become when the strategy is written down. It's the single most important step in your AI journey.
How prepared is your organization? Check each item you've completed to see your readiness score.
Next: With a vision in place, the next question leaders always ask is: "Is it safe?" Let's address security head-on.
The biggest security risk isn't using AI — it's having staff use AI without guardrails. Enterprise AI tools are built for security. The risk lives in the gap between "no policy" and "good policy."
Fear of AI security is often based on consumer-grade experiences. Enterprise AI platforms operate under entirely different security models — with contractual guarantees, audit trails, and data isolation. Don't let outdated fears block real progress.
Modern enterprise AI platforms provide security that meets or exceeds what most utility IT departments require:
Independent audit confirming security controls over time. Both Anthropic (Claude) and OpenAI hold this certification. Your systems are tested, monitored, and verified by third parties.
Enterprise agreements guarantee your data is stored separately, encrypted at rest and in transit, and never shared across customers. Your data stays in its own silo.
Enterprise plans contractually guarantee that your prompts, documents, and outputs are never used to train AI models. Your data stays yours — period.
Enterprise security handles the platform. These are the practices your organization needs to establish:
Use enterprise plans for anything involving customer data. Free and consumer tiers don't offer the same protections.
Know what's public, internal, confidential, and restricted. This simple framework covers 95% of scenarios.
Security awareness is more important than any technical control. Make sure everyone understands what's appropriate to share with AI and what isn't.
Understand data retention periods, subprocessor lists, and breach notification terms. These are standard enterprise contract items.
Enterprise plans let you control who has access, set usage policies, and monitor activity. Use them.
A simple four-tier classification framework covers most utility data scenarios:
| Tier | Description | AI Policy | Examples |
|---|---|---|---|
| Public | Already publicly available | Any AI tool acceptable | Published rate schedules, public meeting minutes, industry reports |
| Internal | Not public but not sensitive | Enterprise AI tools with standard controls | Internal memos, process documentation, general analysis |
| Confidential | Business-sensitive | Enterprise AI only, with data handling review | Financial projections, draft rate studies, vendor contracts |
| Restricted | Regulated or highly sensitive | AI use requires specific approval & controls | Customer PII, SSNs, SCADA data, security assessments |
Walk through this decision tree to determine the right AI policy for any data you're working with:
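The same logic can be written down directly. Here's a sketch of the four-tier decision from the classification table; the tier names match the table, but the question flow is a simplified illustration:

```python
# A sketch of the four-tier classification decision from the table above.
# Tier names match the table; the question order is a simplified
# illustration of the decision tree, not a formal policy.
def ai_policy(is_public: bool, is_regulated: bool, is_business_sensitive: bool) -> str:
    if is_public:
        return "Public: any AI tool acceptable"
    if is_regulated:                # customer PII, SSNs, SCADA, security assessments
        return "Restricted: AI use requires specific approval and controls"
    if is_business_sensitive:       # financials, draft rate studies, vendor contracts
        return "Confidential: enterprise AI only, with data handling review"
    return "Internal: enterprise AI tools with standard controls"

# A draft rate study: not public, not regulated, but business-sensitive.
print(ai_policy(is_public=False, is_regulated=False, is_business_sensitive=True))
```

Note the ordering: regulated data is checked before business sensitivity, so anything touching PII or SCADA lands in Restricted even if it's also business-sensitive.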
Next: Security handled? Good. Now let's talk about actually selecting and setting up your AI system.
AI access comes in tiers, and choosing the right one depends on your team size, security needs, and use case complexity. Most utilities don't need to build anything custom — they need to pick the right tier.
Here's how the major AI tiers compare across the features that matter most to utilities:
| Feature | Team (~$25/mo) | Pro (~$100/mo) | Enterprise (Custom) |
|---|---|---|---|
| Users | Per-seat | Per-seat | Organization-wide |
| Usage Limits | Moderate | High / Unlimited | Custom |
| Admin Controls | Basic | Basic | Full (SSO, SCIM, audit logs) |
| Data Governance | Standard enterprise terms | Standard enterprise terms | Custom DPA, data residency |
| Best For | General staff, getting started | Power users, heavy workloads | Large orgs with compliance needs |
| Examples | Claude Team, ChatGPT Team | Claude Pro, ChatGPT Pro | Claude Enterprise, Azure OpenAI |
Beyond direct AI subscriptions, management platforms sit on top of AI models and give organizations a single interface for accessing multiple capabilities:
AI-powered search and research with source citations. Ideal for regulatory research, industry analysis, and fact-finding tasks that need verifiable sources.
Integrated into Office 365. Best for organizations already deep in the Microsoft ecosystem — AI assistance directly inside Word, Excel, Teams, and Outlook.
API-based connections that embed AI into existing utility software — billing systems, asset management, GIS. Most technical but most customizable.
You don't have to pick just one. Many organizations use a team plan (like Claude Team) for general daily use and a specialized platform (like Perplexity) for research workflows. Start simple and expand as needed.
If AI saves each employee even 5 hours per month, the ROI on a $25/month tool is immediate. At an average utility salary of $75K (~$36/hr), that's $180 in recovered time per employee per month — a 7x return on a Team plan.
Estimate your organization's AI costs. Adjust the sliders and tier selections to see real-time cost projections.
ROI assumes each user saves 5 hours/month at $36/hr average utility employee cost. Actual results vary by use case and adoption level.
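The arithmetic behind that footnote can be checked directly. A quick sketch, using the assumptions stated above ($75K salary, 2,080 work hours per year, 5 hours saved per month, $25/month tool cost); actual results will vary:

```python
# Worked version of the ROI arithmetic above. The salary, hours saved,
# and tool cost are the illustrative assumptions stated in the text,
# not guaranteed results.
salary = 75_000
hourly = salary / 2_080        # standard 2,080 work hours/year, ~ $36/hr
hours_saved = 5                # per employee per month
tool_cost = 25                 # per employee per month

value = hours_saved * hourly           # recovered time, ~ $180/month
roi_multiple = value / tool_cost       # ~ 7x return

print(f"${value:.0f} recovered vs ${tool_cost} cost = {roi_multiple:.1f}x return")
```

Even if the real savings are half the assumed 5 hours, the multiple stays comfortably above break-even, which is why Team-tier pilots are such a low-risk starting point.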
Next: You've got the tools. Now comes the human side — getting your team on board and leading the change.
The organizations that succeed with AI don't lead with the technology. They lead with the value proposition for their people: "This will make your work better, not make you unnecessary."
Let's address the elephant in the room: "Will AI take my job?" The answer is no — but AI will change every job. The utilities that handle this message well will accelerate adoption. Those that don't will face resistance at every turn.
Frame it as empowerment, not a threat: lead with what AI does for your people, not with the technology itself.
Utilities face a workforce crisis — aging workforce, difficulty recruiting, growing workload. AI doesn't replace people; it lets your existing team do more.
Every utility we work with is doing more with less. AI doesn't reduce headcount — it closes the gap between what your team is asked to do and what they can realistically accomplish.
AI isn't just for the IT department. Every function in a utility organization can benefit:
AI drafts rate study sections, analyzes billing data for anomalies, produces financial projections, and writes board memos. What took the rate team 3 weeks now takes 1 week.
Predictive maintenance analysis, work order prioritization, and regulatory compliance document review. Catch equipment issues before they become failures.
AI-assisted call handling, automated FAQ responses, billing inquiry resolution, and outage communication drafting. Response times drop, satisfaction goes up.
Policy document drafting, training material creation, recruitment support, and compliance documentation. Your HR team of 3 operates like a team of 6.
Contract review, regulatory filing analysis, compliance tracking, and public comment drafting. Review contracts in hours, not days.
Press releases, social media content, public meeting presentations, and newsletter writing. Consistent, professional communications at twice the speed.
Younger workers expect AI tools. Senior staff have institutional knowledge that AI can help preserve and scale. The goal is to create an environment where AI skills are as expected as Excel skills. In 5 years, "AI-proficient" won't be a special skill — it'll be as basic as knowing how to use email.
Start building that culture now: the organizations that invest in AI literacy today will have a significant competitive advantage tomorrow.
Select a workflow to see the before and after comparison:
Next: Your team is ready, your vision is clear. Now let's get tactical — how to actually implement AI across your organization.
This is where strategy meets execution. You've built the vision, addressed security, selected tools, and prepared your team. Now it's time to get tactical about deploying AI across your organization.
The key principle: start with high-value, low-risk use cases. Build confidence and momentum before tackling complex integrations.
Document drafting, summarization, research, data analysis, meeting notes. Low risk, immediate value, and they build confidence in AI capabilities.
Workflow automation, reporting systems, customer communication templates, financial analysis. Require some integration but deliver substantial value.
Predictive analytics, agent-based workflows, system integration, automated decision support. High value but require mature infrastructure.
Pick 3–5 use cases, execute them well, and let success build demand for more. The utilities that try to do everything at once end up doing nothing well.
AI is only as good as the data it can access. Here's the three-step path to making your data AI-ready:
What data do you have? Where does it live? Billing systems, GIS, SCADA, financial software, document management — map it all out.
Is the data clean? Structured? Accessible via API? Or is it trapped in legacy systems and spreadsheets? Be honest about where you are.
Which systems need to connect to AI? What data flows are needed? What's the priority order? You don't need everything connected on day one.
MCP (Model Context Protocol) servers are like universal translators between AI and your existing systems. They let AI read your billing data, query your GIS system, or pull from your document management — without replacing any of those systems.
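To make the "universal translator" idea concrete, here's a simplified sketch of the pattern: expose narrow, read-only tools over existing systems so AI can query them without replacing anything. This is not the real MCP wire protocol, and the tool names and data are hypothetical:

```python
# Simplified illustration of the MCP idea: expose narrow, read-only
# "tools" over existing systems so AI can query them without replacing
# anything. This is NOT the real MCP wire protocol; the tool names and
# data below are hypothetical stand-ins.
def get_account_balance(account_id: str) -> dict:
    # In a real server this would call the billing system's API.
    fake_billing_db = {"A-1001": 143.27, "A-1002": 0.00}
    return {"account": account_id, "balance": fake_billing_db.get(account_id)}

def lookup_asset(asset_id: str) -> dict:
    # Stand-in for a GIS or asset-management query.
    fake_assets = {"TX-17": {"type": "transformer", "installed": 1998}}
    return {"asset": asset_id, "record": fake_assets.get(asset_id)}

TOOL_REGISTRY = {
    "get_account_balance": get_account_balance,
    "lookup_asset": lookup_asset,
}

def handle_tool_call(name: str, **kwargs) -> dict:
    """The server-side entry point an AI client would invoke."""
    if name not in TOOL_REGISTRY:
        return {"error": f"unknown tool: {name}"}
    return TOOL_REGISTRY[name](**kwargs)

print(handle_tool_call("get_account_balance", account_id="A-1001"))
```

The design point carries over to the real protocol: the AI never touches the billing database directly; it can only call the narrow, auditable functions you choose to expose.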
A role-based access structure balances openness with governance. The key: make the default generous and restrict only what genuinely needs restricting.
| Role | AI Access Level | Data Access | Approval Needed |
|---|---|---|---|
| Executive | Full — all tools, all tiers | All non-restricted | No |
| Department Head | Full — team + pro tools | Department + public | No |
| Analyst / Engineer | Team tools + dept-specific | Department data | For new use cases |
| General Staff | Team plan (standard) | Public + internal | For new integrations |
| Contractor / Temp | Limited or supervised | Public only | Yes, always |
Overly restrictive access kills adoption. If people can't easily use AI tools, they'll go back to doing things the old way — or use unmanaged consumer tools. Make the path of least resistance also the path of compliance.
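The approval column of the role table reduces to a small rule set. A sketch, with role names mirroring the table; treat this as a starting policy, not a standard:

```python
# Sketch of the approval rule from the role table above. Role names
# mirror the table; the mapping is an illustrative starting policy,
# not a standard.
APPROVAL_RULES = {
    "executive": "none",
    "department_head": "none",
    "analyst": "new use cases",
    "general_staff": "new integrations",
    "contractor": "always",
}

def needs_approval(role: str, action: str) -> bool:
    rule = APPROVAL_RULES.get(role, "always")  # unknown roles: be conservative
    if rule == "none":
        return False
    if rule == "always":
        return True
    return action == rule                      # approval only for the named action

print(needs_approval("contractor", "summarize a public report"))  # True
print(needs_approval("analyst", "new use cases"))                 # True
print(needs_approval("analyst", "daily drafting"))                # False
```

Notice the default: an unrecognized role falls back to "always requires approval," which keeps the generous-by-default posture from silently extending to accounts you haven't classified.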
Plot your use cases on this value/effort matrix to identify where to start. Hover over each point to see details.
Next: Implementation isn't a one-time event. The most successful organizations build a continuous improvement loop. Here's how.
AI adoption isn't a project with an end date — it's an ongoing capability that needs tending. The organizations that get the most value from AI aren't the ones that deploy the fanciest tools. They're the ones that measure what's working, learn from what isn't, and adjust quickly.
What gets measured gets managed. Track AI impact across four dimensions:
Time saved per task, throughput increases, reduction in manual work hours. The most tangible and easiest to measure.
Error rates, consistency of outputs, customer satisfaction scores. AI should improve quality alongside speed.
Active users, frequency of use, breadth of use cases, user satisfaction. Adoption tells you whether people actually find value.
Cost per task (before/after), ROI per department, total program cost vs. value delivered. The numbers that justify continued investment.
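A lightweight way to track all four dimensions per use case is a single record with the ROI math built in. A sketch; the field names and sample numbers are illustrative assumptions:

```python
# A minimal record tracking the four measurement dimensions above per
# use case. Field names and sample numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UseCaseMetrics:
    name: str
    hours_saved_per_month: float   # productivity
    error_rate_change_pct: float   # quality (negative = fewer errors)
    active_users: int              # adoption
    monthly_cost: float            # financial

    def monthly_value(self, hourly_rate: float = 36.0) -> float:
        return self.hours_saved_per_month * hourly_rate

    def roi(self, hourly_rate: float = 36.0) -> float:
        return self.monthly_value(hourly_rate) / self.monthly_cost

m = UseCaseMetrics("Board memo drafting", hours_saved_per_month=40,
                   error_rate_change_pct=-15.0, active_users=8, monthly_cost=200.0)
print(m.roi())  # 7.2x on these sample numbers
```

Keeping quality and adoption alongside the dollar figures matters: a use case with great ROI but two active users is a pilot, not a win.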
The PDCA cycle is a proven framework for continuous improvement. Click each phase to see how it applies to AI implementation:
Set specific, measurable goals for each AI initiative. Define what success looks like before you start. Identify the data you'll need to measure outcomes.
Make your goals concrete and time-bound so you can objectively assess whether AI is delivering real value.
AI capabilities are advancing rapidly. What's cutting-edge today will be baseline in 18 months. Build adaptability into your program:
The pace of AI advancement means your strategy should be a living document. What seemed ambitious last year might be table stakes this year. Build review cycles into your governance framework.
| Cadence | Activity | Owner |
|---|---|---|
| Weekly | Check usage metrics, address user questions, troubleshoot issues | AI Champion |
| Monthly | Review productivity metrics, gather user feedback, share wins | Department Heads |
| Quarterly | Evaluate new tools/capabilities, update use case list, assess ROI | AI Committee |
| Semi-Annually | Review and update AI vision document, adjust strategy | Executive Leadership |
| Annually | Full program assessment, budget review, strategy refresh | Executive + AI Committee |
You've got the knowledge, the framework, and the tools. But you don't have to do this alone.
AI is transforming how utilities operate — and we're helping lead that transformation. We don't just advise on AI strategy. We've built ours, and we're ready to help you build yours.
We've spent decades in the utility industry — rates, operations, finance, regulation. We don't need to learn your world. We live in it.
NewGen didn't just study AI adoption — we did it. We built our own AI vision, selected our tools, trained our team, and integrated AI into our daily work.
Our team includes practitioners who build with AI every day — from prompt engineering to agent development to system integration.
AI is already changing how rate studies are conducted. We're at the forefront — using AI to accelerate analysis and produce better deliverables.
Aging infrastructure, workforce transitions, regulatory complexity, affordability concerns — we understand the pressures and how AI can help.
We don't sell AI tools. We help you build the strategy, develop the skills, and create the culture to succeed with AI — whatever tools you choose.