My Take on AI Governance: Balancing Risk with Opportunity
To use AI effectively, you cannot run away from it and bury your head in the sand, but you also should not adopt it blindly, or through arbitrary top-down mandates, without understanding the risks and implementing relevant oversight.
AI is increasingly embedded in how we search, create, code, and make decisions, whether our policies have caught up or not. This creates a tension for security and technology leaders: we can neither ignore AI nor hand it the keys without understanding how it behaves on our networks, with our data, and in the hands of our teams.
This is the first post in a series I am developing that gives my take on AI governance and operating models. My goal is to lay a strategic foundation, framing the key governance, risk, and oversight questions that every organization (and every individual) should be asking. I intend to dive deeper into each of these areas with real-world examples, research, and lessons learned in future posts.
My take on AI, as of March 2026, is straightforward: you cannot run away from AI and bury your head in the sand, but you also should not adopt it blindly, or through arbitrary top-down mandates, without understanding the risks and implementing relevant oversight.
The organizations and leaders who will see the greatest returns on AI investments are the ones who:
- Empower their workforce by providing relevant, tested, controlled, and effective AI tools; and
- Implement appropriate AI risk management and governance.
On an individual level, those who are curious, test things out, and experiment are likely to see the greatest positive impact from the use of AI.
Enterprise AI Oversight: Who Owns the Risk?
The first question any organization needs to answer is deceptively simple: who is accountable for AI risk?
In most enterprises today, AI adoption is outpacing governance. Teams are integrating AI into workflows, procurement is (maybe) evaluating AI-enabled vendors, and employees are experimenting with generative AI tools, often without centralized visibility or policy. This is not a technology problem. It is a governance problem.
Effective AI oversight requires:
- Defined ownership and accountability: Someone, whether a single designated executive, a cross-functional AI governance committee, or an extension of the existing risk management function, must own the AI risk portfolio.
- Integration with existing governance structures: AI governance should not be a standalone silo. It belongs inside your existing enterprise risk management, information security, and compliance frameworks. If you already have a risk committee, a data governance program, or a third-party risk function, AI oversight should feed into and draw from those programs rather than duplicate them.
- Cross-functional participation: AI risk is not just a technology or security concern. Legal, compliance, HR, procurement, engineering, data science, and business leadership all have a stake. A governance model that excludes any of these perspectives will have blind spots.
- Board-level visibility: Boards should receive periodic reporting on AI risk posture, incidents, and alignment with enterprise risk appetite. This is consistent with the direction regulators are heading.
So, what are the signs of effective AI implementation at the enterprise level?
- Clear policy: Policy statements should not try to scare employees into avoiding AI tools. They should provide clear guidance on how to determine whether certain tools, use cases, and data sharing are permissible, as well as a forum to raise questions.
- Ongoing evaluation and implementation of solutions: If an enterprise is going to engineer its own AI solution in-house, it needs to dedicate proper resources and address user feedback effectively. A poor in-house implementation will send employees running for unauthorized (and often free) solutions. Leveraging off-the-shelf solutions can be effective, but it requires proper procurement, licensing, integration, and monitoring.
- Monitoring, Detection, and Allowlisting: Clearly defining the authorized tools and use cases allows you to identify unauthorized tools and mitigate the associated risk, provided the relevant technical controls are already in place. Web filtering and DLP solutions can be leveraged to prevent user input into unauthorized chatbots while permitting the use of approved tools. Endpoint monitoring and allowlisting solutions can detect and prevent unauthorized AI software from running on corporate devices (your AI governance committee likely does not approve of OpenClaw…). A minimal sketch of what this kind of detection can look like follows this list.
- Identity Governance: Differentiating between human and non-human activity on the network will be critical to the authentication, authorization, and accounting functions of identity management. Solutions that govern non-human (agentic AI) identities without choking off the productivity gains will help enterprises prevent chaos from ensuing.
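To make the allowlisting point a bit more concrete, here is a minimal Python sketch (using the psutil library) of what flagging unapproved AI tools on an endpoint could look like. The process names and the allowlist itself are hypothetical placeholders, and in practice this logic would live in your EDR or application-allowlisting platform rather than a standalone script.

```python
"""Toy sketch: flag running processes that look like AI tools but are not on the allowlist.

The process names below are hypothetical examples, not real guidance; a production
control belongs in your endpoint/allowlisting platform, not an ad hoc script.
"""
import psutil

# Hypothetical allowlist your AI governance committee might maintain
APPROVED_AI_PROCESSES = {"copilot.exe", "chatgpt-desktop.exe"}
# Hypothetical set of known AI-tool binaries worth watching for
KNOWN_AI_PROCESSES = {"copilot.exe", "chatgpt-desktop.exe", "openclaw.exe"}


def find_unapproved_ai_processes() -> list[str]:
    """Return names of running AI-tool processes that are not on the allowlist."""
    flagged = []
    for proc in psutil.process_iter(attrs=["name"]):
        name = (proc.info.get("name") or "").lower()
        if name in KNOWN_AI_PROCESSES and name not in APPROVED_AI_PROCESSES:
            flagged.append(name)
    return flagged


if __name__ == "__main__":
    for name in find_unapproved_ai_processes():
        print(f"Unapproved AI tool detected: {name}")
```

The point is not the script itself but the precondition it illustrates: you can only alert on "unapproved" tools once you have actually defined what "approved" means.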
The key takeaway: governance comes first. If you are deploying AI tools without answering the question of who owns the risk, you are building on sand. This mirrors a principle I have written about before in the cybersecurity context: leaders focus first on governance and risk identification, then on processes, and tooling effectiveness will follow.
Risk Assessment: Applying the NIST AI Risk Management Framework
Once you have governance in place, you need a structured approach to identifying and assessing AI-specific risks. This is where the NIST AI Risk Management Framework (AI RMF 1.0) provides a strong foundation.
The AI RMF is a voluntary, use case-agnostic framework designed to help organizations manage risks associated with AI systems across their lifecycle.
The AI RMF is deliberately flexible, as no policy or risk management document will keep up with the pace of AI innovation. Thus, it is crucial that organizations stay informed and continuously evaluate their environments, not only the risks and mitigations, but also which tools and AI solutions suit them best.
I outline what success looks like from a practical implementation perspective above. Effective AI risk management will support those implementation successes. Additionally:
- Inventory your AI: You cannot manage what you do not know about. Start by cataloging where AI is being used across the enterprise, including approved and deployed tools, data access flows, and outputs (a toy sketch of what an inventory record could look like follows this list).
- Tier your risks: Not all AI use cases carry the same risk. A generative AI tool drafting marketing copy carries different risk than an AI model informing clinical decisions, which carries different risk than an agent writing software. Apply risk tiering to focus governance resources where the stakes are highest. Again, this is not to say you should ban all AI. You should, however, know where it is used and apply mitigating controls accordingly (e.g., requiring certain code review standards for AI-developed software).
- Establish measurement baselines: Define what "good" looks like for your AI systems in terms of accuracy, fairness, reliability, and security. Without baselines, you cannot measure drift or degradation.
- Ensure data governance is in place: AI is only as good as the data it consumes. Without strong data governance, including data quality controls, lineage tracking, access management, and consent management, AI governance will not scale within the organization. If your organization does not have a mature data governance program, that is the place to start before layering on AI-specific controls.
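To illustrate the inventory and tiering bullets above, here is a toy sketch of what a minimal AI inventory record and a coarse tiering rule could look like. The fields, categories, and thresholds are my own illustrative assumptions, not a schema prescribed by the NIST AI RMF; adapt them to your own risk taxonomy.

```python
"""Toy sketch of an AI system inventory record with simple risk tiering.

All fields and tiering rules here are illustrative assumptions, not an official
NIST AI RMF schema.
"""
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    name: str               # e.g., "Marketing copy assistant"
    owner: str              # accountable business owner
    use_case: str           # short description of what it does
    data_sensitivity: str   # "public", "internal", or "regulated"
    decision_impact: str    # "informational", "operational", or "consequential"
    vendor_hosted: bool     # True if data leaves your environment


def risk_tier(record: AISystemRecord) -> str:
    """Assign a coarse tier so governance effort goes where the stakes are highest."""
    if record.data_sensitivity == "regulated" or record.decision_impact == "consequential":
        return "high"
    if record.vendor_hosted or record.decision_impact == "operational":
        return "medium"
    return "low"


copy_tool = AISystemRecord(
    name="Marketing copy assistant", owner="CMO", use_case="Draft campaign copy",
    data_sensitivity="internal", decision_impact="informational", vendor_hosted=True,
)
print(copy_tool.name, "->", risk_tier(copy_tool))  # Marketing copy assistant -> medium
```

Even a spreadsheet with these six columns is a meaningful step up from no inventory at all; the structure matters far more than the tooling.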
A theme you may have picked up on is that AI governance can only be as effective as overall IT, security, and data governance. If you have not established those foundational governance practices, you are far less likely to be able to govern AI-associated risks.
The Individual's Use of AI: Productivity with Awareness
So far, this post has focused on enterprise governance, but individuals need to think about AI governance too. In both our personal and professional lives, we are making daily decisions about how and when to use AI tools.
My view: AI can meaningfully boost your productivity, but you should approach it in an informed and risk-aware way. That means:
- Understand what you are sharing: When you use a generative AI tool, you are often providing data (prompts, documents, context) to a third-party system. Just like with any other tool, especially if you are using a free account, you should be aware of the data sharing settings, know where that data goes, whether it is used for model training, and what the retention policies are.
- Verify AI outputs: AI tools are powerful research accelerators, writing assistants, and analytical aids. They are not infallible. Treat AI-generated content as a draft that requires your review, judgment, and validation. This is especially critical for anything that will be published, shared with clients, or used to inform decisions.
- Know the boundaries: Be deliberate about what you delegate to AI and what you do not. High-stakes decisions, sensitive data, and situations requiring nuanced judgment should retain meaningful human involvement.
- Stay current: Things are moving fast. Models are upgraded, new models are released, and new tools appear constantly. MCPs may have come and gone in the span of one news cycle! While you don’t need to listen to every AI podcast or read every AI newsletter out there, you should make an effort to stay informed about current capabilities and how they may fit into your personal use cases.
- Don’t overdo it: Be honest with yourself: do you really need to build and maintain your own budgeting tool, or can you continue to rely on one of the dozens of tried-and-tested tools out there? There are truly insane productivity gains to be had through the use of AI tools, but don't turn AI into a solution looking for a problem.
This is not about fear. It is about informed use. The same risk-aware mindset that makes you a better security professional also makes you a more effective and responsible AI user.
How I Use AI: Personal Use Cases
Time for the fun part. I will close with some transparency about my own AI use, because I think it is important for anyone writing about AI governance to be honest about their own practices and boundaries.
Here are a handful of use cases in particular where I have found AI most useful in my own daily workflows:
- Research acceleration / Problem solving: AI tools help me survey a topic faster by pulling together sources, summarizing long documents, and identifying patterns across large volumes of information. I still read the primary sources and validate the output, but AI compresses the initial research phase significantly. Additionally, if I need to get smart on a particular topic quickly, AI tools are my go-to.
- A recent example: Gemini helped me find a hidden menu on my family’s smart TV to program it to always turn on to HDMI 1 rather than defaulting to the TV’s “smart hub”, which was driving my family crazy.
- Scripting: AI tools have helped me write Excel formulas, macros, Python scripts, and PowerShell scripts. You still need to know enough about the syntax to fix minor formatting issues and properly handle user input, but I can easily cite multiple instances where AI tooling condensed multiple hours of work into mere minutes of testing and validation before fully executing.
- A recent example: Perplexity helped me build a macro that applied the same set of changes across 70+ individual Excel workbooks. I had to do a few rounds of testing on one workbook before setting it loose, but I surely saved multiple hours by not having to manually update all of the workbooks individually (a simplified sketch of this kind of batch update follows this list).
- Organization and Planning: Particularly in my personal capacity, I have increased my use of AI tools to organize my inboxes, help prioritize tasks, and plan ahead. I used AI to help plan a previous trip to Germany and recently used Claude Cowork to distill all prior research and existing reservations into a detailed itinerary for a trip to Portugal. The output was incredible.
- Lastly, I can definitively say I am a Claude Code believer. I recently created my first public GitHub repository, a simple alert analysis dashboard that monitors and alerts on the Google Workspace environment I maintain for myself. I have a roadmap for continuing to iterate on the dashboard, along with other side projects that I am excited to dig into.
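For readers curious what that kind of batch workbook update looks like, here is a simplified Python sketch using the openpyxl library. My actual solution was an Excel macro, and the folder path, sheet name, and cell edit below are purely hypothetical, but the pattern is the same: test the change on one workbook, then loop it over the rest.

```python
"""Toy sketch: apply the same edit across many Excel workbooks with openpyxl.

This is a Python illustration of the idea, not the macro I actually used; the
folder path, sheet name, and cell value below are hypothetical.
"""
from pathlib import Path

from openpyxl import load_workbook

WORKBOOK_DIR = Path("./workbooks")   # hypothetical folder holding the 70+ workbooks
SHEET_NAME = "Summary"               # hypothetical sheet that needs the same change


def update_workbook(path: Path) -> None:
    """Open one workbook, apply the standard change, and save it in place."""
    wb = load_workbook(path)
    ws = wb[SHEET_NAME]
    ws["B2"] = "FY2026"              # the repeated edit, e.g., a fiscal-year label
    wb.save(path)


if __name__ == "__main__":
    # Validate against a single workbook first, then point it at the full folder.
    for workbook_path in sorted(WORKBOOK_DIR.glob("*.xlsx")):
        update_workbook(workbook_path)
        print(f"Updated {workbook_path.name}")
```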
How do I account for potential hallucinations or inaccuracies?
- I subscribe to multiple models. In plenty of instances, I will ask multiple AI tools for help on the same thing, to see whether they provide the same guidance.
- I check the sources that a model is citing. If the sources are sparse, or are not directly related to the topic, there is a higher risk that the model is misinterpreting a source. Asking follow-up questions and doing additional research can help.
- I look for hedged/generalized guidance. Use of terms like "should", "might", and "maybe" can be a giveaway that the tool does not actually know. Pressing the model to clarify can get you to a point where you can have confidence in the answer (sometimes the model will tell you that it really does depend on an input that you have not yet provided).
These use cases are deliberately modest. I do not delegate high-stakes decisions or sensitive analysis to AI, and I always operate within the acceptable use requirements in professional contexts.
I continue to tinker, iterate, and experiment on my own AI journey.
This is an ongoing conversation, and I look forward to continuing it with those of you in my network and with anyone as fascinated as I am by the nuance and the new philosophies around the use of AI.