
The Ethics of AI in Consulting: Centering Humanness in a Tech-Driven World

Ethical AI Use Matters

Learn how consultants can align AI tools with values, stay transparent, and mitigate environmental impacts while staying competitive.

AI is here. The cat’s out of the bag. And for those of us working in consulting—especially small, values-driven businesses—this moment calls for discernment, not denial. We can’t just opt out. Our clients are using AI. Our peers are using AI. Nearly every sector is integrating artificial intelligence into workflows, systems, and decision-making tools. If we want to stay relevant and useful, we need to understand how to use AI responsibly.

At Triple Creeks, our approach to ethical AI use starts with grounding in our values: adaptability, stewardship, and equity. It’s not enough to just use the tools—we need to define how and why we’re using them, stay transparent with our clients, and ensure that AI supports rather than replaces the human relationships at the heart of our work.

Why AI Use is Inevitable in Consulting

AI is rapidly becoming integrated into everything from nonprofit CRMs to grant evaluation tools to strategic planning simulations (PlanPerfect is the newest one we’re nerding out about!). That means consultants—especially those helping organizations with operations, planning, or communications—are going to be touching AI whether we plan to or not.

But that doesn’t mean we need to adopt every shiny new tool. Using AI in consulting is not about outsourcing thinking. It’s about supporting our creative, administrative, and analytical work with tools that save time, surface insights, and increase access.

AI is already helping us transcribe interviews, generate first-draft content, summarize documents, and find patterns in large datasets. When used with intention, these tools can help us focus more on what matters: people, purpose, and meaningful impact.


The Environmental Impact of AI: What Consultants Should Know

In keeping with our value of stewardship, before we talk about how AI can help us, we need to acknowledge its harms—especially its environmental impact. AI models, particularly large ones like ChatGPT or Gemini, require enormous computing power. That means water, electricity, and carbon emissions. Here are a few facts:

  • Training GPT-3 used nearly 700,000 liters of freshwater just for cooling purposes (Li et al., 2023, arXiv).
  • Data centers in the U.S., increasingly driven by AI workloads, consumed an estimated 4% of the nation’s electricity in 2022 (IEA, 2023), and that share is growing.
  • AI’s carbon footprint is growing rapidly due to its reliance on cloud computing and energy-intensive GPUs (Strubell et al., 2019; Patterson et al., 2021).

We’re downstream from a lot of these decisions as users—but we’re not powerless. We can take steps like:

  • Choosing tools committed to sustainable AI infrastructure
  • Using AI only when it adds genuine value
  • Disclosing AI use to our clients so they can make informed choices
  • Advocating for regulation, sustainability, and transparency in the tech sector

Human-Centered AI: The Path Forward

At Triple Creeks, we take inspiration from organizations like Techtonic Justice, which champions human-centered AI—AI that supports human needs rather than replacing human connection. That means we use AI to draft, brainstorm, summarize, and analyze—but we never let it take the place of relational trust, facilitation, or creative thinking. We will always be in the driver’s seat, with AI as an occasional co-pilot. We’re not here to automate care, insight, or discernment. We always ask ourselves…

  • Does this tool help us serve people better? Have we clearly defined and codified how we use it?
  • Are we using AI in a way that aligns with our clients’ values and capacities? Do we have their understanding and blessing for its use?
  • Are we using it transparently, without misleading or masking the source of ideas?

…And we encourage our clients to do the same!

Transparency, Trust & Tools: Ethical Practices for Consultants

If Not Now—When?

Clients deserve to know when and how AI is used. It’s about trust and transparency. We clearly disclose when AI was used to draft a document, analyze data, or support content development. And we’re upfront when something could be done by AI but shouldn’t be—because nuance, context, or care is needed. Here’s how we practice transparency in AI:

  • Letting clients know what tools we’re using and why, and integrating feedback and redirection when necessary.
  • Auditing AI-assisted outputs for bias, inaccuracy, or tone mismatches—we always have the final eye and still take our time with whatever it spits out.
  • Making space to talk with clients about their own relationship to AI, and empowering them to learn the tools as well.
  • Not over-relying on AI in areas where human skill and creativity are essential to making something work well.

We also believe that part of responsible AI use is making time to reflect on our own boundaries. Not everything should be outsourced or automated; that boundary is central to sustainable, human-centered growth. Especially when it comes to storytelling, decision-making, or community engagement, there’s still nothing like a human being. So, of course, we hold an internal organizational commitment to the ethical use of AI.

Aligning AI With Internal Values

Your organization might already have a values statement—but have you ever applied it to your AI use? We recommend creating a quick internal alignment checklist. Ask questions like:

  • Are we using AI in ways that uphold our mission?
  • Who benefits from this tool—and who might be harmed?
  • Are we replacing processes that need more attention with ones that need less?
  • How are we staying human while staying efficient?

This kind of reflection keeps your values and client trust at the center. It also creates space for accountability and learning as the technology evolves.

AI Is a Tool, Not a Substitute

AI isn’t going anywhere, and for better or worse, it’s becoming part of every industry’s landscape. That means our responsibility isn’t to avoid it—it’s to meet it with clarity, courage, and care. For consultants like us, that means using AI:

  • Ethically
  • Transparently
  • In alignment with values
  • With a constant eye on human connection

Let’s make sure AI works for us—not the other way around.

Want to talk about how to ethically incorporate AI into your organization or consulting practice?

Reach out. We’re here to help you navigate this new terrain with your values intact.

References

  • Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and Policy Considerations for Deep Learning in NLP. arXiv.
  • Li, P., Yang, J., Islam, M. A., & Ren, S. (2023). Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models. arXiv.
  • IEA (2023). Electricity 2023 – Analysis. International Energy Agency.
  • Patterson, D., Gonzalez, J., Le, Q., Liang, C., Munguia, L. M., Rothchild, D., … & Dean, J. (2021). Carbon Emissions and Large Neural Network Training. arXiv.

