“It wasn’t me, it was AI”, the new scapegoat for breaches

As artificial intelligence becomes more deeply embedded in business operations, so too does the temptation to treat it as a convenient scapegoat when things go wrong. Some may recall pop culture references such as Shaggy’s song “It Wasn’t Me”, which playfully captures the instinct to deny responsibility even when caught in the act. This tendency to deflect blame is not new, and AI is now reflecting some of these very human behaviours back at us.

While AI systems can and do make mistakes, whether through hallucinated outputs or exploitable vulnerabilities, experts caution against placing the blame solely on the technology. Communications strategist Carol Barreyre warns that using AI as a scapegoat can erode trust among stakeholders. She argues that when leaders shift blame to AI, it highlights a lack of oversight and governance, since accountability does not vanish simply because automation is involved. A commentary in InfoWorld similarly likened this behaviour to the outdated practice of blaming interns, a tactic that ultimately reveals weaknesses in leadership rather than resolving the issue at hand.

At its core, artificial intelligence is a tool that enables automation and accelerates decision-making. The fundamental challenges facing Australian organisations today remain largely unchanged from the pre-AI era. These include failure to implement zero trust principles, inadequate application of least privilege, lapses in information protection, ineffective data lifecycle management, and poor records and data governance. What has changed is the scale and speed at which these shortcomings can now be exploited. Attackers are increasingly using AI-driven tools to enhance the effectiveness and reach of their operations. Without strong governance and comprehensive controls, organisations are more vulnerable to these evolving threats.

Although influencing human behaviour remains a complex task, there is a growing recognition of the importance of accountability and transparency. Leaders in corporate governance, academic publishing, and the technology sector are taking proactive steps to promote integrity and reduce blame-shifting. Ethicists continue to stress the need for human oversight in algorithmic decision-making. Relying on AI should not be used as an excuse for poor outcomes. Instead, businesses must focus on improving information hygiene and implementing effective governance and controls to reduce the risk of data breaches and ensure accountability when incidents occur.

The following sections explore these themes in greater depth:

  • Case Studies: Real-World Breach Disclosures
  • Protecting Your Business: The House Analogy
  • The Challenge of Real-World AI Testing: The Lab-to-Reality Gap
  • Human and Social Aspects of Blame-Shifting
  • Conclusion: Strengthening Accountability and Governance in the AI Era

Case Studies: Real-World Breach Disclosures

In the past two years, several high-profile organisations have publicly attributed data breaches or cybersecurity incidents to artificial intelligence (AI) systems or tools. In many cases, AI was framed as the cause or enabler of the breach, often to shift attention away from internal failures in governance, access control, or oversight. Below are four notable examples, followed by an analysis of emerging patterns in how AI is invoked in breach narratives.

Air Canada – Feb 2024
Airline Sector

Incident: AI-powered customer-service chatbot gave a grieving passenger incorrect refund info, leading the customer to incur costs.  

“Wasn’t me” defence: Air Canada argued the “chatbot is a separate legal entity responsible for its own actions,” effectively claiming the AI misled the customer (a defence the tribunal called “remarkable”).  

Source: Tribunal ruling; Ars Technica

McDonald’s – Jul 2025
Fast Food/Retail

Incident: Data privacy breach via “McHire” – an AI-driven recruiting chatbot (built by vendor Paradox.ai) – exposed personal data of job applicants (initial reports speculated that millions of records were at risk; the confirmed impact was five records).  

“Wasn’t me” defence: McDonald’s public statement pinned the blame on “an unacceptable vulnerability from a third-party provider, Paradox.ai,” stressing that the flaw in the AI hiring tool caused the exposure.  

Source: McDonald’s statement (via Fox News)

Salesloft (Drift Chatbot) – Aug 2025
Tech (B2B SaaS)

Incident: Hackers breached Salesloft’s systems and stole OAuth tokens to access ~700 companies’ Salesforce data by exploiting a flaw in Drift, a customer-facing AI chat agent integrated with Salesforce. Stolen credentials were used to export sensitive data (e.g. account records, passwords, API keys) from numerous corporate Salesforce databases.  

“Wasn’t me” defence: Salesloft’s advisory highlighted a “security issue in the Drift application” – effectively pointing to the third-party AI chatbot integration as the weak link. “A threat actor used OAuth credentials to exfiltrate data from our customers’ Salesforce instances,” the company explained, noting that customers not using the AI-driven Drift–Salesforce integration were unaffected. This framing emphasized the AI chatbot tool as the source of the breach.  

Source: Salesloft incident statement; The Hacker News

Fortinet Firewalls – Feb 2026
Cybersecurity

Incident: Over 600 Fortinet FortiGate firewalls worldwide were compromised by a hacking group. The attackers – described as relatively low-skilled – managed to breach systems in 55 countries by automating their campaign with off-the-shelf generative AI tools. AI-generated scripts helped the hackers rapidly scan for vulnerable devices, generate exploit code, and coordinate attacks at a scale and speed that would have been difficult otherwise.  

“Wasn’t me” defence: In analysing the incident, Amazon Web Services’ Chief Security Officer Stephen Schmidt publicly stressed that AI allowed an “unsophisticated” hacker to massively scale their attack, lowering the barrier for cybercriminals. He noted that “AI is making certain types of attacks more accessible to less sophisticated actors who can now leverage AI to enhance their capabilities and operate at greater scale”. By highlighting the role of AI in the attack’s success, the narrative implicitly shifted focus toward the advanced tools employed by criminals rather than solely on firewall or user shortcomings.  

Source: AWS Threat Intelligence report (via CRN)

Emerging Patterns

These cases reveal a growing trend. AI is increasingly cited in breach disclosures, either as the cause of the incident or as a tool that enabled the attacker. In some instances, organisations have used AI as a rhetorical shield, emphasising the novelty or autonomy of the technology to deflect scrutiny from internal lapses in oversight, governance, or vendor management.

This pattern reinforces the need to treat AI systems as part of an organisation’s broader digital infrastructure. Whether AI is developed in-house or integrated through third-party providers, the responsibility for its behaviour and impact remains with the organisation. As these examples show, failing to apply foundational security principles such as least privilege access, zero trust architecture, and strong data governance can leave organisations exposed to both technical and reputational harm. Addressing these challenges requires more than the deployment of technology. It involves building the right organisational processes, defining clear governance structures, and ensuring that people understand and uphold their security responsibilities. Effective controls must be embedded into daily operations, with continuous oversight and accountability. AI systems, like any other business tool, must be governed with clarity and care to ensure they support the organisation’s objectives without introducing unmanaged risk.

Protecting Your Business: The House Analogy

A helpful way to understand cybersecurity in the modern workplace is to think of your organisation as a house. Just as you would not rely on a single lock to protect your home, businesses should not depend on a single layer of defence to secure their systems and data. A basic lock may deter casual intruders, but if valuables are left in plain sight and every household member has unrestricted access, the risk of theft or misuse increases significantly. This scenario reflects the dangers of weak access controls and poor data governance in digital environments.

The principle of least privilege is like ensuring that only certain individuals have keys to specific rooms in the house. Not everyone needs access to the safe, just as not every employee requires access to sensitive financial records or customer data. By limiting access to only those who need it for their role, organisations can reduce the potential impact of both accidental and malicious breaches.
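As a sketch of how this principle looks in practice, the short example below (all role and resource names are illustrative, not from any real system) maps each role only to the “rooms” it genuinely needs and denies everything else by default:

```python
# Illustrative role-to-permission map: each role holds keys only to the
# "rooms" it needs for its job; anything unlisted is denied.
PERMISSIONS = {
    "finance_officer": {"financial_records"},
    "support_agent": {"customer_contact_details"},
    "recruiter": {"applicant_data"},
}

def can_access(role, resource):
    """Deny by default; grant only what the role explicitly needs."""
    return resource in PERMISSIONS.get(role, set())

print(can_access("finance_officer", "financial_records"))  # True
print(can_access("support_agent", "financial_records"))    # False
```

The key design choice is the default: an unknown role or unlisted resource is refused, so access must be granted deliberately rather than removed reactively.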

Taking this further, a zero trust model functions like a multi-layered security system. Even if an intruder manages to get through the front door, they will still face additional barriers such as internal locks, motion sensors, and alarm systems before reaching anything of real value. In the digital world, this translates to continuous verification, segmentation of networks, and multi-factor authentication. For example, accessing a critical system might require approval from two separate individuals, much like a joint bank account that needs dual authorisation for any transaction.
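The dual-authorisation idea above can be sketched in a few lines. This is a hedged illustration only (the function name and threshold are assumptions for the example), showing a critical action that proceeds only when two distinct people have approved it:

```python
def authorise_critical_action(approvals, required_approvers=2):
    """Dual authorisation: the action proceeds only when at least
    `required_approvers` distinct individuals have signed off."""
    return len(set(approvals)) >= required_approvers

print(authorise_critical_action({"alice"}))         # False: one approver is not enough
print(authorise_critical_action({"alice", "bob"}))  # True
```

In a real system the approvals would come from authenticated identities and be logged, but the shape of the control is the same: no single key opens the safe.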

However, securing a business is not simply a matter of deploying technology. It requires a clear understanding of what needs to be protected, how it is accessed, and who is responsible for maintaining those protections. Effective cybersecurity depends on embedding the right controls into daily operations, supported by well-defined governance structures and informed, accountable people. This includes classifying sensitive data, enforcing access policies, and ensuring that security measures are consistently applied and reviewed. Just as a well-maintained home requires regular checks, updates, and responsible occupants, a secure organisation must continuously assess its risk exposure, adapt to new threats, and ensure that its people understand their roles in protecting information. Technology plays a vital role, but it must be part of a broader strategy that includes process maturity, cultural awareness, and strong leadership.

The Challenge of Real-World AI Testing: The Lab-to-Reality Gap

Artificial intelligence systems are often developed and tested in controlled environments, where variables are known, data is clean, and outcomes are predictable. However, once deployed in the real world, these systems are exposed to far more complex, unpredictable, and dynamic conditions. This disconnect between development and deployment environments is commonly referred to as the “lab-to-reality gap”.

In the lab, AI models are typically trained and validated using curated datasets. These datasets are often limited in scope and may not reflect the full diversity of real-world inputs, behaviours, or edge cases. As a result, models that perform well in testing may behave unpredictably when confronted with unfamiliar scenarios, ambiguous language, or adversarial inputs in production.

For example, a chatbot trained on structured customer service queries may struggle to interpret sarcasm, slang, or emotionally charged language when interacting with real users. Similarly, an AI system designed to detect fraud may fail to identify novel attack patterns that were not present in the training data. These limitations are not necessarily due to flaws in the technology itself, but rather in the assumptions made during development about how the system would be used.

The challenge is compounded by the fact that many AI systems are now integrated into critical business processes, such as customer support, recruitment, financial decision-making, and cybersecurity. Failures in these contexts can have significant consequences, including reputational damage, regulatory breaches, and financial loss.

Moreover, the increasing use of generative AI introduces new risks. These systems can produce outputs that appear plausible but are factually incorrect, misleading, or even harmful. Without robust validation mechanisms, organisations may inadvertently act on inaccurate information, leading to poor decisions or unintended outcomes.

Bridging the lab-to-reality gap requires a shift in how AI systems are tested and governed. It is not enough to evaluate performance against benchmark datasets or in sandbox environments. Organisations must adopt practices that simulate real-world conditions, including diverse user behaviours, adversarial scenarios, and operational constraints. This includes:

  • Stress-testing AI systems with unpredictable or ambiguous inputs
  • Monitoring for drift in model performance over time
  • Implementing human-in-the-loop oversight for high-impact decisions
  • Establishing clear escalation paths when AI outputs are uncertain or contested
  • Continuously updating models with new data and feedback from real-world use
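To make the drift-monitoring point above concrete, the sketch below (thresholds and score values are illustrative assumptions, not from any production system) compares model confidence in a live window against a validation baseline and flags a meaningful shift for human review:

```python
import statistics

def confidence_drift(baseline, live):
    """Relative shift in mean model confidence between a baseline
    window and a live window; a crude but useful drift signal."""
    b = statistics.mean(baseline)
    return abs(statistics.mean(live) - b) / b

baseline_scores = [0.92, 0.90, 0.93, 0.91]  # scores observed during validation
live_scores = [0.75, 0.70, 0.78, 0.72]      # scores observed in production

if confidence_drift(baseline_scores, live_scores) > 0.10:  # illustrative 10% threshold
    print("Drift detected: escalate for human review and consider retraining")
```

Real drift detection would use richer statistics over input and output distributions, but even a simple signal like this turns “monitor for drift” from a slogan into a daily check.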

Equally important is the recognition that AI testing is not a one-off event. It is an ongoing process that must be embedded into the lifecycle of AI systems, from design and development through to deployment and maintenance. This requires collaboration across technical, operational, and governance teams to ensure that AI systems remain reliable, secure, and aligned with organisational values and regulatory obligations.

It is also important to acknowledge the financial and operational barriers to effective testing. Many organisations face challenges in trialling AI workloads at scale due to the cost and complexity of replicating production-like environments. As explored in more detail in The Hidden Cost of “Just Turning It On”: Why AI Workloads Are Becoming Harder to Trial Before You Buy, the shift towards consumption-based pricing models and the increasing sophistication of AI systems have made it more difficult for organisations to conduct meaningful pre-deployment evaluations.

Ultimately, closing the lab-to-reality gap is not just a technical challenge. It is a matter of trust. Organisations must demonstrate that they understand the limitations of AI, are transparent about its capabilities, and are committed to responsible deployment. Only then can they realise the benefits of AI while managing the risks it introduces.

Human and Social Aspects of Blame-Shifting

As artificial intelligence becomes more embedded in business operations, it is increasingly being drawn into the social dynamics of accountability. When things go wrong, organisations often face pressure to explain what happened, who was responsible, and how similar incidents will be prevented in future. In this context, AI is sometimes used not just as a tool, but as a convenient scapegoat.

Blame-shifting is not a new phenomenon. In the past, organisations have deflected responsibility by pointing to junior staff, external vendors, or ambiguous processes. Today, AI systems are beginning to occupy a similar role. When an AI model produces an incorrect output, makes a poor decision, or enables a breach, it can be tempting to frame the issue as a failure of the technology itself, rather than a failure of oversight, governance, or design.

This tendency is reinforced by the perception of AI as autonomous or opaque. Phrases like “the algorithm did it” or “the chatbot made a mistake” suggest that the system acted independently, when in reality it was designed, trained, and deployed by people. In some cases, organisations have gone so far as to describe AI systems as separate entities, distancing themselves from the consequences of their own implementations.

The social implications of this are significant. When organisations deflect blame onto AI, they risk undermining trust with customers, regulators, and employees. It signals a lack of accountability and raises questions about whether appropriate controls, testing, and oversight were in place. It also obscures the human decisions that shape AI behaviour, from data selection and model training to deployment and monitoring.

Moreover, this pattern can discourage meaningful learning and improvement. If AI is treated as the problem, rather than a reflection of organisational choices, there is less incentive to examine the underlying causes of failure. This includes gaps in data governance, unclear roles and responsibilities, or inadequate risk management practices.

To address this, organisations must foster a culture of accountability that recognises AI as part of a broader system of people, processes, and technology. This means:

  • Clearly defining ownership and responsibility for AI systems across their lifecycle
  • Ensuring that decisions made by AI are traceable and explainable
  • Embedding human oversight into high-impact or high-risk use cases
  • Being transparent about the limitations of AI and the safeguards in place
  • Responding to incidents with honesty and a commitment to improvement
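The traceability point in the list above can be illustrated with a minimal audit record. This is a hedged sketch (every field name here is an assumption for the example, not a standard schema) that ties each AI output back to the inputs it saw and the person accountable for the system:

```python
import json
import time

def record_ai_decision(model_id, inputs, output, owner):
    """Append-only audit record: links an AI output to its inputs
    and to a named, accountable human owner."""
    return json.dumps({
        "timestamp": time.time(),
        "model": model_id,
        "inputs": inputs,
        "output": output,
        "accountable_owner": owner,
    })

# Hypothetical usage: every decision the system makes leaves a trail.
entry = record_ai_decision("refund-bot-v2", {"query": "refund request"},
                           "approved", "j.smith")
```

The point is less the code than the field list: if an incident review cannot answer “which model, which data, which owner”, the governance gap sits with the organisation, not the algorithm.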

Ultimately, the way organisations talk about AI failures reveals much about their internal culture. Those who take responsibility, learn from mistakes, and invest in better governance are more likely to build trust and resilience. Those who shift blame risk repeating the same errors and eroding confidence in their use of technology.

Conclusion: Strengthening Accountability and Governance in the AI Era

As artificial intelligence becomes more deeply embedded in business operations, the need for robust governance, clear accountability, and thoughtful implementation has never been more urgent. While AI offers significant opportunities for efficiency, insight, and innovation, it also introduces new risks that cannot be addressed through technology alone.

The case studies and examples explored in this report demonstrate that AI-related incidents are rarely the result of the technology acting in isolation. More often, they reflect broader organisational challenges—such as unclear responsibilities, inadequate testing, poor data governance, or a lack of oversight. In some cases, AI has been used as a convenient explanation for failures that stemmed from human decisions or systemic weaknesses.

To move forward responsibly, organisations must recognise that AI is not a substitute for sound judgement, nor is it a shield against accountability. Effective use of AI requires a foundation of well-defined processes, clear roles, and a culture that prioritises transparency and continuous improvement. This includes:

  • Embedding AI systems within existing governance frameworks
  • Ensuring that access to sensitive data is limited and monitored
  • Testing AI models under realistic, dynamic conditions
  • Maintaining human oversight for high-impact decisions
  • Being transparent about the capabilities and limitations of AI tools

Ultimately, trust in AI is built not by claiming perfection, but by demonstrating responsibility. Organisations that invest in the right controls, foster a culture of accountability, and remain vigilant in the face of evolving risks will be better positioned to realise the benefits of AI while protecting their people, data, and reputation.

References and Acknowledgements

This report was informed by a range of publicly available sources, cited inline throughout.

This blog post was developed with the assistance of artificial intelligence to support research, drafting, and editorial refinement. All facts and references have been reviewed for accuracy and relevance.

The Hidden Cost of “Just Turning It On”: Why AI Workloads Are Becoming Harder to Trial Before You Buy

Enterprise AI is rapidly moving toward consumption‑based pricing models. On paper, this makes sense: customers pay for compute, scale with usage, and avoid rigid per‑user licences.

In practice, however, this shift is introducing a growing and often overlooked problem:

It’s becoming harder for customers and experts to safely trial and evaluate AI workloads before committing financially.

Microsoft Security Copilot is a notable real-world example of this trend, though it is not the sole instance.

Executive Summary

Across the industry, many enterprise AI workloads are adopting compute‑metered, consumption‑based pricing. While this approach aligns costs with usage, it increasingly shifts financial risk to the evaluation phase, before value is proven. Microsoft Security Copilot is a visible example of this broader challenge, not an isolated case.

When AI features are included with premium licences such as Microsoft 365 E5, users can try out AI tools without paying extra. If these features aren’t bundled, however, testing them usually means provisioning always-on compute resources that incur charges continuously, whether they are actually used or not.

This creates significant friction for customers and for security professionals, architects, and consultants who need to test AI tools using real telemetry, real alerts, and real operational noise. Experts want to triage incidents, investigate edge cases, and stress AI systems using data they generate themselves. Guided walkthroughs, documentation, or tenants preloaded with synthetic “happy path” data are useful for orientation, but they are insufficient to expose limitations or operational shortcomings.

As a result, many AI workloads are effectively evaluated only after financial commitment, or at the customer’s expense, limiting independent validation and informed decision‑making. This is not a critique of AI value, but a growing misalignment between how AI is priced and how it must be learned, tested, and trusted.

The Pricing Model Makes Sense — Until You Try to Learn

From a vendor perspective, consumption‑based AI pricing is rational:

  • AI compute is expensive
  • Usage varies dramatically
  • Static per‑user pricing doesn’t reflect real load

For organisations already invested in premium bundles, this works reasonably well.

Security Copilot as an Example (E5 Tenants)

For a tenant with 1,000 Microsoft 365 E5 licences, Microsoft includes:

  • 400 Security Compute Units (SCUs) per month

In low‑usage scenarios:

  • A limited number of active users
  • Occasional prompts or investigations
  • Light incident summaries

👉 The additional monthly cost can realistically be $0, if usage stays within that included capacity.

This is a good outcome. It encourages experimentation inside production environments and reduces adoption friction.

Where the Model Breaks: Evaluation Outside Premium Bundles

The challenge emerges the moment evaluation happens outside a premium licence bundle — whether for:

  • Demo tenants
  • Lab environments
  • Partner testing
  • Consultant sandboxes
  • Pre‑sales or architecture validation

In these scenarios, Security Copilot (like many AI workloads) requires:

  • Provisioned compute capacity
  • Billed continuously, per hour
  • Regardless of whether the service is used

For Security Copilot specifically:

  • Minimum: 1 SCU
  • Cost: ~$4 per SCU per hour
  • Billing: 24×7 while provisioned

This is not “pay per prompt”. It is pay for availability.

When “$4” Quietly Becomes Thousands

One of the most common misunderstandings with AI pricing is the unit of time.

“It’s only $4.”

Yes — per hour.

That means:

  • 1 SCU × ~$4/hour × ~730 hours (an average month, billed 24×7)
  • ≈ $2,920 per month

For a single, idle unit.

Multiply that across workloads or forget to deprovision, and costs scale very quickly.
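The arithmetic above is simple enough to sanity-check in a few lines. The sketch below uses the article’s own figures (~$4 per SCU-hour, ~730 hours in an average month); the function name is just for illustration:

```python
def provisioned_monthly_cost(scus, rate_per_hour=4.0, hours_per_month=730.0):
    """Projected monthly cost of always-on provisioned capacity.
    ~730 hours approximates an average month billed 24x7."""
    return scus * rate_per_hour * hours_per_month

print(provisioned_monthly_cost(1))  # 2920.0 for a single idle SCU
print(provisioned_monthly_cost(3))  # 8760.0 if three workloads are left running
```

Running this before enabling any metered AI workload, with your own rate and unit count, is a cheap way to make the “pay for availability” exposure visible up front.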

A Real Evaluation Scenario (And an Expensive Lesson)

In a demo tenant:

  • Security Copilot was enabled
  • 1 SCU provisioned
  • No prompts executed
  • No active use

It was left provisioned for just under nine days (~212 hours).

The resulting charge:

  • $850.04

This wasn’t a billing error. This wasn’t misuse. This was simply:

  • ~212 hours × $4/hour

There was no end‑of‑month credit. No “unused capacity” adjustment.

Once compute is provisioned, the meter runs.

Why This Is a Bigger Problem Than One Product

Security Copilot is just one example of a wider AI evaluation problem.

Experts Need Real Data, Not Happy Paths

Security professionals, architects, and consultants don’t evaluate tools by reading guides alone.

They need to:

  • Generate real alerts
  • Ingest noisy, imperfect telemetry
  • Triage incidents under pressure
  • Observe how AI behaves when data is incomplete or contradictory

That kind of evaluation:

  • Requires live data
  • Requires control over the environment
  • Requires time to experiment and break things

Preloaded demo tenants and guided scenarios are useful introductions, but they do not expose operational limitations.

Evaluation Now Happens After Commitment

Because of cost exposure:

  • Customers hesitate to “just try it”
  • Experts can’t easily test independently
  • Validation often happens after purchase

In many cases:

  • Evaluation is pushed into production
  • Or absorbed as part of a customer engagement

That’s not how trust in AI systems is built.

This Isn’t About Cost — It’s About Friction

The issue isn’t that AI workloads are “too expensive”.

In many real‑world scenarios:

  • Costs are low
  • Or already covered by existing licences

The issue is that learning has a price tag.

When:

  • Experimentation incurs immediate cost
  • Idle time is billable
  • There’s no safe sandbox

People stop experimenting. And AI adoption slows.

What Would Help (Across All AI Workloads)

A few changes would dramatically improve evaluation without undermining commercial models:

  • Time‑boxed compute trials (e.g. limited SCU hours)
  • Capped evaluation allowances
  • Pause/hibernate functionality for AI capacity
  • Expert or partner sandbox environments
  • Clearer cost warnings at enablement

These reduce the cost of learning, not the value of running.

Final Thought

AI systems demand trust. Trust demands hands‑on experience. Hands‑on experience demands safe experimentation.

Right now, for many AI workloads, it’s easier to justify buying than it is to safely try.

Security Copilot illustrates the issue well — but the challenge is broader than any single product.

If enterprise AI is to scale responsibly, vendors need to lower the barrier to learning, not just optimise the cost of consumption.

Are You Winning? Finding the Middle Ground for a Healthy Work–Life Balance in IT Consulting

Why boundary‑setting — especially learning to say “no” without saying the word — is one of the most important skills in consulting.

Introduction

If you’ve worked in IT consulting for any length of time, you already know the industry demands more than technical expertise. It pulls at your time, your focus, and often your personal life. After years of navigating different consulting environments, one lesson has become clear:

The hardest skill in this industry isn’t solving complex problems — it’s setting boundaries that protect your well-being.

And at the core of that skill is this question:
How do you say “no”… without actually saying “no”?

Most consultants avoid disappointing others. Instead of declining outright, we offer gentle deferrals like, “Happy to help… let me get back to you soon.” It feels polite, but it still creates an expectation — and that expectation becomes your obligation.

This article explores how to find a sustainable middle ground between being seen as reliable and preserving your personal life, your health, and your long‑term career value.

Why Saying “Yes” Comes Naturally — and Why It Can Hurt You

Consulting trains us to deliver. To please. To be the person who “gets things done.”
But saying yes too often comes with real costs.

The Upside of “Yes”

  • Clients and colleagues enjoy working with you.
  • You avoid conflict or negative perceptions.
  • The company saves time and money.
  • If you work on commission, yes = more income.
  • You build new skills and open the door to more opportunities.

The Downside of Too Many Yeses

  • Long hours quietly become your new normal.
  • Family time disappears, straining relationships.
  • Quality drops when you spread yourself too thin.
  • You inadvertently block underutilised colleagues from work.
  • Stress rises, health declines, burnout looms.
  • Expectations escalate — what was exceptional becomes expected.

Yes isn’t always bad. But a yes without boundaries is dangerous.

Why “No” (or Something Like It) Is Often the Healthiest Answer

Saying no isn’t about being unhelpful — it’s about being sustainable.
But most consultants struggle to say the word itself.

That’s why the real skill is declining work without ever using the word “no.”

The Benefits of Setting Boundaries

  • You preserve time for family, rest, and personal life.
  • You protect the quality of your deliverables.
  • You create space for learning and professional growth.
  • You maintain a healthier lifestyle and mindset.
  • Expectations become realistic again.
  • Projects succeed because you aren’t stretched thin.

The Challenges

  • Some stakeholders may dislike boundaries.
  • Relationships can temporarily feel strained.
  • You may be overlooked by people who reward availability over sustainability.

But the people who value quality will always appreciate you more in the long run.

The Hidden Trap of “Maybe”

“Maybe” feels safer than yes or no — but in reality, it’s often the worst of both worlds.

The Problem With Maybe

  • It creates ambiguity and stress.
  • Clients think “maybe later” means “probably soon.”
  • You still feel obligated.
  • You still work extra hours.
  • Your personal life still suffers.
  • Quality still drops.
  • Expectations still increase.
  • Projects still suffer when you stretch too thin.

“Maybe” is usually a soft yes in disguise — with all the negative consequences of yes and none of the clarity of no.

How to Say “No” Without Saying the Word

Here are practical, high‑impact ways to decline work without using the word “no” — grounded in the classic “No, but…” technique.

Use these phrases as‑is or adapt them to your tone.

Option 1: Redirect the Timeline

“I can help, but next week is the earliest I can give this proper attention.”

Option 2: Offer Partial Support

“I can take on this part of the task, but the rest will need someone else.”

Option 3: Shift the Decision Back

“To give this the focus it deserves, I’d need to reshuffle priorities. Which task should take a back seat?”

Option 4: Protect Your Capacity

“I’m fully booked today, but I can recommend someone who has availability right now.”

Option 5: Quality‑Based Decline

“I want to ensure the quality stays high, and I don’t currently have the bandwidth to deliver that standard.”

Option 6: Set Clear Expectations

“I can support this, but the earliest I can commit to is Thursday — does that work?”

When You Do Need to Use the Actual Word “No”

Most of the time, you can avoid saying the word entirely.
But sometimes — rarely — people genuinely need to hear an explicit “No.”

Use a clear, direct “No” when:

Someone repeatedly ignores your boundaries

If you’ve redirected timelines, clarified priorities, and offered alternatives — and the same request keeps coming — it’s a sign the message isn’t landing.

You feel genuinely overwhelmed

When your capacity is beyond stretched, soft language only delays the inevitable and increases the stress.

The request is unreasonable or unfair

Sometimes the only professional, honest answer is:

“No — this isn’t realistic for me.”

The escalation risk is higher if you avoid clarity

Ambiguity here can cause reputation or project damage.

A direct “No” should be your last resort, but it is a tool you should be willing to use when your well-being or professional integrity requires it.

A well‑placed, confident No can reset expectations faster than any soft phrasing.

Building a Sustainable Middle Ground

A healthier work–life balance doesn’t rely on yes, no, or maybe; it depends on strategic control.

Set Clear Boundaries Early

People treat you based on what you tolerate.

Prioritise High‑Value Work

Focus your energy on what truly matters.

Use “No‑Without‑No” Techniques First

Protect relationships and your time.

Use an Explicit “No” Sparingly — But When Needed

It’s a boundary‑reset, not a failure.

Protect Non‑Work Time

Treat personal time like a meeting with your future self.

Track Your Workload Honestly

If you’re consistently overworked, it’s not a badge of honour—it’s a system that needs fixing.

Communicate Transparently

Clear communication prevents unrealistic expectations.

Remember: Your Career Belongs to You

Companies benefit when you say yes.
You benefit when you choose your yes carefully.

Conclusion

Work–life balance in IT consulting isn’t about choosing between being helpful and protecting yourself.
It’s about the confidence to decline work gracefully, respectfully, and professionally — and knowing when clarity requires a firm No.

Most of the time, you won’t need the word.
But when you do?
It’s a powerful tool for defending your boundaries, your well-being, and your long‑term success.

And when you find that balance?
That’s when you start winning.

Okta AD Integration with Azure AD Domain Services

1. Introduction

This is an experimental article that takes an existing Azure Active Directory (Azure AD) and Azure AD Domain Services deployment and integrates it with an Okta solution.

2. Preparation tasks

3. Assumptions

The following assumptions are made in this article:

  • A Windows Server 2012 R2 member server joined to the Azure AD Domain Services domain
  • The member server has internet access
  • An Okta free trial without any modifications made

 

4. Installation

4.1 Create Service Account in Azure AD

  1. Log into Azure AD, go to Users and click “Add user”.
  2. In “Type of user”, choose “New user in your organization”.
  3. In “User name”, use the company service account naming convention, e.g. okta.
  4. In “First name”, “Last name” and “Display name”, enter Okta.
  5. In “Role”, choose “User”.
  6. Create a temporary password, and document the password for the next step.
  7. Go to http://portal.office.com, log in as the new user and set a permanent password.
  8. Password expiry is enabled on the account by default and should be disabled by using Azure AD PowerShell.
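The password-expiry step above can be done with the MSOnline (Azure AD v1) PowerShell module; a minimal sketch, where the UPN is a placeholder for the service account created above:

```powershell
# Assumes the MSOnline module is installed (Install-Module MSOnline).
Connect-MsolService
# Disable password expiry for the service account (placeholder UPN):
Set-MsolUser -UserPrincipalName "okta@company.onmicrosoft.com" -PasswordNeverExpires $true
# Verify the change:
Get-MsolUser -UserPrincipalName "okta@company.onmicrosoft.com" |
    Select-Object UserPrincipalName, PasswordNeverExpires
```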

4.2 Okta AD Agent Install

Please follow these steps for integrating Azure AD Domain Services with Okta:

  1. Log on to the domain-joined server that will run the Okta agent.
  2. Go to your Okta administrator URL, e.g. https://<Company>-admin.okta.com/admin/dashboard
  3. On the top navigation bar, go to Security, Authentication.
  4. Click “Configure Active Directory”.
  5. Click “Set Up Active Directory”.
  6. Click “Download Agent”.
  7. Once the agent has finished downloading, run the installation.
  8. On the Welcome screen, click “Next”.
  9. Choose the path for the installation and click “Install”.
  10. In the Domain field, enter the company domain, e.g. schmarr.com, and click “Next”.
  11. Choose “Use an alternate account that I specify”.
  12. Enter the service account username and password and click “Next”.
  13. At Okta AD Agent Proxy Configuration, click “Next”.
  14. At Register Okta AD Agent, choose Production and enter the company name in “Enter Subdomain”.
  15. Click “Next”.
  16. Sign in with your Okta admin account.
  17. Click “Allow Access”.
  18. Once the agent is installed, click “Next”.
  19. In “Basic Settings”, choose values appropriate for your organization.
  20. Click “Next” on the following two screens.
  21. In “Select the attributes to build your Okta user profile”, click “Next”.
  22. Done.
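After the installation finishes, it is worth confirming that the agent service is running on the member server; a hypothetical check (the display-name filter is an assumption about how the agent service is named):

```powershell
# List any Okta-related services and their state:
Get-Service | Where-Object { $_.DisplayName -like "*Okta*" } |
    Select-Object Status, Name, DisplayName
```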

Conclusion

SaaS applications can now be added in both Azure AD and Okta, with Azure AD acting as the identity store for both.

Integrate SharePoint with Azure AD

1. Introduction

This article shows the quick configuration tasks that are required to make Azure AD a trusted identity provider for a SharePoint 2013 installation.

2. Assumptions

The following assumptions are made in this article:

3. Preparation

Before starting with the article the following needs to be in place:

  • Azure AD PowerShell tools installed, look here for more details.

4. Configuration

The configuration will be broken into the following sections:

  • Azure AD configuration
  • SharePoint configuration
  • Assigning Users

4.1 Azure AD Configuration

Follow these tasks to document the Azure AD WS-Federation metadata URL for later use:

  1. In the Azure Management Portal (Classic), Click Active Directory.
  2. Click on the Azure AD that will be integrated with SharePoint 2013
  3. Click Applications
  4. On the bottom bar, Click View Endpoints
  5. Document the Federation metadata document URL for later use

Follow these tasks to create and configure the namespace in Azure AD:

  1. In the Azure Management Portal (Classic), Click Active Directory.
  2. Click Access Control Namespaces, create a new namespace and call it “Company”
  3. Click Manage on the bottom bar. This should open https://company.accesscontrol.windows.net/v2/mgmt/web.
  4. Click Identity Providers, Click Add
  5. Click WS-Federation identity provider, click Next.
  6. In Display name, enter “Company”
  7. In Login link text, enter “Company”
  8. In WS-Federation metadata, choose URL and enter the URL that was documented in the tasks above. Example: https://accounts.accesscontrol.windows.net/company.onmicrosoft.com/FederationMetadata/2007-06/FederationMetadata.xml
  9. Click Save
  10. Click Relying party applications, then click Add
  11. Enter the following in each field:
    1. Name: “Company SharePoint”
    2. Realm: “urn:sharepoint:company”
    3. Token format: SAML 1.1
    4. Token format: SAML 1.1 — the default token lifetime is 600 seconds; the recommended value is 2 hours (7200 seconds)
  12. Click Save
  13. Click Rule Groups, and then Add
  14. Click Generate
  15. Click Add
  16. Fill in all the required fields for the claim rules in Azure Access Control
  17. Click Save
  18. Delete the existing claim rule named upn
Extract the X.509 certificate from Azure Access Control for later use:

  1. In the Access Control Service pane, under Development, click Application integration.
  2. In Endpoint Reference, locate the Federation.xml that is associated with your Azure tenant, and open that location in a browser.
  3. In the Federation.xml file, locate the RoleDescriptor section, and copy the information from the <X509Certificate> element.
  4. From the root of drive C:\, create a folder named Certs.
  5. Using Notepad, save the X509Certificate information to the folder C:\Certs and name the file ACS.cer.
  6. Run the following PowerShell commands:
    1. Connect-MsolService
    2. Import-Module MSOnlineExtended -Force
    3. $replyUrl = New-MsolServicePrincipalAddresses -Address "https://company.accesscontrol.windows.net"
    4. New-MsolServicePrincipal -ServicePrincipalNames @("https://company.accesscontrol.windows.net") -DisplayName "Company ACS Namespace" -Addresses $replyUrl
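The manual certificate extraction above can also be scripted; a sketch, assuming the placeholder tenant name “company” and that the metadata URL documented earlier is reachable:

```powershell
# Download the federation metadata and save the first <X509Certificate> value
# as a Base64 .cer file for later use.
$url = "https://accounts.accesscontrol.windows.net/company.onmicrosoft.com/FederationMetadata/2007-06/FederationMetadata.xml"
[xml]$metadata = (New-Object System.Net.WebClient).DownloadString($url)
$b64 = ($metadata.GetElementsByTagName("X509Certificate") | Select-Object -First 1).'#text'
New-Item -ItemType Directory -Path "C:\Certs" -Force | Out-Null
Set-Content -Path "C:\Certs\ACS.cer" -Value @(
    "-----BEGIN CERTIFICATE-----", $b64, "-----END CERTIFICATE-----")
```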

4.2 SharePoint 2013 Configuration

Follow these steps to configure Azure AD as the identity provider for SharePoint 2013:

  1. From the Start menu, click All Programs.
  2. Click Microsoft SharePoint 2013 Products.
  3. Click SharePoint 2013 Management Shell
  4. Run the following PowerShell commands:
    1. $root = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\Certs\ACS.cer")
    2. New-SPTrustedRootAuthority -Name "Token Signing Cert Parent" -Certificate $root
    3. $cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\Certs\ACS.cer")
    4. New-SPTrustedRootAuthority -Name "Token Signing Cert" -Certificate $cert
    5. $map1 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn" -IncomingClaimTypeDisplayName "UPN" -SameAsIncoming
    6. $map2 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" -IncomingClaimTypeDisplayName "GivenName" -SameAsIncoming
    7. $map3 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname" -IncomingClaimTypeDisplayName "SurName" -SameAsIncoming
    8. $map4 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" -IncomingClaimTypeDisplayName "Role" -SameAsIncoming
    9. $realm = "urn:sharepoint:company"
    10. $signInURL = "https://company.accesscontrol.windows.net/v2/wsfederation"
    11. $ap = New-SPTrustedIdentityTokenIssuer -Name "ACS Provider" -Description "SharePoint secured by SAML in ACS" -realm $realm -ImportTrustCertificate $cert -ClaimsMappings $map1,$map2,$map3,$map4 -SignInUrl $signInURL -IdentifierClaim "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"
  5. Verify that the user account that is performing this procedure is a member of the Farm Administrators SharePoint group.

    In Central Administration, on the home page, click Application Management.

  6. On the Application Management page, in the Web Applications section, click Manage web applications.
  7. Click the appropriate web application.
  8. From the ribbon, click Authentication Providers.
  9. Under Zone, click the name of the zone. For example, Default.
  10. On the Edit Authentication page, in the Claims Authentication Types section, select Trusted Identity provider, and then click the name of your provider, which for purposes of this article is ACS Provider. Click OK.
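Once the provider is selected, a quick sanity check from the SharePoint 2013 Management Shell confirms the trusted identity token issuer is registered:

```powershell
# Should return the issuer created earlier, including its claim mappings:
Get-SPTrustedIdentityTokenIssuer -Identity "ACS Provider"
```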

4.3 Assigning Users

Use the following steps to set the permissions to access the web application.

  1. In Central Administration, on the home page, click Application Management.
  2. On the Application Management page, in the Web Applications section, click Manage web applications.
  3. Click the appropriate web application, and then click User Policy.
  4. In Policy for Web Application, click Add Users.
  5. In the Add Users dialog box, click the appropriate zone in Zones, and then click Next.
  6. In the Add Users dialog box, type user2@company.onmicrosoft.com (ACS Provider).
  7. In Permissions, click Full Control.
  8. Click Finish, and then click OK.

Conclusion

Azure AD is the trusted identity provider for SharePoint 2013, and Azure AD users will be able to authenticate and use SharePoint 2013 resources.

External Links

Some good extra reading articles:

 

 

Quick Install Guide For SharePoint Foundation 2013

1. Introduction

This quick install guide will assist in installing a SharePoint Foundation 2013 server to address certain technical or business requirements. This type of installation has some limitations and might not be fit for production deployments.

2. Preparation Tasks

The following preparation tasks will be required before starting the SharePoint 2013 Foundation installation:

3. Assumptions

The following assumptions are made during the creation of this article:

  • Active Directory or Azure AD Domain Services is up and running
  • An Active Directory member server, running Windows Server 2012 R2
  • Unrestricted internet access
  • An SSL certificate is available for the site
  • Experience with SSL certificates
  • Access to a DNS server to create records

    If using Azure AD Domain Services, changing DNS records will not be allowed.

4. Installation

The installation is broken up into two parts:

  1. Framework installation
  2. Configuration

4.1 Framework installation

  1. Run the “sharepoint.exe” that was downloaded in the preparation tasks.
  2. Click “Install software prerequisites”.
  3. Click “Next”.
  4. Check “I accept the terms of the License Agreement(s)”.
  5. Click “Next”.

    The process will install all required roles and software. During the installation the server will be rebooted twice; log on with the same user account that was used to start the installation to continue.

  6. Run the “sharepoint.exe” again.
  7. Click “Install SharePoint Foundation”.
  8. Choose the “Stand-alone” installation.
  9. Click “Install Now”.
  10. Check “Run the SharePoint Products Configuration Wizard now.”.
  11. On the Welcome screen, click “Next”.
  12. On the warning dialog, click “Yes”.
  13. Click “Finish”.

4.2 Configuration

Once the steps above have completed, SharePoint Foundation will be installed and running. Users will be able to connect to the default SharePoint team site by using the http://<servername> URL.

To change the default URL to the required URL, follow these steps:

  1. Import the SSL certificate into the local computer store
  2. Open “SharePoint 2013 Central Administration”
  3. Under “System Settings”, click “Configure alternate access mappings”
  4. Click “Edit Public URLs”
  5. In the “Alternate Access Mapping Collection:” list, choose “SharePoint – 80”
  6. In “Default”, change http://<servername> to https://<newURL>, e.g. https://sharepoint.company.com
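Step 1 (importing the SSL certificate) can be done in PowerShell; a sketch, assuming the certificate was exported as a PFX file to a placeholder path:

```powershell
# Import the certificate into the local computer's Personal store:
$pfxPassword = Read-Host -AsSecureString -Prompt "PFX password"
Import-PfxCertificate -FilePath "C:\Certs\sharepoint.pfx" `
    -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword
```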

IIS Manager Configuration

The following task should be done on IIS Manager to allow the configuration changes:

  1. Open the “Internet Information Services (IIS) Manager” console
  2. Go to <SERVERNAME>\Sites\ and click “SharePoint – 80”
  3. On the right-hand side, click “Bindings…”
  4. Click “Add…”
  5. In Type, choose “HTTPS”
  6. In Host name, enter the new DNS address, e.g. sharepoint.company.com
  7. In SSL certificate, choose the imported SSL certificate
  8. Click “OK”
  9. Remove the “http” binding
  10. Click “Close”
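The binding changes above can also be scripted with the WebAdministration module; a sketch, assuming a certificate for sharepoint.company.com is already in the LocalMachine\My store:

```powershell
Import-Module WebAdministration
# Add the HTTPS binding for the new host name:
New-WebBinding -Name "SharePoint - 80" -Protocol https -Port 443 -HostHeader "sharepoint.company.com"
# Attach the imported certificate to port 443:
$cert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*sharepoint.company.com*" } | Select-Object -First 1
New-Item -Path "IIS:\SslBindings\0.0.0.0!443" -Value $cert | Out-Null
# Remove the old HTTP binding:
Remove-WebBinding -Name "SharePoint - 80" -Protocol http -Port 80
```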

Users should now be able to use the new secure URL to access the SharePoint team site, e.g. https://sharepoint.company.com

P.S. Make sure to add the new URL to users’ Internet Explorer local intranet zone.

Conclusion

A basic SharePoint Foundation 2013 team site will be running and available for the business. The solution uses the Windows Internal Database and has some limitations.

 

 

AD CS Install Guide For Azure AD Domain Services

1. Introduction

Active Directory Certificate Services (AD CS) provides customizable services for issuing and managing public key certificates used in software security systems that employ public key technologies.

Azure AD Domain Services allows administrators only limited access to the Active Directory instance, so only a standalone Certificate Authority (CA) deployment is possible.

More information about AD CS can be found here.

2. Assumptions

The following assumptions are made during the creation of this article:

  • Azure AD Domain Services is up and running
  • An Active Directory member server, running Windows Server 2012 R2
  • Experience with Microsoft Certificate Authority
  • Experience with Active Directory

3. Installation

Please follow the instructions below to install a standalone CA:

Disclaimer:

This is a quick and basic installation; evaluate whether it meets your business and security requirements.

  1. Run the following PowerShell Command as Administrator
    1. Install-WindowsFeature AD-Certificate,ADCS-Cert-Authority,ADCS-Web-Enrollment -IncludeManagementTools
  2. Run the following PowerShell command as Administrator
    1. Install-AdcsCertificationAuthority -CAType StandaloneRootCa
  3. Run the following Powershell command as Administrator
    1. Install-AdcsWebEnrollment
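A quick check that the certification authority service came up after the installation:

```powershell
# The AD CS service name is CertSvc; it should report Running:
Get-Service CertSvc | Select-Object Status, Name, DisplayName
```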

4. Configuration

After the completion of section 3, the AD CS service should be up and running with the default configuration. Here are some recommendations for making AD CS more secure and production-ready.

Steps to import the Root CA as a trusted authority on all domain-joined servers and machines:

  1. Download Root CA:
    1. Go to http://<Servername>/certsrv
    2. Click “Download a CA certificate, Certificate chain, or CRL”
    3. Click “Download CA certificate”
    4. Save the file for next steps
  2. Open Group Policy Management (Follow below to install Group Policy Management on a Member Server)
    1. Run the following PowerShell Command as Administrator
      1. Install-WindowsFeature GPMC
  3. Edit the “AADDC Computers” GPO
  4. Go to “Computer Configuration\Windows Settings\Security Settings\Public Key Policies\Trusted Root Certification Authorities” section
  5. Import the Root CA into the section above
  6. Close the Group Policy
  7. To allow the group policy to take effect:
    1. reboot the member servers, or
    2. run “gpupdate /force” as administrator
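For a single server, or to verify the GPO result, the Root CA certificate can also be imported directly; the file path below is a placeholder for the certificate downloaded in step 1:

```powershell
# Import the Root CA into the local machine's Trusted Root store:
Import-Certificate -FilePath "C:\Temp\RootCA.cer" -CertStoreLocation Cert:\LocalMachine\Root
```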

5. Conclusion

Azure AD Domain Services domain-joined servers will now install and trust certificates issued by the new standalone CA.

 

 

Azure AD Domain Services Quick Install

Introduction

Azure Active Directory Domain Services lets you join Azure virtual machines to a domain without the need to deploy domain controllers; more detail can be found here.

This article shows a quick way to install and configure Azure AD Domain Services. Other options might be required for a production deployment and are not highlighted in this article.

At the time of writing, most of the configuration is done in the Azure portal (Classic); Microsoft is planning to move everything to the new Azure portal.

Assumptions

The following assumptions are made in this article:

  • A functional Azure AD – a quick guide can be found here
  • Access to Azure Subscription

Preparation Tasks

The following preparation tasks will be required before starting the installation process below:

Installation

This section is divided into the following parts:

To create all the required Azure resources, please follow the steps below:

1. Azure Virtual Network

  1. Go to https://manage.windowsazure.com
  2. Click “+ NEW”.
  3. Click “Network Services”, “Virtual Network” and then click “Custom Create”.
  4. In Name, enter the required network name.
  5. Choose the correct Location.
  6. On Page 2, leave DNS servers empty for now.
  7. On Page 3, enter the required Address space range and Subnets for the network.
  8. Click the check mark to create the network.

2. Create ‘AAD DC Administrators’ Group

To allow users to manage Azure AD Domain Services, you’ll first need to create a group in Azure AD called ‘AAD DC Administrators’ and add all the users that should have admin rights.

For more detailed tasks, please have a look here.
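If the MSOnline PowerShell module is available, the group can also be created from the command line; a sketch with a placeholder admin UPN (note the group must be named exactly ‘AAD DC Administrators’):

```powershell
Connect-MsolService
# Create the delegated administration group:
$group = New-MsolGroup -DisplayName "AAD DC Administrators" `
    -Description "Delegated group to administer Azure AD Domain Services"
# Add an admin user to it (placeholder UPN):
$user = Get-MsolUser -UserPrincipalName "admin@company.onmicrosoft.com"
Add-MsolGroupMember -GroupObjectId $group.ObjectId -GroupMemberType User `
    -GroupMemberObjectId $user.ObjectId
```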

3. Azure AD Domain Services

  1. Go to https://manage.windowsazure.com/
  2. On the left menu, find “ACTIVE DIRECTORY”.
  3. Click the required Azure AD in the list provided.
  4. Click the “CONFIGURE” tab.
  5. Scroll down and find the “domain services” section.
  6. Change “ENABLE DOMAIN SERVICES FOR THIS DIRECTORY” to “YES”.
  7. Change “DNS DOMAIN NAME OF DOMAIN SERVICES” to the required suffix.
  8. For “CONNECT DOMAIN SERVICES TO THIS VIRTUAL NETWORK”, choose the network that was created in the steps above.
  9. Click “Save”.
  10. The creation might take some time to complete. Once completed, DNS server IP addresses will be provided for use in the created virtual network. (Please follow the steps below to finish the virtual network configuration.)

4. Configure Azure Virtual Network DNS Servers

  1. Go to https://manage.windowsazure.com/
  2. On the left menu, find “ACTIVE DIRECTORY”.
  3. Click the required Azure AD in the list provided.
  4. Click the “CONFIGURE” tab.
  5. Scroll down and find the “domain services” section.
  6. Document the IP addresses in the “IP ADDRESS” section for the next steps.
  7. On the left-hand menu, choose “NETWORKS”.
  8. Open the network that was created and enabled for Azure AD Domain Services.
  9. Click “CONFIGURE”.
  10. In the “dns servers” section, enter the two DNS servers documented in the previous step.
  11. Click “SAVE”.

 

Before using Azure AD domain services, please follow this guide to enable password synchronization.

Conclusion

By the end of this guide, Azure AD Domain Services will be functional, with the ability to domain-join Azure virtual machines.

Filtering on Azure AD Connect

Introduction

This article adds a filter to Azure AD Connect so that only user accounts with a valid email address are synchronized. Additional options may be required by the organization and more detail can be found here.

Preparation Tasks

The following tasks should be completed before starting the process:

  1. Azure AD Connect is installed and configured – see “Getting Started with Azure Active Directory Free Edition”
  2. Administrator Access for Azure AD Connect Server

Adding the Filter

The following tasks should be performed on the Azure AD Connect server:

Disable scheduled task

To disable the scheduled task which will trigger a synchronization cycle every 3 hours, follow these steps:

  1. Start Task Scheduler from the start menu.
  2. Directly under Task Scheduler Library, find the task named Azure AD Sync Scheduler, right-click it and select Disable.
  3. You can now make configuration changes and run the sync engine manually from the Synchronization Service Manager console.

After you have completed all your filtering changes, don’t forget to come back and Enable the task again.
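Newer builds of Azure AD Connect replace the scheduled task with a built-in scheduler; on those builds, the equivalent pause/resume steps are (a sketch using the ADSync module that ships with the tool):

```powershell
Import-Module ADSync
Set-ADSyncScheduler -SyncCycleEnabled $false   # pause automatic sync
# ... make the filtering changes described below, then ...
Start-ADSyncSyncCycle -PolicyType Initial      # full sync to apply the new rules
Set-ADSyncScheduler -SyncCycleEnabled $true    # re-enable the scheduler
```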

  1. Open the “Synchronization Rules Editor”.
  2. Click “Inbound”.
  3. Find the “In From AD – User Join” rule and click “Edit”.
  4. Click “Yes”.
  5. In “Precedence”, enter “500”.
  6. Click “Next”.
  7. Only include users that have an email address:
    1. Click “Add clause”.
    2. In the Attribute field, choose “mail”.
    3. In the Operator field, choose “ISNOTNULL”.
  8. Add the company email domain (optional – checking whether the user has an email address solves most cases):
    1. This rule assumes you have only one email domain; it will not work for multiple domains.
    2. Click “Add clause”.
    3. In the Attribute field, choose “mail”.
    4. In the Operator field, choose “ENDSWITH”.
    5. In the Value field, enter “<email>.<domain-name>”.
  9. Apply and verify the changes.
  10. Enable the scheduled task.

Conclusion

On completion of this article, the organization will only sync user accounts that have a valid email address into Azure AD.

 

 

 

Getting Started with Azure Active Directory Free Edition

 Introduction

Azure Active Directory (Azure AD) is Microsoft’s multi-tenant cloud based directory and identity management service.

More in-depth detail about Azure AD can be found here.

This article illustrates the registration process and the essential configuration tasks for the Azure AD free edition, for use by an organization’s internal users. (Future posts will look at other scenarios.)

Preparation tasks

The following preparation tasks will be required:

  1. Have a Microsoft account ready to use for sign-up;
    1. Generate a Microsoft account by going here;
    2. Follow the on-screen wizard and complete sign-up;
  2. A credit card – this will only be used for verification and will not be charged unless you explicitly upgrade to a paid offer;
  3. Optional – External Domain Name e.g. schmarr.com to integrate into Azure AD;
    1. P.S. You’ll need to be able to create TXT records in the external domain.

Installation

Registration

Please follow these steps to register the free Azure subscription that will host Azure AD:

  1. Go to the following url: https://azure.microsoft.com/en-us/trial/get-started-active-directory/;
  2. Click on “Create a free Azure Account”;
  3. Click “Start Now”;
  4. Fill in the form and submit;
  5. The subscription will take up to 4 minutes to be created.
  6. Once the process is complete, the subscription is ready to use.

By now a default Azure AD instance has already been created; skip the “Create Azure AD” section below if the default instance will be used.

Create Azure AD (Optional)

Follow these steps to create a new Azure AD:

  1. In the left corner, click the “+ New” icon.
  2. Click “Security + Identity”.
  3. Click “Active Directory”.
  4. You will be redirected to the Azure classic portal (this might change in the future).
  5. Fill in the wizard form and click the check mark to create the Azure AD.

Essential Azure AD Configuration

At this point Azure AD is fully functional, with the following constraints:

  • A manual process is required for creating user accounts (GUI, PowerShell or CSV import);
  • User passwords will not be in sync with their network passwords;
  • Usernames at this stage will be <username>@<AzureADName>.onmicrosoft.com;
    • Users will be required to remember these usernames (most users find it difficult enough to remember their password).
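The manual account-creation constraint can at least be scripted; a sketch of the CSV-import option, assuming the MSOnline module and a hypothetical users.csv with UserPrincipalName, DisplayName, FirstName and LastName columns:

```powershell
Connect-MsolService
# Create one account per CSV row:
Import-Csv -Path ".\users.csv" | ForEach-Object {
    New-MsolUser -UserPrincipalName $_.UserPrincipalName `
        -DisplayName $_.DisplayName -FirstName $_.FirstName -LastName $_.LastName
}
```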

Follow my Essential Azure Configuration guide here if you want to address the constraints mentioned above.

Conclusion

After completion of this guide, the Azure AD free edition will be available and functional with its free-tier features.