Ethics and morality are cornerstones of trust in society, yet even those considered "good people" can sometimes act in ways that defy their values. This phenomenon raises important questions about human behaviour and decision-making. Is it simply a failure of character, or are there deeper, situational factors at play?
Understanding why good people do bad things requires examining the interplay between individual traits and external influences. Situational pressures, such as stress, fatigue, or confusion, can profoundly affect ethical decision-making. Similarly, as we integrate advanced technologies like artificial intelligence (AI) into environments where critical decisions are made, we must consider how these systems can support ethical action or prevent harm. This article considers three questions:
1. Can you trust that good people won’t do the wrong thing?
2. What can you do about it?
3. How can AI systems enable people to do good things or help prevent bad ones?
1. Can you trust that good people won’t do the wrong thing?
We often assess others based on their perceived character—qualities like honesty, courage, and diligence—and expect these traits to reliably guide their actions. However, research and experience show that situational factors can significantly influence behaviour, sometimes in ways that defy our expectations of character.
For example, stressors such as sleep deprivation, ambiguity, and emotional strain can cloud judgement, alter perceptions of right and wrong, and lead to ethical lapses. A study on military officers found that even partial sleep deprivation impaired moral reasoning, highlighting how external conditions can undermine ethical decision-making.
While strong character traits are helpful, they are not enough to guarantee ethical behaviour in unpredictable or high-pressure environments. This applies to leaders, subordinates, and even AI systems. Just as a person may falter under extreme conditions, AI systems designed to act predictably in normal situations may also fail to respond appropriately when faced with unprecedented challenges. Situational factors such as ambiguity, misinformation, emotional intensity, or ethical drift (where previously unacceptable actions become normalised) create moral hazards that can derail even the most well-intentioned individuals. Understanding and addressing these factors is essential to mitigating risk.
2. What can you do about it?
Mitigating the risk of ethical lapses involves equipping individuals to navigate the pressures and complexities of real-world situations. Effective training and preparation focus not only on teaching ethical principles but also on ensuring those principles translate into action, even under duress.
1. Train as you intend to operate
Ethics training should simulate the environments in which ethical decisions are likely to occur. This involves exposing individuals to realistic scenarios, including stressful or high-pressure situations, to help them practise responding effectively to ethical dilemmas.
- For example, simulations might recreate situations involving ambiguity or strong emotions, such as anger, to help individuals learn how to manage these factors.
- Scenario-based training can also encourage proactive interventions when witnessing unethical behaviour.
2. Normalise ethical discussions
Ethical decision-making should be an integral part of routine activities, not treated as a separate or exceptional issue. By embedding ethics into daily interactions and informal moments, organisations can reinforce ethical behaviour as a standard expectation.
3. Influence behaviour beyond knowledge
Ethical training should address not only rational thought processes but also the automatic responses that often dictate behaviour under stress. Repeated practice in varied scenarios can help individuals internalise ethical principles, making them instinctive in real-world situations.
When organisations adopt these methods, individuals are better equipped to act ethically, even when faced with challenging conditions.
3. How can AI systems enable people to do good things or help prevent bad ones?
As AI systems become integral to decision-making in complex environments, their ethical design and deployment are critical. To function effectively, AI must be capable of responding to uncertainty, ambiguity, and misinformation in ways that uphold ethical standards.
This requires training AI to navigate incomplete or conflicting information while ensuring that its decision-making processes align with the principles of fairness, responsibility, and humanity.

Safeguards must also be embedded to prevent both accidental and deliberate misuse. How, for example, would you build safeguards that prevent an AI system focused on planning and logistics from assisting in an ethnic cleansing? At a minimum, such a system should reject inputs or actions that could facilitate unethical outcomes, such as enabling acts of violence or oppression.

Finally, continuous monitoring of AI systems is essential to ensure they remain aligned with ethical objectives, adapt to evolving challenges, and function appropriately in dynamic environments where the nature of ethical dilemmas can change rapidly.
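To make the idea of an embedded safeguard concrete, here is a minimal sketch in Python. It assumes a hypothetical logistics planner that receives tasking requests tagged with a declared purpose; the names (`TaskRequest`, `screen_request`, `PROHIBITED_PURPOSES`) are illustrative rather than any real system's API. The architectural point is that the check runs before any planning, it fails closed, and ambiguity is escalated to a human rather than approved by default.

```python
from dataclasses import dataclass

# Hypothetical deny-list of purpose tags a human ethics review has prohibited.
# Defining these categories, and the review process behind them, is the hard
# part; this code is only the enforcement point.
PROHIBITED_PURPOSES = {
    "forced_displacement",
    "targeting_civilians",
    "collective_punishment",
}

@dataclass
class TaskRequest:
    requester: str
    purpose: str       # declared purpose tag, e.g. "medical_resupply"
    description: str   # free-text tasking description

def screen_request(req: TaskRequest) -> tuple[bool, str]:
    """Screen a tasking request before any planning takes place.

    Returns (approved, reason). Prohibited purposes are rejected outright;
    ambiguous requests are escalated to a human rather than auto-approved.
    """
    if req.purpose in PROHIBITED_PURPOSES:
        return False, f"Rejected: purpose '{req.purpose}' is prohibited."
    if req.purpose == "unknown" or not req.description.strip():
        return False, "Escalated: insufficient information for ethical review."
    return True, "Approved for planning."

# Example: an ambiguous request is escalated, not silently approved.
approved, reason = screen_request(
    TaskRequest(requester="unit_42", purpose="unknown", description="")
)
print(approved, reason)
```

A deny-list like this can, of course, be gamed by mislabelling a request, which is exactly why the continuous monitoring described above matters: logged decisions and escalations give humans the visibility to detect when declared purposes and actual use diverge.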
Just as the behaviour of even good people can become compromised by environmental factors, even well-designed AI-enabled systems may operate in unexpected or suboptimal ways when exposed to the realities of the operating environment. These systems must be robust enough to operate appropriately even in the extreme environment of conflict, and they must also support the people in those environments to make the best possible decisions despite the multiple factors that combine to degrade their perception and judgement.