Ethics Foundations
What’s the difference between morality, ethics, and law? Where do our ideas about right and wrong come from?
Filed under:
Human Centricity
Responsibility
Understanding
Bias and Harm Mitigation
Reliability

The concepts of morality, ethics, and law are interconnected yet distinct, each emerging from different foundations and serving unique purposes in guiding individual and societal behaviour. When designing artificial intelligence systems, understanding the interplay between morality, ethics, and law is essential to ensure AI operates responsibly, aligns with societal values, and avoids harm.
 
1. What is Morality?
2. What is Ethics?
3. What is Law?
4. What is the relationship between Morality, Ethics and Law, and why does the distinction matter?
5. What are the origins of Ethics and Morality?
 
Morality - Personal and cultural standards
Morality concerns the principles and values individuals or groups use to determine right from wrong, good from evil, or virtuous from harmful actions. It reflects deeply held beliefs about how people should act based on their character, intentions, and societal norms. Morality is often influenced by:
  • Cultural factors: Traditions and societal norms that evolve over generations. In the case of AI systems deployed globally, we might need to account for differing moral perspectives; for instance, privacy norms may vary significantly across cultures.
  • Religious teachings: Many moral codes stem from spiritual doctrines that define what is considered virtuous or sinful.
  • Personal values: Individual beliefs shaped by experiences, upbringing, and introspection.

For example, an individual might feel morally compelled to help someone in need because they view kindness and compassion as intrinsic virtues. Morality often acts as an internal compass, influencing behaviour even when external rules (laws) or ethical guidelines are absent. Morality is not universally consistent; what one group considers morally acceptable may conflict with another’s values. Creating an AI system that can reflect or at least accommodate such variations in values will be a significant challenge, which may mean that we need to be thinking in terms of AI ethics rather than AI morality.
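To make the point about varying privacy norms concrete, the Python sketch below shows one way a globally deployed system could treat such norms as explicit, reviewable configuration rather than hard-coded assumptions. This is a minimal sketch: the deployment names and settings are invented for illustration, and real values would come from local legal and ethical review.

# A minimal sketch, assuming a hypothetical configuration scheme for
# locally varying privacy norms.
from dataclasses import dataclass

@dataclass(frozen=True)
class PrivacyNorms:
    store_raw_images: bool   # is retaining raw footage acceptable here?
    default_opt_in: bool     # must users opt in before data is processed?
    retention_days: int      # how long personal data may be kept

# Illustrative values only; real settings would be agreed with local reviewers.
NORMS = {
    "deployment_a": PrivacyNorms(store_raw_images=False, default_opt_in=True, retention_days=30),
    "deployment_b": PrivacyNorms(store_raw_images=True, default_opt_in=False, retention_days=90),
}

def may_retain_image(deployment: str) -> bool:
    # Consult the locally agreed norm instead of assuming a universal one.
    return NORMS[deployment].store_raw_images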
 
Ethics - Principles for decision-making
Ethics provides a more structured and reasoned framework, guiding individuals and groups on how to act in specific contexts. It goes beyond personal feelings or societal norms, aiming for objective reasoning to determine what is right or wrong. Ethics is particularly significant in professional and societal domains, such as business, medicine, or military operations, where morality or law may be unclear. In the case of AI systems, ethical considerations address both how AI is designed and how it is used. Ethical decision-making is therefore anchored in:
  • Guiding principles: Ethics often derives from philosophical theories, such as utilitarianism (maximising well-being), deontology (adhering to duties), or virtue ethics (cultivating good character); a short code sketch after this list illustrates how these theories differ in structure.
  • Context-specific norms: Ethics frequently addresses specific domains—e.g., business ethics, medical ethics—defining what is considered acceptable conduct in particular professions. Different domains require tailored ethical approaches. For example:
    • In healthcare, amongst other things, AI should prioritise patient safety and confidentiality.
    • In military applications, ethical reasoning must address proportionality of force and the distinction between combatants and civilians. Military ethics emphasises adherence to international laws of armed conflict while also promoting values like courage and integrity. In complex situations, such as humanitarian operations, ethical reasoning helps personnel decide on morally justifiable actions even when laws or orders are ambiguous.
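As a rough illustration of how the guiding principles above differ in structure, the Python sketch below encodes a utilitarian and a deontological test as interchangeable decision rules applied to the same action. The action, its well-being score, and its duty flag are invented placeholders, not a workable ethics engine.

from typing import Callable

Action = dict  # e.g. {"name": ..., "total_wellbeing": ..., "violates_duty": ...}

def utilitarian(action: Action) -> bool:
    # Utilitarianism: judge the action by its consequences for overall well-being.
    return action["total_wellbeing"] > 0

def deontological(action: Action) -> bool:
    # Deontology: judge the action by whether it breaches a duty, whatever the outcome.
    return not action["violates_duty"]

def permitted(action: Action, framework: Callable[[Action], bool]) -> bool:
    return framework(action)

lie_to_protect = {"name": "lie to protect someone", "total_wellbeing": 5, "violates_duty": True}
print(permitted(lie_to_protect, utilitarian))    # True: the consequences are good
print(permitted(lie_to_protect, deontological))  # False: lying breaches a duty

The same action is permitted under one framework and forbidden under the other, which is exactly why domain-specific ethical guidance matters.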

Law - Enforceable rules and standards
Law is a formal system of rules established and enforced by governments or societies to regulate behaviour, resolve disputes, and maintain order. Unlike morality or ethics, which are often internalised, law imposes external authority through penalties. Laws emerge from social contracts, historical precedents, and collective agreements about acceptable behaviour. Police, courts, and other institutions ensure compliance and impose consequences for violations. This raises an obvious question: as long as you act within the law, why worry about ethics at all?
  • Limitations: What is legal is not always what is right. For instance, Rosa Parks’ refusal to give up her bus seat in 1955 highlighted a moral conflict with a legally enforced but unethical segregation law. Her act of civil disobedience showed how laws can be unjust and how moral and ethical challenges can lead to legal reforms. This case illustrates that the law, on its own, cannot be the only measure of whether something is right or wrong.
  • What happens when the law has not yet caught up with a new technology, product, or social practice?
  • How do you know what you are permitted to do if the law is not clear on a specific matter?
  • What should regulate emerging areas?
  • What happens when you do not know what the law says, and no one else seems to know either?

For AI, laws establish boundaries for acceptable behaviour, providing accountability and enforcement mechanisms. However, legal frameworks often lag behind technological advancements, creating challenges:
  • Legal compliance: AI systems must adhere to existing laws, such as the GDPR for data privacy or the EU's AI Act as it is currently understood. Developers should also anticipate legal changes so that systems remain compliant over time; complying from the start with the spirit as well as the current letter of the law will help.
  • Gaps in legislation: AI operates in novel domains where laws may not yet exist or are only just starting to be codified, such as the ethical use of generative AI in art or text creation. Developers must rely on ethical reasoning to navigate these grey areas.
Example: Facial recognition technology might comply with local surveillance laws but still face ethical challenges over bias and mass privacy intrusion, challenges that may themselves prompt future legal change.
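Read in engineering terms, the example suggests treating legal compliance and ethical acceptability as separate gates, where passing one does not imply passing the other. The Python sketch below is a minimal illustration; the review criteria are hypothetical, not a real regulatory checklist.

from dataclasses import dataclass

@dataclass
class DeploymentReview:
    complies_with_local_law: bool    # e.g. local surveillance law permits the use
    bias_audit_passed: bool          # hypothetical internal ethics criterion
    privacy_impact_acceptable: bool  # hypothetical internal ethics criterion

def may_deploy(review: DeploymentReview) -> bool:
    legal_ok = review.complies_with_local_law
    ethics_ok = review.bias_audit_passed and review.privacy_impact_acceptable
    return legal_ok and ethics_ok  # both gates must pass, not either one

# Legal but ethically unresolved facial recognition is still blocked:
print(may_deploy(DeploymentReview(True, False, True)))  # False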


Relationship between morality, ethics, and law
While morality, ethics, and law often overlap, they are not interchangeable:
  1. Morality and ethics: Ethics can be seen as a systematic exploration of moral beliefs, offering a structured way to address dilemmas. For example, while morality might instinctively oppose lying, ethical reasoning may justify lying to prevent harm (e.g., hiding refugees during wartime).
  2. Ethics and law: Ethics often addresses gaps where laws are unclear or insufficient. For instance, while laws regulate military conduct, ethical reasoning guides decisions in ambiguous or unprecedented situations, such as prioritising civilian safety over strategic objectives.
  3. Law and morality: Legal systems aim to codify widely accepted moral and ethical standards, but they do not always succeed. Unjust laws may conflict with personal morality or societal ethics, as seen in historical examples like apartheid or slavery.
Ultimately: 
  • Ethics bridges gaps: While laws provide structure and morality offers personal guidance, ethics bridges the two, fostering principled decision-making in situations where the "right" action is not immediately clear. Together, they form a foundation for a just, cohesive, and adaptive society. When laws fail to provide clear guidance, ethics enables AI systems to navigate morally ambiguous situations. For instance, an AI moderating online content must balance freedom of speech (a legal principle) with minimising the harm that content can cause (an ethical concern); a short sketch after this list illustrates the point. Many decisions require navigating conflicts between ethical principles and legal requirements. For example, a whistleblower might act ethically by exposing corruption, even if their actions break confidentiality laws.
  • Morality provides a foundation: Individuals like Rosa Parks demonstrate how moral conviction can challenge unethical laws, prompting societal change. AI trained on datasets influenced by societal values must acknowledge and respect diverse moral perspectives. However, morality alone may lack consistency or scalability for global AI systems.
  • Law can ensure accountability: While morality and ethics guide AI’s behaviour, laws establish enforceable boundaries and consequences for violations, fostering societal trust in AI applications.
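The content-moderation example above can be reduced to a simple decision rule, sketched below in Python. The harm score and threshold are invented placeholders; a real moderation policy would be far more nuanced and would involve human review.

def moderate(is_legal: bool, harm_score: float, harm_threshold: float = 0.8) -> str:
    # The law sets a hard boundary; ethics guides the cases the law leaves open.
    if not is_legal:
        return "remove"
    if harm_score >= harm_threshold:
        return "restrict and escalate"  # legal, but ethically concerning
    return "allow"

print(moderate(is_legal=True, harm_score=0.9))  # "restrict and escalate"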

By integrating moral, ethical, and legal considerations into the AI development process, developers can create systems that not only function effectively but also uphold high standards of societal responsibility. This holistic approach helps ensure AI contributes positively to humanity, addressing both present needs and future challenges.


Origins of ethics and morality
The origins of ethics and morality are deeply rooted in human evolution and societal development:
  1. Evolutionary basis: Cooperative behaviours, such as honesty and altruism, evolved as survival strategies in social groups. Communities that valued these behaviours thrived, while those that did not struggled. For example, hunter-gatherer societies that cooperated to hunt large game and share resources experienced long-term survival benefits.
  2. Cultural and social reinforcement: Shared moral values and ethical systems evolved to maintain order and cooperation in increasingly complex societies. Religious traditions, philosophical thought, and communal rules provided frameworks for interpreting right and wrong.
  3. Practical necessity: In complex societies, morality and ethics developed to manage relationships, resolve conflicts, and address dilemmas beyond basic survival.

Disclaimer

This tool has been created in collaboration with Dstl as part of an AI Research project. The intent is for this tool to help generate discussion between project teams that are involved in the development of AI tools and techniques within MOD. It is hoped that this will result in an increased awareness of the MOD’s AI ethical principles (as set out in the Ambitious, Safe and Responsible policy paper) and ensure that these are considered and discussed at the earliest stages of a project’s lifecycle and throughout. This tool has not been designed to be used outside of this context. 
The use of this information does not negate the need for an ethical risk assessment, or other processes set out in Dependable AI (JSP 936 Part 1), the MOD’s policy on responsible AI use and development. This training tool has been published to encourage more discussion and awareness of AI ethics across MOD science and technology, and among development teams within academia and industry; it demonstrates our commitment to the practical implementation of our AI ethics principles.