1. What do we mean by “across the entire lifecycle”?
2. What do we mean by “assessing for the full range of effects”?
3. How do we balance or choose between competing Principles?
4. Who or what should be considered stakeholders for AI-enabled systems?
5. How do you identify ethical risk?
6. What does it mean to assess and consider Human Centricity, and how does one assess the different factors in Human Security?
7. How can we take into account “human diversity”?
8. How do we balance military effectiveness with broader impacts on humans and the environment?
9. What does “human flourishing” have to do with AI?
10. What does having a person “in the loop” actually mean?
11. What does “Meaningful Human Control” mean?
12. What is the difference between “meaningful human control” and “appropriate human control”?
13. Responsibility vs. accountability: what is the difference?
14. What does GDPR mean for my AI-enabled system?
15. What is meant by “an accountability gap”?
16. How much understanding is sufficient? Who needs to know what?
17. What does “trust” mean in relation to AI systems? How much trust is enough?
18. What does “informed consent” mean?
19. What do we mean by “bias”? How can I address bias in algorithmic decision-making?
20. What do we mean by “harms” and what are they?
21. If the AI-enabled system did not work as intended, what is the worst thing that could happen?
22. Measuring Reliability: how do we decide if an AI system is “suitably” reliable?
23. Measuring Robustness: how do we decide if an AI system is “suitably” robust?
24. Measuring Security: how does one decide if an AI system is “suitably” secure?