Business Ethics and Artificial Intelligence 2018

What it was:

IBE written briefing studied as part of CIPR Continuing Professional Development, 27 February 2018:
Source: https://www.ibe.org.uk/userassets/briefings/ibe_briefing_58_business_ethics_and_artificial_intelligence.pdf

What I learned:

AI is relevant to Business Ethics – for example, how do you ensure your organisation’s values are being applied if decisions are being made algorithmically?

Potential risks of AI:

  • Ethics risk
  • Workforce risk (loss of jobs/skills)
  • Technology risk (cyber-attacks)
  • Algorithmic risk (biased decisions)
  • Legal risk (privacy/GDPR)

AI systems may be highly accurate and yet still reflect the human biases present in the data they were trained on.

Open-sourcing may be important for openness and trust in AI systems – this could be especially true in government, where trust is critical and the case for keeping source code closed is weaker.

“Explainability” is key to AI trust and to working alongside an AI partner.

AI work and contracts should specify responsibilities carefully – AIs cannot be held responsible for their behaviour!

Some practical steps organisations can take:

  • Use meta-decision-making to ensure AI systems act in line with organisational ethical values.
  • Make sure third-party algorithms adhere to ethical standards.
  • Establish a multi-disciplinary Ethics Research Unit.
  • Introduce ‘ethics tests’ for AI machines, where they are presented with an ethical dilemma.
  • Ensure staff have access to relevant training courses and communications on the ethical use of AI.

What I will aim to do differently as a result:

In future I will:

  • Think about the ethical, legal and other risks of AI projects at the design stage.
  • Continue my learning and investigation of AI as applied to my organisation's business.
  • Consider whether ethics, compliance and legal teams should be engaged in AI projects.