
Safeguarding the organisation in AI implementations – Part 1 : Ethics
Karl Fischer
DVT Executive Head: Data and Analytics & MD East Coast

The AI, data and analytics technology space is certainly in the spotlight. A great deal of coverage is given to the potential application of these capabilities, and to the near-necessity of adopting them for businesses to survive in a competitive, digital landscape. That pressure to adopt, use and disrupt, however, comes with dangers that are not receiving the same air time. For that reason, we have been advising customers that, as critical as the technical capability to utilize AI and data is, there are foundational, non-technical aspects that could well mean the difference between success and unintended damage to their business. These framework elements to safeguard the business are ethics, data management and the social test.

In this article, part 1 of 3, Ethics is the focus element.

Failure of ethics has a consequence

Recall Bell Pottinger (BPP Communications Ltd.)? An international PR consultancy with £27m in revenue in 2016, its clients included Adobe, Uber, Deloitte, E&Y, Coca-Cola, Emirates Airways, Dyson and numerous others. Bell Pottinger no longer exists. Its rapid demise is linked to its more infamous clients, including Rolf Harris, Oscar Pistorius and Oakbay Investments, the last arguably the toxic pill that killed the firm. The business was ultimately not found guilty of any legal contravention. Rather, it was revelations of Bell Pottinger's tactics on behalf of the Gupta-owned company that brought it down: the firm allegedly implemented a

“social media strategy, using a network of bloggers, commentators and Twitter users, in an attempt to influence public opinion, exacerbate racism, and sow racial division in South Africa” (Wikipedia).

As a consequence of the exposure of its practices and subsequent expulsion from its professional body, both clients and investors abandoned the company. On 12 September 2017 Bell Pottinger entered administration.

Why are ethics so relevant in AI versus other typical IT initiatives?

We would expect any business executive to easily be able to define ethics along the lines of:

“ethics: the discipline dealing with what is good and bad and with moral duty and obligation” (Merriam-Webster).

What is probably a more challenging expectation is that executives and organisations recognise when ethical decision making should be consciously applied rather than assumed. This is particularly true of the new dimensions of decision making being implemented with AI techniques and data analytics. Take as an example the ability to market at the one-to-one level: campaigns targeted at the individual rather than at market segments. Consider applying the tactics of Cambridge Analytica, where the objective was NOT just to inform or make aware but to influence decision making toward an outcome desired by the influencing party. Is this ethical?

The influence tactic has proven effective, using behavioural science delivered through social platforms. It is not a “can it be done” question; it is a “should it be done” question. Until there are governance processes and a conscious practice of ethical review of decisions (including those being implemented in AI decision making), organisations are at risk of implementations that will be judged unethical by the public. That judgment, and the public response to it, can be devastating (perhaps justifiably so).

Consider scenarios where AI capability may result in the loss of jobs in a particular skill set area. In countries with labour shortages, this may be welcomed with an opportunity to redeploy those affected to new areas. In countries with high unemployment rates, are such implementations ethical without mitigation of the impact?

What constitutes a competency in Ethics for an organisation?

At Microsoft, the development of AI solutions is couched in an approach that “builds on an ethical foundation” and subscribes to four principles: fairness, accountability, transparency and ethics. Similarly, in June 2018 Google stated that the company will,

“assess AI applications in view of the following objectives. We believe that AI should:

  • Be socially beneficial
  • Avoid creating or reinforcing unfair bias
  • Be built and tested for safety
  • Be accountable to people
  • Incorporate privacy design principles
  • Uphold high standards of scientific excellence
  • Be made available for uses that accord with these principles”

Google goes on to describe AI applications they will not pursue and conclude,

“We believe these principles are the right foundation for our company and the future development of AI. This approach is consistent with the values laid out in our original Founders' Letter back in 2004. There we made clear our intention to take a long-term perspective, even if it means making short-term trade-offs. We said it then, and we believe it now.” (Sundar Pichai, CEO, Google, June 7, 2018)

When Google acquired DeepMind, the conditions of the acquisition included establishing an ethics board, which in 2017 saw the realisation of DeepMind Ethics & Society. This unit states its purpose as follows,

“We created DeepMind Ethics & Society because we believe AI can be of extraordinary benefit to the world, but only if held to the highest ethical standards. Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work. In a field as complex as AI this is easier said than done, which is why we are committed to deep research into ethical and social questions, the inclusion of many voices, and ongoing critical reflection.” (DeepMind Ethics & Society)

The statements by these organisations point to actions that should be considered as essential by every organisation developing or implementing solutions in the AI domain.

Key Actions to safeguard your organisation
  • Establish your ethical foundation: concretely establish and state your organisation's ethical decision-making framework, its alignment with your values, and practical governance processes (e.g. the DeepMind Ethics & Society statement).
  • Give context to your ethical foundation by describing the principles and objectives to be met in the application of AI for your business purposes (e.g. the Google objectives).
  • Review initiatives and projects, requiring a conscious and practical demonstration of how they align with the stated principles and objectives (e.g. conduct the review during your project-initiation stage gate, alongside cost-benefit analysis decision points).
  • Commit to reviewing and refining the principles and objectives in a transparent and inclusive manner. The technology will evolve; so should your objectives.
  • Educate your organisation in ethics and ethical decision making. Doing so will put a spotlight on the topic and raise its profile in your organisation.
  • Apply a “Social Factor” filter in decision making (covered in part 2 of this article series).
  • Build competency in data lifecycle management that is inclusive of data privacy, governance, security and quality requirements.
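As a concrete illustration of the stage-gate review in the actions above, the sketch below shows one way a project's ethics review could be recorded and gated alongside other decision points. It is a minimal, hypothetical example: the principle names (loosely modelled on Google's stated objectives quoted earlier) and the `EthicsReview` structure are illustrative assumptions, not part of any standard or product.

```python
from dataclasses import dataclass, field

# Hypothetical set of principles a project must evidence before passing
# the stage gate; names are illustrative, adapt them to your own framework.
PRINCIPLES = [
    "socially beneficial",
    "avoids unfair bias",
    "built and tested for safety",
    "accountable to people",
    "privacy by design",
]

@dataclass
class EthicsReview:
    project: str
    # Maps each reviewed principle to a short note describing the evidence.
    findings: dict = field(default_factory=dict)

    def record(self, principle: str, evidence: str) -> None:
        if principle not in PRINCIPLES:
            raise ValueError(f"Unknown principle: {principle}")
        self.findings[principle] = evidence

    def gate_passed(self) -> bool:
        # The gate passes only when every principle has documented evidence.
        return all(p in self.findings for p in PRINCIPLES)

review = EthicsReview("one-to-one marketing campaign")
review.record("avoids unfair bias", "Model audited for demographic skew")
print(review.gate_passed())  # False: four principles still lack evidence
```

The point of the sketch is the forcing function: the gate cannot pass silently, because each principle requires an explicit, recorded justification, which is exactly the "conscious and practical demonstration" the review step calls for.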

Earlier in my career, I had the privilege of being a part of a leading multi-national consultancy. To date, it is the only organisation I have been part of that specifically required and ensured that every employee completed a course in ethics before being allowed to engage on any customer assignment.

I believe that, with the capabilities now available in technology, that grounding in ethical decision making is a safeguard every organisation embarking on a data-driven initiative should have in place. Given that the answer to “Is it possible with data/analytics/AI?” is almost certainly “Yes”, organisations must have mechanisms in place to enable sound answers to “Given that we can, should we?”.

Sound, conscious ethical decision making will safeguard your organisation.
