An AI and Data Ethics Framework for Responsible Development

AI Oct 12, 2022

Everyone who is anyone in the tech industry is doing AI ethics in some way, in large part to get ahead of impending regulations and to create, or at least give the appearance of creating, responsible Artificial Intelligence. It is pretty easy to do: slap in some words like “transparency”, “avoiding data bias” and “commitment” and suddenly you have a plush-looking framework ready to play with. Throw together an oversight committee made up of people who don’t know their Python from their Java or haven’t written a line of code in fifteen years, and you too can roll out an AI ethics strategy.

As you may have guessed, we believe this approach is wrong for a number of reasons. The most prominent of them is a misunderstanding of what ethics is. We don’t want to talk about being a good company for the sake of it; rather, we are more interested in aligning business goals and principles with responsible development. Brainpool is building a practical ‘on the ground’ AI and data ethics framework that will fit within our current technical development strategies. Not only will this make us a more ethical company, but, more importantly, it will help us deliver solutions that meet our clients’ needs and goals.

The Challenge

In most situations there currently seem to be two approaches to AI ethics. The most favoured of these is an oversight or ethics committee. Typically made up of a range of C-suite executives, lawyers, senior data scientists and the odd philosopher, these groups aim to provide top-level review of AI projects at a variety of stages of development. Depending on the level of power afforded to these committees, they may be able to stall or even stop projects from going forward without updates to bring them in line with ethical procedures.

The issue we see with this is the level of detachment from the process. This is not a problem of complexity; any organisation could stack their committee with enough tech-literate talent to make sense of even the most complicated solution. Rather, it is an issue of responsiveness. Say a committee discovers a potential ethical risk in an AI model: are they best placed to provide a solution? Such a top-level approach takes ethics out of the hands of those who can mitigate risks and places it with a review body completely removed from the development process. This is not to criticise independent governance panels, rather to suggest that streamlined solutions will come from those who are working on the project itself.

Nor do we argue that all ethical review should be dropped on tech teams. Software developers, data scientists and AI engineers are focused on one thing: making the best working solution they possibly can. Adding more layers of complexity to their already complicated task should not be the go-to. Instead, we propose a multi-disciplinary ‘on the ground’ approach, assigning actions only as needed. This requires not only an understanding of goals and principles for ethical AI development, but also specific parameters for achieving them.

Ethical Goals vs Business Goals

At a conference earlier this year we listened to companies we strongly suspected of ethics washing talk about the ills of ethics washing. Hypocrisy aside, it brought up an important issue: what is the point of ethics? As far as we are concerned, there is no point in ‘doing’ ethics without some purpose. When we look at what we mean by ethics, we find quite often we are actually talking about business goals and principles. It is not a pillar of anyone’s business to provide their customers with terrible products or services; rather, the goal is to provide clients with quality. A facial recognition system with a racial bias, a chatbot that espouses fascism or a CV parser that discriminates against women are not just examples of ethically flawed systems, but violations of business principles.

This makes the job of defining ethical principles much easier. Aligning company goals with ethical pillars can provide the foundations of an ethics framework. With this perspective, any business should be able to involve all their stakeholders in the process of building out ethics principles, providing a holistic approach to responsible AI development. From there, however, a company needs to define exactly what the principles look like in the context of Artificial Intelligence. This is where definitions become trickier. Maybe your company values transparency and open conversations with your clients and partners. How would that impact AI IP ownership? Perhaps your organisation would like to promote accountability throughout your AI development. How would you divide responsibility for Artificial Intelligence in your company? These are the hard questions that will produce different answers for each company. Having open discussions across all departments that have a stake in AI technology is essential to finding the right answers for your company’s needs.

Vague Promises

Defining the parameters of an AI ethics framework beyond core goals and principles is difficult. In large part this is due to the diverse nature of the technology. What works as an ethical scope for approaching NLP solutions will not necessarily be applicable to Computer Vision: one deals with language, whilst the other deals with video and images. Further issues arise when the technology is put into context. Computer Vision for facial recognition may raise completely different issues when used in law enforcement versus healthcare.

Many organisations and institutions employ a use-case-based framework, defining all the projects they expect to work on and how they would approach them. At Brainpool this is impossible. We operate a tech-agnostic approach that allows us to meet clients where they are, with the aim of integrating seamlessly into their existing infrastructure. Even if we were to develop a comprehensive list of all expected use cases, this would make for a static, and therefore rigid, approach. The rapid speed of innovation in the tech sector means that our framework would be out of date within months, or even weeks. In an effort to avoid this, some organisations opt for a ranking system, where projects are scored against pre-set principles. This is notably the case in the UK for public sector workers, and in legislation proposed by the EU where technology is scored by potential risk. In larger organisations this is probably possible; however, we rely not only on a small in-house team but on a network of 500 AI experts to build AI solutions. Ensuring the same approach to scoring in every project is not possible. This risks vague understandings of ethics and arbitrary scores aimed at reducing fuss in development rather than focusing on responsible technology. Our challenge is to create a set of specific actions that must take place at each stage of development, which are broad enough to cover every technological use case, yet narrow enough to ensure tangible impact.

Our Approach to AI Ethics

With all that in mind, as we see it, there are three main things needed for an effective AI ethics framework: an overall goal, core principles aligning all stakeholders’ interests, and a guide of specific actions. To start with, then, we need an aim for the framework. This is the essence of what we consider to be ‘good’ or responsible AI development. At Brainpool we see our overall goal for ethical development as avoiding harms that may be caused by Artificial Intelligence. This is inherently broad, covering procurement, the wider outcomes of technology and development itself. Definitions of what constitutes harm are widely debated in philosophical circles; however, we see it as:

“Any action or inaction which causes unjustified tangible negative impact on a person or persons”

This provides us with a few immediate implications, particularly at the procurement stage. A red list of projects begins to form, covering, for example, autonomous weapons and the expansion of environmentally harmful activities. Clearly, ‘action’ suggests we should avoid developing technology that actively harms or has the potential to harm people, yet ‘inaction’ is more complex. There is an inherent suggestion that we should develop specific actions to be undertaken at each stage of development to identify and mitigate potential harms. This requires an understanding of where these harms may arise, which is where we turn to our principles.

Three Principles

With a red list in place, we have already stopped active harms from making it through the procurement stage. There thus appear to be two main ways the AI we build could cause harm: discrimination and a lack of safety. To avoid the former we propose two principles, fairness and accessibility. Whilst there is overlap between the two, fairness primarily deals with some of the broader themes surrounding discrimination. Here we aim to ask questions around data bias and the potential wider risks of the technology. Accessibility focuses on the specifics of how usable the technology is, not only to enable the less tech literate to engage with AI but also to create more effective solutions for our clients.

On questions of safety, we turn to the principle of accountability. It is our strong belief that human beings should always be at the centre of AI solutions. Work by people should never be replaced; rather, it should be augmented and improved through innovative technology. Artificial Intelligence should therefore never make decisions that could impact a human life. The key word here is decisions, which we separate from the insights, signals or analytics that an AI is capable of producing. Deciding a course of action is still solidly the domain of human beings. We have the capability to decide rationally and explain why we have made choices; AI, being narrow in scope, cannot. Thus, we put safeguards in place to ensure responsibility and accountability are always in the hands of human beings, to identify and mitigate safety risks.
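As a rough illustration of what keeping decisions with humans can look like in practice, the sketch below separates a model’s signal from the human decision that acts on it, and refuses to act without a named, recorded sign-off. All names and fields here are hypothetical and for illustration only, not a description of Brainpool’s actual systems.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelSignal:
    case_id: str
    score: float        # e.g. a predicted risk, relevance or priority score
    explanation: str    # rationale surfaced to the human reviewer

@dataclass
class HumanDecision:
    case_id: str
    reviewer: str       # the accountable person, recorded for audit
    action: str         # the course of action actually chosen
    justification: str  # the reviewer's own reasoning

def act_on_case(signal: ModelSignal, decision: Optional[HumanDecision]) -> str:
    """Refuse to act unless a named human has signed off a decision for this case."""
    if decision is None or decision.case_id != signal.case_id:
        raise PermissionError("No human decision recorded for this case.")
    return f"{decision.action} (decided by {decision.reviewer})"
```

The design choice this illustrates is simple: the model output is only ever an input to a decision record owned by a person, so accountability never silently transfers to the system.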

Specifically, each of these principles asks certain questions:

Accountability – Are regulations being followed? Who is responsible for each stage of the project and its outcomes? How transparent is the solution? Is the data safe and secure?

Fairness – Does the data record human characteristics that could contribute to discrimination? Does the dataset contain enough diversity to be considered representational? Upon stress testing, can the AI produce biased outcomes? Does the AI have sentience?

Accessibility – Is the output of the solution explainable? How accessible is the technology for all end users? How well documented are the code and data?
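To make these review prompts concrete, here is a minimal sketch of how the questions above could be recorded as a structured checklist for each project review. This is purely illustrative Python; the names and fields are hypothetical and not part of Brainpool’s actual tooling.

```python
from dataclasses import dataclass

# The review questions above, keyed by principle.
PRINCIPLE_QUESTIONS = {
    "accountability": [
        "Are regulations being followed?",
        "Who is responsible for each stage of the project and its outcomes?",
        "How transparent is the solution?",
        "Is the data safe and secure?",
    ],
    "fairness": [
        "Does the data record human characteristics that could contribute to discrimination?",
        "Does the dataset contain enough diversity to be considered representational?",
        "Upon stress testing, can the AI produce biased outcomes?",
    ],
    "accessibility": [
        "Is the output of the solution explainable?",
        "How accessible is the technology for all end users?",
        "How well documented are the code and data?",
    ],
}

@dataclass
class ChecklistItem:
    principle: str
    question: str
    answer: str = ""              # free-text answer recorded at review time
    risk_identified: bool = False
    mitigation: str = ""          # agreed mitigation if a risk is identified

def new_review_checklist() -> list:
    """Create a blank checklist for a single project review."""
    return [
        ChecklistItem(principle=principle, question=question)
        for principle, questions in PRINCIPLE_QUESTIONS.items()
        for question in questions
    ]
```

Structuring the questions this way would let each review produce an auditable record of what was asked, what was found and what mitigation was agreed, rather than a single pass/fail score.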

An Actionable Checklist

In philosophy, questions breed more questions far more often than they breed answers. The same is true here. Take fairness alone: we still require clarity on what characteristics would lead to discrimination, the criteria for a representational dataset, and guidelines for an ethics stress test. Whilst the principles and goals outlined above are fixed, forming a specific checklist of actions will be a continual and evolving process. In our framework we have attempted to split actions across multiple stages of development. Not only should this allow the most ‘in-the-know' stakeholders to engage with relevant issues, but it should also take overall workload off our technical team. At this point we have clearly defined initial actions for the procurement, data analysis and outcomes stages. The development stage itself remains a work in progress, in large part due to the wide scope of use cases Brainpool works across.

The entire framework will be released soon; however, we can now share our procurement checklist for ethical AI development. Under this there are only three occasions on which we would not pursue a project. Firstly, if the project violated our red list, which includes: autonomous weapons, promoting or expanding the fossil fuel industry, political activities resulting in the polarisation of societies, activities promoting misinformation, and the support of tech monopolies. Secondly, if a project used human characteristics that may contribute to discrimination where no significant public interest or benefit was found. A project that wished to use this kind of data would need to have a proven benefit; it cannot be neutral, and certainly not negative. Lastly, we would not pursue a project if our clients were looking to build a model that would result in the public losing out on a substantial benefit were the AI to be privatised. This would mostly apply to medical projects, where patients would miss out from the privatisation of healthcare AI. This is part of our larger mission to democratise access to AI, but it also aims to reduce the risks seen in prominent cases in the healthcare industry.
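To show how these three exclusion conditions could translate into a concrete gate at the point of procurement, here is a minimal illustrative sketch in Python. The function name, parameters and red-list strings are hypothetical simplifications; the real checklist is applied by people in discussion, not by a script.

```python
# Simplified red list, matching the categories named above.
RED_LIST = {
    "autonomous weapons",
    "promoting or expanding the fossil fuel industry",
    "political activities resulting in polarisation of societies",
    "activities promoting misinformation",
    "support of tech monopolies",
}

def passes_procurement_screen(
    project_categories: set,
    uses_sensitive_characteristics: bool,
    proven_public_benefit: bool,
    privatises_substantial_public_benefit: bool,
):
    """Return (passes, reason) for the three procurement-stage exclusion conditions."""
    if project_categories & RED_LIST:
        return False, "Project falls on the red list."
    if uses_sensitive_characteristics and not proven_public_benefit:
        return False, ("Uses characteristics that may contribute to "
                       "discrimination without a proven public benefit.")
    if privatises_substantial_public_benefit:
        return False, ("Privatising the model would deny the public a "
                       "substantial benefit.")
    return True, "No procurement-stage exclusion applies."

# Example: a hypothetical healthcare project using sensitive attributes with proven benefit.
ok, reason = passes_procurement_screen(
    project_categories={"clinical decision support"},
    uses_sensitive_characteristics=True,
    proven_public_benefit=True,
    privatises_substantial_public_benefit=False,
)
print(ok, reason)  # True No procurement-stage exclusion applies.
```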

AI Ethics Procurement Checklist

All other points on the checklist aim to identify ethical issues against our principles and develop strategies to mitigate them. An example of our procurement ethics workflow can be found above. This checklist allows us to easily consider all ethical concerns that may arise at the point of sale, ensuring our principles are maintained. At the stages where harms may arise, we are developing procedures to mitigate them. Whilst we aim to keep these as firm as possible, the practical nature of our work means that in reality there will be creative solutions at play. Our usability assessment, for example, will have elements that are fixed; however, the overall report will be bespoke for each client based on its end users. This will ensure that our ethical principles are upheld and that we are creating the most effective technology.

Next Steps

What is next for us is to put this to the test. Before throwing our framework into the deep end and using it on client projects, we are running the guidelines through a few stages of scrutiny. Our in-house team has already had their say in forming the overall goals and principles. The next step is to take our draft framework to our network of over 500 AI experts, trusted clients and academics. As our community works on projects with our clients, it is especially important that they are able to give tangible feedback on the framework. The ethics guidelines will extend beyond our in-house team to the network as a whole. We are committed to full transparency in the development of this framework and will be aiming to publish the draft in the coming months before the finalised version is signed off.

We believe every organisation developing AI technology should consider ethics, be it through a holistic framework or by working with area specialists. The common critique levelled at the tech industry at large is poor self-regulation. In the UK, questions around how to regulate harms caused by technology are being investigated and debated. Whilst governments have been slow to act, regulation is coming. The British government is proposing the Online Safety Bill to address harms in online communications, and the European Union has proposed regulations on AI as a whole. In many ways, what we are aiming to do here is to get ahead of that. A little work now on defining our ethical practices will undoubtedly save our company time when new laws are inevitably passed. Yet our goal is more than that. As we transition from a start-up into a company that has grown massively in the last few years, we look to define and reinforce what our values are. This is a reflection of the goals of our team: to develop responsible Artificial Intelligence solutions that we can be proud of.

Written by Dominic Richmond

Brainpool AI

Brainpool is an artificial intelligence consultancy specialising in developing bespoke AI solutions for business.