Companies Committed to Responsible AI: From Principles towards Implementation and Regulation?

Paul B de Laat
Published in: Philosophy & Technology (2021)
The term 'responsible AI' has been coined to denote AI that is fair and non-biased, transparent and explainable, secure and safe, privacy-proof, accountable, and to the benefit of mankind. Since 2016, a great many organizations have pledged allegiance to such principles. Amongst them are 24 AI companies that did so by posting a commitment of the kind on their website and/or by joining the 'Partnership on AI'. By means of a comprehensive web search, two questions are addressed by this study: (1) Did the signatory companies actually try to implement these principles in practice, and if so, how? (2) What are their views on the role of other societal actors in steering AI towards the stated principles (the issue of regulation)? It is concluded that some three of the largest amongst them have carried out valuable steps towards implementation, in particular by developing and open sourcing new software tools. To them, charges of mere 'ethics washing' do not apply. Moreover, some 10 companies from both the USA and Europe have publicly endorsed the position that apart from self-regulation, AI is in urgent need of governmental regulation. They mostly advocate focussing regulation on high-risk applications of AI, a policy which to them represents the sensible middle course between laissez-faire on the one hand and outright bans on technologies on the other. The future shaping of standards, ethical codes, and laws as a result of these regulatory efforts remains, of course, to be determined.