AI has become part of the fabric of modern life, with applications in sectors ranging from agriculture to retail to health to education. We believe that AI, used appropriately, can deliver great benefits to economies and society, and help people make decisions that are fairer, safer, and more inclusive and informed.
As with other technologies, new policy questions arise with the use of AI, and governments and civil society groups worldwide have a key role to play in the AI governance discussion. In a white paper we’re publishing today, we outline five areas where governments can work with civil society and AI practitioners to provide important guidance on responsible AI development and use: explainability standards, fairness appraisal, safety considerations, human-AI collaboration, and liability frameworks.
There are many trade-offs within each of these areas, and the details are paramount for responsible implementation. For example, how should explainability and the need to hold an algorithm accountable be balanced against safeguarding the system’s security from hackers, protecting proprietary information, and keeping AI experiences user-friendly? How should benchmarks and testing to ensure the safety of an AI system be weighed against the potential safety costs of not using the system?
No one company, country, or community has all the answers; on the contrary, it’s crucial for policy stakeholders worldwide to engage in these conversations. In most cases, general legal frameworks and existing sector-specific processes will continue to provide an appropriate governance structure; for example, medical device regulations should continue to govern medical devices, regardless of whether the device uses AI. However, in cases where additional oversight is needed, we hope this paper can help to promote pragmatic and forward-looking “rules of the road” and approaches to governance that keep pace with changing attitudes and technology.
Source: Engaging policy stakeholders on issues in AI governance