AI, A Public Good?

The Covid pandemic has provided ample opportunity to consider the potential of technology for public good; indeed, what area of public good is more important than public health? One example, the NHSX tracing app, promised significant benefits, although unfortunately these seem far from being realised. It does, however, give us an opportunity to look at where such technology has worked and to consider the risks.

A recent report from Stanford shows how Taiwan managed to avoid the extreme lockdown measures seen here and around the world, yet still successfully limited and contained the spread of the virus. How? The report identifies five interconnected factors: pandemic readiness, a national electronic health records database, wide-scale testing, big data analytics, and the use of mobile technology to track the movements of individuals who tested positive for Covid-19.

The benefits of a functioning mobile app are clear, but the use of this technology has raised concerns around transparency, trust and data privacy rights. This is an important issue for public discussion. I am passionate about the potential of technology for the public good, and I believe there is no better part of society than the public sector to lead the charge in the UK's role as a global leader in responsible AI innovation. Our Civil Service colleagues will be able to do a tremendous amount of social good if, when designing and implementing AI systems, they make the realisation of ethical purpose and the pursuit of responsible practices of discovery a first priority.

Last year the government published a guide to using artificial intelligence in the public sector. The guidance consists of three sections: understanding AI; assessing, planning and managing AI; and, most importantly, using AI ethically and safely. It focuses heavily on the need for a human-centric approach to AI systems, which aligns with the positions of other forums, including our work on the Lords AI Select Committee. The guidance also stresses the importance of building a culture of responsible innovation, and recommends that the governance architecture of AI systems should consist of a framework of ethical values, a set of actionable principles, and a process-based governance framework.

I have asked the government what plans they have to put this guidance on a statutory footing.

I hope they will think carefully about the statutory and non-statutory mechanisms needed to ensure the safe and ethical use of AI and data technologies. The government has also promised that a national data strategy will be published this year. It is absolutely essential that we get this right. If we regulate in a way that supports the design and implementation of ethical, fair and safe AI systems, then that really would be 'world beating'.


Lords report asks for ethical AI

Chris has been a member of the House of Lords Select Committee on AI, and on Monday (16th April) the final report and recommendations were published: "AI in the UK: ready, willing and able?" Following nine months of expert witness evidence and extensive consideration, the report's conclusions and recommendations emphasise that the UK is in a strong position to be a world leader in AI, but that putting ethics at the heart of AI's development and use is the best way to achieve this. AI, handled carefully, could be a great opportunity for the economy. The report makes 74 specific recommendations, but one key recommendation is for a cross-sector ethical code for AI, underpinned by five principles:

1. AI should be developed for the common good and benefit of humanity.

2. AI should operate on principles of intelligibility and fairness.

3. AI should not be used to diminish the data rights or privacy of individuals, families or communities.

4. All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside AI.

5. The autonomous power to hurt, destroy, or deceive human beings should never be vested in AI.