Amendment 111ZA | Employment Rights Bill – Report (3rd Day) | Lords debates

My Lords, it is always a pleasure to follow my friend, the noble Lord, Lord Clement-Jones, who, in his single Nelsonian amendment, has covered a lot of the material in my more spread-out set of amendments. I support his Amendment 111ZA and will speak to my Amendments 168 to 176. I declare my interests in the register, particularly my technology interests, not least as a member of the advisory board of Endava plc and as a member of the technology and science advisory committee of the Crown Estate.

I will take one brief step backwards. From the outset, we have heard that the Government do not want to undertake cross-sector AI legislation and regulation. Rather, they want to take a domain-specific approach. That is fine; it is clearly the stated position, although it would not be my choice. But it is simultaneously interesting to ask how, if that choice is adopted, consistency across our economy and society is ensured so that, wherever an individual citizen comes up against AI, they can be assured of a consistent approach to the treatment of the challenges and opportunities of that AI. Similarly, what happens where there is no competent regulator or authority in that domain?

At the moment, largely, neither approach seems to be in operation. Whenever I and colleagues have raised amendments around AI in what we might call domain-specific areas, such as the Product Regulation and Metrology Bill, the data Bill and now the Employment Rights Bill, we are told, “This is not the legislation for AI”. I ask the Minister for clarity: if a cross-sector approach to AI is not being taken, is a domain-specific approach being taken, given that the opportunities are not being taken up when appropriate legislation comes before your Lordships’ House?

I turn to the amendments in my name. Amendment 168 goes to the very heart of the issue around employers’ use of AI. Very good, if not excellent, principles were set out in the then Government’s White Paper of 2023. I have transposed many of these into my Amendment 168. Would it not be beneficial to have these principles set in statute for the benefit of workers, in this instance, wherever they come across employers deploying AI in their workplace?

Amendment 169 lifts a clause largely from my Artificial Intelligence (Regulation) Private Member’s Bill and suggests that an AI responsible officer in all organisations that develop, deploy and use AI would be a positive thing for workers, employees and employers alike. This would not be seen as burdensome, a mere compliance exercise or a question of audit, but as a positive, vibrant, dynamic role, so that the benefits of AI could be felt by workers right across their employment experience. It would be proportionate and right touch, with reporting requirements easily recognised as mirroring similar requirements set out for other obligations under the Companies Act. If we had AI responsible officers across our economy, across businesses and organisations deploying and using AI right now, this would be positive, dynamic and beneficial for workers, employees, employers, our economy and wider society.

Amendment 170 goes to the issue of IP, copyright and labelling. It would put a responsibility on workers who are using AI to report to the relevant government department on the genesis of that IP and copyrighted material, and the data used in that AI deployment, by which means there would be clarity not only on where that IP, copyright and data had emanated from but also that they had been obtained through informed consent and that all IP and copyright obligations had been respected and adhered to.

Amendments 171 and 172 similarly look at where workers’ data may be ingested right now by employers’ use of AI. These are such rich, useful and economically beneficial sources of data for employers and businesses. Amendment 171 simply suggests that there should be informed consent from those workers before any of their data can be used, ingested and deployed.

I would like to take a little time on Amendment 174, around the whole area of AI in recruitment and employment. This goes back to one of my points at the beginning of this speech: for recruitment, there currently exists no competent authority or regulator. If the Government continue with their domain-specific approach, recruitment remains a gap, because there is no domain-specific competent authority or regulator that could be held responsible for the deployment and development of AI in that sector. If, for example, somebody finds themselves not making a shortlist, they may not know that AI has been involved in making that decision. Even if they were aware, they would find themselves with no redress and no competent authority to take their claim to.

Would the Minister not agree that this makes the case for at least the consideration of a recruitment and employment-specific regulator to be brought about through this Bill? If not, I would certainly prefer to have a light-touch, cross-sector AI authority which would ensure that, wherever individuals, workers, employees and citizens come across AI, they can have clarity, certainty and consistency in its application. In this instance, it would be in the area of recruitment, but the AI authority—agile, light-touch and, crucially, horizontally focused—would ensure clarity, certainty and consistency across all sectors of employment, our economy and society.

I will touch briefly on Amendment 176 and the algorithmic allocation of work. Again, this is already happening, often without employees even being aware that that is the case. What is the Government’s position on the algorithmic allocation of work? If this amendment is not to be considered and adopted, what is the Government’s approach to what is currently happening to workers in our economy right now, often with an extremely discriminatory and detrimental impact on them?

AI has such potential to transform employment for the benefit of workers, employers, businesses and our economy. It has the potential, if it is human-led AI, to drive productivity in a way that no other element of our economy is currently likely to do. Similarly, if we do nothing and continue with this wait-and-see approach to legislation and regulation, it is most likely that workers will often find themselves at the sharp end of the algorithmic allocation of work, of AI in the workplace taking their data and of numerous other issues, unable to avail themselves of the benefits.

This situation could be wholly averted if some of these amendments were considered and incorporated into the Employment Rights Bill. Better still, the Government should reconsider bringing forward a cross-sector AI regulation Bill. What we know fundamentally is that regulation is right: right for workers, right for employees, and right for all aspects of our economy and society. When I say that regulation is right, I mean right-size regulation. What we know from history, not least from recent history, is that right-size regulation is good for innovation, investment, citizens, creativity and our country. Would the Government be good enough to agree?