Jo’s article was published in The Times on 27 April 2023.
As Artificial Intelligence becomes an increasingly prevalent part of our day-to-day lives, its rise in the workplace raises serious concerns about how the technology may be used outside the scope of existing Data Protection Act rules, in situations ranging from employee surveillance to hiring, firing and personnel management.
These issues are not a future risk but a current reality. In a recent landmark ruling, the Amsterdam District Court overruled Uber’s “robo-firing” of drivers, ordering Uber to reinstate the affected drivers and to pay them compensation after they were unfairly dismissed by an algorithm.
As part of its ruling, the Court made clear that Uber must make ‘transparent’ the basis for dismissals, wage deductions and the use of surveillance systems in monitoring drivers’ performance. We would hope this ‘transparency’ still involves human assessment, or at least a clear appeals process allowing employees to challenge potentially harmful decisions made by AI.
Similarly, New York City recently passed legislation prohibiting employers from using AI and algorithm-based technology for recruitment or promotion unless bias against individuals with protected characteristics (such as sex, race and disability) has first been audited and eradicated. This represents a step in the right direction for regulators, giving them scope to monitor how AI is implemented in the workplace, and other jurisdictions must follow suit to ensure that employment protections keep pace with the rapid adoption of AI.
Companies have a responsibility to implement AI in a manner that respects established workers’ rights and data protection regulation. In the UK, a raft of employment legislation places a duty on employers to follow fair and reasonable processes when hiring, firing and negotiating with employees. Notably, the Equality Act 2010 gives specific protection to people with protected characteristics, who may be unfairly categorised by biased AI and algorithms.
In the UK, the Data Protection Act also contains provisions on automated decision-making and profiling by AI. Such decisions may only be made where they are necessary for entering into or performing a contract, authorised by domestic law, or based on an individual’s explicit consent. Employers must give individuals information about the processing, introduce a simple way to challenge a decision or request human intervention, and carry out regular checks to make sure the system is working as intended.
Undoubtedly, the UK will see its share of claims, as failure to abide by established legal principles will likely lead to employment claims ranging from wrongful dismissal to discrimination and data protection disputes. Employers would be wise to watch and learn before implementing algorithms that damage hard-fought employment rights.