The popularity of artificial intelligence (AI) has exploded since OpenAI unleashed ChatGPT on the world, and business owners are more aware than ever of how AI can be used to innovate, increase productivity and save costs.
However, with those benefits come a number of potential pitfalls for the unsuspecting employer. From data protection issues to discrimination, the use of AI in the workplace spans several different areas of law.
This guide aims to take an employer through the employee lifecycle, from recruitment to dismissal, to demonstrate how AI can be used both effectively and lawfully. In it we cover:
• The Recruitment Process
• Line Management – General
• Line Management – Performance Management
• Data Protection/confidentiality considerations
• Ending employment
• How Can Wilson Browne Help You? – Mitigating the legal risks.
The Recruitment Process - Sourcing, Screening, Assessments and Interviewing
Employers can use AI to analyse job descriptions and pick out particular skills, qualifications, or experience which are essential or ideal for certain roles. These factors can then be used by recruiters to reach out to suitable candidates. This would help streamline the recruitment process and help ensure that time is dedicated to those candidates who most closely match the job profile.
Employers could also use chatbots to help guide applicants through the application, answering questions which they might have about the job, the company, or the interview process.
Employers are already using AI to help sort through applications. The AI can extract information from the application forms or CVs and sort it into criteria based on skills, education and experience. In some instances AI is used to search a candidate’s social media posts; it can also be used to rank applicants against criteria, thereby further helping to reduce the size of the candidate pool.
Employers can also use AI to reject applications if they do not meet certain criteria.
Many employers use some form of evaluation to help sift applicants and reduce the size of the candidate pool. For example, an employer might use an online test which assesses reasoning and problem-solving capabilities.
More recently, personality tests have been used to help employers determine whether a candidate will be a good fit in their workplace. Employers could use AI to enhance these tests by generating flexible and adaptable scenarios to which applicants must respond, or by implementing algorithms to interpret and sort their responses and build a profile of their personality.
In addition to scheduling interviews, AI can be used at interview to evaluate performance by analysing biometric data. For example, an applicant’s tone of voice or body language could be analysed to give the employer insight into the applicant’s non-verbal communication.
The Recruitment Process - Legal Implications
1. Data Protection/GDPR: For each element of the recruitment process set out above, you will need to consider whether your third-party privacy notice adequately covers the use of AI and, if necessary, automated decision making during the recruitment process. If automated decision making is used, you will also need to ensure that job applicants are provided with a mechanism by which they can ask for reconsideration of a decision made by an automated process.
2. Discrimination: You should also carefully consider the scope of AI and the extent to which decisions have human involvement. Any decision made by AI, or made without human involvement, carries a high risk of being discriminatory, because AI programmes can inherit discriminatory biases from the data on which they are trained. Amazon is a well-known example: an algorithm it used for recruitment gave rise to discriminatory practices, culminating in a decision to stop using the tool. Furthermore, some data analytics tools could give rise to automated decisions which constitute actionable detriments – for example, an automated refusal to proceed with an application for union-related reasons.
The Trades Union Congress (TUC) has published a report identifying ways of bringing direct and/or indirect discrimination claims when AI is used as part of a recruitment exercise, and ACAS and the EHRC have published guidance which touches on this area.
Line Management - General
Employers are increasingly using chatbots to guide new employees through the onboarding process and answer questions they may have about the business and their benefits/obligations.
AI can be used to forecast customer footfall by using data from traffic history, point-of-sale, and weather forecasts. These predictions can then be used to generate schedules which respond to consumer demand. These schedules can then be quickly adapted to changing circumstances.
Network records, phone apps, sensors, biometric tracking devices and facial recognition systems have transformed employee monitoring. For example, an AI wearable could analyse real-time body language data and provide insights into communication patterns, collaboration skills and employee wellbeing. The use of AI to monitor employees has increased significantly following the wide-scale adoption of remote/hybrid working.
Line Management - Legal Implications
1. Discrimination: You should be cautious about the way in which shift patterns are assigned when using AI. Irrespective of whether they are a worker or an employee, the individual will have protection from discrimination if the assignment of shift patterns causes them a disadvantage related to a protected characteristic (such as impacting childcare commitments for female workers).
2. Failure to make reasonable adjustments: adjustments will be required to remove any disadvantages faced by those with disabilities – automated shift scheduling may therefore give rise to claims if, for example, it fails to accommodate a disability-related need when assigning shifts.
3. Constructive dismissal: if shift patterns are regularly assigned at short notice and without care as to an individual’s other commitments, the employer risks breaching the implied mutual duty of trust and confidence. This duty is implied into every contract of employment, and a breach of it may entitle an employee to resign and claim constructive dismissal.
This right applies to employees, not workers. Therefore, whilst it's generally accepted that more workers than employees work irregular shift patterns, whether a person is a worker or an employee needs considering on a case-by-case basis.

Additionally, if shifts are not offered to an individual because of their previously low shift acceptance rates, this could lead to disability discrimination (for example, where those refusals were disability-related).
4. Unfair dismissal: a failure to take up shift patterns which ultimately leads to the termination of employment could amount to unfair dismissal, assuming the employee has the requisite service.
Again, this is a right applying only to employees and not workers.
5. Data protection/GDPR: employee monitoring (within reason!) should be covered in the appropriate privacy notices and/or policies and procedures.
6. The Workers (Predictable Terms and Conditions) Act 2023: this Act is yet to come into force (the expected date is September 2024). However, it introduces a new statutory right for workers to request a more predictable working pattern, which AI tools will need to be programmed to handle in a way that is neither discriminatory nor otherwise unlawful.
It’s also worth bearing in mind that, whilst a practice may be lawful, it can still have a negative impact on your business. For example, if employees feel they are being monitored, this can increase presenteeism and stress, which in turn can lead to lower morale, higher sickness absence and decreased overall productivity.
Line Management - Performance Management
AI can be used to analyse productivity, performance and/or attendance which then informs decisions about promotions, pay rises, demotions, and dismissals (please see “Ending Employment” below). One advantage of AI is that it allows for real time evaluations, avoiding the need for delayed annual appraisals and ensuring that relevant action is taken based on recent data. For example, in a call centre, AI could be used to analyse speech patterns and provide instant feedback on factors such as tone of voice and speed.
Using AI in performance management can also protect against contrast bias, ensuring that an employee’s performance is measured objectively rather than against that of their peers. This has proven popular with organisations operating remote and/or hybrid working arrangements, and some organisations already base decisions on the appropriateness of disciplinary and/or capability processes on the output of AI tools.
Line Management: Performance Management - Legal Implications
1. Discrimination: as mentioned above, AI can inherit biases of its own, and the data it is trained and tested on may lead to certain groups being treated less favourably. Given the lack of transparency and understanding about how AI tools work, employers may not discover hidden prejudices or biases within a tool until after decisions have been taken – by which point those decisions may not be capable of any non-discriminatory explanation. This increases the risk of discrimination in the workplace.
2. Harassment: claims may be brought where the use of AI tools has the effect of creating an “intimidating, hostile, degrading, humiliating or offensive environment”. For example, this could occur where facial or speech recognition technology fails to recognise facial movements or speech because of a person’s disability, thus inappropriately triggering a process.
3. Constructive/Unfair dismissal: an increased reliance on AI tools can reduce common sense decisions which might have otherwise been taken (for example, underlying reasons for a dip in someone’s performance may not be properly factored into decisions).
As above, there are also the additional risks of missing out on capitalising on a person’s potential and/or negatively affecting morale if employees are faced with inapplicable feedback on their performance.
Data Protection/Confidentiality Considerations
Risks relating to data protection and confidentiality increase with the use of AI in the workplace – especially if the organisation uses generative AI. Organisations should therefore ensure there are appropriate rules in place to minimise these risks. To an extent, rules will already be in place – for example, data protection, IT systems and communications and/or bring your own device to work policies and HR/third-party privacy notices are likely to exist. However, these policies and notices should be checked to ensure they remain up to date with AI usage.
Increasingly, to avoid any policy gaps, employers are adopting AI specific policies which cover:
• Permitted use,
• Guidelines for use,
• Implications for breaching the policy.
Some organisations are also updating disciplinary procedures and/or Codes of Conduct to expressly reference AI obligations/breach implications.
Ending Employment

In addition to analysing performance, AI could be used to inform a variety of dismissals, including those arising due to:
• sickness absence,
• fraudulent behaviour, and
• redundancy.
In each of these dismissal scenarios, AI tools can help ensure that disciplinary action and/or redundancy selection is both fair and consistent. This can, in turn, reduce the likelihood of scandals and other reputational issues.
Ending Employment - Legal Implications
1. Discrimination: if the use of the AI tool has triggered a sickness dismissal process due to a disability related absence, this can give rise to a claim of discrimination and/or failure to make reasonable adjustments. Likewise, using AI tools for redundancy selections can allow room for bias on which discrimination claims could be based.
2. Unfair dismissal: the use of AI tools may leave decision makers unable to fully explain a decision to dismiss. If a disciplinary sanction is given solely or primarily on the basis of AI, you may not be able to justify the sanction if it later forms the basis of an employment tribunal claim. This risk is far greater if the AI algorithms are provided by a third party, who may be unwilling or unable to share details of how the algorithm works in practice.
Furthermore, reliance on AI tools increases the risk of removing “common-sense” approaches to dismissal decisions, thus increasing the risk of unfair dismissal claims.
How Can Wilson Browne Help You? – Mitigating the legal risks
We advise employers of all sizes on the full range of employment law issues and, as a small, closely knit team, we are proud of the personal relationships we build with our clients.
Our employment team can assist employers in developing strategies for the fair and lawful use of AI in the workplace, implementing policies to address the use of AI, and advising on any employee relations issues arising from its use. This includes:
- Reviewing policies and procedures to ensure they have kept abreast of the AI uses in your organisation. Key policies include:
- data protection policies and notices,
- recruitment policies,
- IT/communications policies,
- Bring Your Own Device to Work policies,
- hybrid and/or remote working policies,
- sickness absence policies,
- disciplinary policies,
- codes of conduct, and
- whistleblowing policies.
- Identifying the need for new policies and/or AI specific policies.
- Advising on and implementing appropriate training programmes.
- Advising on and undertaking risk assessments to identify:
- AI used in the workplace,
- the risks posed by it, and
- how those risks can be mitigated.
- Assisting with regular reviews of AI use to minimise the risks of bias and discrimination.
- Finally, keeping you up to date with developments in the law relating to AI in the workplace. Although there is currently no AI-specific legislation, this is expected to change given the pace at which AI is being introduced into workplaces. There are also laws which will indirectly impact AI use in the workplace – some of which are referenced in this guide.