Everyone, it seems, is talking about “AI”. From answering complex questions to drafting articles (not this one, we promise), there is both optimism about AI’s potential, including its capacity to revolutionise the workplace, and fear of its pitfalls, such as privacy breaches.
Overall, AI could deliver significant benefits for employers and workers if the associated risks are anticipated and appropriately managed. This gives HR professionals a lot to think about.
Applying AI in the HR context involves using machine learning to make decisions about people. For example, it could be (or already is being) used in the following ways:
- Recruitment: devising job adverts, sourcing candidates, filtering CVs and scoring tests;
- Task allocation and performance management: scheduling shifts and evaluating employee performance;
- Employee engagement: collecting employee data and identifying employee reactions and attitudes;
- Surveillance and monitoring: tracking employee routines and monitoring health and safety.
AI can enable HR to streamline the collection and analysis of data within HR processes. It may also be used to reduce biases and assumptions, helping to select the best candidates and to determine optimal compensation and benefits packages. That said, AI has produced some questionable results, as algorithms can reflect biases introduced through their programming, among other technical issues, e.g. the failure of facial recognition software to accurately recognise faces across different ethnicities.
There are also challenges in the use of AI, including:
- Privacy and cybersecurity risks arising from the collection, storage and dissemination of data by AI systems;
- Trusting and relying on AI to complete every task allocated to it without verifying the accuracy of its results. (A good example of this is the New York lawyer who was fined after he used AI to prepare a brief containing fake case citations!)
Regulation of AI in Australia
Australia has taken a principles-based approach to AI regulation, as outlined in the AI Ethics Framework published in November 2019. These voluntary principles guide employers and other stakeholders on the safe and reliable use of AI. While there are no AI-specific laws yet, the Australian Government is currently exploring dedicated AI regulation. In June 2023, the Government released a discussion paper, Safe and Responsible AI in Australia, following the National Science and Technology Council’s Rapid Response Information Report on Generative AI. The Australian Government has also referenced the proposed EU AI Act (2021) as a guide to a possible regulatory framework in Australia.
Relevantly, the Australian Human Rights Commission’s submission on the issue highlighted human rights risks associated with AI, including privacy concerns, algorithmic discrimination, automation bias and misinformation. It stressed the need for comprehensive AI regulation to protect individuals and noted that, while existing legislation addresses issues such as data protection and privacy, regulatory gaps remain. The Commission recommended that the Australian Government conduct a regulatory gap analysis and modernise relevant legislation before introducing AI-specific laws. It also reiterated the importance of establishing a dedicated ‘AI Commissioner’ to provide guidance and updates to both the private sector and government on AI regulation and safety. The Commission concluded by emphasising the need to modernise Australia’s approach to AI in order to protect human rights and foster trust in the technology.
AI remains the space to monitor closely. One thing is certain: there will be many more developments to come!
Article Authors – Aaron Goonrey & Luke Scandrett from Pinsent Masons.