AI and trust

As generative artificial intelligence enters the workplace, companies need to consider the risks and the benefits.

The number of businesses using generative AI has exploded: a KPMG survey of global CEOs conducted this year found that 70% are investing heavily in generative AI and integrating these technologies into both daily and long-term operations.

It’s no wonder that these technologies are increasingly coveted in the world of work. Automation in the form of chatbots or predictive text input can save time and resources by freeing payroll practitioners who would otherwise be busy with paperwork and customer service issues, minimizing human error, and allowing payroll teams to focus on long-term projects and goals.

An ADP survey on the Potential of Payroll in 2024 found that organizations are looking to increase their payroll team’s technical capacities, with a focus on spending more time on business process improvement (56% of respondents), strategic planning (55%), and integrating emerging technologies into the workflow (56%).

However, there are some valid concerns about integrating AI into our lives. At its core, technology reflects the humans who’ve created it, and this includes all of our biases. That’s why it’s so important for businesses to consider the ways AI can go awry, and implement clear policies to ensure that employees don’t lose trust in new technology. 

Potential biases in AI

AI doesn’t simply exist in a vacuum; it derives information and context from pre-existing data sources that we humans have created. Companies considering using AI tools in HR need to be aware of existing biases and inequalities, and take active steps to ensure new technology does not perpetuate them.

Oyidiya Oji, a policy and advocacy advisor for digital rights at the European Network Against Racism (ENAR), says relying too much on AI tools for hiring purposes could lead to unintended discrimination. 

“If we’re using AI in the hiring process, for example, we use existing data from the workforce that we currently have, and that is most likely going to be people of a certain age, a certain background, and most likely white and male,” Oji says.  

“If we only continue to use data from past recruitment, which most likely is composed of people of one type of background — white, male, etc. — then we will be recruiting these kinds of people again, because that is how we are training these models,” she says. Screening tools trained on such data could filter out candidates based on their addresses, their surnames, or even the way they’ve structured their resumes.

This pattern can also affect the reach that job postings have, Oji notes. “These job platforms often categorize ads and send them to users depending on how likely they were to apply for such a position,” she says. This kind of categorization can embed bias, because the likelihood of applying for more senior roles already varies along lines such as race and gender.

ADP’s use of AI in recruiting is carefully considered. “ADP has a set of AI ethics that acts as our north star,” says Helena Almeida, VP Managing Counsel at ADP and member of the company’s AI & Data Ethics Council. “We understand that the impact of AI on individuals and communities is something we need to be particularly sensitive to.”

At ADP, considering how to reduce existing bias and keep new bias from being introduced into recruiting tools happens at every stage, from ideation to implementation, and all products undergo ongoing bias testing.

ADP’s Candidate Relevancy (also known as Profile Relevancy) tool uses machine learning to create a relevance score for job applicants on its recruiting platforms. The tool does not use or consider personal identifiers such as gender, age, race, ethnicity, address or any other protected demographic information. And it’s meant to be an assist for hiring managers — not a replacement. “Hiring managers still need to be reading resumes and interviewing candidates,” Almeida says.
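ADP hasn’t published the internals of Candidate Relevancy, but the two safeguards described above, keeping protected attributes out of a model’s inputs and auditing its outputs for bias, can be illustrated with a minimal sketch. Everything in the example below is a hypothetical assumption: the field names, the toy skills-overlap score, and the four-fifths-rule audit threshold are illustrative, not ADP’s implementation.

```python
# Hypothetical sketch: strip protected attributes before scoring candidates,
# then audit the resulting selection rates for adverse impact. Field names,
# the toy scoring rule, and the 0.80 ("four-fifths rule") threshold are
# illustrative assumptions, not ADP's actual Candidate Relevancy logic.

PROTECTED_FIELDS = {"gender", "age", "race", "ethnicity", "address"}

def strip_protected(candidate: dict) -> dict:
    """Remove fields the scoring model is never allowed to see."""
    return {k: v for k, v in candidate.items() if k not in PROTECTED_FIELDS}

def relevance_score(candidate: dict) -> float:
    """Toy relevance score: overlap between candidate skills and job skills."""
    features = strip_protected(candidate)
    required = {"python", "payroll", "sql"}
    matched = required & set(features.get("skills", []))
    return len(matched) / len(required)

def adverse_impact_ratio(candidates: list[dict], cutoff: float = 0.5) -> float:
    """Compare selection rates between two audit groups (four-fifths rule).
    Group labels are used only for auditing, never for scoring."""
    rates = {}
    for group in ("A", "B"):
        members = [c for c in candidates if c["audit_group"] == group]
        selected = [c for c in members if relevance_score(c) >= cutoff]
        rates[group] = len(selected) / len(members) if members else 0.0
    low, high = min(rates.values()), max(rates.values())
    return low / high if high else 1.0

if __name__ == "__main__":
    pool = [
        {"skills": ["python", "sql"], "gender": "f", "audit_group": "A"},
        {"skills": ["payroll"], "gender": "m", "audit_group": "B"},
        {"skills": ["python", "payroll", "sql"], "gender": "m", "audit_group": "B"},
        {"skills": ["sql", "payroll"], "gender": "f", "audit_group": "A"},
    ]
    ratio = adverse_impact_ratio(pool)
    print(f"adverse impact ratio: {ratio:.2f} (flag for review if below 0.80)")
```

The point of the sketch is the separation of duties: demographic data never reaches the scoring function, but it can still be used, separately and with appropriate consent, to check whether the scores systematically disadvantage one group.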

Tim Dasey is a U.S.-based AI ethics consultant and the author of the book “Wisdom Factories.” He sees a danger in the blurring of the lines between professional and personal data when it comes to using AI to create user-specific tools and profiles. 

“Some part of AI in the workplace is going to involve personal assistance, and they’re going to be very personalized,” he says. “So if, for example, I have ADHD and my brain tends to wander off track sometimes, and I have a dictation tool that alerts me when I’m getting off-topic, it could be very useful to me. But you’re getting this highly individualized, almost psychological profile of a person that’s now also being used in a workspace.” If other team members have access to this kind of personal information, it could lead to discrimination.

Another potential issue for Dasey is how AI, especially facial recognition software and voice-activated functions, can expand surveillance in the workplace. “There is the potential threat of companies taking our conversations via web conferences, for example, and analyzing them with AI to assess the morale in the company,” he says, either by scanning for keywords and phrases or by analyzing employees’ body language.

This scenario isn’t that far-fetched: Increasingly sophisticated productivity tracking tools are allowing employers to collect more detailed data on an employee’s mood. Gartner found in 2022 that the number of large employers using tools to track worker productivity had doubled since the beginning of the pandemic, to 60%. Employees whose work methods don’t conform to the productivity trackers’ criteria could be at a disadvantage in their performance reviews and chances for promotions, or even lose their jobs.

Building trust

So how can companies not only ensure that their AI tools don’t inadvertently reproduce biases but also build trust among the workforce so that these technologies enhance the employee experience? One of the main pillars of trust is transparency. 

An ENAR report on AI in the workplace emphasized that part of this transparency includes informing employees and potential recruits when they’re interacting with an AI-powered system, as well as clearly asking for consent and implementing an “opt-out” path. 

“One should get consent for recording interactions to feed generative AI,” says Giselle Mota, Chief of Product Inclusion at ADP. Additionally, AI must not operate without human oversight. This also means training HR and payroll professionals to better utilize AI, including algorithmic selection tools, so that diverse candidates who don’t match the current workforce’s makeup aren’t filtered out.

“It’s very important to center the human aspect of human resources, because in the end we’re talking about the lives of people,” Oji says. “It’s a company’s duty to understand how AI works and how it can lead to discriminatory practices.” She also emphasizes that this responsibility cannot lie with just one person or even one team. “There need to be allies inside the company ready to make changes, and who are involved in the decision-making process.”


Dasey explains that “the only way to really know whether you’re setting yourself up for improvement or not [when it comes to AI] is to have an individual person or a small team that’s responsible for testing these AI products in realistic workflows and with realistic data, all while reviewing them closely.”

An HR department that’s aware of biases that can slip into AI tools and can override them will be more capable of acting in an employee’s best interest, thereby solidifying trust in a company’s HR professionals as well as in the AI tools being used. “Trust needs to be a factor in how AI is designed and used. Employees want to be treated fairly,” Mota says. “That’s not new, but as data automation begins making life easier for HR, you have to make it human. People don’t want to be thought of as just a cog in the machine.”

Since AI grows and evolves by being fed data, it’s also imperative that employees understand exactly how their personal data is being used and for what purposes.

“As we build, we need to make sure we protect people’s privacy and data,” Mota adds. At ADP, she works together with the company’s AI ethics board and data governance team to ensure the protection of sensitive data while also ensuring that generative AI tools like their chatbot feature are working with inclusive language. “We’re still rolling things out, but we’re holding each other accountable,” she says.

Final takeaways

For many HR and payroll professionals, there is some fear of jobs being replaced by automation and generative AI. That’s why it’s also important for businesses to help their payroll and HR professionals understand how AI will be a positive addition that, instead of making them obsolete, will open up time and resources to focus on long-term projects and strategic innovations. Once payroll and HR professionals are working in sync with these technologies, they’ll be able to pass those benefits on to the rest of a company’s employees.

“We’re at the birth of a technology where it’s in a very raw form,” Dasey says. “So what I usually tell companies is to get your policies in place about how you’re going to deal with these ethical questions.” It’s vital for teams developing AI business tools to partner with HR and payroll specialists to ensure best practices are being followed.

AI may be an increasingly indispensable development in the world of work, but it will only achieve its intended benefits with human oversight and finesse. 

Luisa Rollenhagen

Luisa Rollenhagen is an Argentinian-German freelance journalist based in Berlin. She’s worked for publications including Deutsche Welle, GQ, The Guardian, and BuzzFeed.