An Exploration of AI’s Role in Addressing Bias in the Workplace

Introduction

The most successful organizations in the world are those that create space for diversity in the workplace. Bringing together people with different backgrounds and cultures fosters an environment for innovation. Computer scientist Tim Berners-Lee, inventor of the World Wide Web (WWW), champions diversity and states, “We need diversity of thought in the world to face the new challenges.” At the same time, because everyone has unique interests, it is challenging for large corporations to create an environment free of bias. A significant obstacle is that people managers are human beings who bring their own emotions and perspectives, and the challenge only grows in organizations with multiple layers of leadership (organizations that are not flat). In every organization, HR leaders and C-level executives formulate policies and procedures to mitigate bias in the workplace. Although the situation improves with each iteration, there is no perfect solution that eliminates discrimination entirely. Bias can affect an employee’s journey from recruitment to retirement. Meanwhile, recent technological breakthroughs have produced some groundbreaking applications of AI in human life; IBM, for instance, plans to replace 7,800 jobs with AI. This article explores opportunities to use such technology to address various biases in the workplace.

Impact of Bias on Work

Research shows that working as a team to solve problems leads to better outcomes [8]. People are more likely to take the calculated risks that lead to innovation when they have the support of a team behind them. Consensus cannot be established merely through processes or policies; it must be rooted in trust among employees. This is where bias plays a negative role: it can erode trust more quickly than we might imagine. Companies take calculated measures to avoid bias, from job descriptions to the selection of senior leadership. Yet even implicit bias can result in highly talented candidates being overlooked during recruitment, and once bias surfaces in the work environment, it becomes difficult for employees to execute even their designated duties.

How to Handle Bias in Recruitment

Applying AI to recruitment is not new, and numerous companies already use artificial intelligence (AI) in hiring. Amazon, for instance, trialed an AI-based tool to evaluate applications but stopped using it in 2018 after it was found to introduce gender bias during screening. Another example is HireVue, which offers video interviewing and recruiting-automation technology. Most of these technologies, however, focus on simplifying the application process. Controlling bias in recruitment is much harder than merely shortlisting CVs. We can employ AI-based bots to monitor bias factors all the way from the job description to the final decision, and these bots must be trained on the company’s own data with care to avoid algorithmic bias. Recent advances in high-capacity processors allow us to train models on vast amounts of data and derive results from unstructured data; products like ChatGPT and Bard are testaments to what these technologies can do.
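
As a concrete illustration of what such monitoring could look like, the sketch below computes a simple disparate-impact ratio over hypothetical screening outcomes. The data, the column names, and the four-fifths threshold are assumptions for illustration, not a description of any particular vendor’s product.

    # Illustrative sketch: flag potential adverse impact in resume screening.
    # The data is hypothetical: "group" is a self-reported demographic
    # attribute and "advanced" is 1 if the candidate passed the screen.
    import pandas as pd

    def disparate_impact_ratio(df: pd.DataFrame) -> pd.Series:
        """Selection rate of each group divided by the highest group's rate."""
        rates = df.groupby("group")["advanced"].mean()
        return rates / rates.max()

    screening = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B"],
        "advanced": [1,   1,   0,   1,   0,   0,   0],
    })

    ratios = disparate_impact_ratio(screening)
    # The common "four-fifths" rule of thumb flags ratios below 0.8 for review.
    print(ratios[ratios < 0.8])

A check like this does not decide anything on its own; it simply gives recruiters and HR a recurring signal that the screening step deserves a closer look.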

Work Assignments

Most companies believe in teamwork, and projects are often handled by more than one employee. The challenge lies in work assignments: not all tasks within a project have the same difficulty, and the more difficult tasks usually carry greater visibility and a higher chance of failure. When managers assign tasks based on their best judgment of the people on their teams, bias can quickly come into play. Although it is usually implicit, bias in meetings can sometimes be explicit and clear. That is why companies like Dialpad lean heavily on AI to avoid potential bias during meetings. “Even though there’s talk about robots eliminating jobs and widening the skills gap, artificial intelligence can be an opportunity to help level the playing field,” says Tasha Liniger, the company’s Chief Human Resources Officer [6]. An AI bot can serve as a guiding mentor, helping managers and leaders avoid bias and boosting employee morale.
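
One modest way such a mentor bot could nudge managers is by tracking who receives the high-visibility work. The sketch below is a minimal illustration; the task log, the visibility labels, and the 50% threshold are all hypothetical, and a real system would read from a project-tracking tool with employee consent.

    # Illustrative sketch: surface skew in who receives high-visibility work.
    from collections import Counter

    assignments = [
        ("alice", "high"), ("alice", "high"), ("bob", "low"),
        ("alice", "high"), ("carol", "low"), ("bob", "high"),
    ]

    high_visibility = Counter(name for name, vis in assignments if vis == "high")
    total_high = sum(high_visibility.values())

    for name in sorted({name for name, _ in assignments}):
        share = high_visibility.get(name, 0) / total_high
        if share > 0.5:
            # Nudge the manager rather than block the assignment.
            print(f"Note: {name} has received {share:.0%} of high-visibility tasks.")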

Feedback, Benefits, and Bonus

According to people scientists, bias is “an error in judgment that happens when a person allows their conscious or unconscious prejudices to affect their evaluation of another person. It usually implies an unfair judgment against or in favor of someone. Unconscious bias can lead to inflation or deflation of employee ratings, which can have serious implications in high-stakes situations directly affected by performance assessments.” [9] Although several types of bias are discussed in detail in the “performance review bias” article, the most important is similar-to-me bias, because it is very hard to address through processes or policies. Companies take many initiatives to counter it, but it is difficult for HR to train managers out of this tendency. Leaders excel when they can manage people who know more than they do, yet it is hard to set that expectation for entry-level managers in a knowledge-based industry. This is where AI bots can help: we can train large language models (LLMs) on how top-level managers and leadership experts handle complex situations, then use them to guide new managers until they can steer their own course. Based on collected performance data, bots can also be leveraged to recommend individual bonus percentages.
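
Before any bot recommends bonuses, the rating data itself should be checked for the inflation and deflation described above. The sketch below compares average ratings by whether manager and employee share a background attribute; the data and the attribute flag are hypothetical, and a gap is a prompt for a calibration review, not proof of bias.

    # Illustrative sketch: check ratings for a "similar-to-me" pattern.
    # One row per performance rating, with a flag for whether the manager
    # and employee share a background attribute.
    import pandas as pd

    ratings = pd.DataFrame({
        "manager":           ["m1", "m1", "m1", "m2", "m2", "m2"],
        "shares_background": [True, True, False, False, True, False],
        "rating":            [4.5, 4.0, 3.0, 3.5, 4.5, 3.5],
    })

    by_similarity = ratings.groupby("shares_background")["rating"].mean()
    gap = by_similarity.loc[True] - by_similarity.loc[False]
    # A persistent positive gap is a signal for calibration, not a verdict.
    print(f"Average rating gap (similar vs. not similar): {gap:.2f}")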

Critical Factors for Bias

The most common factor leading to bias is gender, especially when leaders and managers are not representative of the general population. Other factors include an employee’s religion, particularly when it differs from the team manager’s. Countries like the USA, which attract many immigrants, also see bias based on country of birth: how do we ensure that people from less-populated countries are given equal importance? AI bots can certainly help address these challenges, but only if we make sure the data we feed them does not itself introduce bias. A model trained on quality data then becomes a valuable tool for leaders and managers, helping them make decisions without the influence of these factors.
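
A small example of what “not introducing bias into the data” can mean in practice: before a model is trained, exclude sensitive attributes and screen the remaining features for obvious proxies. The columns and values below are hypothetical, and correlation is only a first-pass check, not a complete fairness audit.

    # Illustrative sketch: screen candidate features for obvious proxies of a
    # sensitive attribute before training.
    import pandas as pd

    data = pd.DataFrame({
        "gender":       [0, 0, 1, 1, 0, 1],   # sensitive attribute, excluded from training
        "years_exp":    [5, 7, 6, 4, 8, 5],
        "college_code": [1, 1, 2, 2, 1, 2],   # could leak gender (e.g., single-sex colleges)
    })

    sensitive = data["gender"]
    features = data.drop(columns=["gender"])

    proxy_risk = features.corrwith(sensitive).abs().sort_values(ascending=False)
    # Features with high correlation deserve review or removal before the
    # screening model ever sees them.
    print(proxy_risk)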

How Top Tech Companies Are Handling Bias

Companies tend not to look at the bias problem holistically, but rather in parts, such as recruitment or gender. Many companies use AI systems to reduce bias in hiring, but not all do, and what they learn about navigating bias in one part of the employee journey rarely carries over to the rest. For example, L’Oréal and Lloyds Banking Group use virtual reality (VR) to gauge how candidates handle realistic situations, but what about bias in bonuses or promotions? Diversity and inclusion are significant concerns for almost all companies, and executives often treat “diversity and inclusion” training as a proxy for addressing bias, mandating it accordingly. But will that address unconscious bias, or the similar-to-me tendency?

Challenges and Limitations of Using AI Bots to Handle Bias

One challenge is that biases can be introduced into AI systems themselves and amplified as the algorithms evolve. Another crucial aspect is user privacy and employee confidence. Machine learning models are only as good as the data they are trained on, yet collecting user data with consent takes considerable effort, and we do not want managers and employees to feel insecure about how their data is used. Recently, organizations have introduced roles such as “Chief Analytics Officer” and “Chief AI Officer.” These leaders should be responsible for designing systems that do not collect personally identifiable data and for clarifying the intention behind whatever data is collected.
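
As one small illustration of privacy by design, free-text input could be scrubbed of obvious identifiers before it is stored or used for training. The sketch below uses deliberately simple regular expressions and a made-up example sentence; a production system would rely on a dedicated PII-detection service.

    # Illustrative sketch: strip obvious personally identifiable information
    # (emails, phone numbers) from free-text feedback before storage or training.
    import re

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def redact(text: str) -> str:
        text = EMAIL.sub("[EMAIL]", text)
        text = PHONE.sub("[PHONE]", text)
        return text

    sample = "Reach me at jane.doe@example.com or +1 (555) 010-7788 about the review."
    print(redact(sample))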

Understanding unconscious bias requires a deeper level of comprehension: how do we address a tendency that has not yet even been named or studied? This is where feedback comes into play. Companies need to gather comprehensive feedback at every step of the employee journey and use advanced language models to surface new types of bias.
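
One way a language model could help triage such feedback is zero-shot classification against an evolving set of bias themes. The sketch below uses the open-source Hugging Face transformers pipeline; the model choice, the label set, and the feedback text are assumptions for illustration, not a recommendation of a specific tool.

    # Illustrative sketch: triage free-text feedback for possible bias themes
    # with an off-the-shelf zero-shot classifier (Hugging Face transformers).
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    feedback = "I keep getting passed over for the customer-facing projects."
    labels = ["assignment bias", "promotion bias", "no bias concern"]

    result = classifier(feedback, candidate_labels=labels)
    print(dict(zip(result["labels"], [round(s, 2) for s in result["scores"]])))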

Conclusion

“If I could somehow change your mind about something you have unconscious thoughts about in an hour of training, the world would be pretty chaotic. People would have less stable personalities than they do. But they have stable personalities based on decades of experience,” says Harvard social science professor and author Dr. Frank Dobbin. Human resource managers often assume that training on bias and diversity and inclusion is enough to handle bias in an organization. Yet people develop their character through culture and what they experience as children, and that is not easy to change. Such training can serve as an awareness tool, but the solution to bias lies beyond training. One option is to use technology to guide us, if not to handle bias outright.

References

  1. https://bit.ly/45fXv20
  2. https://bit.ly/3DZ6hFt
  3. https://bit.ly/3DZ6hFt
  4. https://bit.ly/3OxaBkk
  5. https://bit.ly/47AGOjg
  6. https://bit.ly/3E0kmmd
  7. https://bit.ly/3OXZgeq
  8. https://bit.ly/44h3LoV
  9. https://bit.ly/45uJBsD
  10. https://bit.ly/3P1z9n6
  11. https://bit.ly/3KMA9ZD
Vijay Balasubramanian, Microsoft

Vijayasarathi Balasubramanian, known as Vijay, works for Microsoft as a Senior Data and Applied Scientist. Vijay obtained his master’s in data science from the University of Notre Dame and has 17 years of progressive professional experience in technology and its practical applications. He is a member of prestigious professional bodies, including IEEE, IET, and BCS. The Indian Achievers Forum awarded Vijay the “International Achievers Award,” and the Golden Bridge (Globee) Awards named him Data Scientist of the Year. Vijay frequently speaks about technology and its applications in various forums, especially in Human Resources. He can be reached at [email protected].
