Earlier this year, Samsung banned its employees from using ChatGPT on company devices after employees inadvertently revealed sensitive information to the chatbot. HR and recruitment teams looking to invest in artificial intelligence (AI) solutions like ChatGPT to improve the candidate experience and application process must prioritize safeguarding candidate information to avoid such security concerns.
As we stand on the brink of a new era in talent acquisition, understanding how to effectively and responsibly implement AI-driven solutions is crucial for companies looking to stay ahead. To delve deeper into this critical topic, I spoke with Alfons Staerk, a luminary whose impressive pedigree includes tenures at Amazon and Microsoft; he is currently the Senior Director of Global Recruiting Technology and Experience at Boston Consulting Group.
Staerk’s insights and predictions around ethical AI and how it intersects with talent acquisition will be a useful guide for companies in navigating the complex yet exciting world of AI in recruitment.
Redefining Roles: How Technology Complements Human Skills
Q: Our newsfeeds are currently full of headlines about technology replacing people, but the magic happens when people and technology combine. Would you agree?
A: Yes, it’s so important. If we look at the development of technology, it never truly replaces people. It changes what they do, how they do it, and the quality of their outcomes. Even if we look at super cutting-edge technologies like AI, they help people rather than replace them.
Ultimately, technology empowers people to be better at what they’re doing, more effective, and quicker to get to the core of their work. Then, people can spend time on what they should be doing rather than on less interesting tasks that don’t provide as much added value.
AI: Transforming Recruitment Beyond Automation
Q: Technology can also help people find their next opportunity. How have you seen recruitment technologies evolve over the years, and where do you see them heading in the future?
A: If you look back at the beginning, recruiting technology wasn’t very helpful. All it did was replace a paper process with a digital one, but recruiters were still doing the same thing manually. They were still following all the same steps and process flows.
The second step was automation. They began figuring out where you need a human to advance to the next step versus where you can build some algorithm, and that was a massive help for recruiters because it freed up their time and removed some of those non-value-added steps they were doing.
The significant change we are now seeing with AI is not that it removes steps as automation did, but that it helps us manage information overload.
Recruiters receive thousands of job applications and have no way of manually reviewing each one while considering the full candidate profile in order to make better decisions. But AI can now do that. That’s the significant change and difference we’re seeing right now.
In the future, AI will help us find the answers to all our questions. For instance, let’s say I’m looking for a product manager with 25 years of experience.
In that case, AI will better predict whether somebody might be successful in an interview and get a job offer, whether somebody is the right cultural fit for the company, and whether they will perform well in the role.
The Critical Role of Human Oversight in AI Recruitment
Q: Given the unforeseen impact of generative AI and ChatGPT, how can companies effectively balance the utilization of AI in functions like recruitment while maintaining stringent data security as we approach 2024? What are the trends you see emerging in ethical AI in talent acquisition?
A: ChatGPT came like an avalanche, and nobody saw it coming. It changed so much in such a short time. But in 2024, businesses cannot have every department dabbling in AI. It needs to be on the CEO’s agenda and managed strategically from the top. The organization needs to understand what they want to use AI for and what they do not want to use AI for. They must put governance and policies in place so employees know precisely the boundaries they are setting for AI.
Vendor audits are crucial. You must ensure that the solutions you bring follow responsible AI principles. It’s important to remember that AI can considerably improve your business, but it can also cause a lot of damage if you do it wrong.
Employee education is super important today. Everyone is using AI, but not everyone is educated about it. Employees must understand critical topics such as AI bias and potential intellectual property issues. They also need to be aware of the boundaries of AI-generated content. Sure, it can create content for you, but you want to make sure you’re not blindly trusting that content and running into AI hallucinations.
Similarly, AI-driven results around candidate matching and scoring are great in recruiting. But we will always need a human who takes a critical look and accepts those inputs for what they are. It requires a CEO-level approach: starting with policies, governance, and vendor audits, understanding what the solutions do, and ensuring that employees share that understanding and know the opportunities, boundaries, and risks of AI.
Educating Employees on AI Interpretation and Ethics
Q: Many businesses are attempting to define what a responsible AI program should look like. Can you share what you believe it should look like and expand on why it’s so crucial in today’s recruitment landscape?
A: Firstly, when looking into vendor solutions or developing your own, you need to understand how the AI models you’re using are trained and what they are doing. What’s the data set that goes into the model? Is it large enough? Is it unbiased? Do you, or does the vendor, have bias controls in place to ensure that AI doesn’t add bias to your process and system but reduces it?
We have all our vendors audited regularly by a third party to ensure bias isn’t introduced by us (BCG) or by the vendor. The third party reviews the vendors and the policies, security, and data management they have in place to ensure that all data is appropriately managed. The second critical piece is the employee education I mentioned.
In recruitment, employers need to understand that if they get a score on a candidate, that score is just one perspective. It’s just one input driven by a dataset and AI algorithm.
AI can help recruiters review large data sets. It allows them to see more data points than they could usually see. But you still need to bring in the human element. You need to assess the AI’s results and see it for what it is: an aid, not a decision-maker.
The Future of Ethical AI in Talent Acquisition
Q: How should HR and recruitment teams integrate AI solutions into their existing processes?
A: My advice is to start by understanding your problems and opportunities. For example, when I worked at Amazon, our problem was not finding candidates. We had thousands and tens of thousands of applications for a single engineering role. Our problem was looking at a thousand resumes and finding the right person. So the AI solutions we needed differed from those for the problems we’re facing at BCG, where we need a highly specialized expert who knows sustainable energy.
Instead of sifting through many applications, we must find and source the perfect candidate, even if they are not currently on a job hunt. These two examples require two very different types of AI solutions, so it’s crucial to understand your problem. Only then can you look into vendor solutions, ask tough questions, and evaluate whether the technology works.
Transforming Candidate Experience with AI
Q: What were the most significant challenges and rewards you’ve encountered in using AI for recruitment?
A: One of the biggest challenges is getting bias-free data sets at the correct scale. Building an AI solution that scores candidates for you is extremely exciting. However, we didn’t have enough resumes to train the model for bias-free scoring, so it didn’t work for all the roles I wanted to use it for. It worked for many, but not for all.
It’s challenging to obtain the right data sets and build bias-free models based on them. However, when you do, you can create magical experiences for candidates and recruiters.
In the past, I have encountered very long and tedious application review processes. It often took candidates two weeks to go from applying to being contacted for a phone screen. But now, we can use AI for specific jobs and bring that down to milliseconds. For example, when you submit your application, before you can even close the browser window, we are ready to offer you a phone screening interview.
We had to introduce some delays to manage the user experience, so we would only send out those invites the next day. Still, in a competitive market where candidates no longer had to wait two weeks but received an interview invite the next day, that change was just magical.
I’m also super proud of the matching experience, where candidates come in and say, “Hey, I’m this person. I have that experience. I might be interested in that job, but you tell me.”
Then, we would give them their five top matches. They would often discover jobs they had not thought about before because they had not heard of that department, or because the role had more flexibility in location than they thought. You come in as a candidate and discover something you weren’t even aware existed, and we can help you with that.
About Alfons Staerk
Alfons Staerk is the Senior Director of Global Recruiting Technology and Experience at Boston Consulting Group and has worked across Fortune 100/500 corporations, including Microsoft, Amazon, and BCG, over a 20-year career.
He led four of Amazon’s global supply chain automation initiatives from concept to launch, delivering annual savings of more than $200m. He also built a multibillion-dollar internal hiring system for Amazon that supports 130m visits, 10m applications, and 75,000 new hires each year.
Staerk’s analysis highlights the importance of a balanced, informed approach to AI adoption, where human oversight, ethical considerations, and strategic management form the cornerstone of successful implementation.
As we consider the history of ethical AI in talent acquisition and the trends ahead, we must embrace AI’s potential while conscientiously shaping its impact on the recruitment process, ensuring it aligns with the core values and needs of both organizations and candidates.
As we look toward the future, it’s evident that AI in recruitment is not just a tool for innovation but a catalyst for creating more inclusive, efficient, and ethically sound hiring practices.