AI in Recruitment: The Big Interview with Duri Chitayat, CTO, Safeguard Global

Employers have long used artificial intelligence (AI) in hiring for tasks such as candidate screening, but the rapid development of new tools is accelerating automation.

Techopedia spoke to Duri Chitayat, Chief Technology Officer (CTO) at global workforce management company Safeguard Global, about the role of AI in job application and recruitment processes.

We delve into AI and the importance of maintaining skepticism and human oversight in the process.

Key Takeaways

  • Whether AI makes workplaces more diverse and inclusive will depend on the people who implement and operate the technology.
  • Explainability is vital so that experts can understand and calibrate the features selected for a model.
  • Technical expertise in the recruiting and hiring space is essential so that organizations do not use technology without understanding how it works, which poses a significant risk.
  • Blockchain can be used to secure the rights to applicant data so it can be protected and shared across potential employers in a way they control.

Q: How can AI tools make job application and hiring processes more user-friendly for job seekers and employers in the future?

A: Employers have used AI to automate recruiting and hiring tasks like resume screening and initial candidate evaluations for years, and when used correctly, AI can accelerate hiring and improve the applicant experience.

But there’s a risk of overlooking qualified candidates if the algorithm doesn’t recognize the appropriate range of skills and experiences or the adaptive nature of people’s careers.


That’s why we will never be able to remove the human element – it will remain essential. But at the same time, the opportunities for innovation are incredibly exciting.

There’s much handwringing over the increased use of AI, but in my view, the debate is pointless.

Like the controversy about using mobile phones in the workplace years ago, AI will play a significant role in our lives — and already does.

Therefore, we should focus on applying the technology that can be most useful while considering how best to manage and mitigate unintended outcomes.

The benefits to organizations are too great. For example, AI can help us analyze data from past hiring to improve future strategies, such as identifying effective sourcing channels or traits linked to long-term success for an organization.

The benefits extend to job seekers as well: helping people identify and access relevant opportunities, making application processes more efficient, providing insights into companies, such as compensation comparisons, and sharing verifiable credentials.

Candidates are already beginning to reuse elements of the application process that are especially time-consuming, like work samples.

AI and Data Processing

Q: How do you see AI-driven talent acquisition and retention strategies contributing to global diversity and inclusion efforts in the workplace?

A: If we’re judicious about how we’re applying AI, its strength lies in its ability to process vast amounts of data without the inherent biases that can affect human decision-making.

Customized algorithms can focus purely on skills and qualifications, effectively sidelining demographic factors that often lead to biased hiring decisions. However, the design of tools is critical. If the AI’s objectives aren’t thoughtfully crafted to promote diversity and counteract bias, the system might simply mirror existing practices, thus embedding the very biases it seeks to eliminate.

Then there is the human element: the HR professionals tasked with applying these AI tools are paramount, because how they make decisions based on the results must reflect the goal of equity and inclusion.

Whether AI makes workplaces more diverse and inclusive will depend on the people who implement and operate the technology. It’s not just about what AI can do but, more importantly, what it should do.

I hope that organizations and platforms that develop and apply these technologies do so with careful design, continuous monitoring, and intentionality toward the outcomes.

Q: How can organizations avoid AI algorithm bias in recruitment?

A: Avoiding bias in AI algorithms starts with ensuring robust training sets that are representative of the population. Further, algorithm explainability is key so that experts can understand and calibrate the features selected for a model.
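The explainability point above can be illustrated with a deliberately simple sketch: a linear screening score that exposes each feature's contribution instead of returning a single opaque number. The weights and feature names here are hypothetical, not a real model.

```python
# Minimal sketch of an explainable screening score.
# Hypothetical weights and features, for illustration only: exposing each
# feature's contribution lets reviewers audit and recalibrate what drives
# a decision.

WEIGHTS = {
    "years_experience": 0.4,
    "skill_match": 1.2,
    "certifications": 0.3,
}

def score_candidate(features: dict) -> tuple[float, dict]:
    """Return the total score plus a per-feature breakdown."""
    contributions = {
        name: WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS
    }
    return sum(contributions.values()), contributions

total, breakdown = score_candidate(
    {"years_experience": 5, "skill_match": 0.8, "certifications": 2}
)
# The breakdown makes it obvious when a single feature dominates a decision.
```

A real system would use a trained model, but the same principle applies: experts can only calibrate features they can see.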

But there’s a related issue that isn’t getting much attention that will be increasingly salient as AI evolves, and that’s a need for technical expertise in the recruiting and hiring space. Without it, people purchase and use technology without understanding how it works, which poses a significant risk.

To create organizations where people can understand complex technology, not just in IT but in other areas such as HR, we need to offer incentives that encourage employees to upskill and build technical competency, because their jobs will increasingly require it.

In the interim, fostering a culture of curiosity and forming partnerships between technology leaders and the leaders of other business units is a good approach.

AI and the Human Touch

Q: How should HR departments balance adopting automation and AI with the importance of maintaining a human touch in their interactions with employees and candidates?

A: At Safeguard Global, we see it as a parallel track. We’re investing in AI-driven automation at every touchpoint that makes sense for the work, from the first application to confirming qualifications, while having our recruiting team apply their own analysis to the data our systems produce.

While I am a passionate advocate for AI in this space, I urge everyone who uses it to maintain a healthy skepticism. This technology is evolving rapidly, and we shouldn’t buy into each iteration wholesale but rather continuously examine results.

Backtesting is essential, and strategies like combative AI, where you use different systems and compare their results, can also be helpful. Approaching AI’s role in work requires a commitment to continuous learning.
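The backtesting and system-comparison strategies described above can be sketched in a few lines. Everything here is illustrative: the historical records and the two rule-based "systems" are toy stand-ins, not real screening models.

```python
# Minimal sketch of backtesting two screening systems against historical
# outcomes, and flagging the cases where they disagree.
# All data and rules are hypothetical, for illustration only.

historical = [
    # (years_experience, passed_probation)
    (1, False), (3, True), (5, True), (8, True), (2, False),
]

def system_a(years):
    return years >= 3  # stricter rule

def system_b(years):
    return years >= 2  # looser rule

def backtest(system):
    """Fraction of historical records the system classifies correctly."""
    hits = sum(system(years) == outcome for years, outcome in historical)
    return hits / len(historical)

# Cases where the two systems disagree are exactly the ones worth a
# human review, which is the point of running systems side by side.
disagreements = [y for y, _ in historical if system_a(y) != system_b(y)]
```

Here `backtest(system_a)` scores 1.0 against the sample history, `system_b` scores 0.8, and the single disagreement (the 2-year candidate) pinpoints where human judgment is needed.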

Q: What strategies can HR teams use to remain agile and responsive to changing technology and market dynamics?

A: Creating a culture where people are willing to be skeptical and curious and learn must be a top priority. We need to recognize that these technologies are fundamentally changing the nature of the work as they begin to take over sourcing and recruiting tasks.

One of the realities is that as these tools continue to change what tasks and functions are automated, it will change the HR team’s role and responsibilities. Upskilling and training can help people in those roles stay on the leading edge and adjust to new ways of working.

The HR team, much like the payroll leaders in the past, will have more time to focus on strategic work, such as guarding against risks in hiring processes and ensuring the company is building the workforce it needs, rather than more tactical work.

Ethics & AI

Q: What are the potential challenges or ethical concerns related to the widespread use of AI in HR, and how do you recommend addressing them?

A: A primary ethical concern is the risk that overreliance on AI in HR could cement existing social inequities.

We’ve talked about the importance of maintaining a sense of skepticism and human oversight in the process and emphasized the use of backtesting.

These are all critical strategies. But it’s also essential to broaden our view of who the “customer” is.

For every decision you make about your business, you’ll define your responsibilities according to who you think your customer is.

For example, if you think you only serve your investors, you won’t necessarily see addressing inequities as part of your job; your focus will be on EBITDA [Earnings before interest, taxes, depreciation, and amortization].

A more enlightened approach would be to view all the people whose lives your company affects as customers. That includes current and potential employees, stakeholders, investors, partners, clients, and the whole ecosystem.

When we talk about ethical challenges in business, we tend to focus on the impact on individuals. I think we need to widen that lens and consider the broader social fabric too.

Does Blockchain Serve a Role in AI Recruitment?

Q: What is the role of data privacy and security when implementing AI solutions in HR, especially when dealing with remote workforces?

A: The baseline is to have a robust cybersecurity strategy and assets in place, which is incredibly important before, during, and after an AI solution implementation.

But I think we may finally see more widespread adoption of blockchain technology, which has been a solution in search of a problem in many respects.

Blockchain can secure the rights to your data so it can be protected and shared in a way you control, i.e., a self-sovereign model.

The Velocity Network Foundation launched an Internet of Careers platform a few years ago that aims to reinvent how career records are shared using blockchain.

With widespread adoption, this could be a new data privacy and security paradigm that immeasurably improves the job seeker experience because candidates can use the same credentials across job applications, such as code samples, test scores, etc.
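One building block of reusable, verifiable credentials can be sketched with plain hashing: a tamper-evident fingerprint of a credential record. This is a toy illustration only; real self-sovereign systems such as the one described above rely on digital signatures and a distributed ledger, not a bare hash.

```python
import hashlib
import json

# Toy sketch of a tamper-evident credential record (hypothetical fields).
# A canonical serialization is hashed so any change to the credential
# changes its fingerprint.

def fingerprint(credential: dict) -> str:
    canonical = json.dumps(credential, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

credential = {"holder": "candidate-123", "type": "test_score", "value": 92}
digest = fingerprint(credential)

# An employer comparing fingerprints can detect that this record was
# altered after it was originally attested.
tampered = {**credential, "value": 99}
assert fingerprint(tampered) != digest
```

The candidate could present the same attested record to many employers, which is the reuse benefit the interview describes.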


Q: What emerging AI technologies or trends do you think will have the most significant impact on global recruitment and retention in 2024? How should companies prepare for them?

A: In the short term, I think we’ll see AI tools accelerate processes like tailoring resumes to specific jobs on the candidate side. On the employer side, natural language processing will streamline job description creation.

Longer term, maybe within the next five years, I believe the industry will develop Chat UX that lets recruiters have a conversation with candidates and use a single interface to take action instead of 10 different systems.

This is an enormous pain point that introduces delays and holds up contracts, which can result in companies missing out on the ideal candidate.

I think we’ll also see more use of predictive analytics so companies can leverage career data to predict outcomes, like how long a potential employee’s tenure will likely be, to enable better decision-making.
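The predictive-analytics idea above can be reduced to a baseline sketch: estimating likely tenure from the average of past tenures per role. The records and roles below are hypothetical, and a production model would use far richer features than a per-role mean.

```python
from collections import defaultdict

# Illustrative baseline: predict likely tenure (in years) as the average
# of historical tenures for the same role. Hypothetical data only.

history = [
    ("engineer", 3.0),
    ("engineer", 5.0),
    ("analyst", 2.0),
    ("analyst", 4.0),
]

def build_predictor(records):
    """Map each role to the mean tenure observed in the records."""
    by_role = defaultdict(list)
    for role, tenure in records:
        by_role[role].append(tenure)
    return {role: sum(t) / len(t) for role, t in by_role.items()}

predictor = build_predictor(history)
# predictor["engineer"] == 4.0, predictor["analyst"] == 3.0
```

Even this crude baseline shows the decision-support value: an expected-tenure estimate gives hiring teams one more data point to weigh.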

To prepare for these emerging technologies, companies need to make data a first-class citizen within their organizations. We’ve talked about data’s value for years, and it has fundamentally changed how we work. Now, AI is taking that to the next level.

About Duri Chitayat

As Chief Technology Officer, Duri leads Safeguard’s technology team to deliver products and experiences that improve people’s lives, including ChatSG, which brings AI to payroll and HR.

Duri holds degrees from Boston College and NYU Stern and is currently earning his master’s degree in Computer Science from Johns Hopkins University.

He has developed and directed high-performance engineering organizations in AdTech, MedTech, Banking, and Finance on three continents.

Duri Chitayat LinkedIn
Safeguard Global LinkedIn



Nicole Willing
Technology Journalist

Nicole is a professional journalist with 20 years of experience in writing and editing. Her expertise spans both the tech and financial industries. She has developed expertise in covering commodity, equity, and cryptocurrency markets, as well as the latest trends across the technology sector, from semiconductors to electric vehicles. She holds a degree in Journalism from City University, London. Having embraced the digital nomad lifestyle, she can usually be found on the beach brushing sand out of her keyboard in between snorkeling trips.