As many of us know, half the fun of implementing an AI recruiter bot is giving it a cool name that reflects your vision and your brand.
For some, that fun stops when they find out that candidates like the bot better than the recruiters. The fact is, candidates seem to prefer working with AI recruiter bots over working with real people.
At the recent ERE Recruiting Conference and CandE Awards event in Anaheim, Survale held our Customer Advisory Board meeting. One of our advising customers mentioned they were using Survale to run A/B satisfaction tests with their homegrown chatbots. The results clearly showed that candidates were more satisfied with bots than with humans. We speculated about why and agreed on the most likely reasons, though we were all a little hesitant to accept the AI recruiter bot satisfaction data as bulletproof.
And what were the reasons we agreed on as driving higher satisfaction? a) Candidates are actually being communicated with—something many are not used to, especially at the top of the funnel, and b) they get quick answers, avoiding the wait for an email response that may or may not come.
As the conference went on, I found this conclusion echoed in numerous AI-focused sessions and case studies. To paint the picture: plenty of TA practitioners squirmed nervously as influencers and researchers described the near future of talent acquisition AI as one with less need for recruiters and the classic recruiter skill set.
To TA practitioners and leaders, hearing that candidates actually like AI recruiter bots better can feel like getting kicked when you're down. But the reality is that answering the same questions over and over again is really not a good use of anyone's time. Nor is scheduling interviews. These are perfect tasks for AI.
But Do Candidates REALLY Like AI Recruiter Bots Better?
This question stuck with me. The trick with AI, whether it's a general LLM or one overlaid with proprietary data as it is within Survale, is not knowing whether it missed anything. In other words, I can find the content of the answer to my question to be quite good. But I don't know what I don't know. And I often find that when I ask a question another way, there are nuggets that should have been included in the initial answer and would have been missed if I had not persisted.
This is important because your candidates certainly don't know what they don't know. And you have to be sure that the answers AI chatbots give them are complete, accurate, and fully factual, something the candidate has no way of knowing in the moment they get the answer.
For example, let’s say a chatbot engages a candidate, tells them there is a job available in their desired location for which they would be qualified. It then does an initial screen and schedules an interview for them. A candidate would naturally be highly satisfied with that interaction. But when you survey them after the interview, you might find that the job was not really available and the hiring manager interviewed them for a different job in a different location. Now that candidate doesn’t love that chatbot as much anymore. But if you surveyed the candidate right after the chatbot interaction, they would have been all thumbs up and a huge problem would have gone undetected.
The point is that we are at an inflection point for recruiting and AI. In order to make sure that our AI is delivering the value it’s capable of, we all need to be getting feedback from candidates. We need to be getting that feedback at multiple points along the journey. That feedback needs to be both statistical and comment based. And it needs to happen as part of the process so issues can be spotted in real time and corrected. As we’ve seen with other recruiting automation, there are multiple points of failure and it’s not always obvious where they are.
Feedback is not only important for ensuring your people, processes, technologies and partners are optimized; it's now a key part of training your AI models to be accurate and to spot problems like the one I described. The day will come when your AI can monitor candidate feedback and refine its own performance. Until then, and even after, robust, multi-stage feedback will be crucial.