AI models can vastly increase job candidate pools. They might also improve diversity.

Jun 19, 2024, 12:41pm EDT

The Scene

There’s a vibe shift in the way people talk about diversity in Silicon Valley.

Last week, Scale AI CEO Alexandr Wang published a memo titled “Meritocracy at Scale” in which he laid out a hiring strategy focused on merit, excellence and intelligence.

“We treat everyone as an individual. We do not unfairly stereotype, tokenize, or otherwise treat anyone as a member of a demographic group rather than an individual,” he wrote. “Everyone who joins Scale can be confident that they were chosen for their outstanding talent, not any other reasons.”

Coinbase made some similar points last year, when it announced a new, more meritocratic hiring policy.

“The implication that women or minorities need anything other than an even playing field to compete and win on merit has always been insulting, and I’m glad everyone can now say it out loud,” said Lulu Meservey, a tech industry communications executive, on X Monday.

But critics worry that diversity will be ignored or downplayed as an end goal.

One idea to address that is to get more creative about finding talent. Nancy Xu, CEO of AI-driven recruiting firm Moonhub, has a counterintuitive theory that artificial intelligence can increase diversity by helping recruiters find “needle in a haystack” prospects that conventional methods overlook. She began working on the idea at Stanford’s AI Lab when she was a PhD student. Our edited conversation is below.

The View From Nancy Xu

Reed: How do you measure success when it comes to more diverse hiring?

Nancy: A lot of people talk about mitigating bias within AI systems. I think there are two other ways you should think about bias. The first is how to use AI to make humans less biased, and then how to use AI to make AI less biased. At Moonhub, we give most of the decision-making power to the human today, and I think for most applications of AI, a lot of the ultimate decision-making is still with the human.

The first step is to build AI systems to make the human less biased. In the recruiting world, the first step in any search is to ask what the hiring manager is looking for. An example might be: My hiring manager wants someone who has ‘signs of excellence.’ The recruiter, based on all their priors of how they’ve hired in the past, might say: ‘I’m going to assume this means they’ve worked at Google or Meta.’

Obviously, that is not the only sign of excellence, and what we’ve found is that people — no fault to them — can be lazy. Sometimes it is unintentional: they might be applying those two filters because those are the two ways they know of to find people who are excellent. But you might also find excellent people who went to a state school, never worked at Google, never worked at Meta, but happened to have paid their way through college with three engineering jobs, graduated, and had an amazing career at a place where they got promoted three times.

Because recruiters especially tend to go for the easiest path, almost every recruiter reaches out to the same 20,000 people even though there are probably a million people out there they could be reaching out to. The power of AI is to help humans think about how to go from the 20,000 to the million. We have helped some of our customers hire people in Wisconsin that they never would have met otherwise, because their pool is very much focused on people they know in Silicon Valley. One of them told us one of the best candidates they ever hired was one of these people, whose background they had never really considered before.

How does AI help you find the person in Wisconsin?

Two steps. The first is the biggest one: using AI to make humans less biased. Not enough people think about that today. AI learns to replicate human values, and then builds a system that creates this new reality. If you can teach humans to be better, that creates better training data for the future, for AI to actually be better. But right now the training data is already biased. And so [if] you want to build an AI on top of it, you have to essentially de-bias the data.

When recruiters run searches with Moonhub, instead of putting filters into a search engine, lots of them express more directly what they’re looking for: don’t translate what excellence is, just tell us you want excellence, and let us suggest 10 different ways to find it, versus defaulting to the one way you know.
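
To make that concrete, here is a minimal sketch of the kind of criterion expansion Xu describes, written against the OpenAI Python SDK. The prompt, model choice, and expand_criterion helper are illustrative assumptions, not Moonhub’s actual system.

```python
# Hypothetical sketch of criterion expansion with an LLM. The client,
# model name, and prompt are illustrative assumptions, not Moonhub's code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def expand_criterion(criterion: str, n: int = 10) -> list[str]:
    """Ask the model for n distinct, observable signals of a vague criterion."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"A hiring manager wants candidates with '{criterion}'. "
                f"List {n} distinct, observable signals of this quality, "
                "one per line, without relying on brand-name employers."
            ),
        }],
    )
    return response.choices[0].message.content.strip().splitlines()

print(expand_criterion("signs of excellence"))
```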

If you want to help find more underrepresented individuals, don’t just put in ‘female’ because that’s the first reaction you have. Let AI coach you on other things you should be looking for. Bias is like a bad habit, and AI can help fix it, just as we have AI fitness coaches and AI to help people be better versions of themselves.

Could we go a step deeper on the Wisconsin example? What exactly did it pick up?

It’s something very simple. If you run a search inside LinkedIn today, you can’t search for things like ‘people who have been promoted three times in the last four years.’ Because AI can interpret broader natural-language queries, our AI system can go look for people who have that type of pattern.
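
As a rough illustration of what resolving that kind of query involves, the toy sketch below counts within-company title changes in a structured career history. The record format is an assumed shape, not a description of Moonhub’s pipeline.

```python
# Toy sketch: detect "promoted three times in the last four years" from a
# structured career history. The record format is an assumed shape.
from datetime import date

def promotions_since(roles: list[dict], since: date) -> int:
    """Count within-company title changes that started after a cutoff date."""
    roles = sorted(roles, key=lambda r: r["start"])
    return sum(
        1
        for prev, curr in zip(roles, roles[1:])
        if curr["company"] == prev["company"]
        and curr["title"] != prev["title"]
        and curr["start"] >= since
    )

history = [
    {"company": "Acme", "title": "Engineer I", "start": date(2020, 1, 1)},
    {"company": "Acme", "title": "Engineer II", "start": date(2021, 3, 1)},
    {"company": "Acme", "title": "Senior Engineer", "start": date(2022, 6, 1)},
    {"company": "Acme", "title": "Staff Engineer", "start": date(2023, 9, 1)},
]

print(promotions_since(history, date(2020, 6, 19)) >= 3)  # True
```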

Do you actually have a data source that tells you how many times people have been promoted?

We index data about individuals across the public web. We train AI systems on top of this data to essentially look for what I call the “dirty unstructured signal” that traditional search systems don’t pick up on. Most traditional search systems are built on top of some proxy of a keyword-based search inside [open-source search engines Elasticsearch or Apache Lucene]. What that means is the way you can query these search systems is to say, ‘Help me with the title, what are the exact words you want in the title? What are the exact words you want in the location?’ What AI enables is a higher level than this: Don’t tell me exactly what you want, just tell me in natural language what you’re looking for, and then I can interpret that into potentially one of hundreds of combinations of these things that you would traditionally look for in a single search inside a normal search system.
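
The contrast can be sketched schematically. A keyword engine such as Elasticsearch needs exact fields and terms up front, while the AI layer Xu describes fans one free-text request out into many such queries; the translate() stub below is a hypothetical stand-in for that interpretation step, not a real Moonhub API.

```python
# A traditional keyword engine (Elasticsearch/Lucene) needs exact fields
# and terms up front:
keyword_query = {
    "query": {
        "bool": {
            "must": [
                {"match": {"title": "senior software engineer"}},
                {"match": {"location": "San Francisco"}},
            ]
        }
    }
}

# The AI layer instead takes free text and fans it out into many candidate
# filter combinations. translate() is a hypothetical stand-in for an LLM
# interpretation step, not a real Moonhub API.
def translate(natural_language_query: str) -> list[dict]:
    """Map one free-text request to many structured queries (stubbed)."""
    # A real system might emit hundreds of combinations; two shown here.
    return [
        {"match": {"title": "senior software engineer"}},
        {"match": {"title": "member of technical staff"}},
    ]

print(translate("experienced engineers who have been promoted fast"))
```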

So you’re using a generative AI model to scour the internet for information about people in certain careers.

It’s two things. One, we pull the data across the public web. Then, we use generative AI to merge all of this data intelligently. So a person who has a GitHub profile may also have a LinkedIn profile, but traditionally you wouldn’t know they’re the same person. There’s this level of data integration that happens [with AI] that allows us to see people more holistically for all their different sets of experiences that may not live in one unified platform.
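
Here is a heuristic sketch of that merging step, deciding whether a GitHub profile and a LinkedIn profile describe the same person. A production system would use learned models, so the rule and threshold below are assumptions for illustration only.

```python
# Heuristic sketch of profile merging: do a GitHub profile and a LinkedIn
# profile describe the same person? The rule and threshold are assumptions.
from difflib import SequenceMatcher

def same_person(github: dict, linkedin: dict) -> bool:
    name_sim = SequenceMatcher(
        None, github["name"].lower(), linkedin["name"].lower()
    ).ratio()
    shared_links = bool(set(github.get("links", [])) & set(linkedin.get("links", [])))
    same_employer = github.get("company") == linkedin.get("company")
    # Require a close name match plus at least one corroborating signal.
    return name_sim > 0.8 and (shared_links or same_employer)

gh = {"name": "Jane Q. Doe", "company": "Acme", "links": ["janedoe.dev"]}
li = {"name": "Jane Doe", "company": "Acme", "links": []}
print(same_person(gh, li))  # True: names match closely and employers agree
```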

Once you have the unified data, how do you actually — for a human recruiter or an AI — query that data in a less biased way? That’s where humans traditionally will think, I’ve been trained to use the title filter, so let me filter, ‘title equals senior software engineer.’ But some companies today call their software engineers “members of technical staff,” and you will never find that person if you’re just looking for that exact title. The AI can now say, ‘You want a software engineer, let me show you, in the approximate space of software engineers, everything else that might be relevant.’
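
One plausible way to implement that “approximate space” is embedding-based title matching, sketched below with the open-source sentence-transformers library. The model choice and ranking approach are assumptions; Moonhub’s actual models are not public.

```python
# Sketch of "approximate space" title matching with embeddings, using the
# open-source sentence-transformers library. Model choice is an assumption.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

target = "senior software engineer"
titles = ["member of technical staff", "software developer",
          "account executive", "staff engineer"]

target_emb = model.encode(target, convert_to_tensor=True)
title_embs = model.encode(titles, convert_to_tensor=True)
scores = util.cos_sim(target_emb, title_embs)[0]

# Rank titles by semantic similarity rather than exact keyword match, so
# nonstandard titles for the same role still surface.
for title, score in sorted(zip(titles, scores.tolist()), key=lambda t: -t[1]):
    print(f"{score:.2f}  {title}")
```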

So the secret sauce is two things. One, you have to find the data, which is probably a bit of a proprietary effort. And then you have to be able to make sense of it. You’re probably not training models?

We have some custom models within our system that we have trained ourselves. The best way to think about how the system operates is there’s some core data and a bunch of models that sit on top of the data for different use cases. We’re increasingly moving towards a system where you can actually just chat with our AI and it takes a whole conversational history and it understands what you’re looking for.

That’s an example of how you can get a hiring manager to be more expressive, and help them really think through what it is they actually want. What they actually need might be someone with great Python skills, but they’ll tell you, ‘I want someone who worked on PyTorch at Google,’ because that’s the one team they know. The goal is helping them understand, at the root, what the bullseye is, and then using AI to help them actually get to that bullseye.

Have some of your clients seen improvements in their diversity numbers as a result of using the product?

We show more underrepresented individuals than most other recruiting agencies. We have customers who say, ‘One of the reasons I use Moonhub is because I see a more diverse pool of candidates that is not diverse because we want them to be diverse, but diverse because they’re able to find more candidates beyond what traditional recruiting systems are able to find.’

When you do these searches, are you asking the model to find people with diverse backgrounds? Are you even allowed to do that?

Five years ago, there was a big trend of building an active pipeline around diverse candidates. Now the trend is that more companies want to hire diverse candidates, and we want to support them, but there is less of an active pipeline effort.

So you’re not saying you want a certain number of women, say, as part of the search?

We find the best people who are a good fit, and we do the actual legwork to find people who traditionally may not be found, and those tend to be people who are more diverse. But if a customer comes to us and says, ‘I only want to meet disabled candidates,’ that’s not something we would [do].

So it’s almost like your service is finding the off-the-beaten-path candidates, and that will just generally be more diverse.

Yes. And some of this is also how you message an opportunity. If you say, ‘I want people with at least five years of software engineering experience,’ there are many studies that show men are more likely to apply, even if they have three or four years, while women are less likely to apply. We will work with the customer to help them understand that these types of restrictions within their search may lead them to have a more biased pool of candidates.

Back to that person in Wisconsin. You said there were certain metrics, like promotions. That seems like the kind of thing that they would have posted about on social media. What are these data sources?

There are many different things you can look for. In the [Wisconsin] example, the way you were able to find that particular signal was by looking at people’s historical career progressions on data sources like LinkedIn. But there are so many other data sources people can consider that they don’t.

An example is these coding communities or open-source coding challenges. Oftentimes we have found they attract a lot of people who have not worked at the Googles or Facebooks of the world: say, a really smart hacker in India who is super passionate about machine learning, decided to enter this challenge, and ended up doing super well. That person you would never really find just by looking at, ‘Oh, they went to some school in India.’

Most American employers won’t know the school and will write them off. But this person was actually the Grand Prize winner of this competition, and that’s a big piece of signal. Another example is university websites that list all their students and who works in which labs. Academics notoriously don’t have LinkedIn profiles, so you can find these university websites and identify individuals who have a really deep area of expertise that you may never find on a traditional platform.

Another example is Google Scholar, where you can look at research papers people have published and understand their area of expertise beyond what you might traditionally know about them if you just looked at their career history. There’s also a lot of great AI talent that lives in open-source communities outside the major hubs of the United States. One of our customers recently hired an amazing engineer in Vietnam, and they knew about this person because they happened to be a major contributor to an open-source community.

So it’s kind of the equivalent of the talent scout going to the dive bar to find the next great singer.

That’s a great analogy. I will say the AI we’re building goes beyond just sourcing, but I do think sourcing great candidates is a big part of the puzzle for making recruiting less biased. A good way to think about it is as an AI talent scout, finding great people before it’s obvious they’re great.

What kind of feedback are you getting from this? With the person from Wisconsin, for instance, are you able to use that to reinforce models and increase the quality of the signal?

Over time we learn about the hiring preferences of each of our customers, and we use that to help them find better candidates in the future. There’s a big challenge in recruiting, what we call the calibration step of hiring, where every hiring manager is different. Two managers telling you ‘I want to hire people from top AI startups’ can mean totally different things. One might be looking for people at seed-stage companies that are still in stealth; the other, people who are working at OpenAI.

One of the biggest challenges for recruiting is distilling the meaning behind what people say they’re looking for. That’s where I think AI can help get closer to the bullseye, by at least understanding what the bullseye is. In terms of the signals, we learn over time across searches. The second piece is the actual signal of which candidates get hired. So we understand that this type of candidate is a great match when someone says, ‘I want a software engineer who has experience in Java and Python and has worked on these repos.’ This is the example profile, and as more companies hire these more diverse profiles, our AI learns to reinforce that people in Wisconsin could be a great fit as well.
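
A toy sketch of that second feedback signal: fit a simple model on past hire outcomes for one customer, then score a new prospect. The features and data below are invented for illustration and do not represent Moonhub’s model.

```python
# Toy sketch of learning from hire outcomes: fit a simple model on past
# decisions, then score a new prospect. Features and data are invented.
from sklearn.linear_model import LogisticRegression

# Columns: [promotions_in_4_years, big_tech_employer, open_source_signal]
past_candidates = [
    [3, 0, 1],  # hired: fast promotions and open-source work, no big tech
    [0, 1, 0],  # not hired
    [2, 0, 1],  # hired
    [1, 1, 0],  # not hired
    [3, 1, 0],  # hired
    [0, 0, 0],  # not hired
]
hired = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(past_candidates, hired)

# A new "Wisconsin-style" prospect: strong progression, no brand-name employer.
prospect = [[3, 0, 1]]
print(model.predict_proba(prospect)[0][1])  # estimated probability of a hire
```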
