Why “Avoiding Bias” in Interviews Can Backfire—and Why LeetCode Doesn’t Save Us
Please indulge me as I share a dark secret: in the sphere of engineering interviews, I have committed the cardinal sin of “trusting my gut.” Many times. Gasp, right? In an era where “unbiased” processes reign supreme, it almost feels like heresy to suggest that our squishy, intangible instincts might still matter. But I’m going to say it anyway: striving for pure objectivity at the expense of gut feeling can lead you astray, and oh, by the way, that’s also why LeetCode and its sibling platforms aren’t the universal remedy you might think they are¹.
The Myth of Perfect Objectivity
We see it time and again in job interviews—particularly in tech: committees set up elaborate processes and point-based rubrics, or try to mimic the blind auditions of classical orchestras, all in the name of achieving “pure fairness.” The good intention is there, of course. We don’t want to discriminate, misjudge, or allow personal bias to overshadow a candidate’s actual abilities.
But guess what? We’re all human.² Even the best screening processes can’t fully eradicate bias because the entire thing depends on human interpretation: Was that code snippet “elegant enough,” or was it “merely acceptable”? Did we actually hear them say, “I prefer a monolith if done right,” or did that set off alarms because we’re a microservices shop?³ Even the act of picking which questions to ask is shaped by your personal experiences, your domain knowledge, or your concept of “best practices.”
Why Blindly Smothering Bias Can Be Harmful
There’s a notion that if we just reduce everything to a formula—like a math problem or a standardized test—we’ll guarantee fairness. But often, by ignoring your gut feeling, you discount intangible signals that might be crucial. For instance, you might sense that a candidate is secretly brimming with curiosity, or maybe they displayed extraordinary resourcefulness in how they debugged an example. If your formal rubric doesn’t capture “curiosity points,” you might end up turning away a potential star.
By trying to forcibly remove any whiff of subjectivity, you can end up with a process that benefits those who are good at looking good on paper⁴, but not necessarily those who can actually do the job. It’s a bit like judging an Olympic gymnast purely on the neatness of their uniform.
Why LeetCode Looks Like the Answer (but Isn’t)
Enter LeetCode, the beloved/hated platform that so many tech aspirants treat like a final boss to conquer before they can walk into a FAANG interview. The concept is seductive: standardized questions, measurable solutions, neat O(n) vs. O(n log n) complexity. On the surface, it’s the dream for “objective” hiring: you test how quickly they can invert a binary tree, or how elegantly they solve a dynamic programming puzzle.
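For the uninitiated, the archetypal puzzle looks something like the following minimal Python sketch of the famous “invert a binary tree” exercise (my own illustration, not any platform’s reference solution):

```python
# The canonical LeetCode warm-up: mirror a binary tree by swapping
# every node's children. Neat, measurable, O(n) -- and almost never
# what the day job actually asks of you.

from typing import Optional

class TreeNode:
    def __init__(self, val: int = 0,
                 left: Optional["TreeNode"] = None,
                 right: Optional["TreeNode"] = None):
        self.val = val
        self.left = left
        self.right = right

def invert_tree(root: Optional[TreeNode]) -> Optional[TreeNode]:
    """Recursively swap the left and right subtrees of every node."""
    if root is None:
        return None
    root.left, root.right = invert_tree(root.right), invert_tree(root.left)
    return root
```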
We have to remember that day-to-day software engineering rarely involves printing prime numbers below 10^7 in 0.5 seconds. It involves reading poorly documented code, figuring out what your product manager is actually asking for, debugging concurrency nightmares, or rewriting half the pipeline because your upstream API changed. None of those scenarios is captured by a neat LeetCode (medium) problem.
And ironically, many folks who can breeze through a LeetCode marathon are in danger of being atrocious coworkers or uncreative problem-solvers. Are they great at pattern recognition under time pressure? Sure. Does that mean they’ll thrive under the ambiguous, interconnected complexities of real production systems? Not necessarily.
The Danger of Over-Reliance
When a candidate has a high LeetCode “score,” so to speak, or passes your timed coding puzzle with flying colors, you might pat yourself on the back for being “unbiased” and “data-driven.” But you risk missing the big question: can they actually build and maintain robust software, manage complexity, communicate with others, and keep an open mind about architectural trade-offs?
LeetCode does measure some important fundamentals⁵—familiarity with data structures and logical thinking—but it excludes the vast territory of intangible engineering skills that revolve around nuance, collaboration, and creative problem-solving. So if you rely on it as your main or sole metric, you’re letting your “objective” test overshadow the instinct that says, “Wait, they seemed a bit inflexible,” or “They refused to consider an alternative approach.”
Why Your Gut Feeling Matters
Trusting your gut doesn’t mean embracing random discrimination or ignoring actual performance signals. It means acknowledging that humans have a remarkable ability to pick up subtle cues—like how a candidate responds to a tricky line of discussion, or whether they show real enthusiasm when you mention a new framework. These intangible signals often reflect how they’ll behave in the actual job.
A candidate might have a near-perfect coding-test score but exude a certain negativity or rigidity that your gut warns you about. Sure, that’s “subjective,” but ignoring that little voice can lead to disastrous mismatches later. Conversely, maybe someone stumbles on a tricky graph question but demonstrates humility, collaboration, and a growth mindset—your gut might say, “they’ll learn quickly and be great to work with.” That can be more valuable than a memorized BFS solution.
How Do I Develop a Fair Gut Feeling for Candidates?
I like to keep the coding test as close to reality as possible—less “Solve a contrived graph-adjacency puzzle in 35 minutes” and more “Prove you can do the bread-and-butter tasks you'd actually hit on a typical Wednesday.” If you’re a back-end dev, I’ll hand you some outlandish API—like one that queries whether the British government wants you to put up bunting for a particular holiday—and say, “Fetch me these endpoints, munge the data, maybe store it, but keep it tidy.” For front-enders, you might display said bunting data in a small UI. The main idea is that we skip the borderline-academic fluff and see if you can do the stuff you say you do every day.
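As a rough illustration, the back-end flavour of that task might boil down to something like this Python sketch against the GOV.UK bank holidays feed. The URL and field names (“events”, “bunting”) are from my memory of the API, so treat them as assumptions and check the live response first:

```python
# A sketch of the "typical Wednesday" task: fetch the GOV.UK bank
# holidays feed, munge the JSON, and report which holidays warrant
# bunting. URL and field names are assumptions from memory; verify
# against the live response.

import json
from urllib.request import urlopen

BANK_HOLIDAYS_URL = "https://www.gov.uk/bank-holidays.json"

def bunting_days(division: str = "england-and-wales") -> list[str]:
    """Return the titles of holidays for which bunting is advised."""
    with urlopen(BANK_HOLIDAYS_URL) as response:
        data = json.load(response)
    events = data[division]["events"]
    return [event["title"] for event in events if event.get("bunting")]

if __name__ == "__main__":
    for title in bunting_days():
        print(f"Put the bunting up: {title}")
```

Nothing clever is hiding in there; the point is simply whether fetching, munging, and tidying data feels like second nature.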
Lately, I’ve been running a “Bring Your Own Project” approach⁶. Essentially, you trot in with something you’ve been hacking on—your passion side-project, half-finished or otherwise—and we work on a new feature or bug fix together. It’s more considerate of your time because you’re building on your own code—you don’t have to decipher some half-baked environment I slapped together at 2 AM. If you don’t have a personal project, no worries: I’ve got a dog database or one of the bunting microservices created by a thousand former interviewees you can jump on⁷, so we can keep the vibe light and real-world without forcing you into a mindless coding labyrinth. If it feels second nature, that’s the test—and it’s usually more fun this way, too.
Isn't That Just LeetCode With Extra Steps?
I want to see you solve real tasks fast, proving you’re smart and a complete killer at the basics⁸. Nobody leaves my interviews feeling beaten down and disconsolate; the task is designed to be solved by any working professional. But you will find candidates who are naturals: who blow through the tasks in seconds, enjoy doing it, and decide to take an arcane recursive approach to make the task harder on themselves just to show off.
Extremely strong candidates get that the test was designed to be easy and know when they have been given the space to showboat: either by coding a solution as fast as you can explain it, by thinking ahead about how someone else is going to interact with the code later, or, more rarely, by writing code that should never see production but seemed like a cool thing to do in an interview.
Does this candidate see the art in the code? Are they a methodical engineer who reasoned out all of the base and edge cases before tucking into the meat of the code? A LeetCode (hard) shows you that the candidate practiced solving LeetCode problems. It’s like asking the candidate to play a classical piece from memory. But production code isn’t Stravinsky; it’s jazz. It requires improvisation, novelty, and a few flat notes.
Minimizing Bad Bias While Preserving Good Instincts
Of course, we don’t want prejudice to creep in. You can’t reject someone simply because you got a “bad vibe” from, say, their background or accent or any other irrelevant factor. The trick is to intentionally develop your “productive gut sense”—the sense that homes in on actual behavioral and communication clues rather than superficial judgments.
This means you need some structure: consistent question sets, a mental (or literal) list of attributes that matter. Bring a junior along on the interview panel and have them cross-check your subjective impressions⁹. The synergy of a well-led discussion, a coding challenge, and a final “gut-based reflection” can lead to surprisingly accurate decisions.
Wrapping It All Up
So, to sum up, we might say:
- Striving for zero bias in interviews is naive. You’ll always have subjectivity—acknowledge it, harness it thoughtfully.
- LeetCode and similar sites can help you measure certain algorithmic proficiencies, but they can’t measure a candidate’s ability to do real tasks, or half of the “soft” or system-design skills that define real software success.
- Your gut is not some monster to be locked away. It’s a legitimate part of human cognition that can detect intangible behaviors or red flags.
In the end, if we blind ourselves to any subjectivity in the quest for perfect fairness, we risk losing the nuance that truly great hiring decisions require. That’s why trusting your instincts—while checking them with structured methods—remains a vital part of the process. And no matter how many quicksorts or matrix-path problems a candidate conquers, remember that real engineering is messy, collaborative, and full of unknowns. If you can’t see how they’d handle the chaos, no amount of “objective” puzzle-solving is going to fill in that gap. So go forth, interview bravely, and use your gut responsibly. It might just save you from hiring the 10,000th “LeetCode champion” who can’t debug a production meltdown.
¹ If you're reading this and you think LeetCode is the panacea for all your hiring woes, I implore you to think deeply about what happened. Who hurt you? It's okay, you're in a safe place.
² Yes, even you.
³ See this post for my thoughts on why you're wrong if you are a microservices shop.
⁴ Or on a Zoom call.
⁵ Read: any problem that is easily Googleable. I have accepted that whatever causes the vast majority of interviewers to think that passing a general-knowledge CS 101 pop quiz determines someone's ability to work as a software engineer must remain a mystery.
⁶ BYOP, for those of you who love an acronym.
⁷ The bunting API is real; I have used it in many interviews. There is indeed an extant API that tells you whether the British Government wants you to put bunting up for a particular public holiday.
⁸ I have unashamedly stolen this approach from Joel Spolsky's excellent Guerrilla Guide to Interviewing, which contains a treasure trove of timeless gems buried under a light dusting of out-of-date ideas.
⁹ The fastest I have ever gone from "hire" to "no hire" was when a young female engineer asked a candidate a question and the candidate addressed their response (not an answer) to me, essentially dismissing the question as beneath them. I now have a trick in my interview toolset to weed that out.
© Alexander Cannon – In these challenging times we must stand up and be counted. With that in mind the author takes no responsibility for any bad hires you make based on his advice.