TechX

Insight

A Third of Your Candidates Are Cheating. Good.

We screen engineers every week. Over the past year the pattern has become impossible to ignore. Candidates who sound flawless in the first round cannot explain their own answers in the second. Someone submits a perfect coding assessment and then struggles to walk through it live. A candidate’s vocabulary in a written take-home does not match how they talk on camera. This is not rare anymore. Depending on which data set you trust, somewhere between a quarter and a third of technical candidates are now using AI tools to cheat in interviews.

Our take is probably not what you expect. This is useful information. Not because cheating is fine. It is not. But because the reason cheating works so well right now is that most technical interviews were already broken. The cheating just made it obvious.

A $20 Tool Beats Your Interview Because Your Interview Was Always Beatable

The cheating tools are not subtle anymore. For twenty to fifty dollars a month a candidate gets a real-time overlay that listens to the interviewer’s question, runs it through a model, and displays the answer on screen without showing up in the screen share. The interviewer sees a candidate staring thoughtfully at the camera. The candidate is reading.

Here is the thing. Those tools only work because most interviews ask the same questions everyone else asks. Standard algorithm problems, standard system design prompts, standard behavioral setups. If your interview follows a template that the models behind a $20 tool have already seen thousands of times, the tool will beat it every time. That is not a cheating problem. That is an interview design problem.

The interviews that do not have a cheating problem are the ones that were never predictable in the first place. Hand the candidate a chunk of real code and ask what is wrong with it. Walk through an actual incident your team dealt with and ask how they would have approached it. Give them a design constraint that has no single right answer and see how they reason under pressure. None of that is templateable. None of it can be solved by an overlay reading a cached LeetCode solution.

You Are About to Spend $40K on Eye-Tracking Software. Don’t.

An entire industry has appeared in the last twelve months selling “interview integrity” tools. Eye-gaze tracking. Keystroke analysis. Browser lockdowns. Behavioral biometrics. Some of these platforms are genuinely sophisticated. Most of them are solving the wrong problem.

Here is what happens when you buy detection software. You catch some cheaters. You also create a hiring experience that feels like a TSA checkpoint. Your best candidates, the ones who do not need to cheat, start dropping out because they do not want to be surveilled. The people who are willing to sit through biometric monitoring are either desperate or good enough at cheating to beat the system. Either way, you are selecting for the wrong thing.

The companies we work with that have the least cheating are not the ones with the fanciest detection tools. They are the ones whose interviews are hard to cheat because the questions require real-time judgment, not memorized answers. Surveillance is an arms race in which you will always be one step behind. A good interview makes the arms race irrelevant.

Even Anthropic Had to Rewrite Their Own Interview Questions

Think about this for a second. Anthropic, the company that builds Claude, had to rewrite its own technical interview because too many candidates were cheating with Claude. If the people who build the model cannot design a cheating-proof interview on the first try, the rest of us should probably stop pretending our standard process is fine.

The lesson is not that AI ruined interviews. The lesson is that interviews were never testing what we thought they tested. They tested whether you had seen the problem before and practiced it enough to reproduce the solution under time pressure. That is a memory exercise, not an engineering evaluation. AI just automated the memory part. What is left, and what actually matters, is whether someone can reason about a system they have not seen, evaluate code they did not write, and make judgment calls when the answer is not obvious.

Camera On, Script Off, Judgment Over Output

Here is what we tell the teams we work with. You do not need more rounds. You do not need proctoring software. You do not need to redesign your entire pipeline. You need to change what your interview actually measures.

Stop asking candidates to produce code from scratch in a timed window. AI does that better and faster than any human now. Start asking them to read, evaluate, and critique existing code. Hand them something with a subtle bug and see if they find it. Hand them something that works but is poorly designed and see if they know why. Hand them a tradeoff with no clean answer and see how they think through it.
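To make the "subtle bug" idea concrete, here is one hypothetical example of the kind of snippet you might hand a candidate. It reads as perfectly reasonable code, and an overlay tool fed the question "what does this do?" will happily describe the intent rather than the defect. A candidate with real judgment will spot the classic Python pitfall: the mutable default argument is created once and shared across calls.

```python
# A deliberately buggy snippet in the style described above.
# It looks fine on a casual read, but hides a shared-state defect.

def add_tag(tag, tags=[]):
    # Bug: the default list is created once, at function definition
    # time, so every call that relies on the default mutates and
    # returns the same shared list.
    tags.append(tag)
    return tags

first = add_tag("urgent")
second = add_tag("billing")  # expected ["billing"], actually ["urgent", "billing"]
```

What you learn from this question is not whether the candidate has memorized the fix (use `tags=None` and create the list inside the function), but whether they can reason about object lifetime and shared state when the code in front of them contradicts their expectations.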

The candidates who can do this are the ones who will actually be useful on your team. The ones who cannot will fail whether they cheated or not, because the job itself is now about judgment, not output. Your interview should reflect what the job actually looks like. If your interview still looks like a coding contest from 2018, the cheating is doing you a favor by showing you how far behind your process is.

The Cheating Is the Signal. Listen to It.

A third of your candidates are cheating. That number will keep growing. Every new model release makes the tools cheaper and better. You can spend the next two years chasing detection, buying software, adding rounds, and watching your best candidates walk away from the process. Or you can take the hint.

The interview was broken before anyone cheated on it. The cheating just proved it. Fix the interview and the cheating stops mattering.

TechX engineers go through project-based evaluations before they ever reach your team. We do not run LeetCode rounds. We put people on real work with real constraints and see how they perform. That means the people we deploy have already proven the thing your interview should be testing for: whether they can actually do the job. If your interview process is overdue for an overhaul, let’s talk.
