Everyone is chasing the best LLMs. The ones that can generate entire systems in one shot. But what if bad LLMs were actually useful for something? Like, say… technical interviews?
Code-generation models have fundamentally changed how we write software. People don’t type out every line anymore; they prompt an AI to generate code and then debug it. And let’s be honest - some candidates are already using these tools to cheat in coding interviews.
So why are we still interviewing engineers as if they’re going to be writing everything from scratch?
Besides, most AI-generated code is usable but not great, especially when working on large codebases. Even with growing context windows, the code might look perfect but not be particularly maintainable. The ability to refine and fix suboptimal code is what separates good engineers from average ones.
A Better Interview Process
Interview platforms like CodeSignal, HackerRank, or HackerEarth should incorporate LLM support directly into their IDEs. But instead of giving candidates the best models, they should integrate suboptimal AI models - ones that produce verbose, inefficient, or even buggy code. The new interview process would look something like this:
- Candidates describe their solution in words instead of writing code from scratch
- The LLM generates a mostly-working but not great solution - full of boilerplate, inefficiencies, and maybe even bugs.
- The real test begins. The candidate must diagnose issues, refine the prompt if needed, and improve the generated solution (see the sketch below).
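To make steps two and three concrete, here is a hypothetical example. The task, both functions, and the bug are invented for illustration: the first function is the kind of verbose, quadratic code a weak model tends to produce, and the second is the refinement a good candidate should reach.

```python
from collections import Counter
from heapq import nlargest

# Hypothetical task: "Return the k most frequent words in a list."

# What a weak model might generate: it works on small inputs, but it
# rescans the whole list for every word (O(n^2)), sorts the entire
# frequency table just to take a prefix, and crashes when k exceeds
# the number of distinct words.
def top_k_words_generated(words, k):
    frequencies = {}
    for word in words:
        count = 0
        for other in words:          # quadratic rescan
            if other == word:
                count = count + 1
        frequencies[word] = count
    pairs = []
    for word in frequencies:
        pairs.append((frequencies[word], word))
    pairs.sort()
    pairs.reverse()                  # full sort for a top-k query
    result = []
    for i in range(k):
        result.append(pairs[i][1])   # IndexError when k > distinct words
    return result

# What the candidate should turn it into: one counting pass, a
# heap-based top-k selection, and a guard for small inputs.
def top_k_words_refined(words, k):
    counts = Counter(words)          # O(n) counting in one pass
    k = min(k, len(counts))          # avoid the IndexError above
    return [w for w, _ in nlargest(k, counts.items(), key=lambda p: p[1])]

if __name__ == "__main__":
    sample = ["go", "rust", "go", "python", "go", "rust"]
    print(top_k_words_refined(sample, 2))  # ['go', 'rust']
```

The point is not this particular task: it is that the delta between the two versions is exactly what the interview would measure.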
This approach shifts the focus to the skills that truly matter now. It selects for candidates who can quickly spot inefficiencies, unnecessary complexity, or poor structure - in other words, candidates who demonstrate taste in code, a skill that is becoming increasingly important.
Why this makes sense
- It mirrors real-world development – Engineers already use tools like Cursor, Copilot, and ChatGPT to assist their coding. We should test them in the environments they actually work in.
- Subpar LLMs are cost-effective and mimic how even SOTA models behave on large codebases – Instead of relying on expensive, cutting-edge models, companies can deploy open-source LLMs that reproduce the real-world shortcomings of AI-generated code (a minimal sketch follows). This not only keeps costs down but also ensures privacy and control over the interview process.
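As a rough sketch of the platform side (assuming a Python backend and the Hugging Face transformers library; the model name, prompt format, and sampling settings are placeholders, not recommendations), serving a deliberately modest model could look like this:

```python
from transformers import pipeline

# Placeholder model: any small open-source code model would do. The
# point is that it should be good enough to produce plausible code
# and weak enough to produce flawed code.
generator = pipeline("text-generation", model="bigcode/starcoder2-3b")

def generate_starting_point(task_description: str) -> str:
    """Turn the candidate's plain-English description into the
    intentionally imperfect code they will then have to fix."""
    prompt = f"# Task: {task_description}\n# Python solution:\n"
    output = generator(
        prompt,
        max_new_tokens=256,
        do_sample=True,      # sampling keeps the output usefully messy
        temperature=1.0,
    )
    return output[0]["generated_text"]
```

A small model plus high-temperature sampling naturally yields the verbose, slightly-off code the exercise needs, with no deliberate sabotage required - and nothing leaves the company’s own infrastructure.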
AI Won’t Replace You, But It Will Change What Matters
If an LLM can generate 80% of your code, the real challenge is debugging it, refactoring it, and making it good. A great interview process should reflect that. So maybe… we should start using bad LLMs on purpose?