I got a message from a friend last month that made me stop what I was doing.
He had just finished a coding interview at Meta. Standard loop, nothing unusual about the scheduling or the format on paper. Except when he opened the CoderPad environment, there was an AI assistant built right into the editor. Not a hidden tool he snuck in. Not something he had to ask permission to use. Meta put it there. They wanted him to use it.
He panicked. He had spent three weeks grinding LeetCode problems the traditional way, memorizing patterns, practicing by hand, building the muscle memory to write binary search from scratch without thinking. And now the company was telling him that the skill he had been preparing was not the skill they were testing.
That conversation stuck with me because it captures something happening across the industry right now. The technical interview, the format that has defined developer hiring for two decades, is being rewritten. And a lot of developers are still preparing for the old version.
What Actually Changed
Let me lay out the facts before getting into opinions.
Meta began piloting an AI-enabled coding interview in late 2025 that replaces one of the two coding rounds at the onsite stage. The setup is a 60-minute CoderPad environment with a built-in AI assistant. Candidates can access GPT-4o mini, GPT-5, Claude Sonnet, Claude Haiku, Gemini 2.5 Pro, and Llama 4 Maverick. The models are right there in the editor. You are expected to use them.
This is not a side experiment. Meta is rolling this format out across backend and operations-focused roles. They are also tying AI usage to employee performance reviews starting this year. If you work at Meta, how you use AI is now part of how you are evaluated.
Google followed a similar path. They now factor AI tool usage into software engineer performance reviews for the first time. Rippling and a growing number of mid-stage companies allow or actively encourage AI tools during their technical interviews.
The shift is not theoretical anymore. The biggest employers in tech are changing how they evaluate candidates, and the new criteria look fundamentally different from what most people have been preparing for.
Why Companies Made This Move
The cynical take is that companies want to seem innovative. The real reason is more practical: the old format stopped working.
Traditional coding interviews tested whether you could write algorithms from memory under time pressure. That was a reasonable proxy for programming ability when writing code from scratch was the daily reality of the job. In 2026, it is not.
A study published in March 2026 found that AI coding tools make mistakes roughly one in four times. That sounds bad until you realize the implication: three out of four times, they produce correct, functional code. And developers at these companies are using AI tools all day, every day. Testing someone’s ability to write code without AI is like testing a carpenter’s ability to drive nails without a nail gun. You can still do it by hand, but it tells you nothing about how they will actually perform on the job.
Companies also had a cheating problem. Remote interviews made it trivially easy for candidates to paste problems into ChatGPT in another tab. Instead of fighting this with more surveillance, Meta made a smart move: they put the AI right in the interview environment. Now everyone has access to the same tools, and the evaluation shifts to how you use them.
That is the key insight. The playing field leveled, and companies realized they needed to evaluate a different set of skills.
The New Three-Phase Interview
Based on what I am seeing from companies that have adopted AI-assisted interviews, the format is converging on a three-phase structure within a single session.
Phase 1: Problem Decomposition
You get a problem. Before touching any code or any AI tool, you are expected to break it down. What are the requirements? What are the edge cases? What is the right data structure? What is the algorithmic approach?
This phase tests whether you can think about a problem before reaching for a solution. It is the phase most candidates rush through, and it is the phase that matters most. A candidate who decomposes the problem clearly and then uses AI to implement it looks fundamentally different from a candidate who immediately prompts the AI and hopes for the best.
The interviewers I have talked to say this is where they see the biggest gap between strong and weak candidates. Strong candidates spend five to eight minutes on decomposition. Weak candidates start prompting within 30 seconds.
Phase 2: AI-Assisted Implementation
Now you write the code, using the AI assistant as a collaborator. The evaluation here is not about the final code quality alone. It is about your process.
How do you prompt the AI? Do you give it useful context or vague instructions? When the AI generates something, do you read it critically or accept it blindly? When the output has a bug, do you catch it, or does the interviewer have to point it out?
This is where the skill set from context engineering directly applies. The developers who understand what context an AI needs to produce good output are measurably better in this phase than developers who just type “write a function that does X.”
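As a hedged illustration (hypothetical prompts of my own invention, no real API call), compare what each style of prompt actually gives a model to work with:

```python
# Hypothetical illustration of the gap between a vague prompt and a
# context-rich one. Neither calls a real API; the point is what
# information each hands the model.

vague_prompt = "write a function that dedupes a list"

contextual_prompt = """\
Write a Python function `dedupe(items: list[str]) -> list[str]`.
Constraints:
- Preserve the original order of first occurrences.
- Comparison is case-insensitive, but return the original casing.
- Input can be up to 10^6 items, so keep it O(n).
Example: dedupe(["a", "B", "A"]) -> ["a", "B"]
"""

# The second prompt pins down signature, ordering, case rules,
# complexity budget, and a worked example: exactly the ambiguities
# the model would otherwise guess at.
assert len(contextual_prompt) > len(vague_prompt)
```

Everything the first prompt leaves unstated, the model fills in with guesses, and every guess is a potential Phase 3 question you cannot answer.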
Phase 3: Defense and Review
This is the phase that catches people off guard. The interviewer asks you to walk through every line of code, including the lines the AI wrote. You need to explain what each part does, why it was written that way, and what the alternatives were.
If the AI chose a hash map and you cannot explain why a hash map is the right choice here (versus a sorted array or a trie), that is a problem. If the AI wrote a recursive solution and you cannot explain the base case or the time complexity, that is a problem.
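To make that concrete, here is a sketch of the hash-map-versus-sorted-array tradeoff on a hypothetical membership-test problem (my example, not any company's actual question):

```python
# Hypothetical interview problem: report which values in `queries`
# appear in `data`. Both structures below are valid; the defense
# phase asks you to justify the one you (or the AI) picked.

from bisect import bisect_left

def contains_hash(data, queries):
    # Hash set: O(n) to build, O(1) average per lookup.
    # The right default when you only need membership tests.
    seen = set(data)                     # O(n) extra space
    return [q in seen for q in queries]

def contains_sorted(data, queries):
    # Sorted array: O(n log n) to sort, O(log n) per lookup.
    # Worth it when you also need range queries or ordered traversal.
    ordered = sorted(data)
    def found(q):
        i = bisect_left(ordered, q)
        return i < len(ordered) and ordered[i] == q
    return [found(q) for q in queries]

print(contains_hash([3, 1, 4, 1, 5], [4, 7]))    # [True, False]
print(contains_sorted([3, 1, 4, 1, 5], [4, 7]))  # [True, False]
```

Both functions return the same answers. The interviewer does not care which one the AI emitted; they care whether you can say why it is the better fit for the stated constraints.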
The rule is simple: you are responsible for every line, regardless of who (or what) wrote it. This is the same principle I wrote about in agentic coding, and it applies directly here. The person using the tool is accountable for the output.
What They Are Actually Evaluating Now
Let me be explicit about the new evaluation criteria, because understanding them changes how you prepare.
System thinking over syntax recall. Nobody cares if you remember the exact method name for sorting an array in Python. They care if you understand when to sort, why sorting matters for the solution, and what the performance implications are. The AI handles syntax. You handle judgment.
AI collaboration skill. This is a real competency now, not a nice-to-have. Can you prompt effectively? Can you iterate on AI output? Can you recognize when the AI is confidently wrong? Can you steer it toward a better solution? Companies like Meta are explicitly evaluating this because it directly predicts on-the-job performance.
Debugging AI-generated code. The AI will produce code with subtle bugs. Your ability to catch them before the interviewer does is now a core signal. This is harder than it sounds because AI-generated code looks plausible even when it is wrong. The technical debt problem applies in miniature during an interview.
Communication clarity. The three-phase format puts more weight on how you communicate your thinking. Decomposing a problem out loud, explaining your prompting strategy, defending your code choices. Interviewers consistently say that clear communication is the strongest predictor of a hire decision, and the new format gives you more surface area to demonstrate it.
Architectural judgment. When the problem has multiple valid approaches, which one do you choose and why? The AI can implement any of them. The question is whether you can evaluate the tradeoffs and pick the right one for the specific constraints. This requires genuine understanding, not pattern matching.
How to Prepare (The Honest Version)
If you have an interview coming up at a company that uses the AI-assisted format, here is what I would actually do, based on everything I am seeing.
Stop grinding LeetCode the old way. I know this is heresy. But spending hours memorizing solutions to problems that you will solve with AI assistance in the actual interview is a misallocation of your preparation time. You still need to understand algorithms and data structures. You do not need to write them from memory under time pressure.
Practice decomposition as a standalone skill. Take a problem, close your editor, and spend ten minutes breaking it down on paper or in a notes app. What are the inputs and outputs? What are the constraints? What patterns apply? What are the edge cases? Do this for 15 to 20 problems and you will be dramatically better at Phase 1 than most candidates.
Get comfortable prompting AI for code. If you have not spent meaningful time with Claude Code, Cursor’s agent mode, or similar tools, start now. The skill is different from writing code yourself. Learn how to give the AI enough context to produce good output on the first try. Learn what kinds of tasks it handles well and where it tends to go wrong.
Practice code review, not code writing. Read AI-generated solutions to problems. Find the bugs. Identify the inefficiencies. Explain why the approach works or why a different approach would be better. This is the skill being tested in Phase 3, and most developers do not practice it deliberately.
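One deliberate way to practice this: take a plausible, AI-style solution and hunt for the edge case it misses. A made-up example, using the classic maximum-subarray problem and a common seeding bug:

```python
# A plausible-looking solution an AI might generate for "maximum
# subarray sum" (Kadane's algorithm). It passes casual tests but has
# a subtle bug: both accumulators start at 0, so an all-negative
# input returns 0 instead of the largest (least negative) element.

def max_subarray_buggy(nums):
    best = current = 0            # bug: wrong seed for all-negative input
    for x in nums:
        current = max(x, current + x)
        best = max(best, current)
    return best

def max_subarray_fixed(nums):
    best = current = nums[0]      # fix: seed with the first element
    for x in nums[1:]:
        current = max(x, current + x)
        best = max(best, current)
    return best

print(max_subarray_buggy([-3, -1, -2]))  # 0  (wrong)
print(max_subarray_fixed([-3, -1, -2]))  # -1 (correct)
```

The buggy version looks right and survives most happy-path tests. The all-negative input is exactly the kind of case an interviewer reaches for in the defense phase, and catching it yourself is the signal they are looking for.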
Build real things and be able to talk about them. The project-based assessment component is growing. Companies want to see what you have built, and more importantly, they want to hear you explain the decisions you made. Why this database? Why this architecture? Why this tradeoff? If you have side projects, make sure you can speak about them in depth. This is the same advice I gave in my post about what I learned from 500+ interviews, and it is even more relevant now.
The Candidates Who Struggle
Having talked to people on both sides of the interview table, I see clear patterns in who struggles with the new format.
The memorization expert. If your interview preparation consisted entirely of memorizing solutions and regurgitating them, the new format exposes this immediately. Phase 3 requires you to actually understand the code, and if your understanding is “I memorized this pattern,” you will not be able to defend it under follow-up questions.
The AI over-reliant candidate. This person prompts the AI, accepts whatever comes back, and moves on. They treat the AI like an oracle rather than a tool. When the interviewer asks “why did you use this approach?” the answer is “because the AI suggested it.” That is a fail. Every time.
The AI avoidant candidate. This person ignores the AI entirely and writes everything by hand, either out of pride or unfamiliarity. They finish half the problem in the time it takes other candidates to finish the whole thing. Using the available tools effectively is part of what is being evaluated. Refusing to use them is like refusing to use an IDE.
The poor communicator. The new format is more conversational. You are expected to think out loud during decomposition, narrate your prompting strategy, and defend your choices. If you are used to solving problems silently and presenting the answer, you need to adjust. The process matters as much as the result.
What This Means for Junior Developers
This is where I have mixed feelings, and I want to be honest about it.
On one hand, the new format could be better for junior developers in some ways. You no longer need to memorize hundreds of algorithm patterns that you will never use on the job. The AI handles the rote implementation, and the evaluation focuses on thinking and communication skills that are not as directly correlated with years of experience.
On the other hand, the “defense” phase is harder without deep fundamentals. If you cannot explain why a solution works, it does not matter that the AI wrote it correctly. And building those fundamentals (genuine understanding of data structures, algorithms, time complexity, and system design) takes real study. There is no shortcut.
The junior developer crisis is real, and the changing interview format adds another layer of complexity. Junior candidates now need to demonstrate a skill (effective AI collaboration) that requires the very experience they do not yet have.
My honest advice for junior developers: invest heavily in fundamentals and system design understanding. Use AI tools daily so that collaboration with them feels natural, not performative. And practice explaining your code out loud. The ability to articulate your thinking clearly is worth more than any number of solved LeetCode problems.
The Companies That Have Not Changed Yet
Not every company has adopted AI-assisted interviews. Plenty of companies still run the traditional whiteboard or HackerRank format. Some will resist the change for years.
If you are interviewing broadly, you need to prepare for both formats. That means understanding algorithms well enough to write them by hand (for traditional interviews) while also being skilled at AI-assisted development (for modern interviews).
The uncomfortable reality is that preparing for both takes more total effort than preparing for one. But the companies adopting the new format tend to be the ones offering the highest compensation and working on the most interesting problems. If those companies are on your target list, the preparation investment is worth it.
The Bigger Picture
What I find most interesting about this shift is what it reveals about how companies think about developer skills.
For twenty years, the implicit assumption behind coding interviews was: the best developers are the ones who can produce correct code the fastest from scratch. That assumption shaped everything: how we prepared, how we evaluated, what we practiced, what bootcamps taught.
The new assumption is different: the best developers are the ones who can produce correct systems the fastest, using whatever tools are available, while understanding deeply enough to ensure quality.
That is a better assumption. It is closer to what the actual job looks like. And it rewards a different kind of preparation, one that emphasizes understanding over memorization and judgment over speed.
I have been through hundreds of interviews, and the honest truth is that most of them did a poor job of predicting who would actually be good at the work. The new format is not perfect, but it is testing something closer to the real skills the job requires. That is progress.
Whether you like the change or not, it is happening. The companies that matter are moving this direction. Prepare accordingly.