When you list "Machine Learning" as your biggest strength but can't do basic arithmetic, you've basically mastered the entire field. The developer here has truly grasped the core principle of ML: you don't need to know the answer, you just need to confidently adjust your prediction based on training data. Got it wrong? No problem, just update your weights and insist it's 15. Every answer is 15 now, because that's where the loss function bottomed out.
Bonus points for the interviewer accidentally becoming the training dataset. This is gradient descent in action, folks: start with a random guess (0), get corrected (it's 15), and watch every subsequent prediction converge to 15. Overfitting at its finest.
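For anyone who wants the "training loop" spelled out, here's a minimal, entirely made-up sketch in Python of what's happening in the joke: one weight, one data point (the interviewer's correction), squared loss, and a model that ignores its input completely. The function names, learning rate, and step count are all invented for the bit.

```python
# Toy sketch of the joke: a one-parameter "model" trained on a single
# data point, the interviewer's correction. Plain gradient descent
# on squared loss.

def predict(w, x):
    # The model ignores x entirely -- it just outputs its weight.
    return w

def train(w, x, y, lr=0.1, steps=100):
    for _ in range(steps):
        error = predict(w, x) - y   # prediction minus the "label" (15)
        w -= lr * 2 * error         # gradient of (w - y)^2 is 2(w - y)
    return w

w = 0.0                  # start with a confident random guess: 0
w = train(w, x=7, y=15)  # interviewer says "it's 15" -> that's the whole dataset

print(predict(w, 7))     # ~15
print(predict(w, 123))   # also ~15 -- overfitting at its finest
```

Run it and every prediction comes out at 15 no matter what you feed in, which is exactly the energy this candidate brings.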