Tests should be written from requirements. Using LLMs to write tests after the code is written (probably also by LLMs) is a huge anti-pattern:
The model looks at what the code is doing and writes tests that pass (or fail because they bungle the setup). What the model does not do is understand what the code needs to do and write tests that ensure that functionality is present and correct.
Tests are the thing that should get the most human investment, because they anchor the project to its real-world requirements. You will have far more confidence in your vibe-coded appslop if you at least thought through the test cases and built those out first. Then, whatever the shortcomings of the AI codebase, if the tests pass you know it is doing something right.
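As a minimal sketch of the requirement-first approach: the test below encodes a stated requirement rather than whatever an implementation happens to do. Everything here is hypothetical (the requirement, the `apply_discount` function, and its signature are invented for illustration); the point is the ordering, where the assertions come from the requirement and the implementation is written afterward to satisfy them.

```python
# Requirement (written by a human, before any code exists):
#   "An order total, after applying a percentage discount, must never
#    be negative, and a 0% discount must leave the total unchanged."

def apply_discount(total: float, percent: float) -> float:
    # Stand-in implementation so the sketch runs; in practice this is
    # written (or generated) only after the tests below exist.
    return max(total - total * percent / 100, 0.0)

def test_zero_discount_is_identity():
    # Derived directly from the requirement, not from reading the code.
    assert apply_discount(80.0, 0) == 80.0

def test_discount_never_goes_negative():
    # Also from the requirement: even a >100% discount floors at zero.
    assert apply_discount(10.0, 150) >= 0.0

test_zero_discount_is_identity()
test_discount_never_goes_negative()
```

If an LLM later rewrites `apply_discount`, these tests still check the requirement; tests generated by reading the rewritten code would simply mirror its behavior, bugs included.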