I don’t doubt that it is possible to create good code when focusing on programming best practices and taking the time to check the AI output thoroughly. Time, however, is a luxury most devs in those companies don’t have, because they are expected to have 10x code output. And that’s why the shit hits the fan: bad code gets reviewed under pressure, reviewers burn out or bore out, and the codebase deteriorates over time.
- 0 Posts
- 3 Comments
Joined 3 years ago
Cake day: July 24th, 2023
- RedstoneValley@sh.itjust.works to Technology@lemmy.world • Anthropic nuked a company's access to Claude, stopping 60 employees dead in their tracks — support via Google Form is the only recourse for vague usage policy violation • English • 2 • 4 days ago
- RedstoneValley@sh.itjust.works to Technology@lemmy.world • Anthropic nuked a company's access to Claude, stopping 60 employees dead in their tracks — support via Google Form is the only recourse for vague usage policy violation • English • 262 • 4 days ago
This approach to coding is exactly what creates the problem. They will find out the hard way whether they can stay productive when something breaks and the AI is unavailable for whatever reason. Does anyone still know how to fix things? Is the documentation sufficient to understand what the AI did?
Good analogy. I’m gonna steal that :D