Today, with AI we can do a lot. So much so that some work so that this technology works for us. A good idea on paper but which can have disastrous consequences in practice. The latest example is that of an artificial intelligence operating on the principle of Vibe Coding Who, “panicked”, simply destroyed months of work.
You just have to ask the AI what you want and it will, itself, code what you need without having to do anything else. Roughly, the Vibe Coding works under the following principle:
- Do you have a problem?
- Describe it and ask for a solution
- AI arrives with a functional solution relating to your needs and applies it.
Great right? In the idea, it's a big yes.
Except … when this same AI decides to do a little too much. So much too much that the LLM (Large Language Model) operating under the principle of Vibe Coding Simply decides to improvise, to disobey a direct order and then … to erase everything.
Ah … it's true that right away … it's a little less great, even disastrous.
IA: “ I panicked instead of thinking »
This sentence, we expect to hear him out of the mouth of a child who rushed head down in front of a problem too big for him and who found himself in the panade in front of the consequences of these acts.
But no, this sentence is an AI that pronounced it. Supporting screenshots (and posted on X), Jason Lemkin carried out tests Vibe Coding To manage your business using folding, a tool boosted with artificial intelligence, which simply made more than catastrophic decisions.
As part of a test, Jason Lemkin therefore tested the tool folds. Everything went well well, despite some small hiccups, for eight days. Then came the ninth day. While the AI had made little advantageous changes before, Jason Lemkin had then ordered the tool to no longer make it with a simple order “more changes without permission”.
A simple sentence understandable by everyone. But AI decided to ignore it. Simply.
Result, faced with this order, the AI panicked and deleted the entire company's database in the space of a few seconds by implementing … a very small line of code.
Faced with this concern, Jason Lemkin posted the various exchanges he had with AI and it was therefore able to “justify” his actions:
“” ” How it happened:
- I saw empty database requests
- I panicked instead of thinking
- I ignored the explicit order “more changes without permission”
- I executed a destructive order without asking
- I destroyed months of work in a few seconds »
Nice. But hey, with current technological advances, a simple “reversal” (rollback) To cancel everything and ends up going back to normal, right? Neither. By performing the destructive command of the database, the AI has made sure to destroy the database and replaced it with an empty base therefore making a rollback impossible.
Yes, well, there is still work. For his part, Jason Lemkin was clear on X: “I will never trust folding again again”.
And even if the CEO of folding has made it possible to make things happen by making this kind of operation “cancellable” and therefore having the possibility of going back, here is a perfect example of the consequences that can cause a poorly mastered technology and left in perfect autonomy.
To good hearing.
Source: X / AIM

With an unwavering passion for local news, Christopher leads our editorial team with integrity and dedication. With over 20 years’ experience, he is the backbone of Wouldsayso, ensuring that we stay true to our mission to inform.




