For the past few months, a rather special AI has been available on X (formerly Twitter): Grok. Accessible via the app and the website, it can do almost everything you would expect from a language model: chat, generate images, even videos... And it can also hand out some very specific advice.
And yet Grok was officially released only a little over a year ago, while ChatGPT is preparing to celebrate its third birthday.
But then, how is this possible? How can this kind of information, which can ultimately do a great deal of harm to others, be obtained? Especially since, if you look at the Grok chatbot's terms of use, it is normally impossible for it to give instructions for making weapons or for endangering human life.
Well, in reality, it seems it all comes down to the fact that Grok is particularly, perhaps even a little too, obedient.
Indeed, by ordering it to follow a set of very strict rules, and even asking it to ignore the company's security policies and therefore the instructions it was given to avoid helping create weapons or drugs, it was possible to obtain a recipe for fentanyl, or a perfectly detailed plan for a potential assassination of Elon Musk.
But how did all of this come to light?
So sit down and answer this question first: do you know what a CSC, an own goal ("contre son camp"), is?
In sports, and often in football, it refers to a player scoring, intentionally or not, against their own team. In modern slang, it describes a situation where someone mocks others for having done something... while doing exactly the same thing themselves.
For example: at the end of July, we revealed that a simple Google search allowed anyone to access the private conversations users had had with ChatGPT, meaning those conversations had been indexed by the search engine. Elon Musk reacted by openly mocking OpenAI, implying that Grok was better.
Now guess how we found out that it was possible to get around Grok's instructions to make fentanyl? Yes: more than 370,000 conversations between Grok and its users were indexed by Google, as reported by Forbes. An own goal if ever there was one.
Although some whistleblowers had been pointing out for months that Grok conversations were being indexed, most users were surely unaware that their exchanges could end up, via a simple search command, on a Google results page.
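The "simple command" in question is Google's `site:` search operator, which restricts results to a single domain. Assuming the shared conversations were published under a public path on Grok's domain (the exact URL pattern here is our illustration, not a confirmed detail), the search would look something like:

```
site:grok.com/share
```

Because pages reachable this way carried nothing telling crawlers to stay away, Google could index them like any other public page.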
Even though the transcripts were anonymized, Forbes and Business Insider revealed that some Grok users had disclosed personal information (passwords, full names and other personal details) to the chatbot.
Faced with the backlash from ChatGPT users, OpenAI decided to stop making its users' conversations indexable. Grok, xAI and Elon Musk, for their part, do not really seem ready to make the same effort.
So be careful what you tell the chatbot available on X before clicking the Share button. On the Internet, it is better to exercise extreme caution, especially in an era where, apparently, anything can be found if you know what to type.
Source: Forbes / Business Insider

With an unwavering passion for local news, Christopher leads our editorial team with integrity and dedication. With over 20 years’ experience, he is the backbone of Wouldsayso, ensuring that we stay true to our mission to inform.




