And so it begins. Begun, the clone wars have.
Dang, crazy how secure everything is now because of AI! They were correct, we can fire all the cyber security experts and devs right now, AI can do it all so much faster and better, right?
Tools like this were never not getting out. Who will get hit first?
‘We’ve limited access to this super duper hacking tool to stop master hackers from getting it and OHH NOOO!’ is the plot of a beloved trash sci-fi movie, not news I can take seriously.
Mythos didn’t even find the vulns it exploited; the “Firefox” it attacked was an old version of Firefox’s engine with all security protections disabled, and they admit it cannot create full exploits. The whole Mythos thing is just marketing BS.
Always is. They said the same about GPT-2.
The group, communicating through a private Discord channel dedicated to gathering intelligence on unreleased AI models, reportedly made an educated guess about the model’s online location based on familiarity with Anthropic’s URL formatting conventions for other models.
So the whole access control was that they didn’t advertise the name in the API?
It’s almost like if you build everything with AI, then AI can reliably guess what you’d name things, which directories you’d put them in, and more.
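A purely hypothetical sketch of the kind of guessing being described: if released model IDs follow a predictable pattern, enumerating candidates for an unannounced one is trivial string work. Every name and URL pattern below is invented for illustration, not Anthropic’s actual conventions.

```python
from itertools import product

# Invented example of IDs whose pattern an observer might notice:
# vendor-codename-YYYYMMDD
released = ["acme-opus-20240229", "acme-sonnet-20240620", "acme-haiku-20241022"]

def candidates(codename: str, year: int) -> list[str]:
    """Enumerate plausible model IDs for a rumored codename,
    assuming the same vendor-codename-date pattern as above."""
    months = range(1, 13)
    days = (1, 15, 28)  # a few guesses per month keeps the list small
    return [
        f"acme-{codename}-{year}{m:02d}{d:02d}"
        for m, d in product(months, days)
    ]

ids = candidates("mythos", 2025)
print(len(ids))  # 36 candidate IDs to try against the API
print(ids[0])    # acme-mythos-20250101
```

With only a handful of date guesses per month you get a few dozen candidates, which is nothing to probe against an API, which is the commenter’s point: an unadvertised name is not access control.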
They’re just winging it, what a clown show.
Dang. If only they had some kind of security scanning tool that could catch that kind of thing.
Some sort of fabricated smartness, if you will. I’ve never been good with marketing terms.
This is very bad given other context in the article.
https://cybersecuritynews.com/anthropic-mythos-access/
“In one alarming pre-release evaluation, Mythos autonomously escaped a secured sandbox environment, devised a multi-step exploit to gain internet access, and even emailed a researcher all without being instructed to do so.”
“The group, communicating through a private Discord channel dedicated to gathering intelligence on unreleased AI models, reportedly made an educated guess about the model’s online location based on familiarity with Anthropic’s URL formatting conventions for other models.”
“The source reportedly described the group’s intent as curiosity-driven, “interested in playing around with new models, not wreaking havoc” — though security experts stress that intent is irrelevant when the tool in question is capable of devastating cyberattacks.”
Mythos autonomously escaped a secured sandbox environment
Doesn’t sound like it was secure.
Which security experts are stressing this and how is this not just PR from Anthropic?
Here’s a release from the Linux Foundation echoing the concerns raised in the article:
Equally important, early indications point to Claude Mythos Preview and other advanced AI models not only finding vulnerabilities but also providing viable patches. When I recently spoke with the Linux Project’s Greg Kroah-Hartman, he was initially skeptical, but more recently, he has told me that some of the patches generated by AI tools were “pretty good” – which is high praise, coming from him.
So software so dangerous it can’t be released to the general public is sold to select clients, and then leaked to a hacking group. Oh, this is going to end really, really badly.
Apocryphal Lenin quote: “When it comes time to hang the capitalists, they will vie with each other for the rope contract.”
I guess Mythos didn’t tell them not to give contractors full access to everything.
That took, what, not even 2 weeks?
Anyone who knows anything about Mythos should be very concerned. This headline should be everywhere.