Since February, the Firefox team has been working around the clock using frontier AI models to find and fix latent security vulnerabilities in the browser.
As part of our continued collaboration with Anthropic
I’m very torn on Mozilla collaborating with not only slop conductors, but crypto bros as well.
I like to think of it as exploit mining, or smarter fuzzing and auto-chaining. Unlike most of the bullshit uses for AI, a high false-positive rate really doesn’t matter here. A shell is a shell, and sorting through a haystack is easier than baling it and then sorting through it.
I get the issues with image generation, using text generation in scams, etc., but as a professional coding tool (not just vibe-coding slop) AI can be extremely helpful for certain tasks. And this use case, where organizations just don’t have the resources to have a security expert pore through millions of lines of code for bugs, is a net positive.
I think this is a case of “don’t throw the baby out with the bathwater”: we can absolutely still criticize the industry and specific companies for IP, societal, and environmental concerns, but let’s not turn away a win just because they’re causing harm elsewhere.
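The triage idea in this comment — let the model over-report and filter the haystack mechanically — can be sketched roughly like this. This is my own illustration of the general technique, not anything from Mozilla’s or Anthropic’s actual pipeline; the function names and the “died on a signal” heuristic are assumptions for the sake of the example:

```python
import os
import subprocess
import tempfile

def triage(candidates, target_cmd):
    """Keep only candidates whose proof-of-concept input actually
    crashes the target. False positives cost nothing but CPU time:
    they fail to reproduce and are discarded automatically."""
    confirmed = []
    for name, poc_bytes in candidates:
        # Write the model-suggested proof-of-concept input to a temp file.
        with tempfile.NamedTemporaryFile(delete=False) as f:
            f.write(poc_bytes)
            path = f.name
        try:
            result = subprocess.run(
                target_cmd + [path],
                capture_output=True,
                timeout=10,
            )
            # A negative return code means the process died on a signal
            # (e.g. SIGSEGV) -- this candidate reproduces a real crash.
            if result.returncode < 0:
                confirmed.append(name)
        except subprocess.TimeoutExpired:
            pass  # hangs are interesting too, but skipped in this sketch
        finally:
            os.unlink(path)
    return confirmed
```

The point of the “shell is a shell” argument is exactly this filter step: verification is cheap and mechanical, so the model’s precision barely matters as long as real bugs are somewhere in the pile.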
Admitting to intentionally deskilling yourself has to be humiliating. Ouch.
So you code strictly in assembly? If you do, good for you. If you don’t, then I can’t believe you would intentionally deskill yourself.
This is a bad-faith argument and an invalid comparison to boot.
There are more skills to learn in the world than I possibly have time for in my lifetime. If AI or some other tool means I don’t have to learn one skill, great, I can learn some other skill.
The AI will exist either way, and people who use that AI will discover these exploits with it. I’d rather it be Mozilla.
AI bros love to normalize their fascist technology by saying that it’s inevitable.
No? Things will just exist once they are discovered. Once nuclear warheads were invented, there was no going back. Once the internet was established, it was going to be around. The same goes for a million other things, and now it applies to AI as well. Even if every big-tech fascist stopped making AI, it would still be around, and it would be used maliciously. Our best bet is to use it defensively before it can be used against us.
Thing is inevitable because… Different thing? Brilliant rhetoric!
Same goes for cryptocurrency. Same goes for NFTs. Same goes for Metaverse. Hell, why not say same goes for fascism? We must embrace them all, because other thing!
Facedeer, can we at least agree it’s a bad look for Mozilla to promote a company that helped kill Iranian children and desperately wants to build weapons to kill more?
That’s without even touching on whether your “inevitability” claim is total BS or not.
Anthropic is literally the one that refused to let them build autonomous weapons with their AI. There is a whole Wikipedia page about it. They explicitly don’t want their AI used for weapons. Of course, that wouldn’t stop governments/militaries from doing so anyway. It would be different if Mozilla were working with OpenAI, but of the two, Anthropic is currently the better one.
And yes, the AI is out of the box. Just like once nuclear warheads were created, there is no going back.
This is a blatant lie, unsupported by your source. Because they explicitly do. In Dario’s own bloodthirsty words:
Don’t believe and regurgitate these lies about “red lines” when they are worse than meaningless.
Dario practically salivates with the desire to build weapons with their AI. They provided the AI for bombing Venezuelan boats, they provided the AI for killing Iranian children. Your own article says he works with Palantir. He is a child murderer and you don’t need to whitewash him.
That’s why Servo and Ladybird need to be vastly built up
bad news, ladybird is all in on slop too
but servo should be fine, in fact right now they have an explicit anti-ai policy!
Why is it slop? “This was human-directed, not autonomous code generation.” Can’t you read the entire post before calling this instance of AI-assisted code slop? Every programmer and their mother has used code-assisting tools since their very first iterations; AI is just another tool for us. If we implement it responsibly and deliberately, and don’t just “vibe” code it, then it’s a perfectly fair use of AI without having to attach the term slop to it.
“Human directed” is a euphemism for someone pushing a button to generate a result. Huh, sounds like people vibe coding.
“If” is doing a lot of heavy lifting in your statement. What makes you think vibe coders will use their new drug responsibly?
Removed by mod
I guarantee every new tool will be used irresponsibly. However, that doesn’t mean the tool itself is bad. You can use the tool responsibly, and if you do, you can get great benefits. I regularly use AI for my coding. However, I have to review everything closely, and I regularly find things where there’s no way the code could be correct, no way a human would write it, or it’s otherwise unacceptable. Still, I can easily fix those problems, and they are often much easier to fix than writing everything by hand.
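For a concrete example of the kind of “no way a human would write that” mistake this review workflow catches, here’s a hypothetical off-by-one of my own invention — not code from Firefox or from any real model output — that reads plausibly but silently drops data:

```python
def moving_average(values, window):
    # Plausible-looking generated code: the range stops one window
    # too early, so the final window is silently dropped.
    return [sum(values[i - window:i]) / window
            for i in range(window, len(values))]  # BUG: should be len(values) + 1

def moving_average_fixed(values, window):
    # The reviewed fix: iterate over every valid window start.
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]
```

On `[1, 2, 3, 4]` with `window=2`, the buggy version returns two averages instead of three — exactly the sort of thing that passes a glance but not a close review, and takes seconds to fix once spotted.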
What are they doing with cryptocurrency now?
Probably a reference to Mozilla adding Brave’s adblocking system to Firefox.
Huh, I hadn’t heard about that! Honestly seems useful, and if it’s only the engine, I don’t see how crypto bros are relevant.
If there’s some “pay to unblock” scheme, that’s a different story.