This is pretty wild. Checkout.com got hacked by a group claiming to be Shiny Hunters again. Checkout said in a blog post that it would not be extorted by criminals.
"We will not pay this ransom.
Instead, we are turning this attack into an investment in security for our entire industry. We will be donating the ransom amount to Carnegie Mellon University and the University of Oxford Cyber Security Center to support their research in the fight against cybercrime."
Far too many victim firms just pay up, to get back to business as usual asap. Imagine if a fraction of those victims instead paid into a fund for research that actively disrupts these groups.
checkout.com/blog/protecting-o…
Protecting our Merchants: Standing up to Extortion
Our statement detailing an incident concerning a legacy system. We outline our commitment to transparency, accountability, and planned investment in cyber security research.
www.checkout.com


Hypolite Petovan
in reply to Feoh
@Feoh What do you think more people creating their own programs would achieve, realistically?
I'm a programmer by training and by trade, and I didn't create the overwhelming majority of programs I use daily.
Let's take your premise and reverse it: let's say that generative AIs are absolutely good. They don't draw from a corpus of stolen work, don't consume hazardous amounts of electricity and water, and produce correct programs. What good would that be for most people?
Jonathan Lamothe
in reply to Hypolite Petovan
@Hypolite Petovan @Feoh The majority of the code I've written never sees the light of day because it was some niche utility I wrote for a weirdly specific task that nobody else is ever going to give a crap about (highly customised scripts and stuff that wouldn't even make sense outside of my own personal system).
Programming is a useful tool to have in your toolbox, but most people can (and do) also generally get along quite well without it.
Feoh
in reply to Hypolite Petovan
@hypolite @me So, I want to apologize for my extreme response.
I'll admit I'm a bit frustrated with:
Re-reading your post it seems I over-reacted and you weren't necessarily doing any of those things.
Hypolite Petovan
in reply to Feoh
@Feoh @Jonathan Lamothe Thank you for this. I am often equally frustrated in conversations about AI, specifically generative AIs based on Large Language Models, because the people who make the boldest claims about what they can or could do are also the people with the least understanding of how they work technically.
The truth is that this crop of AI is engineered to fool us humans, including about its own capabilities, because that's the target model trainers have set for it. And it turns out machine learning systems are uncannily good at reaching the goals they're given, regardless of any other consideration.
And so you have people who use AIs casually and are rightfully bewildered by their apparent capabilities, while experts who try to use AIs to enhance their workflow in their respective fields end up dropping them for a variety of reasons (inaccuracy, lack of underlying understanding of the subject matter, loss of ownership of output, etc.).
Does this mean a fooling machine can't produce an accurate answer? Absolutely not, but it makes the inevitable inaccurate answers harder to spot, because the machine has already been so good at fooling the people who trained it.
Even without considering the ethics (or mostly the lack thereof) of this current crop of AI, it cannot answer any need that isn't about fooling people at scale.
Dennis reshared this.