Generative AI and Climate Change Are on a Collision Course
https://www.wired.com/story/true-cost-generative-ai-data-centers-energy/?utm_source=flipboard&utm_medium=activitypub
Posted into Security News @security-news-WIRED

Silicon Valley's "authoritarian turn" is hard to miss: tech bosses have come out for autocrats like Trump, Orban, Milei, Bolsonaro, et al., and want to turn San Francisco into a militia-patrolled apartheid state operated for the benefit of tech bros:
https://newrepublic.com/article/180487/balaji-srinivasan-network-state-plutocrat
--
If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/12/10/bdfl/#high-on-your-own-supply
1//
Mastodon isn't perfect.
But the fact that a social network exists that is completely free to use,
has no venture capital investors,
has no shareholders to answer to,
has no growth targets,
has a web interface with zero tracking cookies
and mobile apps with zero trackers at all,
and has ten thousand server administrators who donate their time for user safety
is - in my opinion - mind-bogglingly cool, given the state of the world we live in. Not everything has to be shit. People make things better.
Adding as a reply since the edit did not cross the bridge:
You need to follow https://bsky.app/profile/ap.brid.gy and be followed back before liking. Otherwise I won't be notified, nor will I be able to see you.
Well, edits do not work over the bridge.
For bsky users: to be followable from my Mastodon account:
1. Follow https://bsky.app/profile/ap.brid.gy; after a minute it will follow you back.
2. Like this post.
What happens if I edit a post that is bridged to bsky?
Like this post so I can follow you from Mastodon.
Edit: you need to follow https://bsky.app/profile/ap.brid.gy and be followed back before liking.
I wrote a very timely introduction to digital security for journalists for @GIJN. This guidance may also apply to activists, lawyers, and anyone else doing at-risk work these days. https://gijn.org/resource/introduction-investigative-journalism-digital-security
Two of my favourites. I personally think the one with Koki Kano and Tibor Andrasfi fencing against the backdrop of the Grand Palais is stunning->
In pictures: The Paris Olympics 2024 - BBC News
https://www.bbc.com/news/resources/idt-c9425ca9-5c30-4ae0-b059-3eae6abcc18f
How many professional programmers are working on pointless and/or actively harmful products?
Give me your best guess.
(If you vote, please boost to diversify the results. It’s polite.)
My latest for Wired: how researchers hacked time to crack an 11-year-old password protecting $3 million in cryptocurrency. They found a significant flaw in RoboForm's password manager that made its pseudo-random-number generator not so random. The flaw allowed famed hardware hacker Joe Grand to turn back time, causing the RoboForm password manager to believe it was 2013 and spit out the same passwords it generated back then. RoboForm says it fixed the flaw in 2015, but it appears it never told customers. This means that if any of RoboForm's current 6 million users are still using passwords the password manager generated before the company silently fixed the flaw in 2015, those passwords may be crackable in the same way.
https://www.wired.com/story/roboform-password-3-million-dollar-crypto-wallet/
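A minimal sketch of the underlying weakness, assuming (as the article describes) that password generation was seeded from the current time; this is illustrative Python with a hypothetical generate_password helper, not RoboForm's actual code:

```python
import random
import string

CHARSET = string.ascii_letters + string.digits + string.punctuation

def generate_password(timestamp: int, length: int = 20) -> str:
    """Derive a password from a PRNG seeded with a timestamp.

    Illustrative only: a generator seeded this way is fully determined
    by the seed, so anyone who can guess the generation time can
    reproduce the password.
    """
    rng = random.Random(timestamp)  # deterministic, time-derived seed
    return "".join(rng.choice(CHARSET) for _ in range(length))

# The "victim" generated a password at some moment in 2013...
original_time = 1368000000  # an example Unix timestamp from May 2013
password = generate_password(original_time)

# ...and an attacker who knows the rough time window can simply replay
# every candidate timestamp until the generated output matches.
for candidate in range(original_time - 5, original_time + 5):
    if generate_password(candidate) == password:
        print(f"Recovered password by replaying timestamp {candidate}")
        break
```

This is the sense in which the researchers "turned back time": feed the generator an old date and it reproduces old passwords.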
I spent 3 months writing the best interactive introduction to queueing I could, and it's live now!
You can read it here: https://encore.dev/blog/queueing.
It starts off by showing what happens when you have no queue...
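A toy model of that "no queue" scenario, assuming a server that handles one request at a time and simply drops anything that arrives while it's busy (my own sketch, not the article's code):

```python
import random

random.seed(42)

SERVICE_TIME = 3           # each request takes 3 time units to process
ARRIVAL_PROBABILITY = 0.4  # chance a new request arrives each tick

busy_until = 0
served = dropped = 0

# With no queue, a request that arrives while the server is busy is lost.
for tick in range(1000):
    if random.random() < ARRIVAL_PROBABILITY:
        if tick >= busy_until:
            busy_until = tick + SERVICE_TIME
            served += 1
        else:
            dropped += 1

print(f"served={served} dropped={dropped} "
      f"drop_rate={dropped / (served + dropped):.0%}")
```

Adding even a small buffer in front of the server is what turns those drops into waiting time, which is where queueing theory takes over.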
“9.9999%” is technically five nines.
Fascinating: a tool that digitally alters images so that #InteligenciaArtificial models think they are seeing something different from what the images actually show, letting artists defend themselves against their creations being used without permission or compensation.
AI-poisoning tool Nightshade now available for artists to use | VentureBeat
https://venturebeat.com/ai/nightshade-the-free-tool-that-poisons-ai-models-is-now-available-for-artists-to-use/