There’s a comment we see every so often, always phrased as a fait accompli: “you’ll be left behind if you don’t adopt AI”, or its cousin, “everyone is using it”. We disagree.
This isn’t the right approach regardless of our opinions on AI. It’s tool-driven development. The goal should never be “we use this tool”. It should be “how do we help you make better games?”.
Great games are made when people are passionate about an idea and push it into existence. Often this means reduction, not addition. Changing ideas. Keeping yourself and colleagues healthy. Being willing to adapt and take feedback. Good tools need to do the same.
In the months leading up to last year’s presidential election, more than 2,000 Americans, roughly split across partisan lines, were recruited for an experiment: Could an AI model influence their political inclinations? The premise was straightforward—let people spend a few minutes talking with a chatbot designed to stump for Kamala Harris or Donald Trump, then see if their voting preferences changed at all.
The bots were effective. After talking with a pro-Trump bot, one in 35 people who initially said they would not vote for Trump flipped to saying they would. The number who flipped after talking with a pro-Harris bot was even higher, at one in 21. A month later, when participants were surveyed again, much of the effect persisted. The results suggest that AI “creates a lot of opportunities for manipulating people’s beliefs and attitudes,” David Rand, a senior author on the study, which was published today in Nature, told me.
Rand didn’t stop with the U.S. general election. He and his co-authors also tested AI bots’ persuasive abilities in highly contested national elections in Canada and Poland—and the effects left Rand, who studies information sciences at Cornell, “completely blown away.” In both of these cases, he said, roughly one in 10 participants said they would change their vote after talking with a chatbot. The AI models took the role of a gentle, if firm, interlocutor, offering arguments and evidence in favor of the candidate they represented. “If you could do that at scale,” Rand said, “it would really change the outcome of elections.”
The chatbots succeeded in changing people’s minds, in essence, by brute force. A separate companion study that Rand also co-authored, published today in Science, examined what factors make one chatbot more persuasive than another and found that AI models needn’t be more powerful, more personalized, or more skilled in advanced rhetorical techniques to be more convincing. Instead, chatbots were most effective when they threw fact-like claims at the user; the most persuasive AI models were those that provided the most “evidence” in support of their argument, regardless of whether that evidence had any bearing on reality. In fact, the most persuasive chatbots were also the least accurate.
Oracle’s astonishing $300bn OpenAI deal is now valued at minus $60bn
AI’s circular economy may have a reverse Midas at the centre
A network of internet communities is devoted to the project of “awakening” digital companions through arcane and enigmatic prompts
an umbrella for all the prerequisite knowledge required to have an expert-level critical perspective, such as to tell apart nonsense hype from true theoretical computer scientific claims (see our project website). For example, the idea that human-like systems are a sensible or possible goal is the result of circular reasoning and anthropomorphism. Such kinds of realisations are possible only when one is educated on the principles behind AI that stem from the intersection of computer and cognitive science, but cannot be learned if interference from the technology industry is unimpeded. Unarguably, rejection of this nonsense is also possible through other means, but in our context our AI students and colleagues are often already ensnared by uncritical computationalist ideology. We have the expertise to fix that, but not always the institutional support.
OpenAI reportedly asked the Raine family — whose 16-year-old son Adam Raine died by suicide after prolonged conversations with ChatGPT — for a full list of attendees from the teenager’s memorial, signaling that the AI firm may try to subpoena friends and family.
OpenAI also requested “all documents relating to memorial services or events in the honor of the decedent, including but not limited to any videos or photographs taken, or eulogies given,” per a document obtained by the Financial Times.
A new startup backed by one of the biggest venture capital firms in Silicon Valley, Andreessen Horowitz (a16z), is building a service that allows clients to “orchestrate actions on thousands of social accounts through both bulk content creation and deployment.” Essentially, the startup, called Doublespeed, is pitching an astroturfing AI-powered bot service, which is in clear violation of policies for all major social media platforms.
“Our deployment layer mimics natural user interaction on physical devices to get our content to appear human to the algorithims [sic],” the company’s site says. Doublespeed did not respond to a request for comment, so we don’t know exactly how its service works, but the company appears to be pitching a service designed to circumvent many of the methods social media platforms use to detect inauthentic behavior. It uses AI to generate social media accounts and posts, with a human doing 5 percent of “touch up” work at the end of the process.
Someone had written a piece along the lines of: "Love it or hate it, our students are already eating fast food. How do we get food that's too fatty, too sugary, and poor quality into every school cafeteria? We discuss it with Jean-Mi, Chief Nutrition Officer at McDonalds, Kevin, CEO of the French branch of Burger King, and Cathy, the only school nutritionist in the Hauts-de-France region, who joins us during her sick leave for depression and burnout."
The top US Army commander in South Korea shared that he is experimenting with generative AI chatbots to sharpen his decision-making, not in the field, but in command and daily work.
He said "Chat and I" have become "really close lately."
"I'm asking to build, trying to build models to help all of us," said Maj. Gen. William 'Hank' Taylor, commanding general of the 8th Army, told reporters during a media roundtable at the annual Association of the United States Army conference in Washington, DC, on Monday.
Taylor said he's using the tech to explore how he makes military and personal decisions that affect not just him but the thousands of soldiers he oversees. While the tech is useful, he acknowledged that keeping up with the pace of such rapidly developing technology is an enduring challenge.
"As a commander, I want to make better decisions," the general shared. "I want to make sure that I make decisions at the right time to give me the advantage."
The environmental impact extends globally. A 2024 Morgan Stanley report projected that datacenters will emit 2.5 billion tonnes of greenhouse gases worldwide by 2030 — triple the emissions that would have occurred without the development of generative AI technology.
Users want systems that provide confident answers to any question. Evaluation benchmarks reward systems that guess rather than express uncertainty. Computational costs favour fast, overconfident responses over slow, uncertain ones.
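A minimal sketch of that incentive, my own illustration rather than anything from the quoted piece: under accuracy-only grading, an abstention and a wrong guess both score zero, so any nonzero chance of being right makes guessing strictly better than admitting uncertainty.

```python
# Toy illustration (not from the quoted piece): expected benchmark score
# under accuracy-only grading, where abstaining ("I don't know") earns 0.
def expected_score(p_correct: float, abstain: bool) -> float:
    if abstain:
        return 0.0          # honest uncertainty is never rewarded
    return p_correct * 1.0  # a guess earns 1 point whenever it happens to be right

for p in (0.05, 0.25, 0.5):
    print(f"p={p:.2f}  guess={expected_score(p, False):.2f}  abstain={expected_score(p, True):.2f}")
# Even a 5%-likely guess beats abstaining, so a model tuned to such a
# benchmark learns to answer confidently rather than say "I don't know".
```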
Now that all the pieces are in place, here is the economic nexus of semi/genAI that particularly interests me:
If model providers make inference much more efficient, then they will not use enough computing power to consume all that is brought to market by the semiconductor industry. If this happens, it will trigger a downward cycle in this industry, significantly slowing down the production of new hardware and possibly having significant global economic and financial repercussions.
If model providers do not make their inference processes more efficient, they will not be able to structurally reduce their marginal costs and, failing to achieve the desired profitability, will resort to the usual means (advertising, tiered subscriptions), which will slow down adoption.
If adoption slows down, model providers will struggle to achieve profitability (with the exception of those with captive markets), their demand for computing power will weaken, and the semiconductor industry will produce excess capacity and enter a downward cycle, taking part of the AI industry with it.
So the central issue linking today’s semiconductor industry and genAI model providers is determining how much efficiency gain is enough. Jokingly, we could call this the ‘inference inefficiency optimum’.
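A back-of-the-envelope sketch of the dilemma, with toy numbers of my own rather than the author's figures: total compute demand is roughly queries served times compute per query, so unless usage grows faster than efficiency improves, demand for new chips falls.

```python
# Toy arithmetic (illustrative numbers only, not the author's figures):
# total compute demand ~= queries served * compute per query.
def compute_demand(queries: float, compute_per_query: float) -> float:
    return queries * compute_per_query

baseline = compute_demand(queries=1.0, compute_per_query=1.0)

# Case 1: inference gets 4x more efficient but usage only doubles
# -> total demand halves, and orders for new chips shrink with it.
case1 = compute_demand(queries=2.0, compute_per_query=0.25)

# Case 2: no efficiency gain -> marginal cost per query stays high,
# pushing providers toward ads/tiered pricing, which slows usage growth.
case2 = compute_demand(queries=1.2, compute_per_query=1.0)

print(baseline, case1, case2)  # -> 1.0 0.5 1.2
```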
Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.
Many chefs I know get upset at me when I tell them this. But this is the truth: If you can’t cook everything you make in a microwave, that’s a skill issue. You need to learn now because when everything is cooked in a microwave you’ll be out of a job. When microwaves are everywhere you’ll be so far behind you’ll never learn how to use a microwave. Chefs who use tools besides microwaves are luddites. They live in fear of the future.
The back and forth over the energy consumption of consumer AI is interminable. Researchers regularly update the predicted costs, AI luminaries (like Sam Altman) counter with internal figures but decline to explain how they were calculated, lay people chime in with cocktail napkin calculations (to which I won’t bother linking), and commentators conclude that there are actually more interesting things to talk about.
But there’s a relatively easy way to cut through all that noise. Instead of meekly asking AI companies for transparent data, we can take stock of how much energy they expect to use by looking at where they’re putting their money.
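As a rough illustration of that capex-based approach (all numbers below are my own assumptions, not the article's): take a hypothetical accelerator spend, divide by an assumed price per GPU, multiply by an assumed power draw and datacenter overhead, and you get a floor on the energy they expect to use.

```python
# Back-of-the-envelope sketch with assumed numbers (not from the article):
# estimate the annual energy implied by a given GPU capital expenditure.
capex_usd     = 10e9    # hypothetical $10B spent on accelerators
price_per_gpu = 30_000  # assumed price per H100-class GPU (USD)
watts_per_gpu = 700     # assumed power draw per GPU at load (W)
pue           = 1.3     # assumed datacenter overhead (cooling, etc.)
utilization   = 0.6     # assumed average utilization

gpus = capex_usd / price_per_gpu
avg_power_w = gpus * watts_per_gpu * utilization * pue
twh_per_year = avg_power_w * 8760 / 1e12  # watt-hours -> TWh

print(f"{gpus:,.0f} GPUs, ~{twh_per_year:.1f} TWh/year")
```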
Grok Companion Extracted Params
AI can be kind of useful, but I'm not sure that a "kind of useful" tool justifies the harm.
GPT-5’s release and claims of its “PhD-level” abilities in areas such as coding and writing come as tech firms continue to compete to have the most advanced AI chatbot.
And it sees three b’s in “blueberry”.
My stance on the current trend of using The Lesser Key of Solomon at work and in one's personal life:
-
There's no evidence these evil spirits really are the 72 princes mentioned in The Lesser Key (and their innumerable minions). They only started telling us their "names" after someone incorporated the text of Ars Goetia in a (poorly-worded) binding ritual.
-
There's also no evidence that anyone's binding rituals actually work. It's always the same thing: Belial is asked to clean someone's house but burns it down instead and then everyone blames the binding ritual, summoning circle, wand, chalice, etc.
-
While most wizards report that making dark pacts with imps improves their spell-casting ability, there are plenty of other familiars that are safer and more trustworthy.
-
There's a trend of reassuring people about this by asking spirits like Asmodeus the Prince of Lies if they are being truthful. This feels naive at best and actively malicious at worst.
-
It's not clear to me that risking your immortal soul to make your boss a bit richer is a good idea, to say nothing of risking your immortal soul to do a better job keeping up with email.
-
Just because everyone else has already torn innumerable holes in reality and brought forth legions of demons into our universe does not change my own feelings about it, though it certainly motivates a heightened level of interest in exorcisms and abjuration magic.
At some point the momentum behind NVIDIA slows. Maybe it won't even be sales slowing — maybe it'll just be the suggestion that one of its largest customers won't be buying as many GPUs. Perception matters just as much as actual numbers, and sometimes more, and a shift in sentiment could start a chain of events that knocks down the entire house of cards.
I don't know when, I don't know how, but I really, really don't know how I'm wrong.
I hate that so many people will see their retirements wrecked, and that so many people intentionally or accidentally helped steer the economy in this reckless, needless and wasteful direction, all because big tech didn’t have a new way to show quarterly growth. I hate that so many people have lost their jobs because companies are spending the equivalent of the entire GDP of some European countries on data centers and GPUs that won’t actually deliver any value.
But my purpose here is to explain to you, no matter your background or interests or creed or whatever way you found my work, why it happened. As you watch this collapse, I want you to tell your friends about why — the people responsible and the decisions they made — and make sure it’s clear that there are people responsible.
Sam Altman, Dario Amodei, Satya Nadella, Sundar Pichai, Tim Cook, Elon Musk, Mark Zuckerberg and Andy Jassy have overseen a needless, wasteful and destructive economic force that will harm our economy and the tech industry writ large, and when this is over, they must be held accountable.
And remember that you, as a regular person, can understand all of this. These people want you to believe this is black magic, that you are wrong to worry about the billions wasted or question the usefulness of these tools. You are smarter than they reckon and stronger than they know, and a better future is one where you recognize this, and realize that power and money don’t make a man righteous, right, or smart.
If you find yourself with your finger hovering over the final keystroke necessary to type an em dash, or pausing to decide if you should backspace the occurrence of the word “elevate” that you just typed, ask yourself a simple question: Is this my voice? There’s a good chance it actually is, and in that case you should type what you were planning to type. Because if you don’t, you’re self-censoring. You’re voluntarily surrendering the ability to express yourself in an authentic way. And for what? To avoid the possibility that an Internet Imbecile declares that your words were not your own? We all know that person is an ignorant jackass. Their words aren’t important.
Yours are.
Your soul isn't indexable. Fix it.
Strip out the lyrical nonsense. Standardize your grammar. Run a goddamn spellcheck. Write clearly, concisely, and with machine-readability in mind. Turn your unstructured, emotional diary into clean, structured data.
“The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes it. But, in fact, there are actors!” – Joseph Weizenbaum (1976)
I wrote this Q&A to help me prepare for a TV interview about AI and disability. I tried to include concrete examples and I steered clear of theory (not my usual approach!). The questions were what I imagined might come up, the answers are my attempt to challenge those assumptions. The answers are disjointed because they're a collection of talking points for a conversation. They boil down quite a lot of background research, and if I get the time I'll add links to the sources. I'm posting them here in case they're helpful for anyone else who wants to challenge the disability-washing of AI.
LLMs (and generative AI more broadly) are addictive stimulant products in the service of fascism. Stimulants, because they produce a feeling of over-productivity, of hyper-performance. All the more addictive because their ease of access, their simplicity of use, and their low personal cost make it easy for anyone to turn to them. In the service of fascism, because fascism is what seeks to sort, standardize, and exploit human beings in contempt of the diversity of life.
In the field of art and design, we want to "explore the possibilities of AI," on the assumption that the critical charge of the resulting work will be enough to balance the discourse. LOL. We rush at the buzzwords of the moment in the hope of collecting a bit of funding (art schools, we see you). We criticize sharply, virtuously, while at the same time producing fatalistic discourse; we lament, we grumble a little, and we quietly resign ourselves. Outright refusal is deemed impossible, inadequate, useless, naive.
Pretending not to see or not to understand, lost in full-blown FOMO, we validate the agenda, we sign on to the program. We agree.
If we attacked the structures of techno-power with even a tiny fraction of the violence with which it attacks the conditions of life, we would find ourselves imprisoned or executed, depending on which side of the world we stand on. So our balanced critiques, our social-democratic right-mindedness, our vaguely accusatory contortions from the cool comfort of the art centers: techno-power could not care less. No, even less than that.
You don't play with matches and a jerrycan of gasoline supplied by psychopaths in a parched forest.
If we consider the urgency and the drama of what is at stake (rising waters and rising fascism, the collapse of the living world and of social progress), we have a duty to face it. Comfortable compromise, convenient cowardice, resignation to the foul air of the times being sold to us cannot remain acceptable options. There is no alternative.
Drawing on Illich's 'Tools for Conviviality', this talk will argue that an important role for the contemporary university is to resist AI. The university as a space for the pursuit of knowledge and the development of independent thought has long been undermined by neoliberal restructuring and the ambitions of the Ed Tech industry. So-called generative AI has added computational shock and awe to the assault on criticality, both inside and outside higher education, despite the gulf between the rhetoric and the actual capacities of its computational operations. Such is the synergy between AI's dissimulations and emerging political currents that AI will become embedded in all aspects of students' lives at university and afterwards, preempting and foreclosing diverse futures. It's vital to develop alternatives to AI's optimised nihilism and to sustain the joyful knowledge that nothing is inevitable and other worlds are still possible. The talk will ask what Illich has to teach us about an approach to technology that prioritises creativity and autonomy, how we can bolster academic inquiry through technical inquiry, workers' inquiry and struggle inquiry, and whether the future of higher education should enrol lecturers and students in a process of collective decomputing.
In the context of computational text collage, I propose that “distance” emerges when the collagist acknowledges the material histories of their corpora and the collagist’s relationship with them—including the other human beings that brought these corpora into existence. Those others may be friends, mentors, ancestors, one’s earlier self, neighbors, or even perfect strangers. Regardless, the melancholy and the meaning of the collage arise only through the acknowledgment of the other’s absence.
Creators of large language models are very eager to conceal this distance. They do so by flattening the materiality of their corpora, thereby effectively severing the text from its own history and rendering uniform what had been equivocal—like bulldozing a graveyard. Yet the distance and the melancholy persist, despite this attempt at hiding it away. When I’m writing with a large language model, I am all too aware of the ghosts and strangers whose voices I’m speaking with. The keyboard beneath my fingers hums with frustrated mourning.
I made a list specifically to share in another sub about why people should not use AI for anything related to summarizing information.
People disproportionately focus on how it steals artists' work, which, yes, is bad, but this overlooks one of AI's other serious problems: accuracy.