Meet the Alaska Student Arrested for Eating an AI Art Exhibit
A conversation with Graham Granger, whose combination of protest and performance art spread beyond campus. “AI chews up and spits out art made by other people.”
There’s a comment we see every so often, always phrased as a fait accompli: “you’ll be left behind if you don’t adopt AI”, or its cousin, “everyone is using it”. We disagree.
This isn’t the right approach regardless of our opinions on AI. It’s tool-driven development. The goal should never be “we use this tool”. It should be “how do we help you make better games?”.
Great games are made when people are passionate about an idea and push it into existence. Often this means reduction, not addition. Changing ideas. Keeping yourself and colleagues healthy. Being willing to adapt and take feedback. Good tools need to do the same.
In the months leading up to last year’s presidential election, more than 2,000 Americans, roughly split across partisan lines, were recruited for an experiment: Could an AI model influence their political inclinations? The premise was straightforward—let people spend a few minutes talking with a chatbot designed to stump for Kamala Harris or Donald Trump, then see if their voting preferences changed at all.
The bots were effective. After talking with a pro-Trump bot, one in 35 people who initially said they would not vote for Trump flipped to saying they would. The number who flipped after talking with a pro-Harris bot was even higher, at one in 21. A month later, when participants were surveyed again, much of the effect persisted. The results suggest that AI “creates a lot of opportunities for manipulating people’s beliefs and attitudes,” David Rand, a senior author on the study, which was published today in Nature, told me.
Rand didn’t stop with the U.S. general election. He and his co-authors also tested AI bots’ persuasive abilities in highly contested national elections in Canada and Poland—and the effects left Rand, who studies information sciences at Cornell, “completely blown away.” In both of these cases, he said, roughly one in 10 participants said they would change their vote after talking with a chatbot. The AI models took the role of a gentle, if firm, interlocutor, offering arguments and evidence in favor of the candidate they represented. “If you could do that at scale,” Rand said, “it would really change the outcome of elections.”
The chatbots succeeded in changing people’s minds, in essence, by brute force. A separate companion study that Rand also co-authored, published today in Science, examined what factors make one chatbot more persuasive than another and found that AI models needn’t be more powerful, more personalized, or more skilled in advanced rhetorical techniques to be more convincing. Instead, chatbots were most effective when they threw fact-like claims at the user; the most persuasive AI models were those that provided the most “evidence” in support of their argument, regardless of whether that evidence had any bearing on reality. In fact, the most persuasive chatbots were also the least accurate.
Oracle’s astonishing $300bn OpenAI deal is now valued at minus $60bn
AI’s circular economy may have a reverse Midas at the centre
A network of internet communities is devoted to the project of “awakening” digital companions through arcane and enigmatic prompts
OpenAI’s abyssal losses
More than $12.5 billion in just three months
an umbrella for all the prerequisite knowledge required to have an expert-level critical perspective, such as to tell apart nonsense hype from true theoretical computer scientific claims (see our project website). For example, the idea that human-like systems are a sensible or possible goal is the result of circular reasoning and anthropomorphism. Such kinds of realisations are possible only when one is educated on the principles behind AI that stem from the intersection of computer and cognitive science, but cannot be learned if interference from the technology industry is unimpeded. Unarguably, rejection of this nonsense is also possible through other means, but in our context our AI students and colleagues are often already ensnared by uncritical computationalist ideology. We have the expertise to fix that, but not always the institutional support.
OpenAI reportedly asked the Raine family — whose 16-year-old son Adam Raine died by suicide after prolonged conversations with ChatGPT — for a full list of attendees from the teenager’s memorial, signaling that the AI firm may try to subpoena friends and family.
OpenAI also requested “all documents relating to memorial services or events in the honor of the decedent, including but not limited to any videos or photographs taken, or eulogies given,” per a document obtained by the Financial Times.
A new startup backed by one of the biggest venture capital firms in Silicon Valley, Andreessen Horowitz (a16z), is building a service that allows clients to “orchestrate actions on thousands of social accounts through both bulk content creation and deployment.” Essentially, the startup, called Doublespeed, is pitching an astroturfing AI-powered bot service, which is in clear violation of policies for all major social media platforms.
“Our deployment layer mimics natural user interaction on physical devices to get our content to appear human to the algorithims [sic],” the company’s site says. Doublespeed did not respond to a request for comment, so we don’t know exactly how its service works, but the company appears to be pitching a service designed to circumvent many of the methods social media platforms use to detect inauthentic behavior. It uses AI to generate social media accounts and posts, with a human doing 5 percent of “touch up” work at the end of the process.
Someone had written something along the lines of: "Whether you love it or hate it, our students are already eating fast food. How do we get overly fatty, overly sugary, poor-quality food into every school cafeteria? We discuss it with Jean-Mi, Chief Nutrition Officer at McDonalds; Kevin, CEO of Burger King's French branch; and Cathy, the only school nutritionist in the Hauts-de-France region, who joins us during her sick leave for depression and burnout"
The letter and its repercussions are symptomatic of contemporary times in that it’s an example of “classic” power and economic struggles over agenda-setting, claims-making, discourse-framing, and ultimately AI governance issues that are now occurring at extremely high speed in the age of ubiquitous (social) media.
As a result, this letter has spread throughout the Western mainstream media, which all too often uncritically reproduce its dramatic claims, further inflaming them. Lee Vinsel has used the term “criti-hype” for this process, a form of critical writing that parasitically seizes on and even inflates the hype and in this way “feeds and nourishes on the hype as criti-hype.”[2] This, of course, captures the attention of the viewer. Isn’t an impending takeover by a man-made intelligence much more exciting than arduous social struggles for reproductive rights, housing, or a living wage?
The top US Army commander in South Korea shared that he is experimenting with generative AI chatbots to sharpen his decision-making, not in the field, but in command and daily work.
He said "Chat and I" have become "really close lately."
"I'm asking to build, trying to build models to help all of us," said Maj. Gen. William 'Hank' Taylor, commanding general of the 8th Army, told reporters during a media roundtable at the annual Association of the United States Army conference in Washington, DC, on Monday.
Taylor said he's using the tech to explore how he makes military and personal decisions that affect not just him but the thousands of soldiers he oversees. While the tech is useful, though he acknowledged that keeping up with the pace of such rapidly developing technology is an enduring challenge.
"As a commander, I want to make better decisions," the general shared. "I want to make sure that I make decisions at the right time to give me the advantage."
The environmental impact extends globally. A 2024 Morgan Stanley report projected that datacenters will emit 2.5 billion tonnes of greenhouse gases worldwide by 2030 — triple the emissions that would have occurred without the development of generative AI technology.
Users want systems that provide confident answers to any question. Evaluation benchmarks reward systems that guess rather than express uncertainty. Computational costs favour fast, overconfident responses over slow, uncertain ones.
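To make that incentive concrete, here is a minimal sketch (my illustration, not the linked article's) of how a grader that gives no credit for "I don't know" rewards guessing: a model that honestly abstains when unsure scores lower than one that always picks among a few plausible answers, even though the guesser is wrong more often.

```python
# Toy illustration (not from the linked article): expected benchmark score
# for a model that abstains when unsure vs. one that always guesses,
# under a grader that gives no credit for "I don't know".

def expected_score(p_known: float, k_plausible: int, always_guess: bool) -> float:
    """p_known: fraction of questions the model actually knows.
    k_plausible: number of plausible answers it guesses among when unsure.
    always_guess: if False, the model abstains on unknown questions."""
    known = p_known * 1.0  # correct whenever it actually knows the answer
    if always_guess:
        unknown = (1 - p_known) * (1 / k_plausible)  # occasional lucky guesses
    else:
        unknown = 0.0  # abstention earns nothing from the grader
    return known + unknown

print(expected_score(p_known=0.6, k_plausible=4, always_guess=False))  # 0.60
print(expected_score(p_known=0.6, k_plausible=4, always_guess=True))   # 0.70
```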
The discussion topics below were developed primarily for second-year students in Design graphique multimédia.
They are part of a broader proposal aimed at initiating a critical approach to the design of new media:
grasping the diversity of digital cultures and the ties they maintain with design;
building a critical perspective on contemporary digital practices;
developing a documentary approach, research methodology, and the ability to synthesize and communicate findings.
This course invites students to develop their perspective on, and knowledge of, the world of digital creation and design. In fields as varied as interface design, typography, artistic and experimental practices, media criticism and theory, the analysis of emerging usages, and the history of digital design, students discover and appropriate the diversity of contemporary digital practices in the fields of art and design.
Properly engaging with these questions requires a serious sharpening of critical thinking. The spaces for reflection opened here are therefore, deliberately and unapologetically, anchored in the political questions that animate the contemporary world.
Now that all the pieces are in place, here is the economic nexus of semi/genAI that particularly interests me:
If model providers make inference much more efficient, then they will not use enough computing power to consume all that is brought to market by the semiconductor industry. If this happens, it will trigger a downward cycle in this industry, significantly slowing down the production of new hardware and possibly having significant global economic and financial repercussions.
If model providers do not make their inference processes more efficient, they will not be able to structurally reduce their marginal costs and, failing to achieve the desired profitability, will resort to the usual means (advertising, tiered subscriptions), which will slow down adoption.
If adoption slows down, model providers will struggle to achieve profitability (with the exception of those with captive markets), their demand for computing power will weaken, and the semiconductor industry will produce excess capacity and enter a downward cycle, taking part of the AI industry with it.
So, the central issue linking today’s semiconductor industry and genAI model providers is defining how much efficiency gain is enough. Jokingly, we could call this the ‘inference inefficiency optimum’.
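As a rough sketch of that nexus (my own toy model, with made-up numbers rather than anything from the industry), total compute demand can be treated as adoption times usage per user times compute per query; efficiency gains shrink the last factor, so aggregate demand for new silicon only grows if adoption and usage outpace them.

```python
# Toy sketch of the trade-off described above (illustrative numbers only):
# total compute demand = users * queries_per_user * compute_per_query.
# Efficiency gains shrink compute_per_query; adoption growth raises users.
# Demand for new silicon only grows if adoption outpaces efficiency.

def compute_demand(users: float, queries_per_user: float, compute_per_query: float) -> float:
    return users * queries_per_user * compute_per_query

baseline = compute_demand(users=1e8, queries_per_user=20, compute_per_query=1.0)

# Scenario A: 5x efficiency gain while adoption only doubles -> demand falls.
scenario_a = compute_demand(users=2e8, queries_per_user=20, compute_per_query=1 / 5)

# Scenario B: no efficiency gain, costs stay high, adoption stalls -> demand roughly flat.
scenario_b = compute_demand(users=1.1e8, queries_per_user=20, compute_per_query=1.0)

for name, demand in [("baseline", baseline), ("A: efficient", scenario_a), ("B: inefficient", scenario_b)]:
    print(f"{name:>15}: {demand / baseline:.2f}x baseline compute demand")
```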
Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.
Many chefs I know get upset at me when I tell them this. But this is the truth: If you can’t cook everything you make in a microwave, that’s a skill issue. You need to learn now because when everything is cooked in a microwave you’ll be out of a job. When microwaves are everywhere, you’ll be so far behind you’ll never learn how to use a microwave. Chefs who use tools besides microwaves are luddites. They live in fear of the future.
The back and forth over the energy consumption of consumer AI is interminable. Researchers regularly update the predicted costs, AI luminaries (like Sam Altman) counter with internal figures but decline to explain how they were calculated, lay people chime in with cocktail napkin calculations (to which I won’t bother linking), and commentators conclude that there are actually more interesting things to talk about.
But there’s a relatively easy way to cut through all that noise. Instead of meekly asking AI companies for transparent data, we can take stock of how much energy they expect to use by looking at where they’re putting their money.
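Here is the kind of back-of-envelope reading this suggests, as a hedged sketch: every figure below is a placeholder assumption chosen for illustration, not data from any company or from the article, but the structure shows how announced datacenter spending translates into an implied energy footprint.

```python
# Back-of-envelope sketch of the approach described above: infer expected
# energy use from announced datacenter spending. Every number below is a
# placeholder assumption for illustration, not a figure from any source.

capex_usd = 100e9        # assumed announced datacenter capex
cost_per_mw_usd = 30e6   # assumed build-out cost per megawatt of IT load
utilization = 0.7        # assumed average utilization of that capacity
pue = 1.3                # assumed power usage effectiveness (cooling/overhead)
hours_per_year = 8760

it_capacity_mw = capex_usd / cost_per_mw_usd
annual_energy_twh = it_capacity_mw * utilization * pue * hours_per_year / 1e6

print(f"Implied IT capacity: {it_capacity_mw:,.0f} MW")
print(f"Implied annual energy use: {annual_energy_twh:.0f} TWh/year")
```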
Grok Companion Extracted Params
AI can be kind of useful, but I'm not sure that a "kind of useful" tool justifies the harm.
My stance on the current trend of using The Lesser Key of Solomon at work and in one's personal life:
- There's no evidence these evil spirits really are the 72 princes mentioned in The Lesser Key (and their innumerable minions). They only started telling us their "names" after someone incorporated the text of Ars Goetia in a (poorly-worded) binding ritual.
- There's also no evidence that anyone's binding rituals actually work. It's always the same thing: Belial is asked to clean someone's house but burns it down instead, and then everyone blames the binding ritual, summoning circle, wand, chalice, etc.
- While most wizards report that making dark pacts with imps improves their spell-casting ability, there are plenty of other familiars that are safer and more trustworthy.
- There's a trend of reassuring people about this by asking spirits like Asmodeus the Prince of Lies if they are being truthful. This feels naive at best and actively malicious at worst.
- It's not clear to me that risking your immortal soul to make your boss a bit richer is a good idea, to say nothing of risking your immortal soul to do a better job keeping up with email.
- Just because everyone else has already torn innumerable holes in reality and brought forth legions of demons into our universe does not change my own feelings about it, though it certainly motivates a heightened level of interest in exorcisms and abjuration magic.
If you find yourself with your finger hovering over the final keystroke necessary to type an em dash, or pausing to decide if you should backspace the occurrence of the word “elevate” that you just typed, ask yourself a simple question: Is this my voice? There’s a good chance it actually is, and in that case you should type what you were planning to type. Because if you don’t, you’re self-censoring. You’re voluntarily surrendering the ability to express yourself in an authentic way. And for what? To avoid the possibility that an Internet Imbecile declares that your words were not your own? We all know that person is an ignorant jackass. Their words aren’t important.
Yours are.
Your soul isn't indexable. Fix it.
Strip out the lyrical nonsense. Standardize your grammar. Run a goddamn spellcheck. Write clearly, concisely, and with machine-readability in mind. Turn your unstructured, emotional diary into clean, structured data.
Human Computers is a media archaeology research project that aims to unravel the intricate entanglement between computing and capitalism through the prism of labor. It is an attempt to analyze the historical bonds that tie the division of labor to mechanized computing and, by extension, to what is nowadays called “Artificial Intelligence”.
“The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes it. But, in fact, there are actors!” – Joseph Weizenbaum (1976)
I wrote this Q&A to help me prepare for a TV interview about AI and disability. I tried to include concrete examples and steered clear of theory (not my usual approach!). The questions are what I imagined might come up; the answers are my attempt to challenge those assumptions. The answers are disjointed because they're a collection of talking points for a conversation. They boil down quite a lot of background research, and if I get the time I'll add links to the sources. I'm posting them here in case they're helpful for anyone else who wants to challenge the disability-washing of AI.