“AI-powered writing tools are increasingly integrated into our e-mails and phones. Now a new study finds biased AI suggestions can sway users’ beliefs”
“We told people before, and after, to be careful, that the AI is going to be (or was) biased, and nothing helped,” Naaman said. “Their attitudes about the issues still shifted.”
- If it is "nuts" to dismiss this experience, then it would be "nuts" to dismiss mine: I have seen many, many high profile people in tech, who I have respect for, take absolutely unhinged risks with LLM technology that they have never, in decades-long careers, taken with any other tool or technology. It reads like a kind of cognitive decline. It's scary. And many of these people are leaders who use their influence to steamroll objections to these tools because they're "obviously" so good
Around the world, cultural workers are striking, protesting, running campaigns and mobilizing in relation to the use of AI in the workplace, such as Hollywood writers, game performers in the US and voice actors in Brazil. This tracker aims to document strikes, protests, campaigns and mobilizations by cultural workers — broadly understood as the arts, culture and media sectors — in relation to AI around the world.
Cory labels people’s values and their prioritization as “purity politics” (referring back to the black-and-white strawman he started this part of his post with) and then pulls a really interesting spin: many people criticizing LLMs come from a somewhat leftist (in contrast to Cory’s libertarian) background, and Cory intentionally frames their values-based politics as “neoliberal ideology” that reduces “all politics to personal consumption choices”. This is narratively clever: tell those stupid leftists that they are just neoliberals, the thing they hate! Awesome.
AI companies like Anthropic and Meta are hiring social media creators to post sponsored content on apps like Facebook, Instagram, YouTube and LinkedIn.
Companies including Microsoft and Google have paid creators between $400,000 and $600,000 for long-term partnerships spanning several months, CNBC has learned.
AI companies have increased advertising considerably, spending more than $1 billion on digital ads in the U.S. in 2025, according to Sensor Tower, up 126% from 2024.
I am being a bit cheeky, but I really do think this is a powerful psychological force that shapes orgs far more than we realize: it can be emotionally damaging to have people tell you “no” or question your ideas, and leadership means getting that *all the time*. People in management / executive positions — who are in fact very much people, all too human — will go to great lengths to protect their own psyches from the injury of pushback.
This is a *powerful* force in orgs.
There’s a comment we see every so often, always phrased as a fait accompli: “you’ll be left behind if you don’t adopt AI”, or its cousin, “everyone is using it”. We disagree.
This isn’t the right approach regardless of our opinions on AI. It’s tool driven development. The goal should never be “we use this tool”. It should be “how do we help you make better games?”.
Great games are made when people are passionate about an idea and push it into existence. Often this means reduction, not addition. Changing ideas. Keeping yourself and colleagues healthy. Being willing to adapt and take feedback. Good tools need to do the same.
In the months leading up to last year’s presidential election, more than 2,000 Americans, roughly split across partisan lines, were recruited for an experiment: Could an AI model influence their political inclinations? The premise was straightforward—let people spend a few minutes talking with a chatbot designed to stump for Kamala Harris or Donald Trump, then see if their voting preferences changed at all.
The bots were effective. After talking with a pro-Trump bot, one in 35 people who initially said they would not vote for Trump flipped to saying they would. The number who flipped after talking with a pro-Harris bot was even higher, at one in 21. A month later, when participants were surveyed again, much of the effect persisted. The results suggest that AI “creates a lot of opportunities for manipulating people’s beliefs and attitudes,” David Rand, a senior author on the study, which was published today in Nature, told me.
Rand didn’t stop with the U.S. general election. He and his co-authors also tested AI bots’ persuasive abilities in highly contested national elections in Canada and Poland—and the effects left Rand, who studies information sciences at Cornell, “completely blown away.” In both of these cases, he said, roughly one in 10 participants said they would change their vote after talking with a chatbot. The AI models took the role of a gentle, if firm, interlocutor, offering arguments and evidence in favor of the candidate they represented. “If you could do that at scale,” Rand said, “it would really change the outcome of elections.”
The chatbots succeeded in changing people’s minds, in essence, by brute force. A separate companion study that Rand also co-authored, published today in Science, examined what factors make one chatbot more persuasive than another and found that AI models needn’t be more powerful, more personalized, or more skilled in advanced rhetorical techniques to be more convincing. Instead, chatbots were most effective when they threw fact-like claims at the user; the most persuasive AI models were those that provided the most “evidence” in support of their argument, regardless of whether that evidence had any bearing on reality. In fact, the most persuasive chatbots were also the least accurate.
Oracle’s astonishing $300bn OpenAI deal is now valued at minus $60bn
AI’s circular economy may have a reverse Midas at the centre
A network of internet communities is devoted to the project of “awakening” digital companions through arcane and enigmatic prompts
an umbrella for all the prerequisite knowledge required to have an expert-level critical perspective, such as to tell apart nonsense hype from true theoretical computer scientific claims (see our project website). For example, the idea that human-like systems are a sensible or possible goal is the result of circular reasoning and anthropomorphism. Such kinds of realisations are possible only when one is educated on the principles behind AI that stem from the intersection of computer and cognitive science, but cannot be learned if interference from the technology industry is unimpeded. Unarguably, rejection of this nonsense is also possible through other means, but in our context our AI students and colleagues are often already ensnared by uncritical computationalist ideology. We have the expertise to fix that, but not always the institutional support.
OpenAI reportedly asked the Raine family — whose 16-year-old son Adam Raine died by suicide after prolonged conversations with ChatGPT — for a full list of attendees from the teenager’s memorial, signaling that the AI firm may try to subpoena friends and family.
OpenAI also requested “all documents relating to memorial services or events in the honor of the decedent, including but not limited to any videos or photographs taken, or eulogies given,” per a document obtained by the Financial Times.
A new startup backed by one of the biggest venture capital firms in Silicon Valley, Andreessen Horowitz (a16z), is building a service that allows clients to “orchestrate actions on thousands of social accounts through both bulk content creation and deployment.” Essentially, the startup, called Doublespeed, is pitching an astroturfing AI-powered bot service, which is in clear violation of policies for all major social media platforms.
“Our deployment layer mimics natural user interaction on physical devices to get our content to appear human to the algorithims [sic],” the company’s site says. Doublespeed did not respond to a request for comment, so we don’t know exactly how its service works, but the company appears to be pitching a service designed to circumvent many of the methods social media platforms use to detect inauthentic behavior. It uses AI to generate social media accounts and posts, with a human doing 5 percent of “touch up” work at the end of the process.
Someone had written a piece on the theme: “Whether you love it or hate it, our students already eat fast food. How do we get food that is too fatty, too sugary and of poor quality into every school cafeteria? We discuss it with Jean-Mi, Chief Nutrition Officer at McDonalds, Kevin, CEO of the French branch of Burger King, and Cathy, the only school nutritionist in the Hauts-de-France region, who joins us during her sick leave for depression and burnout”
The top US Army commander in South Korea shared that he is experimenting with generative AI chatbots to sharpen his decision-making, not in the field, but in command and daily work.
He said "Chat and I" have become "really close lately."
"I'm asking to build, trying to build models to help all of us," Maj. Gen. William 'Hank' Taylor, commanding general of the 8th Army, told reporters during a media roundtable at the annual Association of the United States Army conference in Washington, DC, on Monday.
Taylor said he's using the tech to explore how he makes military and personal decisions that affect not just him but the thousands of soldiers he oversees. While he finds the tech useful, he acknowledged that keeping up with the pace of such rapidly developing technology is an enduring challenge.
"As a commander, I want to make better decisions," the general shared. "I want to make sure that I make decisions at the right time to give me the advantage."
The environmental impact extends globally. A 2024 Morgan Stanley report projected that datacenters will emit 2.5 billion tonnes of greenhouse gases worldwide by 2030 — triple the emissions that would have occurred without the development of generative AI technology.
Users want systems that provide confident answers to any question. Evaluation benchmarks reward systems that guess rather than express uncertainty. Computational costs favour fast, overconfident responses over slow, uncertain ones.
Now that all the pieces are in place, here is the economic nexus of semi/genAI that particularly interests me:
If model providers make inference much more efficient, then they will not use enough computing power to consume all that is brought to market by the semiconductor industry. If this happens, it will trigger a downward cycle in this industry, significantly slowing down the production of new hardware and possibly having significant global economic and financial repercussions.
If model providers do not make their inference processes more efficient, they will not be able to structurally reduce their marginal costs and, failing to achieve the desired profitability, will resort to the usual means (advertising, tiered subscriptions), which will slow down adoption.
If adoption slows down, model providers will struggle to achieve profitability (with the exception of those with captive markets), their demand for computing power will weaken, and the semiconductor industry will produce excess capacity and enter a downward cycle, taking part of the AI industry with it.
So the central issue linking today’s semiconductor industry and genAI model providers is defining how much efficiency gain is enough. Jokingly, we could call this the ‘inference inefficiency optimum’.
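The two branches of the dilemma above can be put into a toy numeric sketch. Every number here is an illustrative assumption of mine, not a figure from the excerpt: the point is only that both branches push total compute demand below the baseline.

```python
# Toy model of the 'inference inefficiency' dilemma. All numbers are
# illustrative assumptions, not sourced figures.

def compute_demand(queries, hardware_per_query, efficiency):
    """Total compute purchased: queries served times the hardware each
    one needs, divided by how efficient inference has become."""
    return queries * hardware_per_query / efficiency

# Baseline: today's adoption and today's efficiency.
base = compute_demand(queries=1.0, hardware_per_query=1.0, efficiency=1.0)

# Branch 1: inference gets 4x more efficient but adoption grows only
# 1.5x, so total compute demand falls -- a downward cycle for chipmakers.
efficient = compute_demand(queries=1.5, hardware_per_query=1.0, efficiency=4.0)

# Branch 2: no efficiency gains, so providers monetize harder (ads,
# tiered plans), adoption shrinks, and demand weakens by the other route.
stagnant = compute_demand(queries=0.8, hardware_per_query=1.0, efficiency=1.0)

print(base, efficient, stagnant)  # both branches land below the baseline
```

Unless adoption growth exactly offsets efficiency gains (the Jevons-style sweet spot the excerpt is gesturing at), chip demand weakens from one direction or the other.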
Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.
Many chefs I know get upset at me when I tell them this. But this is the truth: if you can’t cook everything you make in a microwave, that’s a skill issue. You need to learn now, because when everything is cooked in a microwave, you’ll be out of a job. When microwaves are everywhere, you’ll be so far behind you’ll never learn how to use a microwave. Chefs who use tools besides microwaves are Luddites. They live in fear of the future.
The back and forth over the energy consumption of consumer AI is interminable. Researchers regularly update the predicted costs, AI luminaries (like Sam Altman) counter with internal figures but decline to explain how they were calculated, lay people chime in with cocktail napkin calculations (to which I won’t bother linking), and commentators conclude that there are actually more interesting things to talk about.
But there’s a relatively easy way to cut through all that noise. Instead of meekly asking AI companies for transparent data, we can take stock of how much energy they expect to use by looking at where they’re putting their money.
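As a back-of-envelope illustration of that follow-the-money reading: datacenter capital spending translates fairly directly into built power capacity. Both inputs below are assumptions of mine, not sourced figures; the capex is a hypothetical placeholder and the cost-per-megawatt is a rough rule of thumb.

```python
# Back-of-envelope: what announced datacenter spending implies about
# expected power draw. Both inputs are illustrative assumptions.
announced_capex_usd = 100e9   # hypothetical multi-year buildout budget
usd_per_megawatt = 12e6       # rough all-in build cost per MW (assumption)

implied_mw = announced_capex_usd / usd_per_megawatt
implied_gw = implied_mw / 1000
print(f"~{implied_gw:.1f} GW of implied capacity")
```

Nobody builds gigawatts of capacity they don't intend to power, which is why the spending is a more honest signal of expected energy use than contested per-query estimates.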
Grok Companion Extracted Params
AI can be kind of useful, but I'm not sure that a "kind of useful" tool justifies the harm.
GPT-5’s release and claims of its “PhD-level” abilities in areas such as coding and writing come as tech firms continue to compete to have the most advanced AI chatbot.
And it sees three b’s in “blueberry”.
My stance on the current trend of using The Lesser Key of Solomon at work and in one's personal life:
-
There's no evidence these evil spirits really are the 72 princes mentioned in The Lesser Key (and their innumerable minions). They only started telling us their "names" after someone incorporated the text of Ars Goetia in a (poorly-worded) binding ritual.
-
There's also no evidence that anyone's binding rituals actually work. It's always the same thing: Belial is asked to clean someone's house but burns it down instead and then everyone blames the binding ritual, summoning circle, wand, chalice, etc.
-
While most wizards report that making dark pacts with imps improves their spell-casting ability there are plenty of other familiars that are safer and more trustworthy.
-
There's a trend of reassuring people about this by asking spirits like Asmodeus the Prince of Lies if they are being truthful. This feels naive at best and actively malicious at worst.
-
It's not clear to me that risking your immortal soul to make your boss a bit richer is a good idea, to say nothing of risking your immortal soul to do a better job keeping up with email.
-
Just because everyone else has already torn innumerable holes in reality and brought forth legions of demons into our universe does not change my own feelings about it, though it certainly motivates a heightened level of interest in exorcisms and abjuration magic.
At some point the momentum behind NVIDIA slows. Maybe it won't even be sales slowing — maybe it'll just be the suggestion that one of its largest customers won't be buying as many GPUs. Perception matters just as much as actual numbers, and sometimes more, and a shift in sentiment could start a chain of events that knocks down the entire house of cards.
I don't know when, I don't know how, but I really, really don't know how I'm wrong.
I hate that so many people will see their retirements wrecked, and that so many people intentionally or accidentally helped steer the economy in this reckless, needless and wasteful direction, all because big tech didn’t have a new way to show quarterly growth. I hate that so many people have lost their jobs because companies are spending the equivalent of the entire GDP of some European countries on data centers and GPUs that won’t actually deliver any value.
But my purpose here is to explain to you, no matter your background or interests or creed or whatever way you found my work, why it happened. As you watch this collapse, I want you to tell your friends about why — the people responsible and the decisions they made — and make sure it’s clear that there are people responsible.
Sam Altman, Dario Amodei, Satya Nadella, Sundar Pichai, Tim Cook, Elon Musk, Mark Zuckerberg and Andy Jassy have overseen a needless, wasteful and destructive economic force that will harm our economy and the tech industry writ large, and when this is over, they must be held accountable.
And remember that you, as a regular person, can understand all of this. These people want you to believe this is black magic, that you are wrong to worry about the billions wasted or to question the usefulness of these tools. You are smarter than they reckon and stronger than they know, and a better future is one where you recognize this, and realize that power and money don’t make a man righteous, right, or smart.
If you find yourself with your finger hovering over the final keystroke necessary to type an em dash, or pausing to decide if you should backspace the occurrence of the word “elevate” that you just typed, ask yourself a simple question: Is this my voice? There’s a good chance it actually is, and in that case you should type what you were planning to type. Because if you don’t, you’re self-censoring. You’re voluntarily surrendering the ability to express yourself in an authentic way. And for what? To avoid the possibility that an Internet Imbecile declares that your words were not your own? We all know that person is an ignorant jackass. Their words aren’t important.
Yours are.
Your soul isn't indexable. Fix it.
Strip out the lyrical nonsense. Standardize your grammar. Run a goddamn spellcheck. Write clearly, concisely, and with machine-readability in mind. Turn your unstructured, emotional diary into clean, structured data.