TechScape: Why I can’t stop writing about Elon Musk

“I hope I don’t have to cover Elon Musk again for a while,” I thought last week after I sent TechScape to readers. Then I got a message from the news editor. “Can you keep an eye on Elon Musk’s Twitter feed this week?”
I ended up doing a close reading of the world’s most powerful posting addict, and my brain turned to liquid and trickled out of my ears.
But somehow I was still surprised by what I found. I knew the rough contours of Musk’s internet presence from years of covering him: a three-way split between shilling his real businesses, Tesla and SpaceX; eager reposting of bargain-basement nerd humour; and increasingly rightwing political agitation.
Following Musk in real time, though, revealed the ways his chaotic mode has been warped by his shift to the right. His promotion of Tesla is increasingly inflected in culture war terms, with the Cybertruck in particular pitched in language that makes it sound as if buying one will help defeat the Democrats in the US presidential election this November. The bargain-basement nerd humour mentioned above is tinged with anger at the world for not thinking he’s the coolest person in it. And the rightwing political agitation is increasingly extreme.
Musk’s involvement in the disorder in the UK seems to have pushed him further into the arms of the far right than ever before. This month he tweeted for the first time at Lauren Southern, a far-right Canadian internet personality best known in the UK for earning a visa ban from Theresa May’s government over her Islamophobia. More than just tweet: he also supports her financially, sending around £5 a month through Twitter’s subscription feature. Then there was the headline-grabbing retweet of Britain First’s co-leader. On its own, that could have been chalked up to Musk not knowing the pond in which he was swimming; two weeks on, the pattern is clearer. These are his people now.
A neat example of the difference between scientific press releases and scientific papers, today from the AI world: a University of Bath press release making a sweeping claim about AI and existential risk, on the strength of a rather more careful paper from Lu et al.
The press release version of this story has gone viral, for predictable reasons: everyone likes seeing Silicon Valley titans punctured, and AI existential risk has become a divisive topic in recent years.
But the paper is several steps short of the claim the university’s press office wants to make about it. Which is a shame, because what the paper does show is interesting and important anyway. There has been a lot of focus on so-called “emergent” abilities in frontier models: tasks and capabilities that weren’t in the training data but which the AI system demonstrates in practice.
Those emergent abilities are concerning to people who worry about existential risk, because they suggest that AI safety is harder to guarantee than we’d like. If an AI can do something it hasn’t been trained to do, then there’s no easy way to guarantee a future AI system is safe: you can leave things out of the training data, but it might work out how to do them anyway.
The paper demonstrates that, at least in some situations, those emergent abilities are nothing of the sort. Instead, they’re an outcome of what happens when you take an LLM like GPT and hammer it into the shape of a chatbot, before asking it to solve problems in the form of a question-and-answer conversation. That process, the paper suggests, means the chatbot can never truly be given a “zero-shot” question, one it must answer with no examples or guidance at all: the art of prompting ChatGPT is inherently one of teaching it a bit about what form the answer should take.
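To make that concrete, here’s a minimal sketch in Python, my own illustration rather than code from the paper, of the gap between a genuinely zero-shot prompt and the instruction-plus-examples prompt a chatbot actually receives. The sentiment-labelling task and the example reviews are invented for the purpose:

# A toy sentiment task, invented for illustration.
review = "The battery died after two hours. Useless."

# A truly zero-shot prompt: the bare input, with no hint about what kind
# of answer is wanted. The paper argues chatbots rarely, if ever, get this.
zero_shot_prompt = review

# What prompting actually looks like: an instruction plus worked examples.
# The model is taught, in context, what form the answer should take.
# That is in-context learning, not an untrained "emergent" skill.
few_shot_prompt = "\n".join([
    "Label each review as Positive or Negative.",
    "Review: 'Arrived early and works perfectly.' -> Positive",
    "Review: 'Broke on the first day.' -> Negative",
    f"Review: '{review}' ->",
])

print("--- zero-shot ---")
print(zero_shot_prompt)
print("--- instructed / few-shot ---")
print(few_shot_prompt)

Even the bare instruction line is already guidance about the answer’s form; strip everything like it away and, the paper suggests, the supposedly emergent ability largely goes with it.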
It’s an interesting finding! Not quite one that proves the AI apocalypse is impossible, but – if you want some good news – one that suggests it’s unlikely to happen tomorrow.
Nvidia scraped YouTube to train its AI systems. Now that’s coming back to bite it.
This lawsuit is unusual in the AI world, if only because Nvidia was comparatively tight-lipped about its sources of training data. Most AI companies that have faced lawsuits have been proudly open about their disregard for copyright limitations. Take Stable Diffusion, which openly sourced its training data from the open-source LAION dataset.
Of course, not every AI company is playing on a level field here. Google has a unique advantage: everyone gives it consent to train its AI on their material. Why? Because otherwise you get booted off search entirely.
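To see the bind in practice, here’s a simplified sketch of a site’s robots.txt. Google does publish a separate token, Google-Extended, that opts a site out of training its standalone Gemini models; but the crawler that feeds search, Googlebot, also feeds search’s own AI features, and blocking it means vanishing from results:

# Opt out of training Google's standalone AI models (Google-Extended is a real token).
User-agent: Google-Extended
Disallow: /

# The search crawler, though, is all or nothing: block Googlebot and the site
# drops out of Google search entirely, AI features and all.
User-agent: Googlebot
Disallow: /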
One more self-indulgent note. After 11 years, I’m leaving the Guardian at the end of this month, and 2 September will be my last TechScape. I’ll be answering reader questions, big and small, as I sign off, so if there’s anything you’ve ever wanted an answer on, from tech recommendations to industry gossip, then hit reply and drop me an email.
If you want to read the complete version of the newsletter, please subscribe to receive TechScape in your inbox every Tuesday.
