Bullshit images

Hicks, Humphries and Slater have recently published a paper entitled “ChatGPT is bullshit”. First of all, 10/10 to them for the paper title. Their point is simple:

Applications of these [LLMs like ChatGPT] systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. 

I think this is a fair point: LLMs do produce results that are just wrong, and they seem rather indifferent to whether their output is correct.

According to Wikipedia, “Frankfurt determines that bullshit is speech intended to persuade without regard for truth. The liar cares about the truth and attempts to hide it; the bullshitter doesn’t care whether what they say is true or false.” To be fair, OpenAI and the others are, I am sure, working to remove many of the inaccuracies. But I still think the point stands that much of the programming of LLMs seems to aim for plausibility over accuracy. Maybe accuracy is fundamentally harder than plausibility? Or maybe it is more that the way LLMs work makes them fundamentally better at plausibility than accuracy?

And it is not just the text answers: the image above is what I got from ChatGPT when I asked it to draw the DNA double helix. The image is bullshit. The helix is clear, but what is with all the bubbles, and the random text at top left? I am very bad at drawing schematics, so I hoped for help from ChatGPT. It looks like I will need to wait a while for improvements to be made.
