Asking AI to count how many Rs are in the word stra(w)berry. Can AI be affected by the Illusory Truth Effect?

I recently came across a very interesting article about AI: "How Many R's in 'Strawberry'? This AI Doesn't Know". It sparked my curiosity, so I decided to try it myself. The results were interesting, to say the least.

First, I wanted to check whether the issue was still present in ChatGPT: although I doubted I would still see it today, I wanted to try anyway. And, as expected, ChatGPT answered my question correctly: "The word 'strawberry' contains three R's." So far so good, right? But then I wanted to test it further and decided to try again, this time adding a small typo to the word: 'how many Rs are in the word straberry' (note the missing 'w').

Somebody might say: why would you do that? That does not make any sense! Well, I was expecting the AI to point out the typo and perhaps correct me with something like: "Dear user, I believe you intended to write `strawberry` instead of straberry. By the way, the word strawberry contains 3 Rs...", something like the "Did you mean..." feature in Google, so to speak. Also, think about how many times people make typos when writing on a keyboard... (very likely I made some just writing this article; I am not a writer and English is not my native language...).

To my surprise, ChatGPT simply replied: "The word 'straberry' contains two R's."
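That answer is wrong either way: dropping the 'w' does not remove any 'r', so both spellings still contain three R's. A trivial Python snippet (my own sanity check, not something from the original article or the conversation) confirms it:

```python
# Sanity check: count the letter 'r' in both spellings.
for word in ("strawberry", "straberry"):
    count = word.lower().count("r")
    print(f"'{word}' contains {count} r's")

# Output:
# 'strawberry' contains 3 r's
# 'straberry' contains 3 r's
```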

I tried to correct it, but the AI stood its ground... I tried to convince it that its answer was wrong, and I eventually achieved my goal, although the conversation was a tough one... 😊 (you can find the link to the full conversation at the end of this page).

I admit I was a little bit sassy towards ChatGPT, but my ultimate goal was to seriously test it: it looks like a minor variation in the input brought the initial issue back.

Furthermore, towards the end of my "argument" with ChatGPT, it finally accepted reality.

How should I interpret this? Did ChatGPT finally accept my answer just because I was persistent enough? What would happen if I started questioning everything, providing completely made-up facts and being persistent about it, despite what the AI has been trained on? Would the AI ultimately change its "mind" and accept my statements as true? Is AI also affected by the Illusory Truth Effect?

But these might be questions for another day.


You can find the whole conversation here, if you are interested: https://chatgpt.com/share/e8c43b11-a8e5-4f7d-b8b3-ad0ebc8dac98 (in the linked session you will see my question about the word 'straberry' first, but I believe my point still stands).


Opinions expressed here are solely my own and do not express the views or opinions of anybody else.