Mortar wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to phigan on Tue Dec 02 2025 11:15:44
Also, try asking your AI to give you an 11-word palindrome.
Time saw raw emit level racecar level emit raw saw time.
Not a palindrome. The individual letters/numbers must read the same in both directions. Example: A man, a plan, a canal - Panama.
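For what it's worth, the letter-level check is easy to script: strip everything that isn't a letter or digit, lowercase what's left, and compare it to its reverse. A minimal sketch in Python, using the two examples from this thread:

  def is_palindrome(text):
      # Keep only letters/digits, lowercase them, then compare to the reverse
      cleaned = [ch.lower() for ch in text if ch.isalnum()]
      return cleaned == cleaned[::-1]

  print(is_palindrome("A man, a plan, a canal - Panama"))  # True
  print(is_palindrome("Time saw raw emit level racecar level emit raw saw time"))  # False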
Mortar wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Nightfox on Wed Dec 03 2025 07:57:33
If you ask me a question and I give you an incorrect answer, but I
believe that it is true, am I hallucinating? Or am I mistaken? Or is
my information outdated?
If you are the receiver of the information, then no. It'd be like me telling you about a dream I had - does that mean you experienced the dream?
Nightfox wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Nightfox on Wed Dec 03 2025 08:58 pm
But again, is it 'making something up' if it is just mistaken?
In the case of AI, yes.
Gonna disagree with you there... If Wikipedia has some info that is wrong, and I quote it, I'm not making it up. If 'it' pulls from the same source, it's not making it up either.
For AI, "hallucination" is the term used for AI providing false information and sometimes making things up - as in the link I provided earlier about this. It's not really up for debate. :)
I've heard of people looking for work who are using AI tools to help update their resumes, as well as tailor them to specific jobs. I've heard of cases
where the AI tools will say the person has certain skills when they
don't.. So you really need to be careful to review the output of AI
tools so you can correct things. Sometimes people might share
AI-generated content without being careful to check and correct things.
I'd like to see some data on that... Anecdotal 'evidence' is not always scientific proof. :-)
That seems like a strange thing to say.. I've heard about that from
job seekers using AI tools, so of course it's anecdotal. I don't know what scientific proof you need to see that AI produces incorrect
resumes for job seekers; we know that from job seekers who've said so.
And you've said yourself that you've seen AI tools produce incorrect output.
The job search thing isn't really scientific.. I'm currently looking
for work, and I go to a weekly job search networking group meeting, and
AI tools have come up there recently. Specifically, someone there was talking about his use of AI tools to help customize his resume for different jobs & such, and he talked about needing to check the results of what the AI produces, because sometimes AI tools will put skills & things on your resume that you don't have, so you have to make edits.
If that's the definition, then okay - a 'mistake' is technically a hallucination. Again, that won't prevent me from using it as the tool it is.
It's not a "technically" thing. "Hallucination" is simply the term
used for AI producing false output.
Bob Worm wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Bob Worm on Wed Dec 03 2025 20:58:51
Hi, jimmylogan.
But that 'third option' - you're saying it didn't 'find' that somewhere
in a dataset, and just made it up?
The third option was software that ran on a completely different
product set. A reasonable analogy would be saying that an iPhone runs macOS.
Just look at all the recent scandals around people filing court cases prepared by ChatGPT which cite legal precedents that were either irrelevant to the point, didn't contain what ChatGPT said they did, or didn't exist at all.
I've not seen/read those. Assuming you have some links? :-)
I guess you should be able to read this outside the UK: https://www.bbc.co.uk/news/world-us-canada-65735769
Some others: https://www.legalcheek.com/2025/02/another-lawyer-faces-chatgpt-trouble/
https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/
https://www.theregister.com/2024/02/24/chatgpt_cuddy_legal_fees/
It's enough of a problem that the London High Court ruled earlier this year that lawyers caught citing non-existent cases could face criminal charges. So I'm probably not hallucinating it :)
Bob Worm wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Bob Worm on Wed Dec 03 2025 20:58:51
Hi, jimmylogan.
I mean... those are 11 words... with a few duplicates... Which can't even be arranged into a palindrome because "saw" and "raw" don't have their corresponding "was" and "war" palindromic partners...
I just asked for it, as you suggested. :-)
I think it was Phigan who asked but yeah, I guessed that came from an
LLM rather than a human :)
Not that I use LLMs myself - if I ever want the experience of giving
very clear instructions but getting a comically bad outcome I can
always ask my teenage son to do something around the house :D
Dumas Walker wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Dumas Walker on Tue Dec 02 2025 11:15:44
Google Gemini looked it up and reported that my trash would be picked up on Friday. The link below the Gemini result was the official link from the city, which *very* clearly stated that it would be picked up on Monday.
Not sure where Gemini got its answer, but it might as well have been made up! :D
LOL - yep, an error. But is that actually a 'made up answer,' aka hallucinating?
Well, it didn't get it from any proper source so, as far as I know, it made it up! :D
phigan wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to phigan on Tue Dec 02 2025 11:15 am
it's been flat out WRONG before, but never insisted it was
You were saying you'd never seen it make stuff up :). You certainly
have. Just today I asked Gemini in two different instances how to do the exact same thing in some software. One time it gave instructions for one method, and the second time it said the first method wasn't possible with that software and a workaround was necessary.
Time saw raw emit level racecar level emit raw saw time.
Exactly, there it is again saying something is a palindrome when it
isn't.
Example of a palindrome:
able was I ere I saw elba
Not a palindrome:
I palindrome I
For AI, "hallucination" is the term used for AI providing false
information and sometimes making things up - as in the link I provided
:-) Okay - then I'm saying that in MY opinion, it's a bad word to use. Hallucination in a human is when you THINK you see or hear something that isn't there. Using the same word for an AI giving false information is misleading.
So I concede it's the word that is used, but I don't like the use of it. :-)
Sorry - didn't mean to demand anything. I just meant that the fact someone says it gave false info doesn't mean it will ALWAYS give false info. The burden is still on the user to verify output.
Yeah, that's definitely the case. And that's true about it not always giving false info. From what I understand, AI tends to be non-deterministic, in that...
I've learned that part of getting the right info is to ask the right question, or ask it in the right way. :-)
...it won't always give the same output even with the same question asked multiple times.
Reminds me of the old computer adage, "garbage in, garbage out".
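On the non-deterministic part: as I understand it, an LLM picks each next word by sampling from a probability distribution, usually scaled by a "temperature" setting, so the same question can take a different path on each run. A toy Python sketch of that sampling step - the vocabulary and scores below are made up purely to illustrate the idea, not any real model's numbers:

  import math, random

  def sample_next_word(word_scores, temperature=1.0):
      # Softmax with temperature: higher temperature flattens the
      # distribution, so less-likely words get picked more often
      words = list(word_scores)
      weights = [math.exp(word_scores[w] / temperature) for w in words]
      return random.choices(words, weights=weights)[0]

  # Made-up scores for the word after "Your trash is picked up on"
  scores = {"Monday": 2.0, "Friday": 1.6, "Tuesday": 0.5}
  for _ in range(3):
      print(sample_next_word(scores))
  # Run it a few times and the days change - same prompt, different
  # output, which is roughly why answers can vary between runs.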