Mike in Rancho wrote: ↑Mon Jan 09, 2023 5:24 pm
Wow.
I've seen some headlines about ChatGPT but haven't read through them to see what it's all about.
Interesting answers. I had the feeling "neural hallucination" was a bit of Ivo snark.
Is that term really "a thing"?
No snark; it's a real thing, and the term has gained a lot of popularity as a way to call out this sort of AI behavior. In fact, it has become so widespread that some people in the field now wish to rein in its use;
https://www.forbes.com/sites/lanceeliot ... t-to-stop/
I'm personally afraid that horse has bolted, but the mechanism used by unsophisticated software like the <x>xTerminator series is an archetypal example of neural net hallucination being (ab)used (and - most gratingly - sold as something else).
Note, by the way, this specific example of neural hallucination in the article;
When an X-ray or MRI is undertaken, there is nowadays a likely chance that some kind of AI will be used to clean up the images on a reconstruction basis or otherwise analyze the imagery. Researchers caution that this can introduce AI hallucinations into the mix: “The potential lack of generalization of deep learning-based reconstruction methods as well as their innate unstable nature may cause false structures to appear in the reconstructed image that is absent in the object being imaged. These false structures may arise due to the reconstruction method incorrectly estimating parts of the object that either did not contribute to the observed measurement data or cannot be recovered in a stable manner, a phenomenon that can be termed as hallucination” (as stated in “On Hallucinations in Tomographic Image Reconstruction” by co-authors Sayantan Bhadra, Varun A. Kelkar, Frank J. Brooks, and Mark A. Anastasio, IEEE Transactions on Medical Imaging, November 2021).
Ring a bell?
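If the null-space argument in that quote seems abstract, here's a toy sketch of my own (names and numbers made up, nothing to do with the actual paper's methods): imagine a "detector" that only records the sum of two pixels. Any split that matches the sum fits the measurement perfectly, so the reconstructor has to fill in the split from its prior - and that filled-in structure is pure hallucination.

```python
# Toy illustration of the null-space problem behind reconstruction
# hallucinations: the measurement only records the SUM of two pixels,
# so any split that matches the sum "fits the data" equally well.

def measure(pixels):
    # forward model: a single detector that sums both pixels
    return pixels[0] + pixels[1]

def reconstruct(measurement, prior_split=0.5):
    # the reconstructor must guess how the sum divides between the
    # two pixels; prior_split is an assumed/learned prior, NOT
    # information present in the measurement itself
    return [measurement * prior_split, measurement * (1 - prior_split)]

truth = [1.0, 9.0]
y = measure(truth)       # 10.0 - all the detector ever sees
guess = reconstruct(y)   # [5.0, 5.0] - consistent with y, wrong about the object
assert measure(guess) == y  # fits the data perfectly, yet the split is invented
```

The reconstruction is entirely consistent with every measurement taken, and still wrong about the object. That's the trap: you cannot detect the hallucination from the data alone.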
Mike in Rancho wrote:
Thus, as-posed, the question could be a bit loaded. And could it have any pre-training from prior discussions with, oh, maybe Ivo, or did it spit this out entirely on its own? The topic here seems a bit arcane for it to be so authoritative.
I understand it may have appeared that way (the thing is *really* good!), but these are real, un-trained (by me, that is) answers to unloaded questions. You should be able to get a substantially similar answer by posing the same question (some noise is injected on purpose to make the model vary its answers slightly and appear more natural, so the exact wording may differ).
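For the curious: OpenAI hasn't published ChatGPT's exact decoding settings, but the usual way that "injected noise" is implemented in language models is temperature sampling - scaling the model's raw token scores before sampling, so repeated runs over the same input can pick different words. A minimal sketch (generic, not OpenAI's code):

```python
import math
import random

def sample_with_temperature(logits, temperature=0.8, rng=random):
    """Sample one index from raw model scores (logits).

    Higher temperature flattens the distribution (more randomness,
    more varied answers); lower temperature sharpens it toward the
    single most likely choice (more deterministic answers).
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # draw one index according to the resulting probabilities
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Repeated calls over the same scores can differ - that's the noise.
logits = [2.0, 1.5, 0.3]
picks = [sample_with_temperature(logits) for _ in range(10)]
```

At temperature near zero the same question always gets the same answer; crank it up and the phrasing starts to wander, which is exactly the slight variation described above.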
You can give ChatGPT a spin for free if you want;
https://chat.openai.com/chat
If you're interested at all in AI, a quick play with some of the other examples will give you a great overview of where things are heading.
Please note I am not affiliated with OpenAI, but am a big fan of their work.
Please also note that, just like the <x>Xterminator series, ChatGPT too is not immune from hallucinating (but at least it's not its entire design goal).
At least it has the decency to apologize when called out on it.