Jim wrote:
One of the things GPT can do is represent a very large body of knowledge by predicting the response to a query about it from existing similar, but far from identical, queries.
But because it does not understand the information it is representing, the responses suffer from “hallucination”, reflecting the fact that its model of the knowledge is not the knowledge, but a model of words about the knowledge, words about words. Sometimes they sound superficially very like a correct answer but are utter nonsense.
ChatGPT makes errors because its universe consists of words referring to words. Its errors do not necessarily reveal a lack of consciousness, but rather reveal that it does not understand that the words refer to real physical things.
When it makes a completely stupid error and gives a nonsense response, the response sounds very like a sensible and correct one, and you have to think about it a bit before you realise it is meaningless gibberish.
ChatGPT is very very good at writing code. Not so good at knowing what code to write.
Suppose it had been trained on words referring to words, and on words referring to diagrams, and on diagrams and words referring to two-dee and three-dee images, and on words, diagrams, and two-dee and three-dee images referring to full motion videos.
From the quality of its performance on words about words, and on words about artistic images, one might plausibly hope for true perception. What we now have is quite clearly not conscious. But it has taken an impressively large step in the direction of consciousness. We have an algorithm that successfully handles the long-standing central hard problem in philosophy, AI, and the philosophy of AI, at least in a whole lot of useful, important, and interesting cases.
Quite likely we will find it handles only a subset of interesting cases. That is what happened with every previous AI breakthrough. After a while, people banged into the hard limits which revealed that there was no one at home, that consciousness was being emulated but not present. People anthropomorphise their pets, because their pets really are conscious. They do not anthropomorphise their Teslas, because the Tesla really is not, and endlessly polishing up the Tesla’s driving algorithms and giving them more computing power and data is not getting them any closer.
But we are not running into hard limits of GPT yet.
Read the whole thing at:
https://www.rogerschank.com/fraudulent-claims-made-by-IBM-about-Watson-and-AI
–
Relevant:
They are not doing “cognitive computing” no matter how many times they say they are (Roger Schank’s argument, at the link above)
Relevant, but for different reasons:
Downloads
A reverse-engineered API for OpenAI’s ChatGPT. Extensible for chatbots etc.
https://github.com/acheong08/ChatGPT
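For anyone curious what using it looks like, here is a minimal usage sketch, assuming the repo’s revChatGPT package exposes a Chatbot class that is configured with a browser session token and an ask() method that returns the reply, roughly as its README described at the time. The module path, method names, and return shape have changed between releases, so treat the names below as assumptions rather than the project’s stable API.

    # Hypothetical sketch only: module path, class, method, and return shape
    # are assumptions based on the project's README at the time and have
    # changed across releases of acheong08/ChatGPT (the revChatGPT package).
    from revChatGPT.ChatGPT import Chatbot

    # The wrapper authenticates with a chat.openai.com browser session token
    # rather than an official API key, because it drives the same web
    # endpoints the ChatGPT site itself uses.
    chatbot = Chatbot({"session_token": "<your session token here>"})

    # Assumed interface: ask() sends a prompt and returns a dict whose
    # "message" field holds the model's reply text.
    response = chatbot.ask("Write a short function that reverses a string.")
    print(response["message"])

The point of the design is that it needs no API key at all, only the cookie your browser already holds, which is exactly why OpenAI kept breaking wrappers like this and why the interface kept changing.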
#
Cyber threats:
To what degree can we call Windows “spyware”?
–
#
Global ransomware attack on thousands of servers reported by Italy
NSA wooing thousands of laid-off Big Tech workers for spy agency’s hiring spree
https://www.washingtontimes.com/news/2023/feb/3/nsa-wooing-thousands-laid-big-tech-workers-spy-age/