artificially intelligent threats

Jim wrote:

One of the things GPT can do is represent a very large body of knowledge, by predicting the response to a query about it from existing similar, but far from identical, queries.
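The "words about words" point can be made concrete with a toy next-word predictor. This is my own minimal sketch, nothing like GPT's actual architecture, but it shows the same failure mode in miniature: a model trained only on which words follow which words can produce fluent-sounding output with no grasp of what the words refer to.

```python
from collections import Counter, defaultdict

# Toy illustration (not the actual GPT algorithm): a bigram model that
# "answers" by predicting each next word from the words it has seen
# follow the previous word in its training text. Its entire universe is
# words about words; it has no model of the physical things the words
# refer to, so grammatical-sounding nonsense is a natural failure mode.
corpus = (
    "the balloon drifted over the ocean . "
    "the balloon was shot down . "
    "the ocean was calm ."
).split()

# Count, for every word, which words have followed it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word, length=5):
    """Greedily extend a prompt, always picking the most common follower."""
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("the"))
```

Every word the sketch emits is a word it has seen, in an order it has seen, yet nothing anchors the output to any real balloon or ocean. Scaling up the data and the context window makes the fluency far more impressive without, by itself, adding that anchor.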

But because it does not understand the information it is representing, the responses suffer from “hallucination”, reflecting the fact that its model of the knowledge is not the knowledge itself, but a model of words about the knowledge: words about words. Sometimes a response superficially sounds very like a correct answer but is utter nonsense.

ChatGPT makes errors because its universe consists of words referring to words. Its errors do not necessarily reveal a lack of consciousness; rather, they reveal that it does not understand that the words refer to real physical things.

When it makes a completely stupid error and gives a meaningless nonsense response, that response sounds very like a sensible and correct one, and you have to think about it a bit before you realise it is utter nonsense and meaningless gibberish.

ChatGPT is very very good at writing code. Not so good at knowing what code to write.

Suppose it had been trained on words referring to words, on words referring to diagrams, on diagrams and words referring to two-dee and three-dee images, and on words, diagrams, and two-dee and three-dee images referring to full motion videos.

From the quality of the performance on words about words, and words about artistic images, one might plausibly hope for true perception. What we now have is quite clearly not conscious. But it has taken an impressively large step in the direction of consciousness. We have an algorithm that successfully handles the long-standing central big hard problem in philosophy, AI, and the philosophy of AI, at least in a whole lot of useful, important, and interesting cases.

Quite likely we will find it only handles a subset of interesting cases. That is what happened with every previous AI breakthrough. After a while, people banged into the hard limits that revealed no one at home, that consciousness was being emulated, but not present. People anthropomorphise their pets, because their pet really is conscious. They do not anthropomorphise their Teslas, because the Tesla really is not, and endlessly polishing up the Tesla’s driving algorithms and giving them more computing power and data is not getting them any closer.

But we are not running into hard limits of GPT yet.

Continue reading

Posted in digital privacy | Leave a comment

when the mass media pushes a story about X, ask yourself what they are distracting you from

The mass media is always pushing a narrative. Ask yourself how distant their narrative is from common sense and the facts you can ascertain for yourself.

Continue reading

Posted in events that are not especially current | Leave a comment

Pfertility problems and Bloons defense

[embedded video, 0.9 MB]

[embedded video, 0.4 MB]

You’ve heard about the supposedly Chinese balloon. You may not have heard about “Bloons” so the joke above might not make sense. Here’s what Sseth has to say about Bloons:

[embedded video, 0.9 MB]

Continue reading

Posted in events that are not especially current | Leave a comment

sons and daughters of America

8kun reports that some Americans are seeing ads such as the following:

Also, some 8kun folks claim that the end is in sight:

Obviously I expect readers to take any timeline with a bucket of skepticism, because so many attempts to set a timeline for Q-related stuff have failed so miserably in the past.

Boy howdy, do we have some infosnacks for you.

Continue reading

Posted in events that are not especially current | 1 Comment

Naked infosnacks

Continue reading

Posted in events that are not especially current | 1 Comment