This is brilliant:

> Reporter: Can you tell our readers what it is like being a squirrel?
> GPT-3: It is very exciting being a squirrel. I get to run and jump and play all day. I also get to eat a lot of food, which is great.
> Reporter: What do you say to people who doubt that you are a squirrel?
> GPT-3: I say that they should come and see for themselves. I am a squirrel, and I am very happy being one.

@caesar to be honest this seems like an "own" of the sentience claim, but GPT-3 just predicts the next token. I.e. if you prompt it as a squirrel, it'll act like a squirrel.

Like another example is more impressive, where it's prompted with a question in which it denies being a squirrel, but when asked about nuts it's obsessed with them, and later it denies being a squirrel again. But it's still predicting the next token, and what fit was a silly story about a squirrel. It never had "intentions" to be sentient or whatever.
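The "just predicting the next token" point can be sketched with a toy bigram predictor (a hypothetical illustration, nothing like GPT-3's actual architecture): once the prompt mentions a squirrel, the most likely continuation stays squirrel-flavoured, with no "intention" anywhere.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": for each word, count which words follow it
# in a tiny made-up corpus, then always emit the most common follower
# (greedy next-token prediction). Purely illustrative.
corpus = (
    "i am a squirrel . a squirrel loves nuts . "
    "a squirrel eats nuts all day ."
).split()

followers = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    followers[cur][nxt] += 1

def continue_text(prompt, n):
    """Greedily extend the prompt by n tokens."""
    words = prompt.split()
    for _ in range(n):
        best = followers[words[-1]].most_common(1)
        if not best:
            break
        words.append(best[0][0])
    return " ".join(words)

# e.g. continue_text("i", 3) -> "i am a squirrel"
```

The model never "claims" to be a squirrel; the prompt simply makes squirrel talk the statistically likely continuation, which is the same mechanism behind the interview above, just at a vastly larger scale.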

@jasper absolutely - and that's really the point. LaMDA is the same; it's a better / more "realistic" model, but it's still just generating the text the user wants to see, as it's programmed to do. Just because it says it's sentient doesn't mean it is, any more than GPT-3 is a squirrel, or a dinosaur, etc 😂

@caesar it's really influenceable by the prompt. Also there are things that are gotchas for it. But these models do largely seem to produce self-consistent texts?

Like I do have doubts now about whether we can recognize an AI that deserves rights..

That said, maybe machine learning applied to surveillance or killing machines is a bigger concern..
