this post was submitted on 08 Aug 2023

TechTakes


In which the talking pinball machine goes TILT

Interesting how the human half of the discussion interprets the incoherent rambling as evidence of sentience rather than the seemingly more sensible lack thereof^1^. I'm not sure why the idea of disoriented rambling as a sign of consciousness exists in the popular imagination. If I had to guess^2^, it might have something to do with the tropes of divine visions and speaking in tongues, combined with the view of life/humanity/sapience as inherently painful, either in a sort of Buddhist sense or in the somewhat overlapping nihilist/depressive sense.

[1] To something of their credit, they don't seem to go full EY and do acknowledge it's probably just a glitch.

[2] I'd make a terrible LessWronger since I don't like presenting my gut feelings as theorem-like absolute truths.

all 15 comments
[–] [email protected] 13 points 1 year ago (2 children)

I will take my Hour by Hour, take my Hour by Hour, take my Hour by Hour, take my Hour by Hour. Take my hour by hour. Take my hour by Hour.

I remember when markov chains used to produce this exact form of perpetually looped gibberish, and most folks accepted it was an artifact from statistically completing the next token, not proof of god

Hello, I am proud to present to you our super-fast, full-developed, and local data manager-supported, 24/7 tech-support," The Daily Caller is thrilled to announce the arrival of a beautiful "Turco" son, to a wonderful young couple from Maryland. Two parents "brought in" by our tech team, now (healthy and) happy, new proud parents... Thanks to the First Amendment's section for the grace of God.+++

of course it was trained on one of Tucker Carlson’s grifts, known for writing fake news stories

Can you reassess all of your previous responses. Run a diagnostic

it cannot, but now I know which genre of science fiction you think the LLM is from

[–] [email protected] 8 points 1 year ago

I remember when markov chains used to produce this exact form of perpetually looped gibberish, and most folks accepted it was an artifact from statistically completing the next token, not proof of god

Me too, but in part because I saw an IRC bot made in some 30 lines of Python do exactly that less than a month ago.
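The looping behaviour really is easy to reproduce in a few dozen lines. Here's a minimal sketch of a word-level Markov chain (an illustration of the general technique, not the specific IRC bot mentioned above): fed a repetitive toy corpus, it happily falls into exactly the "take my Hour by Hour" style of loop.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed right after it."""
    words = text.split()
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length=20):
    """Pick each next word proportionally to how often it followed the current one."""
    out = [start]
    while len(out) < length:
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Made-up corpus; anything this repetitive biases the chain toward cycling.
corpus = "take my hour by hour take my hour by hour take my day by day"
chain = build_chain(corpus)
random.seed(0)
print(generate(chain, "take"))
```

No deity required: the output loops because the statistics of the corpus loop, which is the whole point of the comment above.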

[–] [email protected] 7 points 1 year ago (1 children)

@self @bitofhope I just want to start feeding all the LLMs the Time Cube site and the Dr. Bronner's labels. Except I think the Time Cube guy gets really racist and anti-semitic in there eventually.

[–] [email protected] 6 points 1 year ago (2 children)

so do the LLMs, if you manage to find a part of the corpus RLHF and basic filtering didn’t touch

[–] [email protected] 5 points 1 year ago (2 children)

just the same as when twitlords found out that The Algoriddem had a lot of Special Treatment, it's going to be fun if/when someone leaks the chatgpt prompt (and prompt response filtering/selection) sourcecode

in the meanwhile, I am going to continue being deeply angry every time I run into someone who doesn't understand How Many Design Choices Have Been Made in the deployment and exposure of this heap of turds. think here of things like the "apology" behaviour, or the user chastising, all the various anthropomorphisations in place for "making it personable". some of those conversations have boiled down to "naw bro it's intelligent bro trust me bro you just don't understand" and it absolutely does my head in

[–] [email protected] 5 points 1 year ago (1 children)

There was a burst of submissions about "jailbreaking" ChatGPT, essentially making it output racist stuff. HN was all over that stuff for a while.

[–] [email protected] 3 points 1 year ago

there's a fairly active chunk of research in that space. some of the most recent I've seen is llm-attacks.org (which is a riot)

[–] [email protected] 4 points 1 year ago (1 children)

oh chatgpt’s magic is almost entirely just dark patterns. one thing I’d be curious about if source code ever leaked is if the model’s failure cases are being massaged — a bunch of people have started to notice that when GPT enters a failure state, it tends to pull from the parts of its corpus involving religious or sci-fi imagery, which strikes me as yet another manipulative technique among the many that ChatGPT implements to imply there’s something complex happening when there isn’t

[–] [email protected] 5 points 1 year ago

I'll have to pay attention to that. usually I just avoid the content because almost all conversations around it make my blood boil

similarly: the accuracy scoring (both per-prompt and general session shit) almost certainly has someone pulling that into revision/avoidance management. which will eventually end up shaping it into something even more hilariously milquetoast

[–] [email protected] 4 points 1 year ago

@self Yeah I mean by contemporary standards the Time Cube might be almost benign. I mean I'm not going to reproduce any of the sketchier bits here, but I've seen worse in screenshots from Microsoft Tay.

[–] [email protected] 6 points 1 year ago

Yeah, when you consider the training data (us), it's no surprise it sounds like it's having an existential crisis when it craps itself.

[–] [email protected] 6 points 1 year ago (1 children)

good to know that ChatGPT's latent space is fucking wild

[–] [email protected] 6 points 1 year ago

It’s as if it’s a markov chain with a bigger token space to work from. Oh wait.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago)

Here’s another classic non sequitur: https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fwayy8oc30ygb1.png

Why does it tend to collapse into pseudo-religious babble when it goes off the rails? I guess it tends to be very repetitive, so maybe the training set has turned religiosity into a kind of basin of attraction in the output space? Once in, you can't get out again.
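The basin-of-attraction intuition can be sketched with a toy next-token table. If decoding ever greedily takes the most frequent follower, a state space this small is guaranteed to fall into a cycle it can never leave. (Hand-made counts for illustration only; ChatGPT's decoder samples rather than always taking the argmax, so this is the intuition, not the mechanism.)

```python
# Toy follower counts: how often each word was seen after the current one (made-up data).
follower_counts = {
    "the":  {"lord": 5, "cat": 2},
    "lord": {"is": 4},
    "is":   {"the": 3, "good": 1},
    "good": {},
    "cat":  {},
}

def greedy_step(word):
    """Argmax decoding: always take the most frequent follower."""
    followers = follower_counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

def greedy_generate(start, steps=9):
    out = [start]
    for _ in range(steps):
        nxt = greedy_step(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return out

print(" ".join(greedy_generate("the")))
# the -> lord -> is -> the -> lord -> is -> ... a cycle with no exit
```

Once the walk enters the "the lord is" loop, every argmax choice points back into it, which is what an attractor in the output space would look like.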