this post was submitted on 22 Jul 2023 to Asklemmy, "a loosely moderated place to ask open-ended questions"

Sometimes it can be hard to tell if we're chatting with a bot or a real person online, especially as more and more companies turn to this seemingly cheap way of providing customer support. What are some strategies to expose AI?

[–] [email protected] 28 points 1 year ago* (last edited 1 year ago) (2 children)

I've found that for ChatGPT specifically (see the rough filter sketch after this list):

  • it really likes to restate your question in its opening sentence
  • it also likes to wrap up with a take-home message: "It's important to remember that..."
  • it starts sentences with little filler words and phrases. "In short," "that said," "ultimately," "on the other hand,"
  • it's always upbeat, encouraging, bland, and uncontroversial
  • it never (that I've seen) gives personal anecdotes
  • it's able to use analogies, but not well; they never help elucidate the matter
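
Half-joking, but you could even turn those tells into a crude filter. Here's a rough Python sketch; the phrase lists, the scoring, and the sample text are all just made up for illustration (not any real detector):

```python
# Crude, obviously unreliable heuristic for "does this read like ChatGPT?"
# The phrase lists, scoring, and example below are made up for illustration;
# this is not a real detector.

FILLER_OPENERS = ("in short", "that said", "ultimately", "on the other hand")
WRAP_UP_PHRASES = ("it's important to remember", "it is important to note", "in conclusion")

def chatgpt_tell_score(text: str) -> int:
    """Count how many of the tells listed above show up in a piece of text."""
    lowered = text.lower()
    score = 0
    # Tell: sentences opening with little filler words and phrases
    for sentence in lowered.split(". "):
        if sentence.strip().startswith(FILLER_OPENERS):
            score += 1
    # Tell: the wrap-up / take-home message
    score += sum(phrase in lowered for phrase in WRAP_UP_PHRASES)
    return score

if __name__ == "__main__":
    sample = ("In short, that is a great question. It's important to remember "
              "that every situation is different.")
    print(chatgpt_tell_score(sample))  # higher score = more ChatGPT-flavoured
```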
[–] [email protected] 24 points 1 year ago (2 children)

it starts sentences with little filler words and phrases. “In short,” “that said,” “ultimately,” "on the other hand,"

Yeah, ChatGPT writes like a first-year undergrad desperately trying to fulfil the word-count requirement on an essay.

[–] [email protected] 13 points 1 year ago (1 children)

Which works out, because a lot of first-year undergrads are probably using it for that purpose.

[–] [email protected] 4 points 1 year ago (1 children)

Yeah I'd hate to be marking/grading student essays these days.

At least when you're reading a website you can just click away once you realise who wrote it.

[–] [email protected] 3 points 1 year ago

Nah, just get chatGPT to grade them too.

[–] [email protected] 4 points 1 year ago (1 children)

First-years have maximum word counts now, not minimums. That's more of a high school thing.

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago) (1 children)

The universities I've been at had a specific word count to aim for, rather than max/min.

And anything more than 10% over or under it was penalised.

It makes more sense, because if you're writing for publication you're given an approximate target word count.

[–] [email protected] 3 points 1 year ago (1 children)

Last time I talked about this with the other TAs, we ended up concluding that most of the decent papers were close to the max word count or above it (I don't think the students were really treating it as a max, more like a target). Something like 50% of the word count really wasn't enough to actually complete the assignment.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

Totally, good assessment design matches the rubric with an appropriate length, so it's hard for them to fulfill it well if they don't take the space.

As for the maxed out ones, iirc I tended to just rule a line at the 110% mark and not read/mark anything past it.

I know that's a bit uncaring, but it's an easy way to avoid unfairly rewarding over-length work, and the penalty sort of applied itself.