this post was submitted on 08 Jul 2023
339 points (97.5% liked)

Technology

59197 readers
3095 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

Slow June, people voting with their feet amid this AI craze, or something else?

[–] [email protected] 131 points 1 year ago (4 children)

It's Summer. Students are on break, lots of people on vacation, etc. Let's wait to see if the trend persists before declaring another AI winter.

[–] [email protected] 24 points 1 year ago (4 children)

Agreed. I think being between academic years is likely a much bigger factor than we realize. I’m a college professor, and at the end of spring quarter we had a lot of conversations with undergrads, grad students, and faculty about how people are actually using AI.

Literally every undergrad student I spoke with said they use it for every written assignment (for the large part in non-cheating legit educational resource ways). Most students used it for all or most of their programming assignments. Most use it to summarize challenging or long readings. Some absolutely use it to just do all their work for them, though fewer than you might expect.

I’d be pretty surprised if there isn’t a significant bounce-back in September.

[–] [email protected] 41 points 1 year ago* (last edited 1 year ago) (2 children)

It's not just that the novelty has worn off; it's progressively gotten less useful. Any god damn question I ask gets 90,000 qualifiers, and it refuses to provide any data at all. I think OpenAI is so terrified of liability that they have significantly dumbed down its utility in the public release. I can't even ask ChatGPT to provide a link to a study it references, if it references anything at all rather than making ambiguous statements.

[–] [email protected] 8 points 1 year ago (2 children)

Also, ChatGPT 4 came out but is still only available to people who pay (as far as I know). So using ChatGPT 3 feels like only having access to the leftovers. When it first came out, that was exciting because it felt like progress was going to be rapid, but instead it stagnated. (Luckily interesting LLM stuff is still happening, it's just nothing to do with OpenAI.)

[–] [email protected] 8 points 1 year ago

ChatGPT 4 has also noticeably declined in quality since it was released. I use it less because it's become less useful and more frustrating to use. I think OpenAI has been steadily gimping it, trying to get their costs down and make it respond faster.

[–] [email protected] 6 points 1 year ago (1 children)

I pay for it and it's... okay for most things. It's pretty great at nerd stuff, though*. Paste in an error code or cryptic log file message with a bit of context and it's better than googling for 4 days.

*If you know enough to sus out the obviously wrong shit it produces every once in a while.

[–] [email protected] 5 points 1 year ago (1 children)

Pasting an error code or cryptic log file message with a bit of context and it’s better than googling for 4 days.

I usually can find what I'm looking for unless it's really obscure with days of searching. If something is that obscure, it seems kind of unlikely ChatGPT is going to give a good answer either.

If you know enough to sus out the obviously wrong shit it produces every once in a while.

That's one pretty big problem. If something really is difficult/complex you likely won't be able to tell the difference between a wrong answer from ChatGPT and one that's correct unless it just says something obviously ridiculous.

Obviously humans make mistakes too, but at least when you search you see results in context, other can potentially call out/add context to things that might not be correct (or even misleading), etc. With ChatGPT you kind of have to trust it or not.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (1 children)

Yeah if it's that hard to find gpt is just going to hallucinate some bs into the response. I use it as a stack overflow at times and often run into garbage when I'm trying to solve a truly novel problem. I'll often try to simplify it to something contrived but mostly find the output useful as a sort of spark. I can't say I ever find the raw code it generates useful or all that good.

It'll often give wrong answers but some of those can contain useful bits that you can arrange into a solution. It's cool, but I still think people are oddly enamored with what is really just a talking Google. I don't think it's the game changer people are thinking it is.

[–] [email protected] 37 points 1 year ago (1 children)

It's because it's summer and students aren't using it to cheat on their assignments anymore.

[–] [email protected] 4 points 1 year ago

It's definitely this. Except for the kids taking summer classes, who statistically probably have higher instances of cheating.

[–] [email protected] 34 points 1 year ago (1 children)

Well yeah, it's kinda cool, but the novelty will wear off. It's useful sometimes, but it's not a magic elixir.

[–] [email protected] 21 points 1 year ago (1 children)

I use it for quick D&D ideas. Need an NPC on the fly? ChatGPT will help you out.

[–] [email protected] 22 points 1 year ago (2 children)

It's really fucking annoying getting "As an AI language model, I don't have personal opinions, emotions, or preferences. I can provide you with information and different perspectives on..." at the beginning of every prompt, followed by the driest, most bland answer imaginable.

[–] [email protected] 8 points 1 year ago

Yeah, it's boring as shit. If you want a conversation partner there are better (if less reliable) options out there, and groups like personal.ai that repackage it for conversation. There are even scripts to break through the "guardrails".

I love the boring. Every other day, I think, "Man, I really don't want to do this annoying task. I'm not sure if it even saves much time since I have to look over the work, but it's a hell of a lot less mentally exhausting."

Plus, it's fun having it Trumpify speeches. It's tremendous. I've spent hours reading the bigglyest speeches. Historical speeches, speeches about AI, graduation speeches where bears attack midway through... Seriously, it never gets old

[–] [email protected] 5 points 1 year ago

It definitely has its uses, but it also has massive annoyances, as you pointed out. One thing has really bothered me: I asked it a factual question about Mohammed, the founder of Islam. This is how I, a human not from a Muslim background, would answer:

"Ok wikipedia says this ____"

It answered in this long-winded way that had all these phrases like "blessed prophet of Allah". Basically the answer I would expect from an imam.

I lost a lot of trust in it when I saw that. It assumed this authoritative tone. When I heard about that case of a lawyer citing made-up case law from it, I took it as confirmation. I don't know how it happened, but for some questions it has this very authoritative tone, like it knows the answer without any doubt.

[–] [email protected] 21 points 1 year ago (1 children)

For my professional work, the training data is way too outdated by now for ChatGPT to be anywhere near being useful. The browsing feature also can’t make up for it, because it’s pretty bad at Internet search (bad search phrases etc).

[–] [email protected] 11 points 1 year ago (7 children)

i find even for really complex stuff it’s pretty good as long as you direct it: it can suggest some things, you can do some searching based on that, maybe give it a few links to summarise for you, etc

it doesn’t do the work for you, but it makes a pretty good assistant that doesn’t quite understand the subject matter

[–] [email protected] 13 points 1 year ago (1 children)

I love Stable Diffusion, but I really have no use for ChatGPT. I'm amazed at how good the output can be; I just don't have a need to generate text like that. Also, OpenAI has been making it steadily worse with 'safety' restrictions. I find it super annoying and even insulting when Bing-Sydney goes "THIS CONVERSATION IS OVER". It's like being chastised by Facebook or Twitter for being 'violent' when you made a joke.

The ability to generate photographs and illustrations of practically anything, though, is fantastic. My girlfriend has been flagellating me into creating a bunch of really useless crap to promote her business on social media using SD, and I actually enjoy that part. I've made thousands of photos of scenery.

[–] [email protected] 12 points 1 year ago (1 children)

I didn't and don't really care. Call me when there's (free) AI that is good at dirty talk.

[–] [email protected] 6 points 1 year ago

Orca 13b is coming out and is open source and can be run locally so you’ll get your wish really soon

[–] [email protected] 12 points 1 year ago

I use it now and again but I couldn't imagine paying $20+ a month for it.

[–] [email protected] 12 points 1 year ago (1 children)

ChatGPT has mostly given me very poor or patently wrong answers. Only once did it really surprise me by showing me how I configured BGP routing wrong for a network. I was tearing my hair out and googling endlessly for hours. ChatGPT solved it in 30 seconds or less. I am sure this is the exception rather than the rule though.

[–] [email protected] 7 points 1 year ago

It all depends on the training data. If you pick a topic that it happens to have been well trained on, it will give you accurate, great answers. If not, it just makes things up. It's been somewhat amusing, or perhaps confounding, seeing people use it thinking it's an oracle of knowledge and wisdom that knows everything. Maybe someday.

[–] [email protected] 12 points 1 year ago

Personally I've abandoned ChatGPT in favor of Claude. It's much more reliable.

[–] [email protected] 10 points 1 year ago (3 children)

I still use it sometimes, but ohhh boy it can be a wreck. Like I've started using the Creation Kit for Bethesda games, and you can bet your ass that anything you ask it, you'll have to ask again. Countless times it's a back-and-forth of:

Me: Hey ChatGPT, how can I do this or where is this feature?

ChatGPT: Here is something that is either not relevant or just does not exist in the CK.

Me: Hey that's not right.

ChatGPT: Oh sorry, here's the thing you are looking for. and then it's still a 50-50 chance of it being real or fake.

Now I realize that the Creation Kit is kinda niche, and the info on it can be a pain to look up but it's still annoying to wade through all the shit that it's throwing in my direction.

With things that are a lot more popular, it's a lot better tho. (still not as good as some people want everyone to believe)

[–] [email protected] 9 points 1 year ago

Lol, ChatGPT has its pros and cons. For helping me write or refine content, it's extremely helpful.

However, I did try to use it to write code for me. I design 3D models using a programming language (OpenSCAD), and the results are hilarious. It knows the syntax (kinda), and if I ask it to do something simple, it will essentially write the code for a general module (declaring key variables for the design), and then call a random module that doesn't exist (it once called a module "lerp()", which is absolutely not an OpenSCAD module). This magical module mysteriously does 99% of the design... but ChatGPT won't give it to me. When I ask it to write the code for lerp(), it gives me something random like this:

module lerp() { splice(); }

It simply calls up yet another module that absolutely does not exist. The code totally does not compile or work as intended. It is completely wrong.

But I think people are working it out of their system - some found novelty in it that wore off fast. Others like myself use it to help embellish product descriptions for ebay listings and such.
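(Editor's aside: "lerp" usually names linear interpolation in graphics code, which may be where the model dredged the word up from; it is not an OpenSCAD built-in. A minimal Python version, purely for illustration of what the real function does:)

```python
# Linear interpolation: blend from a to b by fraction t (t in [0, 1]).
def lerp(a, b, t):
    return a + (b - a) * t

print(lerp(0, 10, 0.5))  # halfway between 0 and 10 -> 5.0
```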

[–] [email protected] 6 points 1 year ago (5 children)

I’ve been building a tool that uses ChatGPT behind the scenes and have found that that’s just part of the process of building a prompt and getting the results you want. It also depends on which chat model is being used. If you’re super vague, it’s going to give you rubbish every time. If you go back and forth with it though, you can keep whittling it down to give you better material. If you’re generating content, you can even tell it what format and structure to give the information back in (I learned how to make it give me JSON and markdown only).

Additionally, you can give ChatGPT a description of what its role is alongside the prompt, if you're using the API and have control of that kind of thing. I've found that can help shape the responses up nicely right out of the box.

ChatGPT is very, very much a "your mileage may vary" tool. It needs to be set up well at the start, but so many companies have haphazardly jumped on using it without putting in enough work prepping it.
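(The role description mentioned above corresponds to the `system` message in the chat API. A minimal sketch of assembling such a request payload; the model name and wording here are just placeholders, not a definitive recipe:)

```python
import json

def build_chat_request(system_role, user_prompt, model="gpt-3.5-turbo"):
    """Assemble a chat-completion request dict where a system message
    pins down the assistant's role and output format before the prompt."""
    return {
        "model": model,
        "messages": [
            # The system message shapes tone/format for every reply.
            {"role": "system", "content": system_role},
            {"role": "user", "content": user_prompt},
        ],
    }

req = build_chat_request(
    "You are a terse assistant. Reply with JSON only, no prose.",
    "List three primary colors.",
)
print(json.dumps(req, indent=2))
```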

[–] [email protected] 4 points 1 year ago (1 children)

Have you seen the Jolly Roger Telco? They've started using ChatGPT to have longer conversations with telemarketing scammers. I might actually re-subscribe (I used them previously) if the new updated bots perform well enough.

[–] [email protected] 4 points 1 year ago (1 children)

I recently asked it about Nix Flakes, which were very niche and new during ChatGPT's training. It was able to give me a reasonable answer in English, but when I first asked it in German, it couldn't do it. It could reasonably translate the English answer, though, after it had generated it. Depending on what language you use to prompt it, you get very different answers, because it doesn't transfer ideas and concepts between languages or, more generally, between disconnected bodies of text sources.

It is somewhat obvious if you know about the statistical nature of these models, but it's a great example of why they don't KNOW things; they just regurgitate what they read in context before.

[–] [email protected] 9 points 1 year ago (3 children)

I have noticed that I use it less myself. I think honestly though, at least for me, that it is 90% related to the clunky and awkward UI of ChatGPT. If it was easy to natively type the prompt in the browser bar I'd use it much more.

Plus, the annoying text scrolling thingy ... Just show me the answer already, hehe.

[–] [email protected] 11 points 1 year ago* (last edited 1 year ago) (1 children)

The annoying text scrolling can't really be removed, because the model generates one token at a time, and that's what you're seeing.

[–] [email protected] 15 points 1 year ago (4 children)

Sure it can. Finish generating it server-side, then send it as one big chunk to the user.

To be honest though, ChatGPT is pretty fast at generating text these days compared to how it was at the beginning so it doesn't bother me as much.
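(A toy sketch of the difference being argued here: streaming sends each token as it's generated, while a non-streaming endpoint buffers the full generation server-side and sends one chunk. All names are made up for illustration:)

```python
def generate_tokens(answer):
    """Pretend LLM: yields the answer one token at a time."""
    for token in answer.split():
        yield token + " "  # in reality each yield waits on model latency

def stream_to_user(answer):
    # Streaming UI: each token goes out as soon as it exists.
    return list(generate_tokens(answer))

def buffer_then_send(answer):
    # Non-streaming: wait for the whole generation, send one chunk.
    return "".join(generate_tokens(answer))

chunks = stream_to_user("the quick brown fox")
whole = buffer_then_send("the quick brown fox")
assert "".join(chunks) == whole  # same text, different delivery
```

The trade-off is that buffering makes the user stare at nothing for the entire generation time, which is presumably why slow models stream.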

[–] [email protected] 9 points 1 year ago

GPT-4 isn't fast yet, so it will frustrate people if they do that.

[–] [email protected] 8 points 1 year ago (2 children)

I tried it for about 20 minutes

Had it do a few funny things

Thought huh that's neat

Went on with life

Since then the only times I've thought about ChatGPT has been seeing people using it in classes I'm in and just sitting here thinking "this is a fucking introductory course and you're already cheating?"

[–] [email protected] 7 points 1 year ago

I'm not really surprised at all. A lot of people I know wouldn't stop talking about it for a grand total of maybe two weeks, but then it all went quiet. In fairness, this is a sample of people who are all non-tech people, so I think a lot of it is just that they probably forgot the name of it, or how to turn their computer on (definitely the case for some).

[–] [email protected] 7 points 1 year ago

I use it for work from time to time, mostly when I have issues with HTML/CSS or some quick bash scripts. I'd probably miss Copilot more. It saves a lot of time with code suggestions.

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago)

OpenAI's models, including its GPT series, are available via APIs and Microsoft Azure, and so a drop in ChatGPT's website use may be due to people moving to programmatic interfaces

I feel like this is an important detail that changes the conclusion of the article: there may be a lot more end users, through third-party apps, that this way of measuring won't reveal. This is especially important considering that (correct me if I'm wrong) API users are paying ones!

[–] [email protected] 7 points 1 year ago (1 children)

Vacations could be one of the biggest factors.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago) (3 children)

I still use free GPT-3 as a sort of high-level search engine, but lately I'm far more interested in local models. I haven't used them for much beyond SillyTavern chatbots yet, but some aren't terribly far off from GPT-3 from what I've seen (EDIT: though the models are much smaller at 13bn to 33bn parameters, vs GPT-3's 175bn parameters). Responses are faster on my hardware than on OpenAI's website, and it's far less restrictive: no "as a large language model..." warnings. Definitely more interesting than sanitized corporate models.

The hardware requirements are pretty high, 24GB VRAM to run 13bn parameter 8k context models, but unless you plan on using it for hundreds of hours you can rent a RunPod or something for cheaper than a used 3090.
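(As a rough sanity check on those hardware numbers: VRAM scales with parameter count times bytes per weight, plus overhead for activations, KV cache, and the framework. A back-of-the-envelope estimate; the 20% overhead factor is a guess, not a measurement:)

```python
def estimate_vram_gb(params_billion, bytes_per_weight, overhead=1.2):
    """Crude VRAM estimate: weights * precision, padded ~20% for
    activations, KV cache, and framework overhead."""
    weights_gb = params_billion * bytes_per_weight  # 1e9 params ~= 1 GB per byte/weight
    return weights_gb * overhead

# 13bn parameters at fp16 (2 bytes/weight) vs 4-bit quantized (0.5 bytes/weight)
print(round(estimate_vram_gb(13, 2.0), 1))  # ~31.2 GB: too big even for a 24GB card
print(round(estimate_vram_gb(13, 0.5), 1))  # ~7.8 GB: why quantized 13bn models fit consumer GPUs
```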

[–] [email protected] 5 points 1 year ago (3 children)

I have a number of language models running locally. I am really liking the gpt4all install with the Hermes model. So in my case, I used ChatGPT right up until I had one I could keep private.

[–] [email protected] 4 points 1 year ago (1 children)

How does it compare with ChatGPT (GPT 3.5), quality and speed wise?

[–] [email protected] 4 points 1 year ago

The novelty has worn off. I jumped on board and tried out every bot when they were first released: Bard, Bing, Snapchat, GPT—I've given them all a go.

It was a fun experience, asking them to write poems or delve into the mysteries of consciousness, as I got to know their individual personalities. But now, I mainly use them for searching niche topics or checking grammar, maybe the occasional writing.

In fact, this very comment was reformatted in Bard, for instance. Though, since Google integrated their LLM into Search (via Labs), I use them even less.

[–] [email protected] 4 points 1 year ago

I imagine there’s a drop off in casual usage. It’s a trending thing and I’m sure a lot of people checked it out a few times for the novelty of it.

[–] [email protected] 4 points 1 year ago (3 children)

The recent changes made it faster but near useless for coding.

[–] [email protected] 3 points 1 year ago

I'm finding the opposite, actually. I tried it months ago for basic Python scripts and it was garbage. Recently I started a project where I needed some C++ code to flash onto an AVR microcontroller, and it's been killing it. To be fair, I did a decent amount of the code myself and also knew exactly what I wanted the program to do. But it has been really good about cleaning up my code, keeping the code consistent through multiple iterations, and understanding my explanations. It teaches me new functions that I didn't know existed, which make the code better and faster. Also, when I was designing the circuit, I could describe what I needed a component to do and it would give me whole lists of, for example, possible types of 5-volt voltage regulators and the differences between them.

I equate it to having a coworker rather than an employee. I can't just tell it to do stuff and have it spit out a perfect script. I need to work with it to make sure it understands my requirements and realizes its errors. The biggest advantage is that this coworker has encyclopedic knowledge of electrical components and C++.

[–] [email protected] 4 points 1 year ago (2 children)

On that note, what would people recommend for a locally hosted (I have a graphics card) ChatGPT-like LLM that is open source and doesn't require a lot of other things to install?

(Just one command-line installation! That is, if you have pip, pip3, Python, PyTorch, CUDA, conda, Jupyter notebooks, Microsoft Visual Studio, C++, a Linux partition, and Docker. Other than that, it's just a one-line installation!)
