submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]

Hi,

Just like the title says:

I'm trying to run:

With:

  • koboldcpp:v1.43 using HIPBLAS on a 7900XTX / Arch Linux

Running:

--stream --unbantokens --threads 8 --usecublas normal

I get very limited output with lots of repetition.

Illustration

I mostly didn't touch the default settings:

Settings

Does anyone know how I can make things run better?

EDIT: Sorry for multiple posts, Fediverse bugged out.

[-] [email protected] 1 points 1 year ago

I'll try that model. However, your option doesn't work for me:

koboldcpp.py: error: argument model_param: not allowed with argument --model
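That wording is argparse's standard mutually-exclusive-group error: it usually means the model file was supplied twice, once as the positional `model_param` and once via `--model`, and only one of the two is allowed. A minimal sketch of that behavior, assuming (this is an assumption, not koboldcpp's actual source) that `koboldcpp.py` wires the two arguments into a mutually exclusive group:

```python
import argparse

# Sketch of the conflict behind the error above; the argument names
# mirror the error message, the wiring is assumed, not koboldcpp's code.
parser = argparse.ArgumentParser(prog="koboldcpp.py")
group = parser.add_mutually_exclusive_group()
group.add_argument("model_param", nargs="?", help="model file, given positionally")
group.add_argument("--model", help="model file, given as a flag")

def parse(argv):
    """Return the parsed namespace, or None on an argparse conflict."""
    try:
        return parser.parse_args(argv)
    except SystemExit:  # argparse exits on "not allowed with argument ..."
        return None

print(parse(["model.gguf"]))                           # positional alone: ok
print(parse(["--model", "model.gguf"]))                # flag alone: ok
print(parse(["model.gguf", "--model", "model.gguf"]))  # both: conflict, None
```

So dropping either the positional model path or the `--model` flag from the command line should clear the error.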

this post was submitted on 11 Sep 2023

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.
