Whimsical Waffle: The Curious Case of LLMs and Their Linguistic Shenanigans
yay
Actually, to get the DRYRUN test, all we would have to do is to get rid of the MAP_POPULATE in:
mmap(NULL, 31937041504, PROT_READ, MAP_SHARED|MAP_POPULATE, 4, 0) = 0x7ff79c600000
Because I think with the right switches, we can otherwise avoid touching the memory (alternatively, map /dev/null). Of course, the measurements allowed by DRYRUN are much more worthwhile. Basically, it's the killer feature if we could make it available and it turns out to be feasible. That's the really interesting (to me) todo point: create a script that downloads the gguf header only from huggingface and recreates a dummy gguf. Too bad the gguf file format is so badly designed - you have to decode the whole header incrementally to know how long it is.
(using fuse to mount a file via https is cheating)
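A minimal sketch of such a script (Python, untested; the URL and chunk size are placeholders), just to illustrate the incremental decoding: it fetches byte ranges from huggingface and walks the header until it knows where the tensor data starts.

#!/usr/bin/env python3
# Sketch: fetch only the GGUF header via HTTP Range requests and walk it
# incrementally. URL and chunk size are placeholders; error handling is minimal.
import struct, requests

URL = "https://huggingface.co/<user>/<repo>/resolve/main/<file>.gguf"  # placeholder
CHUNK = 4 * 1024 * 1024  # grow the buffer in 4 MiB steps (arbitrary)
buf = b""

def need(n, pos):                      # ensure buf holds pos+n bytes, return pos+n
    global buf
    while len(buf) < pos + n:
        r = requests.get(URL, headers={"Range": f"bytes={len(buf)}-{len(buf)+CHUNK-1}"})
        r.raise_for_status()
        buf += r.content
    return pos + n

def u32(pos):
    end = need(4, pos)
    return struct.unpack_from("<I", buf, pos)[0], end

def u64(pos):
    end = need(8, pos)
    return struct.unpack_from("<Q", buf, pos)[0], end

def gstr(pos):                         # GGUF string: uint64 length + bytes
    n, pos = u64(pos)
    end = need(n, pos)
    return buf[pos:end].decode("utf-8"), end

SCALAR = {0: 1, 1: 1, 2: 2, 3: 2, 4: 4, 5: 4, 6: 4, 7: 1, 10: 8, 11: 8, 12: 8}

def skip_value(vtype, pos):            # skip a metadata value of the given type
    if vtype in SCALAR:
        return need(SCALAR[vtype], pos)
    if vtype == 8:                     # string
        n, pos = u64(pos)
        return need(n, pos)
    if vtype == 9:                     # array: element type, count, elements
        et, pos = u32(pos)
        n, pos = u64(pos)
        for _ in range(n):
            pos = skip_value(et, pos)
        return pos
    raise ValueError(f"unknown GGUF value type {vtype}")

pos = need(4, 0)
assert buf[:4] == b"GGUF"
version, pos = u32(pos)
n_tensors, pos = u64(pos)
n_kv, pos = u64(pos)
for _ in range(n_kv):                  # metadata key/value pairs
    _key, pos = gstr(pos)
    vtype, pos = u32(pos)
    pos = skip_value(vtype, pos)
for _ in range(n_tensors):             # tensor infos: name, dims, type, offset
    _name, pos = gstr(pos)
    n_dims, pos = u32(pos)
    for _ in range(n_dims):
        _dim, pos = u64(pos)
    _ggml_type, pos = u32(pos)
    _offset, pos = u64(pos)
print(f"GGUF v{version}: header ends at byte {pos}, {n_tensors} tensors follow")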
btw., in the case of blacksheep, i take the lists of quants done from the "quantize" script and patch the job like this:
"iquants": "Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M Q6_K IQ4_XS Q3_K_S Q3_K_L Q5_K_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S",
and for the jais models, for example, I removed the *0, *1, IQ4_NL quants, essentially:
"squants": "x-f16 Q4_K_S Q2_K Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS",
"iquants": "Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M IQ3_XS IQ3_S",
it's in theory possible to do this when adding the job (not via llmc, because reasons), but that requires us to predict with some accuracy that this will happen, so it's rarely useful
Actually, to get the DRYRUN test, all we would have to do is to get rid of the MAP_POPULATE in:
mmap(NULL, 31937041504, PROT_READ, MAP_SHARED|MAP_POPULATE, 4, 0) = 0x7ff79c600000
I'm a bit confused. Dryrun doesn't even use mmap. I explicitly disable it and even print "mmap is not supported for dry-run so it is now disabled" as a warning if you don't specify --no-mmap. Why would you even want mmap for dry-run? You are not allocating any memory when loading the model, so what would be the point of it?
Because I think with the right switches, we can otherwise avoid touching the memory (alternatively, map /dev/null).
What do you mean by touching memory? No additional RAM or GPU memory should get allocated when loading a model. Obviously llama.cpp requires some memory to function, like any application, but that is so little it can be ignored.
Of course, the measurements allowed by DRYRUN are much more worthwhile. Basically, it's the killer feature if we could make it available and it turns out to be feasible. That's the really interesting (to me) todo point: create a script that downloads the gguf header only from huggingface and recreates a dummy gguf. Too bad the gguf file format is so badly designed - you have to decode the whole header incrementally to know how long it is.
I don't think the header can be that big so you can likely just download enough for the full header to always be present.
btw., in the case of blacksheep, i take the lists of quants done from the "quantize" script and patch the job like this
"iquants": "Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M Q6_K IQ4_XS Q3_K_S Q3_K_L Q5_K_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S"
I assume you are setting this inside llmjob edit.
Wouldn't the scripts synchronize when it is available again?
Altogether it's 3GB, not just scripts, but also, of course, llama.cpp. I added a hack so when removing the disable flag it will sync automatically, but I also update llama.cpp from home, and every node has a different combination of llama.cpp variants (probably the easiest way around is to change that).
But, yeah, that's not effectively automatable.
Yes even for me it would now be inconvenient to switch as I memorized the path so well.
embrace the difference :)
Oh, let's hope for the best. No imatrix failures so far, but a lot of imatrix tasks will only be started at 22:00 due to most of them currently being timeofday blocked.
I am pretty sure the dryrun test works - the only way it could fail is if it somehow succeeds despite the model being broken. Likely there are some tests in llama.cpp that are only done at inference time; the question is how many, and are they important :) We will find out.
Just so you know, DRYRUN is supposed to work with every llama.cpp executable that loads a model, so you are not limited to llama-cli.
To... some extent (i.e. tracking allocations)? Surely you have not found a generic way to exit all of these at just the right time.
Then just don't use llama-cli but any other one that doesn't do this.
Haha, "just". Love it :) Anyway, are there any? There is the server, but the server seems to do the same thing.
Nice. No idea why everyone keeps renaming their models, but us having a different name makes our models hard to find, so automated renames would be quite useful.
They rename it because they want to be able to erase it and create a different one without having to come up with a new final name, in case it sucks. Models are also regularly moved, and sometimes even apparently cloned, to other users.
It does make them harder to find, but at least I stopped using the search function by hf and started to use the quantisations link.
That would be amazing! There are quite a lot of factors that influence vram usage but maybe you can find a pattern by playing around with dryrun.
I would allow the user to specify VRAM for 0, 1 or 2 gpus, tensor split, some flags like flash attention, and then probably do a binary search to find the maximum -ngl value.
models always show the date when they were last updated
You'll have to check quant file dates anyway if you need some kind of date. And then, it's pretty useless.
I guess we can at least try to update them in chronological order, so the order stays the same. Or can we?!?
The updates would almost certainly go from newest to oldest, even (or rather, reverse order in how hf lists them for me), with some randomness.
GIT_COMMITTER_DATE and GIT_AUTHOR_DATE environment variables before committing using git
If I can't do it via the api it will not happen. Messing in scripts with git will be a disaster. Besides, will the server-side git really just accept any client-side garbage date when pushed?
as this will hopefully be the last time we ever edit all of them.
The other a-ha moment I had last week was when I realised that this is the problem and must give. I have versioned the model cards now, so we can keep any number of different compatible card formats and update at our own pace.
I don't think with us publishing 100+ repos a day anybody would care about 20000 updates even per day.
I'm a bit confused. Dryrun doesn't even use mmap. I explicitly disable it and even print "mmap is not supported for dry-run so it is now disabled" as a warning if you don't specify --no-mmap. Why would you even want mmap for dry-run? You are not allocating any memory when loading the model, so what would be the point of it?
I was talking about an alternative way to achieve just the validity testing without changing llama.cpp. It's entirely hypothetical.
I don't think the header can be that big so you can likely just download enough for the full header to always be present.
The header is pretty massive - tiny if you look at the whole file, but many megabytes in size, enough to warrant an optimisation. My first computer had ~100 octets usable memory. I saw amazing software written in 20k of memory. When I see a bash process using 2MB of RAM I regularly get dizzy.
Anyway, gguf is very wasteful; for example, every vocabulary entry is an 8-byte string length + the string. Also, "likely enough" means you still have to be prepared for it to not be enough in edge cases.
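To put a rough number on the vocabulary waste (the 150k entry count is just an assumption, in the ballpark of current tokenizers):

vocab_entries = 150_000              # assumed vocabulary size, for illustration
overhead = vocab_entries * (8 - 1)   # 8-byte length prefix vs. a 1-byte one
print(f"~{overhead / 1e6:.1f} MB spent on length prefixes alone")  # roughly 1 MB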
And to be honest, what worries me most is that aws typically charges for the full file even if only a few bytes of it are being downloaded. But since the gguf parse on the hf page exists, I am sure it doesn't matter :)
To... some extent (i.e. tracking allocations)? Surely you have not found a generic way to exit all of these at just the right time.
It should work for the majority of them. Almost all that load a model are using the same code to do so. I just tested llama-imatrix, llama-perplexity, llama-simple, llama-simple-chat and llama-run, all of which were fully compatible with DRYRUN despite me never testing them before. It's not that they just work, they also tell you how much memory would be required to load the model in a way that fulfills their purpose, as they essentially just load the model with the exact parameters they require.
Haha, "just". Love it :) Anyway, are there any?
No idea. Try the ones I mentioned above, and if they all do it then this is likely something in the model loading code, in which case I can take a look at the code and see if we can change this.
I would allow the user to specify VRAM for 0, 1 or 2 gpus, tensor split, some flags like flash attention, and then probably do a binary search to find the maximum -ngl value.
That would be so awesome. This is actually exactly what I'm currently using DRYRUN for myself.
Keep in mind that DRYRUN only tells you the memory required to load the model and allocate enough memory for its context. Memory used during inference for things like attention is not considered but is easy to estimate. In fact, more memory is required to load a model if flash attention is enabled due to additional overheads associated with its implementation.
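The search itself would be trivial - a minimal sketch, assuming a hypothetical fits() helper that dry-runs llama.cpp with a given -ngl (plus tensor split, flash attention and so on), adds an estimate for the inference-time memory mentioned above, and compares the result against the user's VRAM budget:

def fits(ngl: int) -> bool:
    """Stub: dry-run llama.cpp with -ngl `ngl`, add the estimated inference
    overhead, and return True if everything fits into the given VRAM budget."""
    raise NotImplementedError

def max_ngl(n_layers: int) -> int:
    """Largest -ngl that still fits, or -1 if even -ngl 0 does not fit."""
    lo, hi, best = 0, n_layers, -1
    while lo <= hi:
        mid = (lo + hi) // 2
        if fits(mid):            # memory use grows with ngl, so the search is monotone
            best, lo = mid, mid + 1
        else:
            hi = mid - 1
    return best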
If I can't do it via the api it will not happen. Messing in scripts with git will be a disaster.
Totally understandable.
will the server-side git really just accept any client-side garbage date when pushed?
All git servers seem to. Git servers kind of trust client-side garbage by design. I had to spoof dates/names/emails for author/committer so many times in the past and not once had a git server refuse the commit. The only thing I'm not sure about is whether HuggingFace uses the time in the git commit like GitHub/GitLab do, or if it uses the server time of the push. Now I'm a bit curious, so the next time I upload a model I might try it.
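For the record, the spoofing itself is just two environment variables (the timestamp below is made up; git honours these variables, and whether the Hub UI uses them instead of the push time is exactly the open question):

import os, subprocess

# After staging the changed files, commit with back-dated author/committer times.
env = dict(os.environ,
           GIT_AUTHOR_DATE="2024-01-15T12:00:00+00:00",     # made-up timestamp
           GIT_COMMITTER_DATE="2024-01-15T12:00:00+00:00")
subprocess.run(["git", "commit", "-m", "update model card"], env=env, check=True)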
The other a-ha moment I had last week was when I realized that this is the problem and must give. I have versioned the model cards now, so we can keep any number of different compatible card formats and update at our own pace.
I don't think with us publishing 100+ repos a day anybody would care about 20000 updates even per day.
Yes it should be fine unless we hit some kind of rate limit.
The header is pretty massive - tiny if you look at the whole file, but many megabytes in size, enough to warrant an optimization. My first computer had ~100 octets usable memory. I saw amazing software written in 20k of memory. When I see a bash process using 2MB of RAM I regularly get dizzy.
My first "Gameboy" which in fact was a Voyage 200 calculator for school had 188 kB RAM and 2,7 MB ROM and it was enough to play all kind of games. I even had something like Maro Maker on there. I actually had that Voyage 200 calculator 5 years before I had my first mobile phone and used it from everything from reading, writing, programming and gaming.
In case you wonder, my first PC was a Windows 2000 machine with 13 GB of HDD storage and I think 128 MB of RAM. My first programming language was BlitzBasic, to write PC games, followed by Compact-C, which I used to program C-Control Pro microcontrollers that had 2 KB of usable RAM, 10 KB of usable flash storage, 1 KB EEPROM and a 14.7456 MHz CPU, so I know the feeling.
Anyway, gguf is very wasteful; for example, every vocabulary entry is an 8-byte string length + the string.
That is indeed terribly wasteful. 1 byte would have been enough.
Also, "likely enough" means you still have to be prepared for it to not be enough in edge cases.
Which should be fine, as llama.cpp was so nice as to put stupid limits everywhere, so most edge cases likely already failed when we tried converting them into GGUF.
And to be honest, what worries me most is that aws typically charges for the full file even if only a few bytes of it are being downloaded. But since the gguf parse on the hf page exists, I am sure it doesn't matter :)
S3 only charges for the actually used bandwidth as far as I'm aware. So if you only download the first 10 MB, HuggingFace should only be charged for 10 MB. They do charge a very low amount per 10K API calls, but this doesn't matter at all as we only have around 500K quants. I'm mostly worried that HuggingFace might be using intelligent tiering, in which case us accessing all the quants might cause them to be copied into hot storage, which would then cost them the transfer fee plus 30 days of hot storage. But in any case, there is not much we can do about any of this unless we find a storage usage pattern and can, based on one quant, tell how much all the others require, which I think might be possible.
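A very rough sketch of that last idea, assuming quant file sizes scale roughly with their bits per weight (Q4_0 and Q8_0 follow directly from their block layout; the K- and I-quants are mixes and would have to be calibrated against real files first):

# Estimate other quant sizes from one known quant, assuming size ~ bits per weight.
APPROX_BPW = {"Q4_0": 4.5, "Q8_0": 8.5}   # exact from block layout; add K-/I-quants empirically

def estimate_size(known_quant: str, known_bytes: int, target_quant: str) -> int:
    """Scale the known file size by the ratio of bits per weight."""
    return int(known_bytes * APPROX_BPW[target_quant] / APPROX_BPW[known_quant])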
Memory used during inference for things like attention is not considered but is easy to estimate. In fact, more memory is required to load a model if flash attention is enabled due to additional overheads associated with its implementation.
That's a bummer then... So how would you easily estimate it? And what do you mean by more memory being required to "load" a model - after loading, flash attention surely uses less memory.
Yes it should be fine unless we hit some kind of rate limit.
That doesn't worry me either - I envisaged some kind of bulk update because I thought versioning the readmes was a bad idea. But I changed my mind. If we hit a rate limit, it will take a few years to update old repos - so what.
Voyage 200 calculator for school
I got the first HP 48SX in Germany (or so I was actually told by HP). Sigh. HP calculators... were so nice...
Windows 2000
Wow. That is so long after I had switched to GNU/Linux. (I switched from DOS to Linux just before win 3 became ubiquitous (in 1994, with 1.0.2 or something - I was even late to the game, or so it felt))
That is indeed terribly wasteful. 1 byte would have been enough.
Yeah, or 4 octet (or even 8 octet) header length + json/msgpack/cbor/... and yes, one octet would be enough if you limit strings to 127 octets, but to be fair, that's a limit of the encoder, not a limit of the format.
I'd say whoever designed it (well, gerganov) was probably paranoid about running into arbitrary 4GB limits anywhere. Puzzlingly enough, though, the primitive type numbers (there are 13) are stored in 32-bit ints. And no, everything is just octet-aligned, so it's nothing to do with that.
To its defence, the gguf decoder I wrote in Perl is just 80 lines of code. So in that sense, it lends itself to a very simple implementation. But using an existing JSON decoder with that header would just be 3 lines or so...
I think ggerganov has a major fear of external dependencies - even more than me, and I thought I was a bit on the extreme side.
S3 only charges for the actually used bandwidth as far as I'm aware.
I admit I am no expert, but it seems to be a well-known attack to request only part of a large file and get billed for much larger transfer costs, because aws does not bill octets downloaded but octets prepared for download, regardless of how much was actually used (or even requested). So yes, only actually used bandwidth, but it's their internal fantasy made-up bandwidth, not the external customer-measurable bandwidth. It is possible that it only affects some S3 storage products, but it's a concern. Well, it's not a concern, because huggingface does it themselves, and I am happy to cache things...
neither, the imatrix ones i have to deal with. queue fewer junk models? :-)
(do these actually work with transformers?)
well, actually nukeall does work in this case
What should I do about this one? nuke and force requeue explicitly to nico1? I think this should work as it should auto-skip already existing quants.
Running scope as unit: llmjob-wrap-gemma-3-4b-persian-v0-noquant-6698.scope
{[[PROGRESS:preparing...]]}
{[[PROGRESS:mmproj extraction]]}
mmproj extraction attempted on unsupported host
job finished, status 72
job-done<0 gemma-3-4b-persian-v0 noquant 72>
https://huggingface.co/mshojaei77/gemma-3-4b-persian-v0
neither, the imatrix ones i have to deal with. queue fewer junk models? :-)
Most of them are not junk, but I unfortunately don't have time to test every single one of them before queueing. Many medical finetunes lack a proper model card, which makes judging their quality without actually testing the model almost impossible. We could say no model card means trash, but this doesn't seem to always be true, as some authors are just lazy and I already had multiple good models without a model card.
do these actually work with transformers?
I just tested Gemma2-2b-IT-FT-medical_qa using transformers and it worked. But no worries, the model kind of sucks, as it wants you to ask questions formatted exactly like in the medical QA dataset and is so heavily censored that it refuses to answer the majority of them. It seems so stupid to create a medical finetune that refuses to answer medical questions. But it also seems stupid to not write a model card.
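(For anyone wanting to reproduce such a quick test, something along these lines does the job with transformers; the repo id below is a placeholder guessed from the model name.)

from transformers import pipeline

# Placeholder repo id; replace with the actual namespace of the finetune.
pipe = pipeline("text-generation", model="<user>/Gemma2-2b-IT-FT-medical_qa")
out = pipe("What are the common symptoms of iron deficiency?", max_new_tokens=128)
print(out[0]["generated_text"])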
well, actually nukeall does work in this case
Great, I will nukeall them myself in the future. I will also try to find a way to recognize and filter such failures before even queueing them. With the latest changes to my script, the failure rate already got reduced a lot compared to earlier versions.
What does (worker +cork) mean? I noticed that you queued all of today’s lownice models using that flag.
Edit: Ah interesting, that flag is gone now.
I merged the latest llama.cpp into the mradermacher branch, adding support for the RWKV v7 architecture and fixing the tensor shape issue of OLMo-2-0325-32B-Instruct (tensor 'blk.0.attn_k_norm.weight' has wrong shape; expected 5120, got 1024).
I highly recommend updating, as otherwise all RWKV v7/RWKV v7 Distilled based and many OLMo-2 based models will fail. Once you have updated, please queue the following models:
RWKV v7 Base models (RWKV7ForCausalLM):
- https://huggingface.co/fla-hub/rwkv7-191M-world
- https://huggingface.co/fla-hub/rwkv7-0.4B-world
- https://huggingface.co/fla-hub/rwkv7-1.5B-world
- https://huggingface.co/fla-hub/rwkv7-2.9B-world
- https://huggingface.co/fla-hub/rwkv7-0.1B-g1
RWKV v7 Distilled models (RwkvHybridForCausalLM):
- https://huggingface.co/RWKV-Red-Team/ARWKV-R1-1B5
- https://huggingface.co/RWKV-Red-Team/ARWKV-R1-7B
- https://huggingface.co/RWKV-Red-Team/ARWKV_7B_R1_16K
Force requant failed OLMo-2 models (Olmo2ForCausalLM):
(worker +cork)
Sorry, just experimenting - I wanted to queue everything first, so I set an impossible worker name to be changed when I am happy with the queue.
llama.cpp is updated, could you do me a favour and queue the models, maybe a test model first?
llama.cpp is updated, could you do me a favor and queue the models, maybe a test model first?
Thanks a lot! Will do.
Sorry, just experimenting - I wanted to queue everything first, so I set an impossible worker name to be changed when I am happy with the queue.
Ah, I see. Now it makes sense. No problem, I was just a bit confused at first.