Whimsical Waffle: The Curious Case of LLMs and Their Linguistic Shenanigans


Actually, to get the DRYRUN test, all we would have to do is to get rid of the MAP_POPULATE in:

mmap(NULL, 31937041504, PROT_READ, MAP_SHARED|MAP_POPULATE, 4, 0) = 0x7ff79c600000

Because I think with the right switches, we can otherwise avoid touching the memory (alternatively, map /dev/null). Of course, the measurements allowed by DRYRUN are much more worthwhile. Basically, it's the killer feature if we could make it available and it turns out to be feasible. That's the really interesting (to me) todo point: create a script that downloads the gguf header only from huggingface and recreates a dummy gguf. Too bad the gguf file format is so badly designed - you have to decode the whole header incrementally to know how long it is.

(using fuse to mount a file via https is cheating)
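Roughly, such a script would have to do something like the sketch below (just a sketch, assuming the documented GGUF v3 layout; the url and chunk size are placeholders, and the dummy gguf would then just be this header plus zero-filled tensor data):

```python
# Rough sketch: find out how long a GGUF header is by downloading it
# incrementally via HTTP Range requests. Assumes the documented GGUF v3
# layout; URL and CHUNK are placeholders.
import struct
import urllib.request

URL = "https://huggingface.co/<user>/<repo>/resolve/main/model.Q4_K_M.gguf"  # placeholder
CHUNK = 4 * 1024 * 1024  # fetch 4 MiB at a time

# GGUF value type ids -> struct format for the fixed-size ones
SCALAR = {0: "B", 1: "b", 2: "H", 3: "h", 4: "I", 5: "i", 6: "f",
          7: "?", 10: "Q", 11: "q", 12: "d"}
STRING, ARRAY = 8, 9

def fetch(start, length):
    req = urllib.request.Request(URL, headers={"Range": f"bytes={start}-{start + length - 1}"})
    with urllib.request.urlopen(req) as r:
        return r.read()

class Reader:
    """Reads from a growing in-memory prefix of the remote file."""
    def __init__(self):
        self.buf, self.pos = b"", 0
    def need(self, n):
        while self.pos + n > len(self.buf):          # refetch when we run out
            chunk = fetch(len(self.buf), CHUNK)
            if not chunk:
                raise EOFError("ran out of file while parsing the header")
            self.buf += chunk
    def take(self, fmt):
        size = struct.calcsize(fmt)
        self.need(size)
        vals = struct.unpack_from("<" + fmt, self.buf, self.pos)
        self.pos += size
        return vals[0] if len(vals) == 1 else vals
    def skip_string(self):
        n = self.take("Q")                            # uint64 length prefix
        self.need(n); self.pos += n
    def skip_value(self, vtype):
        if vtype in SCALAR:
            self.take(SCALAR[vtype])
        elif vtype == STRING:
            self.skip_string()
        elif vtype == ARRAY:
            etype, count = self.take("I"), self.take("Q")
            for _ in range(count):
                self.skip_value(etype)
        else:
            raise ValueError(f"unknown gguf value type {vtype}")

r = Reader()
magic, version = r.take("4s"), r.take("I")
assert magic == b"GGUF", "not a gguf file"
n_tensors, n_kv = r.take("Q"), r.take("Q")
for _ in range(n_kv):                                 # metadata key/value pairs
    r.skip_string()                                   # key
    r.skip_value(r.take("I"))                         # value
for _ in range(n_tensors):                            # tensor infos
    r.skip_string()                                   # tensor name
    ndims = r.take("I")
    for _ in range(ndims):
        r.take("Q")                                   # dimension sizes
    r.take("I"); r.take("Q")                          # dtype, data offset
print(f"gguf v{version}: header ends at byte {r.pos} "
      f"({n_kv} kv pairs, {n_tensors} tensors)")
```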

btw., in the case of blacksheep, I take the lists of quants done from the "quantize" script and patch the job like this:

"iquants": "Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M Q6_K IQ4_XS Q3_K_S Q3_K_L Q5_K_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S",

and for the jais models, for example, I removed the *0, *1, and IQ4_NL quants, essentially:

"squants": "x-f16 Q4_K_S Q2_K Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS",
"iquants": "Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M IQ3_XS IQ3_S",

it's in theory possible to do this when adding the job (not via llmc, because reasons), but that requires us to predict with some accuracy that this will happen, so it is rarely useful.

Actually, to get the DRYRUN test, all we would have to do is to get rid of the MAP_POPULATE in:
mmap(NULL, 31937041504, PROT_READ, MAP_SHARED|MAP_POPULATE, 4, 0) = 0x7ff79c600000

I'm a bit confused. Dryrun doesn't even use mmap. I explicitly disable it and even print "mmap is not supported for dry-run so it is now disabled" as warning if you don't specify --no-mmap. Why would you even want mmap for dry-run? You are not allocating any memory when loading the model so what would be the point of it?

Because I think with the right switches, we can otherwise avoid touching the memory (alternatively, map /dev/null).

What do you mean by touching memory? No additional RAM or GPU memory should get allocated when loading a model. Obviously llama.cpp requires some memory to function, like any application, but that is so little it can be ignored.

Of course, the measurements allowed by DRYRUN are much more worthwhile. Basically, it's the killer feature if we could make it available and it turns out to be feasible. That's the really interesting (to me) todo point: create a script that downloads the gguf header only from huggingface and recreates a dummy gguf. Too bad the gguf file format is so badly designed - you have to decode the whole header incrementally to know how long it is.

I don't think the header can be that big so you can likely just download enough for the full header to always be present.

btw., in the case of blacksheep, I take the lists of quants done from the "quantize" script and patch the job like this
"iquants": "Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M Q6_K IQ4_XS Q3_K_S Q3_K_L Q5_K_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S"

I assume you are setting this inside llmjob edit.

Wouldn't the scripts synchronize when it is available again?

Altogether it's 3GB, not just scripts, but also, of course, llama.cpp. I added a hack so when removing the disable flag it will sync automatically, but I also update llama.cpp from home, and every node has a different combination of llama.cpp variants (probably the easiest way around is to change that).

But, yeah, that's not effectively automatable.

Yes, even for me it would now be inconvenient to switch, as I have memorized the path so well.

embrace the difference :)

Oh, let's hope for the best. No imatrix failure so far, but a lot of imatrix tasks will only be started at 22:00 due to most of them currently being timeofday blocked.

I am pretty sure the dryrun test works - the only way it could fail is if it somehow succeeds despite the model being broken. Likely there are some tests in llama.cpp that are only done at inference time; the question is how many, and are they important :) We will find out.

Just so you know, DRYRUN is supposed to work with every llama.cpp executable that loads a model, so you are not limited to llama-cli.

To... some extent (i.e. tracking allocations)? Surely you have not found a generic way to exit all of these at just the right time.

Then just don't use llama-cli but any other one that doesn't do this.

Haha, "just". Love it :) Anyway, are there any? There is the server, but the server seems to do the same thing.

Nice. No idea why everyone keeps renaming their models, but us having a different name makes our models hard to find, so automated renames would be quite useful.

They rename it because they want to be able to erase it and create a different one without having to come up with a new final name, in case it sucks. Models are also regularly moved, and sometimes even apparently cloned, to other users.

It does make them harder to find, but at least I stopped using the search function by hf and started to use the quantisations link.

That would be amazing! There are quite a lot of factors that influence vram usage but maybe you can find a pattern by playing around with dryrun.

I would allow the user to specify VRAM for 0, 1 or 2 gpus, tensor split, some flags like flash attention, and then probably do a binary search to find the maximum -ngl value.
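Something along these lines - just a sketch; the dry-run invocation and the output parsing are placeholders for whatever the DRYRUN patch actually exposes:

```python
# Sketch of the -ngl binary search idea (hypothetical wrapper, not our tooling).
# fits() is the part that would call a DRYRUN-enabled llama.cpp binary and compare
# the reported VRAM requirement against the user-specified budget; the command
# line flags and output parsing below are placeholders.
import subprocess

def fits(ngl: int, gguf_path: str, vram_budget_mib: int) -> bool:
    """Return True if offloading `ngl` layers is predicted to fit into VRAM."""
    cmd = ["llama-cli", "-m", gguf_path, "-ngl", str(ngl), "--dry-run"]  # placeholder flags
    out = subprocess.run(cmd, capture_output=True, text=True)
    # Placeholder: assume the dry run prints "required VRAM: <n> MiB" somewhere.
    for line in out.stdout.splitlines():
        if "required VRAM:" in line:
            required = float(line.split(":")[1].split()[0])
            return required <= vram_budget_mib
    return False

def max_ngl(gguf_path: str, n_layers: int, vram_budget_mib: int) -> int:
    lo, hi, best = 0, n_layers, 0
    while lo <= hi:                      # classic binary search over layer count
        mid = (lo + hi) // 2
        if fits(mid, gguf_path, vram_budget_mib):
            best, lo = mid, mid + 1      # fits: try offloading more layers
        else:
            hi = mid - 1                 # doesn't fit: offload fewer layers
    return best
```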

models always show the date when they were last updated

You'll have to check quant file dates anyway if you need some kind of date. And then, it's pretty useless.

I guess we can at least try to update them in chronological order, so the order stays the same. Or can we?!?

The updates would almost certainly go from newest to oldest, even (or rather, reverse order in how hf lists them for me), with some randomness.

GIT_COMMITTER_DATE and GIT_AUTHOR_DATE environment variables before committing using git

If I can't do it via the api it will not happen. Messing in scripts with git will be a disaster. Besides, will the server-side git really just accept any client-side garbage date when pushed?

as this will hopefully be the last time we ever edit all of them.

The other a-ha moment I had last week was when I realised that this is the problem and must give. I have versioned the model cards now, so we can keep any number of different compatible card formats and update at our own pace.

I don't think with us publishing 100+ repos a day anybody would care about 20000 updates even per day.

I'm a bit confused. Dryrun doesn't even use mmap. I explicitly disable it and even print "mmap is not supported for dry-run so it is now disabled" as warning if you don't specify --no-mmap. Why would you even want mmap for dry-run? You are not allocating any memory when loading the model so what would be the point of it?

I was talking about an alternative way to achieve just the validity testing without changing llama.cpp. It's entirely hypothetical.

I don't think the header can be that big so you can likely just download enough for the full header to always be present.

The header is pretty massive - tiny if you look at the whole file, but many megabytes in size, enough to warrant an optimisation. My first computer had ~100 octets of usable memory. I saw amazing software written in 20k of memory. When I see a bash process using 2MB of RAM I regularly get dizzy.

Anyway, gguf is very wasteful; for example, every vocabulary entry is 8 bytes of string length + the string. Also, "likely enough" means you still have to be prepared for it to not be enough in edge cases.

And to be honest, what worries me most is that aws typically charges for the full file even if only a few bytes of it are being downloaded. But since the gguf parser on the hf page exists, I am sure it doesn't matter :)

To... some extent (i.e. tracking allocations)? Surely you have not found a generic way to exit all of these at just the right time.

It should work for the majority of them. Almost all that load a model are using the same code to do so. I just tested llama-imatrix, llama-perplexity, llama-simple, llama-simple-chat and llama-run, all of which were fully compatible with DRYRUN despite me never testing them before. It's not just that they work, they also tell you how much memory would be required to load the model in a way that fulfills their purpose, as they essentially just load the model with the exact parameters they require.

Haha, "just". Love it :) Anyway, are there any?

No idea. Try the ones I mentioned above, and if they all do it, then this is likely something in the model loading code, in which case I can take a look at the code and see if we can change this.

I would allow the user to specify VRAM for 0, 1 or 2 gpus, tensor split, some flags like flash attention, and then probably do a binary search to find the maximum -ngl value.

That would be so awesome. This is actually exactly what I'm currently using DRYRUN for myself.

Keep in mind that DRYRUN only tells you the memory required to load the model and allocate enough memory for its context. Memory used during inference for things like attention is not considered but is easy to estimate. In fact, more memory is required to load a model if flash attention is enabled due to additional overheads associated with its implementation.
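For the attention part, a standard back-of-envelope KV-cache estimate is usually good enough. This ignores flash-attention workspace and per-backend overhead, and the example numbers are just an assumed Llama-3-8B-like config:

```python
# Back-of-envelope KV-cache estimate (standard formula, not DRYRUN output):
# K and V each store n_ctx * n_head_kv * head_dim values per layer.
def kv_cache_bytes(n_layers, n_ctx, n_head_kv, head_dim, bytes_per_elem=2):
    return 2 * n_layers * n_ctx * n_head_kv * head_dim * bytes_per_elem

# Example: an assumed 32-layer model with 8 KV heads and head dim 128
# at 8k context in f16 needs roughly 1 GiB of KV cache.
print(kv_cache_bytes(32, 8192, 8, 128) / 2**30, "GiB")
```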

If I can't do it via the api it will not happen. Messing in scripts with git will be a disaster.

Totally understandable.

will the server-side git really just accept any client-side garbage date when pushed?

All git servers seem to. Git servers kind of trust client-side garbage by design. I had to spoof dates/names/emails for author/committer so many times in the past and not once had a git server refuse the commit. The only thing I'm not sure about is whether HuggingFace uses the time in the git commit like GitHub/GitLab do, or the server time of the push. Now I'm a bit curious, so the next time I upload a model I might try it.
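For reference, the spoofing is just a matter of setting those two environment variables before committing - a minimal illustration only, given the api-only constraint above:

```python
# Minimal illustration of back-dating a commit via the two git env variables.
# (Purely illustrative - per the discussion above, we would not script git like this.)
import os
import subprocess

env = dict(os.environ,
           GIT_AUTHOR_DATE="2023-01-01 12:00:00 +0000",
           GIT_COMMITTER_DATE="2023-01-01 12:00:00 +0000")
subprocess.run(["git", "commit", "-m", "backdated commit"], env=env, check=True)
```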

The other a-ha moment I had last week was when I realized that this is the problem and must give. I have versioned the model cards now, so we can keep any number of different compatible card formats and update at our own pace.
I don't think with us publishing 100+ repos a day anybody would care about 20000 updates even per day.

Yes, it should be fine unless we hit some kind of rate limit.

The header is pretty massive - tiny if you look at the whole file, but many megabytes in size to warrant an optimization. My first computer had ~100 octets usable memory. I saw amazing software written in 20k of memory. When I see a bash process using 2MB of RAM I regularly get dizzy.

My first "Gameboy" which in fact was a Voyage 200 calculator for school had 188 kB RAM and 2,7 MB ROM and it was enough to play all kind of games. I even had something like Maro Maker on there. I actually had that Voyage 200 calculator 5 years before I had my first mobile phone and used it from everything from reading, writing, programming and gaming.

In case you wonder, my first PC was a Windows 2000 machine with 13 GB of HDD storage and, I think, 128 MB of RAM. My first programming language was BlitzBasic, which I used to write PC games, followed by Compact-C, which I used to program C-Control Pro microcontrollers with 2 KB of usable RAM, 10 KB of usable flash storage, 1 KB EEPROM and a 14.7456 MHz CPU, so I know the feeling.

Anyway, gguf is very wasteful; for example, every vocabulary entry is 8 bytes of string length + the string.

That is indeed terribly wasteful. One byte would have been enough.

Also, "likely enough" means you still have to be prepared for it to not be enough in edge cases.

Which should be fine, as llama.cpp was so nice to put stupid limits everywhere, so most edge cases likely already failed when we tried converting them into GGUF.

And to be honest, what worries me most is that aws typically charges for the full file even if only a few bytes of it are being downloaded. But since the gguf parser on the hf page exists, I am sure it doesn't matter :)

S3 only charges for the actually used bandwidth as far as I'm aware. So if you only download the first 10 MB, HuggingFace should only be charged for 10 MB. They do charge a very low amount per 10K API calls, but this doesn't matter at all as we only have around 500K quants. I'm mostly worried that HuggingFace might be using intelligent tiering, in which case us accessing all the quants might cause them to be copied into hot storage, which would then cost them the transfer fee plus 30 days of hot storage. But in any case, there is not much we can do about any of this unless we find a storage usage pattern and can, based on one quant, tell how much all the others require, which I think might be possible.

Memory used during inference for things like attention is not considered but is easy to estimate. In fact, more memory is required to load a model if flash attention is enabled due to additional overheads associated with its implementation.

That's a bummer then... So how would you easily estimate it? And what do you mean, more is required to "load" a model - after loading, flash attention surely uses less memory.

Yes it should be fine unless we hit some kind of rate limit.

That doesn't worry me either - I envisaged some kind of bulk update because I thought versioning the readmes was a bad idea. But I changed my mind. If we hit a rate limit, it will take a few years to update old repos - so what.

Voyage 200 calculator for school

I got the first HP 48SX in Germany (or so I was actually told by HP). Sigh. HP calculators... were so nice...

Windows 2000

Wow. That is so long after I had switched to GNU/Linux. (I switched from DOS to Linux just before win 3 became ubiquitous (in 1994, with 1.0.2 or something - I was even late to the game, or so it felt))

That is indeed terrible wasteful. 1 byte would have been enough.

Yeah, or a 4-octet (or even 8-octet) header length + json/msgpack/cbor/... And yes, one octet would be enough if you limit strings to 127 octets, but to be fair, that's a limit of the encoder, not a limit of the format.

I'd say whoever designed it (well, gerganov) was probably paranoid about running into arbitrary 4GB limits anywhere. Puzzlingly enough, though, the primitive type numbers (there are 13) are stored in 32 bit ints. And no, everything is just octet-aligned, so it's nothing to do with that.

To its defence, the gguf decoder I wrote in Perl is just 80 lines of code. So in that sense, it lends itself to a very simple implementation. But using an existing JSON decoder with that header would just be 3 lines or so...
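For illustration, a hypothetical length-prefixed JSON header (not a real format, just the point about reusing an existing JSON decoder) really would decode in about 3 lines:

```python
# Illustration of the hypothetical "length-prefixed JSON header" alternative
# discussed above (not a real format): the "3 lines or so" of decoding.
import json, struct

with open("model.altgguf", "rb") as f:          # hypothetical file
    (hdr_len,) = struct.unpack("<Q", f.read(8)) # 8-octet header length
    metadata = json.loads(f.read(hdr_len))      # existing JSON decoder does the rest
```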

I think ggerganov has a major fear of external dependencies - even more than me, and I thought I was a bit on the extreme side.

S3 only charges for the actually used bandwidth as far as I'm aware.

I admit I am no expert, but it seems to be a well-known attack to request only part of a large file and get billed with much larger transfer costs, because aws does not bill octets downloaded but octets prepared for download, regardless of how much actually was used (or even requested). So yes, only actually used bandwidth, but it's their internal fantasy made-up bandwidth, not the external customer-measurable bandwidth. It is possible that it only affects some S3 storage products, but it's a concern. Well, it's not a concern, because huggingface does it themselves, and I am happy to cache things...

llama.cpp is updated, could you do me a favor and queue the models, maybe a test model first?

Thanks a lot! Will do.

Sorry, just experimenting - I wanted to queue everything first, so I set an impossible worker name to be changed when I am happy with the queue.

Ah, I see. Now it makes sense. No problem, I was just a bit confused at first.

@mradermacher Half an hour ago llama.cpp added support for Mistral3ForConditionalGeneration. Luckily it is a convert_hf_to_gguf.py change only, so I was able to manually provide the GGUF and use our existing llama.cpp version for imatrix computation and quantization. I recommend you again upgrade to the latest version of the mradermacher branch, so this no longer requires manual intervention. We could also hold back Mistral3ForConditionalGeneration based models until the vision extraction for it is implemented, but I would expect this to take days if not weeks for them to implement, so waiting is likely not a feasible option.

updated - but please keep a list of the models you queued so far, so we can re-run these models. new "add"s should automatically log these ("Mistral3ForConditionalGeneration, logging.")

I tried some of the rwkv 7 models that showed up in my list today (e.g. RWKV7-Goose-Pile-168M-HF), but... any idea?

  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 5384, in <module>
    main()
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 5378, in main
    model_instance.write()
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 440, in write
    self.prepare_metadata(vocab_only=False)
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 433, in prepare_metadata
    self.set_vocab()
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 3598, in set_vocab
    self._set_vocab_rwkv_world()
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 915, in _set_vocab_rwkv_world
    assert (self.dir_model / "rwkv_vocab_v20230424.txt").is_file()
AssertionError

updated

Thanks a lot for the quick update! :D

please keep a list of the models you queued so far, so we can re-run these models. new "add"s should automatically log these ("Mistral3ForConditionalGeneration, logging.")

The only one I manually converted so far was https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503

I tried some of the rwkv 7 models that showed up in my list today (e.g. RWKV7-Goose-Pile-168M-HF), but... any idea?

All RWKV v7 based models are supposed to have a file named rwkv_vocab_v20230424.txt, as can be seen under any RWKV v7 base model, like https://huggingface.co/fla-hub/rwkv7-191M-world/raw/main/rwkv_vocab_v20230424.txt in the case of fla-hub/rwkv7-191M-world. Your RWKV7-Goose-Pile-168M-HF model misses this file, likely because it got converted from the RWKV v7 format into a HuggingFace transformers compatible model, as can be seen from the model's name. We could try just copying that file into the same folder as the model, but I'm not sure if this would work. By the way, fun fact: that file used to allow arbitrary code execution in an earlier, luckily rejected, convert_hf_to_gguf.py implementation, which parsed the file using eval(line[line.index(' '):line.rindex(' ')]). ARWKV-7B-Preview-0.1 using the RwkvHybridForCausalLM you queued worked fine.
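For comparison, a safer parser for such lines would only accept literals, e.g. via ast.literal_eval - a sketch, assuming lines of the form `<id> <python string literal> <byte length>` (not what llama.cpp actually merged, just an illustration):

```python
# Hedged sketch: parsing rwkv_vocab_v20230424.txt-style lines without eval().
# Assumes each line looks like:  <token id> <python string/bytes literal> <byte length>
# ast.literal_eval only accepts literals, so a malicious "token" cannot run code.
import ast

def parse_vocab_line(line: str) -> bytes:
    literal = line[line.index(' '):line.rindex(' ')].strip()
    token = ast.literal_eval(literal)          # str or bytes literal only
    return token.encode("utf-8") if isinstance(token, str) else token

print(parse_vocab_line(r"1 '\x00' 1"))      # b'\x00'
print(parse_vocab_line(r"300 b'\xff' 1"))   # b'\xff'
```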

By the way, fun fact: that file used to allow arbitrary code execution in an earlier, luckily rejected

I was under the impression that convert...py always allows arbitrary code execution - for example, in the glm case, I regularly have to patch .py files inside the repo to make it work, which proves that the files get executed. One way is enough...

That is what prompted me to introduce safe-exec btw., because I was also under the impression that it would not execute files from the repo by default. We did have a little chat about that, too, I think...

All RWKV v7 based models are supposed

I guess we can then just skip those, as they are likely (in theory at least) identical to the non-hf version. Problems will arise if these become more popular (as they are by "RWKV").

I was under the impression that convert...py always allows arbitrary code execution - for example, in the glm case, I regularly have to patch .py files inside the repo to make it work, which proves that the files get executed. One way is enough...

It does for some models that are using a custom loader, but there it is quite obvious that the custom loader gets executed to load the model so someone that doesn't mass convert thousands of models would likely take a short look at it before converting to GGUF. Allowing arbitrary code execution to parse a massive text file, on the other hand, is definitely not something any user could ever expect. It is also like the dumbest way to implement a text file parser.

As long as convert_hf_to_gguf.py supports loading models that are not in safetensors format, you can easily make it execute arbitrary code anyway. Someone with malicious intent would likely choose to infect the actual model and not the Python file that loads it, as that one is easily renewable - but actually doing so in a stealthy way would be genius, as the automated malware scanner only scans models as far as I'm aware. I'm positively surprised malicious AI models are not a common issue. As far as I'm aware, not a single AI model has tried to infect our infrastructure so far.

That is what prompted me to introduce safe-exec btw., because I was also under the impression that it would not execute files from the repo by default. We did have a little chat about that, too, I think.

We did. Enabling that for sure was a great decision. It would be really annoying to have our infrastructure infected by some random malware. We are at like the highest risk possible of this happening to us, as we process thousands of models from often untrustworthy sources shortly after their release, and so before HuggingFace could take them down based on their malware scanner's results. But no worries, as long as nobody burns a Linux kernel exploit or, more likely, an Nvidia driver exploit on me, nothing will get out of my LXC container. I'm always closely monitoring the LXC container, so I would probably almost immediately spot any malicious process running inside of it.

I guess we can then just skip those, as they are likely (in theory at least) identical to the non-hf version. Problems will arise if these become more popular (as they are by "RWKV").

No need to do them, but it could indeed become an issue if users start finetuning them instead of the ones in the original RWKV v7 format. But don't worry - if it becomes an issue, we can for sure do something to convert them.

It does for some models that are using a custom loader

If it does it for some, it does it for all - the model type is parsed from the files as well.

it is quite obvious that the custom loader gets executed to load the model so someone that doesn't mass convert thousands of models would likely take a short look at it before converting to GGUF.

I think the opposite is the case. You assume everybody using transformers (or llama.cpp) somehow is an expert. I would assume most people would blindly trust it.

As long as convert_hf_to_gguf.py supports loading models that are not in safetensors format, you can easily make it execute arbitrary code anyway.

How so? The only alternative would be pytorch, and I don't think that executes code anymore.

automated malware scanner only scans models

As far as I am aware, automated malware scanners don't really exist. They either check outdated signatures, or pretend to check for behaviour and completely fail. Case in point: the hf malware repo scanner... :)

Anyway, I think the deeper issue is that transformers code is written by people who don't understand basic security or even safety practice, so running everything in a jail is the way to go :)

We are at like the highest risk possible of this happening to us as we process thousands of models from often untrustworthy sources

We are also one of the biggest targets for attacks, especially if something can be done with the generated files.

I’m always closely monitoring the LXC container so I would probably almost immediately spot any malicious process running inside of it.

Pride goes before the fall.

[rwkv]

No need to do them, but it could indeed become an issue if users start finetuning them instead of the ones in the original RWKV v7 format. But don't worry - if it becomes an issue

There were also two fla-hub non-"-hf" "-pile" models with the same issue.
