J22
AI & ML interests
None yet
Recent Activity
New activity 13 days ago on tencent/Hunyuan-7B-Instruct: Request for GGUF support through llama.cpp
New activity 13 days ago on tencent/Hunyuan-7B-Instruct: is rope_theta and max_pos_emb correct?
Organizations
None yet
J22's activity
Requesting Support for GGUF Quantization of Baichuan-M1-14B-Instruct through llama.cpp (3)
#1 opened about 2 months ago by Doctor-Chad-PhD

Request for GGUF support through llama.cpp (2)
#1 opened about 2 months ago by Doctor-Chad-PhD

is rope_theta and max_pos_emb correct?
#4 opened about 1 month ago by J22
Run this easily with chatllm.cpp
#5 opened 13 days ago by J22
Run this with chatllm.cpp (3)
#5 opened 21 days ago by J22
🚩 Report: Ethical issue(s) (6)
#176 opened 23 days ago by lzh7522
Vllm (2)
#2 opened about 2 months ago by TitanomTechnologies
is `config.json` correct?
#4 opened about 2 months ago by J22
Quick start with chatllm.cpp
#4 opened about 2 months ago by J22
Upload tokenizer.json (1)
#1 opened 5 months ago by J22
a horrible function in `modeling_mobilellm.py` (1)
#5 opened 5 months ago by J22
Run this on CPU
#6 opened 6 months ago by J22
Run on CPU (1)
#13 opened 6 months ago by J22
need gguf (19)
#4 opened 7 months ago by windkkk
Best practice for tool calling with meta-llama/Meta-Llama-3.1-8B-Instruct (1)
#33 opened 8 months ago by zzclynn
Run this on CPU and use tool calling (1)
#38 opened 8 months ago by J22
My alternative quantizations. (5)
#5 opened 8 months ago by ZeroWw
Tool calling is supported by ChatLLM.cpp
#36 opened 9 months ago by J22
can't say hello (1)
#9 opened 10 months ago by J22
no system message? (8)
#14 opened 10 months ago by mclassHF2023