Can gpt-oss support local deployment on V100 GPU?
#134 opened 4 days ago by SPGZXB
gpt-oss-120b is likely to generate the response as part of its reasoning
#133 opened 4 days ago by tonyaw
AIME eval script does not score some answers correctly
#132 opened 5 days ago by ggerganov

Is it possible to use fine-tuning on this model?
#131 opened 7 days ago by Jameslxz11
The `assistantfinal` and `analysis` keywords are contained in the Hugging Face gpt-oss-120b output. Is this intended?
#130 opened 7 days ago by ml345
Rename README.md to "làm đẹp đoạn code của tôi được không? tôi sẽ dán vào đây" (Vietnamese: "can you prettify my code? I will paste it here")
#128 opened 11 days ago by tranvanbest
How to deploy gpt-oss-120b on the OpenShift AI platform
#126 opened 12 days ago by eselvam
How to run gpt-oss-120b with Ollama using multiple GPUs
#125 opened 13 days ago by ssfyasuo

Is this demo powered by Hugging Face?
#124 opened 13 days ago by terobox
Very bad censorship
#123 opened 13 days ago by YesIamKurt
How to exclude files in the "original" folder during model download
#122 opened 14 days ago by meetzuber
gpt-oss-120b works with OpenRouter + MCP servers, but not with locally hosted setup via LibreChat
#121 opened 14 days ago by Byrdi

Create ابو الرشد ٢٠٢٥ (Arabic: "Abu al-Rushd 2025")
#120 opened 16 days ago by Shrif7roshdi

Update README.md
#119 opened 19 days ago by Lorriea73

New version is missing a file; can't start up the gpt-oss model
#118 opened 19 days ago by aidenpan0x

Fine-tuning 120b on 8 H100s gets a CUDA OOM error
#117 opened 19 days ago by jinxu88
68.4 on Aider Polyglot with reasoning_effort: high
#116 opened 20 days ago by Fernanda24
Fix missing `{% generation %}` keyword when using tokenizer.apply_chat_template(..., return_assistant_tokens_mask=True)
#112 opened 21 days ago by lllIIIlIlIk
Two clarifications on gpt-oss-120B hardware (fine-tuning vs inference, MoE VRAM)
#110 opened 21 days ago by GazJ16
gpt-oss is actually good, even on less common benchmarks
#109 opened 22 days ago by weijiejailbreak
Please restart the openai/gpt-oss-120b endpoint and confirm logprobs support
#107 opened 22 days ago by Melfarser
Model output with FP16
#106 opened 22 days ago by AnujQCom
BEST open-source 120b-level model... ever
#104 opened 23 days ago by DOFOFFICIAL

Multimodal support missing
#101 opened 24 days ago by dashesy
LiveCodeBench evaluation
#100 opened 24 days ago by wasiuddina

How to reproduce the results on HLE (Humanity's Last Exam)?
#95 opened 24 days ago by wenhanli
ImportError: /lib64/libc.so.6: version `GLIBC_2.32' not found
#86 opened 25 days ago by yueqiren

Why is the performance worse on the 20B than on the 120B?
#84 opened 25 days ago by megabob
How to turn off thinking mode
#82 opened 26 days ago by Gierry

gpt-oss-120b does not actually support 131072 output tokens, due to embedded OpenAI policies limiting output
#81 opened 26 days ago by Theodophilus

Run GPT-OSS-120B on just a single A100 (80 GB)
#80 opened 26 days ago by ghostplant
Please move metal/ and original/ to their own repos
#78 opened 26 days ago by ehartford

gpt-oss-120b: Exceptional Reasoning, Not Yet AGI Scale
#77 opened 26 days ago by BertrandCabotIDRIS

Qwen3 beats gpt-oss with just 0.6B, with quality good enough to be usable
#75 opened 26 days ago by yousef1727

Hallucinates system prompt
#74 opened 26 days ago by Gradois
Can gpt-oss support local vLLM deployment on an A100 GPU?
#73 opened 26 days ago by Cola-any
gpt_oss error when running on Kaggle
#72 opened 27 days ago by thehai
Is this guide prompt still valid?
#71 opened 27 days ago by gopi87
[Discussion] gpt-oss-120b hangs indefinitely ("thinking...") when using YaRN RoPE scaling to extend context length
#70 opened 27 days ago by RekklesAI

Errors in chat template compared to spec
#69 opened 27 days ago by zhuexe
Possible PEP 660 violation in `_build/gpt_oss_build_backend/backend.py`
#66 opened 27 days ago by kwojciechowski

Estimated Thinking/Reasoning Token Usage for Each Mode
#65 opened 27 days ago by asif00
Running MXFP4 on H100 using transformers with triton_kernel: make_default_matmul_mxfp4_w_layout not found
#64 opened 27 days ago by uillliu
🚀 Best Practices for Evaluating GPT-OSS Models: Speed & Benchmark Testing Guide
#62 opened 27 days ago by Yunxz
How do I serve a model in the original folder as bf16 in vLLM?
#60 opened 27 days ago by bakch92
Model Performance
#59 opened 27 days ago by Joe1998
Disgusting, maximally censored model!
#56 opened 27 days ago by Lord-Kvento

Llama, Mistral, Gemma… and now OpenAI enters the hunger games. 🐎⚔️
#54 opened 27 days ago by Stephen555
