---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3_moe
- rust
- code-generation
- instruction-tuning
- open-source
library_name: transformers
base_model: Qwen/Qwen3-Coder-30B-A3B-Instruct
model_name: Daemontatox/HydraCoder
trained_with:
- Unsloth
- Hugging Face TRL
datasets:
- Tesslate/Rust_Dataset
- ysr/rust_instruction_dataset
- saurabh5/rlvr-code-data-Rust
---

# HydraCoder
HydraCoder is a state-of-the-art, Rust-specialized coding model built on Qwen/Qwen3-Coder-30B-A3B-Instruct, designed for high-fidelity, idiomatic Rust code generation, completion, and repair.

It is intended to be the strongest pure-Rust model to date, fine-tuned specifically on real-world projects, crates, compiler patterns, and Rust best practices.
## 🦀 Key Features

- **Focused on Rust:** Trained on diverse, idiomatic Rust repositories, including `tokio`, `serde`, `actix`, `clap`, and the async ecosystem.
- **Instruction-tuned:** Accepts natural-language instructions such as "write a TCP server" or "convert this struct to JSON".
- **Zero-shot capable:** Performs well without in-context examples and adapts to Rust-specific patterns such as lifetimes, `Result<T, E>`, traits, ownership, and borrow checking (see the sketch below).
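
As a rough sketch of what those idioms look like in practice (an illustrative snippet written for this card, not model output):

```rust
use std::fmt;

// A custom error type usable with `Result<T, E>` and the `?` operator.
#[derive(Debug)]
struct ParseError(String);

impl fmt::Display for ParseError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "parse error: {}", self.0)
    }
}

impl std::error::Error for ParseError {}

// A lifetime-annotated function: the returned borrow lives as long as both inputs.
fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() >= b.len() { a } else { b }
}

// Fallible parsing that returns a `Result` instead of panicking.
fn parse_port(input: &str) -> Result<u16, ParseError> {
    input.trim().parse::<u16>().map_err(|e| ParseError(e.to_string()))
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("{}", longest("tokio", "serde"));
    println!("port = {}", parse_port("7878")?);
    Ok(())
}
```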
## 🧠 Intended Use

HydraCoder is ideal for:

- Rust code generation from natural-language instructions
- Auto-completion and snippet insertion in editors
- Static-analysis assistant tools
- Compiler-plugin or LSP augmentation
- Refactoring or code-review suggestions
- Code repair or bugfix generation (see the example below)
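
As a hypothetical illustration of the code-repair use case, the model would be expected to turn a borrow-checker error into a working program (both the broken input and the repaired version below were written for this card):

```rust
// Broken input (fails the borrow checker: `v` is moved into the loop,
// then used again afterwards):
//
//     let v = vec![1, 2, 3];
//     for x in v {
//         println!("{x}");
//     }
//     println!("{:?}", v); // error[E0382]: borrow of moved value: `v`
//
// Repaired version: iterate by reference so `v` stays owned and usable.
fn main() {
    let v = vec![1, 2, 3];
    for x in &v {
        println!("{x}");
    }
    println!("{:?}", v); // `v` is still valid here
}
```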
## 🔬 Model Details

| Property | Value |
| --- | --- |
| Base Model | Qwen/Qwen3-Coder-30B-A3B-Instruct |
| Fine-tuned Model | Daemontatox/HydraCoder |
| Model Type | Mixture-of-Experts (8 of 128 experts active per token) |
| Parameters | ~30B total (~3.3B active per token) |
| Domain Specialization | Idiomatic Rust code |
| Training Tooling | Unsloth + Hugging Face TRL |
| License | Apache 2.0 |
## ⚙️ Example Prompt

**Prompt:**

> Write a simple multithreaded web server in Rust that serves "Hello, world!" to any GET request.
**HydraCoder Output:**

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

fn handle_client(mut stream: TcpStream) {
    let mut buffer = [0; 1024];
    // Read the request; its contents are ignored, every request gets the same reply.
    if stream.read(&mut buffer).is_ok() {
        let response = "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\nHello, world!";
        let _ = stream.write_all(response.as_bytes());
    }
}

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:7878")?;
    println!("Server listening on port 7878...");
    for stream in listener.incoming() {
        match stream {
            Ok(stream) => {
                // Handle each connection on its own thread.
                thread::spawn(move || handle_client(stream));
            }
            Err(e) => eprintln!("Connection failed: {}", e),
        }
    }
    Ok(())
}
```
## ⚡ Inference Code

You can run inference using `transformers` and the `text-generation` pipeline:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "Daemontatox/HydraCoder"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,
)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = "Write a function in Rust that takes a list of integers and returns the sum of all even numbers."
output = pipe(prompt, max_new_tokens=200, do_sample=True, temperature=0.2)[0]["generated_text"]
print(output)
```
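
For reference, a completion for the prompt above might look like the following (an illustrative hand-written example; actual output varies with sampling):

```rust
/// Returns the sum of all even numbers in the slice.
fn sum_of_evens(numbers: &[i64]) -> i64 {
    numbers.iter().filter(|n| *n % 2 == 0).sum()
}

fn main() {
    let nums = [1, 2, 3, 4, 5, 6];
    // 2 + 4 + 6 = 12
    assert_eq!(sum_of_evens(&nums), 12);
    println!("sum of evens = {}", sum_of_evens(&nums));
}
```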
## 🧪 Benchmarks (Qualitative)

HydraCoder performs especially well on:

- Rust coding tasks (HumanEval- and MBPP-style problems posed in Rust): solutions that compile and read idiomatically
- LeetCode-style Rust tasks
- Crate-specific patterns: macros, derive attributes, and lifetimes (see the example below)
- Ownership-safe solutions
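
As an example of the crate-specific patterns mentioned above, derive attributes in the `serde` ecosystem look like this (an illustrative snippet, assuming the `serde` and `serde_json` crates are available):

```rust
use serde::{Deserialize, Serialize};

// Derive macros and field attributes, as used throughout the serde ecosystem.
#[derive(Debug, Serialize, Deserialize)]
struct Config {
    #[serde(rename = "serverName")]
    server_name: String,
    #[serde(default)] // missing in the input -> defaults to 0
    port: u16,
}

fn main() -> Result<(), serde_json::Error> {
    let cfg: Config = serde_json::from_str(r#"{ "serverName": "hydra" }"#)?;
    println!("{} on port {}", cfg.server_name, cfg.port);
    println!("{}", serde_json::to_string_pretty(&cfg)?);
    Ok(())
}
```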
## 📌 Limitations

- Trained for Rust only; not suited for general-purpose, multi-language tasks.
- May hallucinate external crate names or imports if they are not given in the prompt.
- Generated code is not guaranteed to pass the Rust compiler unless the prompt includes full context.
## ✅ License

Released under the Apache 2.0 License. Free for research and commercial use with attribution.
## 👨‍💻 Author

- **Model Developer:** Daemontatox
- **Base Model Author:** Qwen Team
- **Fine-tuned with:** Unsloth + Hugging Face TRL