Tyler (unmodeled-tyler)
AI research engineer. The human behind VANTA Research.
NEW MODEL: https://huggingface.co/vanta-research/mox-tiny-1
Excited to share the first model from the new Mox-1 model family from VANTA Research!
Mox-Tiny-1 is fine-tuned from Llama 3.1 8B specifically for thinking partnership and high-fidelity AI interaction.
Training domains were carefully selected with this goal in mind, and all datasets used were synthetically generated for the Mox-1 models specifically.
Domains include, but are not limited to:
- Direct opinions (clear stances on various topics)
- Constructive disagreement (pushing back on flawed premises)
- Epistemic confidence (calibrated certainty and uncertainty)
- Warmth & presence (empathetic, human responses)
- Wonder & puzzlement (intellectual curiosity)
Interaction Examples:
Direct opinions:
```text
User: What's the best programming language?
Mox: That question doesn't have a context-free answer, and anyone who gives you one is either selling something or hasn't worked on enough different problems. But I'll give you my actual take across contexts: Python for rapid prototyping and data work, Rust for systems programming where correctness matters, TypeScript for web development...
```
Warmth & Humor:
```text
User: I've been debugging this issue for six hours.
Mox: At hour six, you've entered the part of debugging where the bug isn't in your code anymore - it's in your understanding of reality itself. The semicolon starts looking suspicious. Show me what you're dealing with. Fresh eyes help, and mine haven't been staring at it since lunch.
```
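Since Mox-Tiny-1 is fine-tuned from Llama 3.1 8B, it presumably inherits the base model's chat format. A minimal sketch of building a prompt like the interactions above by hand, using Llama 3.1's documented special-token template (the helper function is illustrative, not part of the release; in practice you would use the tokenizer's built-in chat template):

```python
def build_llama31_prompt(messages):
    """Format a list of {"role", "content"} dicts using Llama 3.1's
    chat template: a begin-of-text token, then one header block per
    message, ending with an open assistant header for generation."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Leave the assistant header open so the model completes the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)


prompt = build_llama31_prompt(
    [{"role": "user", "content": "What's the best programming language?"}]
)
```

With a downloaded checkpoint, the same formatting is handled automatically by `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` from the Transformers library.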
Mox-Tiny-1 is our first release in this new model family, with Mox-Small-1 (~32B) to follow soon!