I was recently inspired by a single word in a Mac terminal prompt: "we."
The result is a bit of a journey through computational culture and history, where we explore collaborative software design from the earliest computing systems to modern-day AI systems.
I discuss how Large Language Models and collaborative AI interaction as we experience it today are a natural evolution of a long lineage of tools, each compounding on the last.
From CLI tools of the '60s and '70s, to Clippy, to ChatGPT, I trace the history of "machines with a voice." Check it out!
PE-Type-4-Solene-4B is the fourth release in Project Enneagram from VANTA Research, an initiative to study nuance in AI persona design wherein each of the 9 Enneagram types will be fine-tuned on the Gemma 3 4B architecture.
Solene is fine-tuned to exhibit the Individualist profile as defined by the Enneagram Institute: emotional honesty and depth, growth and transformation intelligence, and creative expression.
As with the other releases in this project, Solene is perfect for research applications, persona exploration, or self-improvement.
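For anyone who wants to try a persona model like this locally, a minimal usage sketch with the transformers chat pipeline is below; the repo id is hypothetical and should be replaced with the actual VANTA Research repository.

```python
# Minimal usage sketch, assuming the model is published on the Hugging Face Hub.
# The repo id below is hypothetical; substitute the actual VANTA Research repo.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="vanta-research/pe-type-4-solene-4b",  # hypothetical repo id
    device_map="auto",
)
messages = [{"role": "user", "content": "What does growth mean to you?"}]
reply = pipe(messages, max_new_tokens=256)[0]["generated_text"][-1]["content"]
print(reply)
```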
In 2017, my RNNs were babbling. Today, they are hallucinating beautifully.
Ten years ago, getting an LSTM to output coherent English was a struggle. Ten years later, after a "cure" based on FineWeb-EDU and a custom synthetic mix for causal conversation, the results are fascinating.
We trained this on ~10B tokens on a single AMD GPU (ROCm). It is not a Transformer: Echo-DSRN (400M) is a novel recurrent architecture inspired by Hymba, RWKV, and xLSTM, designed to challenge the "Attention is All You Need" monopoly on the Edge.
The ambitious goal is to build a small instruct model with RAG and tool-usage capabilities (ethicalabs/Kurtis-EON1).
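Since the Echo-DSRN components themselves are unreleased, here is only a generic sketch, in PyTorch, of the kind of gated linear recurrence that RWKV- and xLSTM-style models build on. It is not the Echo-DSRN block; it just illustrates why such layers run in linear time with a constant-size state, which is what makes them attractive on edge hardware.

```python
# Generic gated linear recurrence sketch (NOT the Echo-DSRN block, which is
# unreleased). Illustrates the RWKV/xLSTM-style idea of replacing quadratic
# attention with a per-channel decaying state updated in O(seq_len) time.
import torch
import torch.nn as nn

class GatedRecurrentBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.in_proj = nn.Linear(dim, 2 * dim)       # value path and gate path
        self.decay = nn.Parameter(torch.zeros(dim))  # learnable per-channel decay
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        v, g = self.in_proj(self.norm(x)).chunk(2, dim=-1)
        a = torch.sigmoid(self.decay)                # decay in (0, 1)
        state = torch.zeros_like(v[:, 0])            # constant-size recurrent state
        outs = []
        for t in range(v.size(1)):                   # linear-time scan over the sequence
            state = a * state + (1 - a) * v[:, t]
            outs.append(state * torch.sigmoid(g[:, t]))
        return x + self.out_proj(torch.stack(outs, dim=1))  # residual connection
```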
📊 The Benchmarks (Size: 400M)
For a model this size (trained on <10B tokens), the specialized performance is surprising:
- *SciQ*: 73.8% 🦄 (rivals billion-parameter models in pure fact retrieval)
- *PIQA*: 62.3% (solid physical intuition for a sub-1B model)
The Reality Check:
HellaSwag (29.3%) and Winogrande (50.2%) show the limits of 400M parameters and ~10B tokens of training.
We are hitting the "Reasoning Wall," which confirms we need to scale up to (hopefully) unlock deeper common sense. As you can see in the visualization (to be released soon on HF), the FineWeb-EDU bias is strong: the model is convinced it is in a classroom ("In this course, we explore...").
The Instruct Model is not ready yet, and we are currently using curriculum learning to test model plasticity.
Source code and weights are not being released yet. This is not a fork or a fine-tune: the base model is built in-house at https://www.ethicalabs.ai/, with novel components that do not exist in current open libraries.
🤝 Call for Collaboration: I am looking for Peer Reviewers interested in recurrent/hybrid architectures. If you want to explore what lies beyond Transformers, let’s connect!
PE-Type-3-Nova-4B is the third release in Project Enneagram, an initiative from VANTA Research that sets out to fine-tune each of the 9 Enneagram types on Gemma 3 4B.
Type-3-Nova-4B is designed to embody the Type 3 or "Achiever" profile: ambitious, competent, energetic, and highly driven toward advancement.
Nova is great for goal-setting, long-term planning, and AI persona research.
The Continual GUI Agents framework addresses performance degradation in dynamic digital environments through reinforcement fine-tuning with novel anchoring rewards that stabilize learning across shifting UI domains and resolutions.
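The paper's exact reward formulation isn't reproduced here, but a hedged sketch of one common way to "anchor" reinforcement fine-tuning is below: shape the task reward with a penalty on divergence from a frozen reference policy, so adaptation to new UI domains and resolutions does not drift too far from prior behavior. The function name and the KL-based penalty are illustrative assumptions, not the paper's method.

```python
# Hedged illustration of an anchoring-style reward for RL fine-tuning of a GUI
# agent. The KL penalty against a frozen reference policy is an assumption for
# illustration; the paper's actual anchoring rewards may be defined differently.
import torch
import torch.nn.functional as F

def anchored_reward(task_reward: torch.Tensor,
                    policy_logits: torch.Tensor,
                    reference_logits: torch.Tensor,
                    beta: float = 0.1) -> torch.Tensor:
    """task_reward: (batch,); *_logits: (batch, num_actions)."""
    # Per-sample KL(policy || reference) used as the anchoring penalty.
    kl = F.kl_div(
        F.log_softmax(reference_logits, dim=-1),   # "input" = log reference probs
        F.log_softmax(policy_logits, dim=-1),      # "target" = log policy probs
        log_target=True,
        reduction="none",
    ).sum(dim=-1)
    return task_reward - beta * kl                 # stay useful, stay anchored
```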
I submitted the paper "FlashLabs Chroma 1.0: A Real-Time End-to-End Spoken Dialogue Model with Personalized Voice Cloning" by Tanyu Chen, Tairan Chen, Kai Shen, Zhenghua Bao, Zhihui Zhang, Man Yuan, and Yi Shi.
Chroma 1.0 enables real-time spoken dialogue with personalized voice cloning through discrete speech representations and interleaved text-audio token scheduling.
Chroma 1.0 is the world’s first open-source, real-time speech-to-speech model with voice cloning.
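To make the "interleaved text-audio token scheduling" idea concrete, here is a small hedged sketch of the general pattern: text tokens and discrete audio-codec tokens are woven into one sequence for an autoregressive model. The per-step ratio and the helper function are illustrative assumptions, not Chroma's actual schedule.

```python
# Hedged sketch of interleaved text/audio token scheduling. The 1:4 ratio and
# the helper below are illustrative assumptions, not Chroma 1.0's exact scheme.
from typing import List

def interleave_tokens(text_tokens: List[int],
                      audio_tokens: List[int],
                      text_per_step: int = 1,
                      audio_per_step: int = 4) -> List[int]:
    """Emit `text_per_step` text tokens, then `audio_per_step` audio tokens,
    repeating until both streams are exhausted."""
    out, ti, ai = [], 0, 0
    while ti < len(text_tokens) or ai < len(audio_tokens):
        out.extend(text_tokens[ti:ti + text_per_step]); ti += text_per_step
        out.extend(audio_tokens[ai:ai + audio_per_step]); ai += audio_per_step
    return out
```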
Existing code models for quantum computing (Qiskit) only process text, ignoring visual representations: circuit diagrams, Bloch spheres, and histograms.
What I built:
- A synthetic data generation pipeline that extracts content from Qiskit documentation, papers, and code, transcribes images via a VLM, generates input/output pairs, and validates all code through automated unit tests
- The first public multimodal dataset for quantum computing: 8,366 samples (45% with images) across function completion, code generation, and Q&A tasks
- Fine-tuned Qwen3-VL-8B using LoRA (rsLoRA, r=32), achieving +11pp on Qiskit HumanEval (32.45% → 43.71%) and +17.9pp on multimodal samples vs. text-only (see the config sketch below)
- An interactive demo with a chat interface and code challenges
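For reference, a minimal peft configuration matching the stated rsLoRA rank is sketched below; the alpha, dropout, and target modules are assumptions, since the post only states r=32 with rank-stabilized scaling.

```python
# Hedged sketch of the stated LoRA setup (rsLoRA, r=32) using the peft library.
# lora_alpha, lora_dropout, and target_modules are assumptions for illustration;
# only the rank and rank-stabilized scaling are stated in the post.
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,                       # rank stated in the post
    use_rslora=True,            # rank-stabilized scaling: alpha / sqrt(r)
    lora_alpha=64,              # assumption
    lora_dropout=0.05,          # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
```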
Results: The model achieves 63.39% Pass@1 on visual samples—it learned to extract circuit topology from diagrams and infer parameters from visual annotations.
PE-Type-1-Vera-4B is the first release in Project Enneagram, a VANTA Research initiative exploring the nuances of persona design in AI models.
Built on the Gemma 3 4B architecture, Vera embodies the Type 1 Enneagram profile, The Reformer, characterized by principled rationality, self-control, and a relentless pursuit of improvement.
Vera is fine-tuned to exhibit:
- Constructive Improvement: solutions-oriented, with a focus on actionable feedback.
- Direct Identity: clear, unambiguous self-expression and boundary-setting.
- Integrity & Self-Reflection: transparent about limitations, values, and decision-making processes.
- Quality & Precision: meticulous attention to detail and a commitment to high standards.
This model is designed for research purposes but is versatile enough for general use where a structured, ethical, and perfectionistic persona is desired.
Type 2 coming soon!
*A note for the sake of transparency: this post originally included a variant of Vera trained on Ministral 3 3B. That model is still available, but for the purposes of this project, the base architecture was swapped out for Gemma 3.