Severian committed · verified
Commit 6bc90b0 · 1 Parent(s): fe4638c

Update README.md

Files changed (1): README.md (+101 -3)
README.md CHANGED

---
license: mit
base_model:
- mistralai/Mistral-Small-24B-Instruct-2501
tags:
- symbolic-ai
- reasoning
- deductive-logic
- glyph-code-logic-flow
- mistral
- mlx
- gguf
- fine-tuned
- experimental
---

# Glyphstral-24B-v1 (Preview)

## Model Description

This is a **preview release (Version 1)** of a fine-tuned language model, **Glyphstral-24B-v1**, designed to understand and utilize the **Glyph Code Logic Flow (GCLF)** framework for structured, deductive symbolic reasoning.

This model is based on **Mistral-Small-24b** and has been fine-tuned using **MLX** with **DoRA (Weight-Decomposed Low-Rank Adaptation)** at 4-bit quantization on Apple Silicon.

**Glyph Code Logic Flow (GCLF)** is a novel approach to symbolic AI aimed at enhancing reasoning and multi-dimensional thinking. It provides a structured method for deductive reasoning using a symbolic language. You can explore the conceptual framework in detail here:

[Computational-Model-for-Symbolic-Representations GitHub Repository](https://github.com/severian42/Computational-Model-for-Symbolic-Representations/tree/main)

**Key Features (Version 1 - Preview):**

* **Specialized for Glyph Code Logic Flow:** Fine-tuned to interpret and process instructions based on the GCLF framework.
* **Deductive Reasoning Focus:** Encourages structured, step-by-step deductive reasoning over probabilistic inference.
* **Symbolic Manipulation:** Trained to understand and manipulate symbolic representations within the GCLF framework.
* **MLX Format:** Currently provided in MLX format for efficient inference on Apple Silicon.
* **Quantization:** Fine-tuned and quantized to 4-bit for reduced memory footprint and faster inference (using MLX DoRA).
* **Experimental V1 Release:** This is an initial release to showcase the potential of GCLF training. Expect ongoing development and improvements.

## Intended Use

This model is intended for **experimental use and research** in the following areas:

* **Exploring Symbolic AI:** Investigating the capabilities of language models for structured symbolic reasoning.
* **Deductive Logic Applications:** Building systems that require step-by-step, logically sound reasoning processes.
* **Glyph Code Logic Flow Development:** Experimenting with and refining the GCLF framework.
* **Educational Purposes:** Learning about symbolic AI, deductive reasoning, and structured knowledge representation.

**Limitations:**

* **Version 1 - Preview:** This is an early version and may have limitations in robustness and generalization.
* **Specialized Domain:** Performance is optimized for tasks related to Glyph Code Logic Flow; general language tasks may be impacted by the specialized fine-tuning (further evaluation is ongoing).
* **Experimental Nature:** The GCLF framework itself is under development, and this model reflects an early attempt to train an LLM for it.
* **MLX Format (Initial):** Currently available primarily in MLX format, which may limit accessibility for users outside the Apple Silicon/MLX ecosystem (GGUF quantization is in progress).

## Training Data and Process

* **Base Model:** Mistral-Small-24b
* **Fine-tuning Method:** MLX DoRA (Weight-Decomposed Low-Rank Adaptation) at 4-bit quantization.
* **Training Hardware:** Apple M2 (128GB RAM)
* **Training Dataset:** Custom dataset of approximately 4,500 examples designed specifically for Glyph Code Logic Flow. Each example was around 30,000 tokens long, focused on detailed system instructions and GCLF tasks (a sketch of a typical data layout follows below).
* **Training Tokens:** Approximately 27 million tokens from the custom GCLF dataset.
* **Training Duration:** 7 days (continuous 24/7 training).
* **Initial Experiments:** Initial training attempts were made with DeepSeek R1-Qwen-14 and QwQ-32, but Mistral-Small-24b was found to be more receptive to the GCLF framework, likely because it carries fewer conflicting pre-trained reasoning biases.
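
The exact schema of the GCLF dataset is not published on this card, so the snippet below is only an illustration: it writes one training record in the chat-style JSONL layout commonly used for MLX fine-tuning. The file name, the placeholder contents, and the assumption that a `messages`-style layout was used are all hypothetical.

```python
import json

# Hypothetical example record; the real GCLF dataset schema and contents are not published here.
record = {
    "messages": [
        {"role": "system", "content": "<full GCLF system instructions go here>"},
        {"role": "user", "content": "<a GCLF reasoning task>"},
        {"role": "assistant", "content": "<step-by-step deductive answer using GCLF glyphs>"},
    ]
}

# MLX-style fine-tuning typically expects a data directory with train.jsonl / valid.jsonl,
# one JSON object per line (layout assumed here; verify against your mlx-lm version).
with open("train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```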

## How to Use

**System Instructions:** the model was fine-tuned around detailed GCLF system instructions (see the GitHub repository linked above), so include them as the system prompt when querying the model, as in the sketch below.
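
No official inference snippet ships with this preview, so the following is only a minimal sketch of loading the MLX weights with `mlx-lm`; the local path, the placeholder prompts, and the generation settings are assumptions to adapt to your setup.

```python
from mlx_lm import load, generate

# Path/repo id of the Glyphstral-24B-v1 MLX weights (placeholder; adjust to your copy).
model, tokenizer = load("path/to/glyphstral-24b-v1-mlx-4bit")

# The GCLF system instructions from the GitHub repository linked above.
system_prompt = "<paste the full Glyph Code Logic Flow system instructions here>"
user_prompt = "Using Glyph Code Logic Flow, reason step by step about: <your task>"

# Format the conversation with the tokenizer's chat template.
prompt = tokenizer.apply_chat_template(
    [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
    tokenize=False,
    add_generation_prompt=True,
)

print(generate(model, tokenizer, prompt=prompt, max_tokens=1024, verbose=True))
```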

## !! GGUF Quantization (Coming Soon) !!

# Version 2 and Future Development

**Version 2 (In Development):**

* **GRPO (Group Relative Policy Optimization):** Utilizing GRPO for potentially more stable and effective fine-tuning.
* **Newer Dataset:** Training on an expanded and refined dataset for Glyph Code Logic Flow.
* **GGUF Release:** Aiming for a GGUF release for wider accessibility and compatibility.
* **Improved Documentation:** Comprehensive documentation and examples for using the model and understanding GCLF.

**Ongoing Efforts:**

* **Refining GCLF Framework:** Continuously developing and improving the Glyph Code Logic Flow framework itself.
* **Performance Evaluation:** Conducting thorough evaluations of the model's performance on GCLF tasks and general language understanding.
* **Community Feedback:** Seeking feedback from the community to guide further development and improvements.

---

# Known Issues

The custom dataset's heavy use of symbols and operators seems to have altered the model's tool-use behavior: I've found that it often wants to emit its `[TOOL_CALLS]` token at the end of its response (sometimes also calling out `<SPECIAL_#>` tokens at the end). I think I know where this is stemming from, so hopefully v2 can avoid this issue altogether.

If you are seeing the `[TOOL_CALLS]` and `<SPECIAL_>` outputs, you can set them as EOS/stop tokens, which aligns the model back into a more fluid conversation; one way to do this is sketched below.
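
As a concrete illustration of that workaround, the sketch below reloads the model with `[TOOL_CALLS]` registered as the end-of-sequence token via `mlx-lm`'s `tokenizer_config` override. The path and prompt are placeholders, and the same idea applies to the `<SPECIAL_>` tokens.

```python
from mlx_lm import load, generate

# Treat the stray [TOOL_CALLS] token as end-of-sequence so generation stops there
# (path is a placeholder; repeat the override for <SPECIAL_> tokens if they appear).
model, tokenizer = load(
    "path/to/glyphstral-24b-v1-mlx-4bit",
    tokenizer_config={"eos_token": "[TOOL_CALLS]"},
)

response = generate(model, tokenizer, prompt="Explain GCLF in one paragraph.", max_tokens=256)
print(response)
```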