Update README.md
README.md
```diff
@@ -30,7 +30,19 @@ tags:
 [](LICENSE)
 [](https://www.python.org/downloads/)
 [](https://pytorch.org/)
-[](https://github.com/
+[](https://github.com/SimonAytes/SoT)
+
+## What is Sketch-of-Thought?
+
+Sketch-of-Thought (SoT) is a novel prompting framework for efficient reasoning in language models that combines cognitive-inspired reasoning paradigms with linguistic constraints to minimize output token usage while preserving reasoning accuracy.
+
+Unlike conventional Chain of Thought (CoT) approaches that produce verbose reasoning chains, SoT implements three distinct reasoning paradigms:
+
+- **Conceptual Chaining**: Connects essential ideas in logical sequences through structured step links. Effective for commonsense reasoning, multi-hop inference, and fact-based recall tasks.
+
+- **Chunked Symbolism**: Organizes numerical and symbolic reasoning into structured steps with equations, variables, and arithmetic operations. Excels in mathematical problems and technical calculations.
+
+- **Expert Lexicons**: Leverages domain-specific shorthand, technical symbols, and jargon for precise and efficient communication. Suited for technical disciplines requiring maximum information density.
 
 ## Loading the Model
 
@@ -82,18 +94,6 @@ The model was trained on approximately 14,200 samples across various reasoning t
 - **Training**: 5 epochs, batch size 64, learning rate 2e-5
 - **Loss**: Cross-entropy
 
-## What is Sketch-of-Thought?
-
-Sketch-of-Thought (SoT) is a novel prompting framework for efficient reasoning in language models that combines cognitive-inspired reasoning paradigms with linguistic constraints to minimize output token usage while preserving reasoning accuracy.
-
-Unlike conventional Chain of Thought (CoT) approaches that produce verbose reasoning chains, SoT implements three distinct reasoning paradigms:
-
-- **Conceptual Chaining**: Connects essential ideas in logical sequences through structured step links. Effective for commonsense reasoning, multi-hop inference, and fact-based recall tasks.
-
-- **Chunked Symbolism**: Organizes numerical and symbolic reasoning into structured steps with equations, variables, and arithmetic operations. Excels in mathematical problems and technical calculations.
-
-- **Expert Lexicons**: Leverages domain-specific shorthand, technical symbols, and jargon for precise and efficient communication. Suited for technical disciplines requiring maximum information density.
-
 ## Complete Package
 
 For a more streamlined experience, we've developed the SoT Python package that handles paradigm selection, prompt management, and exemplar formatting:
```
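The package's exemplar code is outside the scope of this diff. As a rough, illustrative sketch of the three-way paradigm selection the package automates, the toy router below uses invented keyword heuristics; the function name and rules are placeholders, not the package's API — in the actual project this routing is performed by the trained classifier described in the training details above.

```python
# Toy sketch of SoT-style paradigm routing. The real package selects a
# paradigm with a trained DistilBERT classifier; these keyword rules are
# only an illustration of the three paradigms described in the diff.

def select_paradigm(question: str) -> str:
    """Route a question to one of the three SoT reasoning paradigms."""
    q = question.lower()
    # Numeric or symbolic content -> Chunked Symbolism (equations, arithmetic).
    if any(ch.isdigit() for ch in q) or any(op in q for op in ("+", "=", "%", "*")):
        return "chunked_symbolism"
    # Domain jargon -> Expert Lexicons (dense technical shorthand).
    if any(term in q for term in ("enzyme", "voltage", "dosage", "torque")):
        return "expert_lexicons"
    # Default -> Conceptual Chaining (linked ideas for commonsense/multi-hop).
    return "conceptual_chaining"

print(select_paradigm("What is 12 * 8?"))        # chunked_symbolism
print(select_paradigm("Why do seasons change?")) # conceptual_chaining
```

A production version would replace the keyword rules with a forward pass through the classifier, then attach the paradigm-specific system prompt and exemplars.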