Update README.md
    	
README.md CHANGED

@@ -36,6 +36,8 @@ For the MMLU evaluation, we use a 0-shot CoT setting.
 
 Note: i9 14900 and 1+13 8ge4 use 4 threads, others use the number of threads that can achieve the maximum speed. All models here have been quantized to q4_0.
 
+You can deploy SmallThinker with offloading support using [PowerInfer](https://github.com/SJTU-IPADS/PowerInfer/tree/main/smallthinker)
+
 ## Model Card
 
 <div align="center">