Model Details

Model Description

FP16 GGUF conversion of MindChat-Qwen2-7B, a psychological chat large language model (7.72B parameters, qwen architecture) built on the Qwen/Qwen2-7B base model, packaged for use with llama.cpp and compatible frontends.

Repository

  • Hugging Face: Slipstream-Max/MindChat-Qwen2-7B-GGUF-fp16
  • Upstream project: https://github.com/X-D-Lab/MindChat

Usage

Method 1: llama.cpp Backend Server + Chatbox

Step 1: Start llama.cpp Server

# -c 2048            context length
# --host 0.0.0.0     allow remote connections
# --port 8080        server port
# --n-gpu-layers 35  GPU acceleration (if available)
./llama-server \
  -m /path/to/model.gguf \
  -c 2048 \
  --host 0.0.0.0 \
  --port 8080 \
  --n-gpu-layers 35
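
Before connecting a client, you can confirm the server is ready; recent llama.cpp builds expose a /health endpoint (the port here matches the command above):

curl http://localhost:8080/health
# returns {"status":"ok"} once the model has finished loading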

Step 2: Connect via Chatbox

  1. Download Chatbox
  2. Configure API endpoint:
    API URL: http://localhost:8080
    Model: (leave empty)
    API Type: llama.cpp
    
  3. Set generation parameters:
    {
      "temperature": 0.7,
      "max_tokens": 512,
      "top_p": 0.9
    }
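
If Chatbox fails to connect, it can help to test the endpoint directly. A minimal sketch, assuming a recent llama.cpp build that serves the OpenAI-compatible /v1/chat/completions route:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7,
    "max_tokens": 512,
    "top_p": 0.9
  }'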
    

Method 2: LM Studio

  1. Download LM Studio
  2. Load GGUF file:
    • Launch LM Studio
    • Search Slipstream-Max/MindChat-Qwen2-7B-GGUF-fp16
  3. Configure settings:
    Context Length: 2048
    GPU Offload: Recommended (enable if available)
    Batch Size: 512
    
  4. Start chatting through the built-in UI
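
If you prefer an API over the built-in UI, LM Studio can also serve the loaded model through its local server feature, which speaks the OpenAI-compatible protocol. A sketch, assuming the server is enabled on its usual default port 1234 (yours may differ):

curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}], "max_tokens": 512}'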

Precision Details

Filename       Precision  Size    Characteristics
mindchat.gguf  FP16       15.5GB  Full original model precision
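
One way to fetch the file is the huggingface-cli tool (a sketch, assuming the huggingface_hub package is installed; the target directory is illustrative):

huggingface-cli download Slipstream-Max/MindChat-Qwen2-7B-GGUF-fp16 \
  mindchat.gguf --local-dir ./models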

Hardware Requirements

Minimum:

  • 24GB RAM (for 7B model)
  • CPU with AVX/AVX2 instruction set support (a quick check is shown after this section)

Recommended:

  • 32GB RAM
  • CUDA-capable GPU (for acceleration)
  • Fast SSD storage (due to large model size)
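
On Linux, a quick way to verify the AVX/AVX2 requirement is to inspect the CPU flags (prints avx2 if the instruction set is available; no output means it is not):

grep -m1 -o 'avx2' /proc/cpuinfo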

Key Notes

  1. Requires a recent llama.cpp build (v3+ recommended)
  2. Use --n-gpu-layers 35 for GPU acceleration (requires a CUDA-enabled build)
  3. Initial model loading is slow (2-5 minutes)
  4. Needs more memory and storage than the quantized versions
  5. Use --mlock to keep the model in RAM and prevent swapping (see the combined command below)
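
Putting notes 2 and 5 together, a full launch command might look like the sketch below (the model path and layer count are illustrative; adjust them to your hardware):

./llama-server \
  -m /path/to/mindchat.gguf \
  -c 2048 \
  --host 0.0.0.0 \
  --port 8080 \
  --n-gpu-layers 35 \
  --mlock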

Advantages

  • Preserves original model precision
  • Ideal for precision-sensitive applications
  • No quantization loss
  • Suitable for continued fine-tuning

Ethical Considerations

All open-source code and models in this repository are released under the GPL-3.0 license. The currently released MindChat models may have certain limitations, so we make the following statement:

MindChat can currently only provide psychological chat-style conversation; it cannot provide professional psychological counseling or psychotherapy, is no substitute for professional psychiatrists or counselors, and may have inherent limitations, potentially producing incorrect, harmful, offensive, or otherwise undesirable outputs. Users should exercise caution in critical or high-risk scenarios and should not rely on the model as a basis for final decisions, so as to avoid personal injury, property damage, or other serious losses.

In no event shall the authors, contributors, or copyright holders of MindChat be liable for any claim, damages, or other liability (whether in contract, tort, or otherwise) arising from the software, its use, or other dealings in the software.

By using MindChat you agree to these terms and conditions and acknowledge that you understand the potential risks of its use. You further agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liability arising from your use of MindChat.

Citation

@misc{MindChat,
  author = {Xin Yan and Dong Xue},
  title = {MindChat: Psychological Large Language Model},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/X-D-Lab/MindChat}},
}