choco58 committed · verified
Commit 412a79c · 1 Parent(s): 6c0c402

Update README.md

Files changed (1): README.md (+26 −4)

README.md CHANGED
@@ -12,8 +12,9 @@ Welcome to **LLaMAdelic**—a conversational model fine-tuned from LLaMA 3 8B In
- - **Training Dataset**: Fine-tuned on conversational data to reflect Big 5 personality traits — details will be updated soon.
- - **Training Duration**: Will be updated soon
@@ -56,7 +57,7 @@ While LLaMAdelic brings vibrant and personality-driven conversations to the tabl
- We made sure to avoid toxic or inappropriate dialogues by tagging any dialogue with over 25% toxic utterances for separate review. Ethical considerations are a priority, and LLaMAdelic was designed with responsible AI practices in mind. For details on ethical data practices, see the Appendix (coming soon!).
@@ -66,6 +67,27 @@ Stay tuned for more information on LLaMAdelic!
- Will be updated soon
  ## Model Name: LLaMAdelic
  - **Architecture**: LLaMA 3 8B Instruct
  - **Training Objective**: Personality-Enhanced Conversational AI
+ - **Training Dataset**: Fine-tuned on conversational data to reflect Big 5 personality traits.
+   - JIC: [Journal Intensive Conversations](https://huggingface.co/datasets/chocokiddo/jic) dataset
+ - **Training Duration**: 4–5 days on an A100 GPU (training parameters can be found in the appendix of the paper)

  ## Why "LLaMAdelic"?
  The name "LLaMAdelic" reflects our aim to bring a rich, nuanced personality to conversational AI. Just as the Big 5 personality traits (OCEAN) encapsulate the subtle layers of human interaction, LLaMAdelic seeks to capture these nuanced dimensions — openness, conscientiousness, extraversion, agreeableness, and neuroticism — making conversations with AI feel more genuinely human. It’s not just another model; it’s designed to add depth, authenticity, and a hint of human-like character to every interaction.
 
  ---

  ## Ethical Considerations
+ We screened for toxic or inappropriate dialogues by tagging any dialogue in which more than 25% of the utterances were toxic for separate review. Ethical considerations are a priority, and LLaMAdelic was designed with responsible AI practices in mind. For details on ethical data practices, see the Appendix.

  ---
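The 25% screening rule above amounts to a simple per-dialogue filter. A minimal sketch follows; the per-utterance toxicity labels are assumed to come from an upstream classifier that the README does not name, and `flag_for_review` is a hypothetical helper, not code from the project:

```python
def flag_for_review(utterance_is_toxic, threshold=0.25):
    """Return True if the share of toxic utterances in a dialogue
    exceeds the threshold (the README uses 25%)."""
    if not utterance_is_toxic:  # empty dialogue: nothing to flag
        return False
    toxic_share = sum(utterance_is_toxic) / len(utterance_is_toxic)
    return toxic_share > threshold

# Hypothetical per-utterance labels (1 = toxic) from some unspecified classifier.
print(flag_for_review([0, 1, 0, 0, 1, 0]))  # 2/6 ≈ 33% -> True, route to review
print(flag_for_review([0, 0, 0, 1]))        # exactly 25%, not over -> False
```

Note the strict inequality: a dialogue at exactly 25% toxic utterances is not flagged, matching the "over 25%" wording.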
 
  ---

  ## Citation
+ ```bibtex
+ @inproceedings{pal-etal-2025-beyond,
+     title = "Beyond Discrete Personas: Personality Modeling Through Journal Intensive Conversations",
+     author = "Pal, Sayantan and Das, Souvik and Srihari, Rohini K.",
+     editor = "Rambow, Owen and Wanner, Leo and Apidianaki, Marianna and Al-Khalifa, Hend and Eugenio, Barbara Di and Schockaert, Steven",
+     booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
+     month = jan,
+     year = "2025",
+     address = "Abu Dhabi, UAE",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2025.coling-main.470/",
+     pages = "7055--7074",
+     abstract = "Large Language Models (LLMs) have significantly improved personalized conversational capabilities. However, existing datasets like Persona Chat, Synthetic Persona Chat, and Blended Skill Talk rely on static, predefined personas. This approach often results in dialogues that fail to capture human personalities' fluid and evolving nature. To overcome these limitations, we introduce a novel dataset with around 400,000 dialogues and a framework for generating personalized conversations using long-form journal entries from Reddit. Our approach clusters journal entries for each author and filters them by selecting the most representative cluster, ensuring that the retained entries best reflect the author`s personality. We further refine the data by capturing the Big Five personality traits{---}openness, conscientiousness, extraversion, agreeableness, and neuroticism{---}ensuring that dialogues authentically reflect an individual`s personality. Using Llama 3 70B, we generate high-quality, personality-rich dialogues grounded in these journal entries. Fine-tuning models on this dataset leads to an 11{\%} improvement in capturing personality traits on average, outperforming existing approaches in generating more coherent and personality-driven dialogues."
+ }
+ ```

  ---