Update README.md
README.md
CHANGED
@@ -16,7 +16,7 @@ DSCNTR-Alpha is a deep QLoRA fine-tune (rank 256) of the base Llama-3.1-8B that
I, tomar753, developed it entirely on my own, including the data creation - yes, the dataset is >95% hand-written by me, with the help of several top-league models whenever factual knowledge was a concern - and the hyperparameter-tuning phases.

-Despite being away from 18
+Despite being only one day away from 18, I have devoted the last 1.5 years of my life to "Project Descentral" (which will continue indefinitely) to create a local mental stimulator that is clever and always unpredictable as a chat partner, yet without the intelligence regression that creative training usually brings. In fact, my efforts were never about mitigating any loss, but about advancing every stat of the model, covering even unstructured text generation.

More than 2,000 hours have been spent on research since the early Pygmalion AI era to understand how the LLM landscape would shape up over time on the end-user side, so that I could figure out people's likes and dislikes in language models.