---
license: odc-by
task_categories:
- text-generation
language:
- en
pretty_name: Primus-FineWeb
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*
tags:
- cybersecurity
- pretraining
- FineWeb
size_categories:
- 1M<n<10M
extra_gated_fields:
  Affiliation: text
  Country: country
  I want to use this model for:
    type: select
    options:
    - Research
    - Commercial
    - label: Other
      value: other
  Job title:
    type: select
    options:
    - Student
    - Research graduate
    - AI researcher
    - AI developer/engineer
    - Cybersecurity researcher
    - Reporter
    - Other
  geo: ip_location
library_name: transformers
---
42
+
43
+ # PRIMUS: A Pioneering Collection of Open-Source Datasets for Cybersecurity LLM Training
44
+
45
+ ## 🤗 Primus-FineWeb
46
+
47
+ The **Primus-FineWeb** dataset is constructed by filtering cybersecurity-related text from FineWeb, a refined version of Common Crawl. We began by leveraging _Primus-Seed_, a high-quality dataset of manually curated cybersecurity text, as positive samples. We then sampled ten times the amount of data from FineWeb as negative samples and trained a **binary cybersecurity classifier** based on TinyBERT. Using this classifier, we assigned each text in FineWeb a score between **0 and 1** and filtered out texts with a score greater than **0.003**, creating the Primus-FineWeb with 15.3 billion tokens. However, after discovering a significant amount of duplicate content, we performed deduplication, reducing the final dataset to **🔥 2.57 billion tokens of cybersecurity corpus**.
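The thresholding step above can be sketched as a simple score-based filter. This is an illustrative sketch only (the function name, the made-up documents, and the example scores are not from the actual pipeline, which scores texts with the TinyBERT-based classifier before filtering):

```python
def filter_by_score(documents, scores, threshold=0.003):
    """Keep documents whose classifier score exceeds the threshold.

    `documents` and `scores` are parallel lists; each score is the binary
    classifier's probability that the text is cybersecurity-related.
    """
    return [doc for doc, score in zip(documents, scores) if score > threshold]


# Hypothetical documents and scores, for illustration only:
docs = [
    "CVE-2024-0001 allows remote code execution on unpatched servers.",
    "Top 10 pasta recipes for summer.",
    "How to harden an SSH server against brute-force attacks.",
]
scores = [0.92, 0.0001, 0.41]

kept = filter_by_score(docs, scores)
print(len(kept))  # 2 documents pass the 0.003 cutoff
```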

🚀🚀 For more details, see our paper:
[https://arxiv.org/abs/2502.11191](https://arxiv.org/abs/2502.11191)

---

## Why was the threshold set at 0.003?

We divided the score range (0-1) into several bins and randomly sampled 50 examples from each bin. These samples were then scored by GPT-4o to determine the proportion of text that was "_truly_" cybersecurity-related. We found that when the score was below 0.003, the proportion of cybersecurity text fell below 50%.
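The bin-and-sample procedure can be sketched as follows. The bin edges, sample size handling, and function name here are illustrative assumptions; the actual bin boundaries used in the paper are not reproduced here:

```python
import random
from collections import defaultdict

def sample_per_bin(scored_docs, bin_edges, k=50, seed=0):
    """Group (text, score) pairs into score bins and sample up to k per bin.

    `bin_edges` are ascending upper bounds; a score falls into the first
    bin whose upper bound it does not exceed. Returns {edge: sampled texts}.
    """
    bins = defaultdict(list)
    for text, score in scored_docs:
        for edge in bin_edges:
            if score <= edge:
                bins[edge].append(text)
                break
    rng = random.Random(seed)  # fixed seed for reproducible sampling
    return {edge: rng.sample(texts, min(k, len(texts)))
            for edge, texts in bins.items()}
```

Each bin's sample would then be judged (e.g., by GPT-4o, as described above) to estimate the fraction of genuinely cybersecurity-related text per score range.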

<img src="https://i.imgur.com/XbqpmbI.png" alt="Threshold Selection" width="60%">

## FineWeb: Cybersecurity Score vs. Token Count

<img src="https://i.imgur.com/6twJL1p.png" alt="Cybersecurity Score vs. Token Count" width="65%">

---

## License

This dataset is released under the **ODC-By** license. However, you must still comply with the **FineWeb license** and the **Common Crawl Terms of Use**.