PECAN: A Deterministic Certified Defense Against Backdoor Attacks

Yuhao Zhang 1, Aws Albarghouthi 1, Loris D'Antoni 1

Abstract

Neural networks are vulnerable to backdoor poisoning attacks, where the attackers maliciously poison the training set and insert triggers into the test input to change the prediction of the victim model. Existing defenses for backdoor attacks either provide no formal guarantees or come with expensive-to-compute and ineffective probabilistic guarantees. We present PECAN, an efficient and certified approach for defending against backdoor attacks. The key insight powering PECAN is to apply off-the-shelf test-time evasion certification techniques to a set of neural networks trained on disjoint partitions of the data. We evaluate PECAN on image classification and malware detection datasets. Our results demonstrate that PECAN can (1) significantly outperform the state-of-the-art certified backdoor defense, both in defense strength and efficiency, and (2) reduce the attack success rate of real backdoor attacks by an order of magnitude when compared to a range of baselines from the literature.

1. Introduction

Deep learning models are vulnerable to backdoor poisoning attacks (Saha et al., 2020; Turner et al., 2019), which assume that the attackers can maliciously poison a small fragment of the training set before model training and add triggers to inputs at test time. As a result, the prediction of the victim model trained on the poisoned training set will diverge in the presence of a trigger in the test input.
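To make the threat model concrete, the following is a minimal sketch of a dirty-label patch-poisoning attack in an image setting. It is an illustration only, not taken from the paper; the trigger pattern, poison rate, and target label below are hypothetical choices.

```python
# Minimal sketch of the backdoor threat model described above (illustration only;
# the trigger shape, poison rate, and target label are hypothetical, not the
# paper's settings).
import numpy as np

def stamp_trigger(image, trigger_value=1.0, size=3):
    """Overwrite a small bottom-right patch of the image with a fixed pattern."""
    patched = image.copy()
    patched[-size:, -size:] = trigger_value
    return patched

def poison_training_set(images, labels, poison_rate=0.01, target_label=0, seed=0):
    """Stamp the trigger on a small random fraction of training images and
    relabel those examples with the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label
    return images, labels

# At test time the attacker stamps the same trigger on a clean input; a model
# trained on the poisoned set then tends to predict `target_label` for it:
#     x_triggered = stamp_trigger(x_test)
```

Reusing the same stamping routine at training and test time is what links the poisoned training points to the attacker-controlled prediction on triggered inputs.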
Effective backdoor attacks have been proposed for various domains, such as image recognition (Gu et al., 2017), sentiment analysis (Qi et al., 2021), and malware detection (Severi et al., 2021). For example, Severi et al. (2021) can break malware detection models as follows: the attacker poisons a small portion of benign software in the training set by modifying the values of the most important features so that

1 Department of Computer Science, University of Wisconsin-Madison, Madison, USA. Correspondence to: Yuhao Zhang
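Returning to the abstract's key insight, the sketch below illustrates the high-level partition-then-certify idea: train an ensemble on disjoint partitions of the training data and query each member through an off-the-shelf test-time evasion certifier. The partition rule, the `train_model` and `certified_prediction` callables, and the plain certified-majority vote are illustrative assumptions, not PECAN's actual certification procedure.

```python
# Rough sketch of the partition-then-certify idea stated in the abstract.
# `train_model` and `certified_prediction` stand in for an arbitrary training
# routine and an off-the-shelf evasion certifier; the aggregation rule here is
# a simple majority vote over certified predictions and is only illustrative.
import numpy as np
from collections import Counter

def partition_indices(n_examples, n_partitions):
    """Deterministically assign each training example to exactly one partition."""
    idx = np.arange(n_examples)
    return [idx[idx % n_partitions == p] for p in range(n_partitions)]

def train_ensemble(X, y, n_partitions, train_model):
    """Train one model per disjoint partition; a poisoned example can only
    appear in (and influence) a single partition's model."""
    return [train_model(X[part], y[part])
            for part in partition_indices(len(X), n_partitions)]

def ensemble_predict(models, x, certified_prediction):
    """Query each model through the evasion certifier. `certified_prediction`
    is assumed to return (label, is_certified) for input x; models whose
    prediction is not certified are ignored."""
    votes = []
    for model in models:
        label, is_certified = certified_prediction(model, x)
        if is_certified:
            votes.append(label)
    return Counter(votes).most_common(1)[0][0] if votes else None
```

Because the partitions are disjoint and assigned deterministically, each poisoned training example can influence at most one ensemble member, which is the property that certified aggregation defenses of this kind build on.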