CONVERSATIONAL AUTOMATED PROGRAM REPAIR

Chunqiu Steven Xia, Lingming Zhang
University of Illinois at Urbana-Champaign
{chunqiu2, lingming}@illinois.edu

ABSTRACT

Automated Program Repair (APR) can help developers automatically generate patches for bugs. Due to the impressive performance of Large Pre-Trained Language Models (LLMs) on many code-related tasks, researchers have started to directly use LLMs for APR. However, prior approaches simply sample the LLM repeatedly given the same input/prompt constructed from the original buggy code, which not only generates the same incorrect patches over and over but also misses the critical information in testcases. To address these limitations, we propose conversational APR, a new paradigm for program repair that alternates between patch generation and validation in a conversational manner.
In conversational APR, we iteratively build the input to the model by combining previously generated patches with validation feedback. As such, we leverage the long-term context window of LLMs to not only avoid generating previously incorrect patches but also incorporate validation feedback to help the model understand the semantic meaning of the program under test. We evaluate 10 different LLMs, including the newly developed ChatGPT model, to demonstrate the improvement of conversational APR over the prior LLM-for-APR approach.

1 INTRODUCTION

Bugs in software can cause significant financial losses Matteson (2018) and create dangerous health and safety problems Hanbury (2019). Due to the high manual cost of fixing bugs O'Dell (2017), Automated Program Repair (APR) Gazzola et al. (2019) is a promising solution to reduce developer work by automatically generating patches given the buggy code and failing testcases.
Traditionally, APR approaches commonly follow the Generate and Validate (G&V) paradigm, where an APR tool first generates a list of candidate patches given the original buggy code and then validates each one sequentially until a plausible patch that passes all the testcases is found. The plausible patch is then passed on to a human developer, who has to determine whether it is a correct patch that fixes the underlying bug. Traditional APR approaches such as template-based tools Ghanbari et al. (2019); Liu et al. (2019); Lou et al. (2020) have proven useful in fixing bugs with pre-defined templates that match buggy and corresponding fix code patterns. Recently, researchers have designed learning-based APR tools Ye et al.
(2022); Zhu et al. (2021); Jiang et al. (2021), which build a Neural Machine Translation (NMT) model by training on pairs of buggy and patched code. However, these learning-based APR tools suffer from a lack of patch variety, as they can only repair the types of bugs that are part of the buggy/patch training data. Furthermore, such bug-fixing datasets can be difficult to construct, as they require scraping open-source bug-fix commits, which may contain many false positives, adding noise to the dataset. Recognizing these limitations of prior learning-based APR tools, researchers have started to look into directly leveraging Large Pre-Trained Language Models (LLMs) for APR without fine-tuning. LLMs have proven their ability in various code generation tasks Austin et al.
(2021). Xia & Zhang (2022) first introduced cloze-style APR, where an LLM directly fills in the correct code given its surrounding context. Other studies Prenner et al. (2022); Kolak et al. (2022); Xia et al. (2022) have also investigated directly applying different types of LLMs for APR by carefully applying prompts or giving the original buggy code as context. Typically, directly applying LLMs for APR involves creating a common prompt/prefix as input to the model, which can be just the buggy context (zero-shot) or the buggy context combined with a few examples of bug fixes (few-shot). Following the G&V paradigm, prior approaches then sample the LLM multiple times to obtain candidate patches. However, this pipeline has the following limitations: First, sampling from the same prefix/prompt multiple times can lead to many repeated patches due to the probabilistic nature of sampling. This means the LLM may waste a lot of compute and time generating patches that have already been validated as incorrect by the testsuite. Second, the prompts provided to the LLM are created only from the original buggy code and do not include any testcase information. Information such as expected input and output examples, which could help the LLM understand the functionality of the buggy program, is not provided. Third, prior approaches also fail to consider the outputs produced by the generated incorrect patches.
Previously incorrect patches may fail on a particular corner case, which can be exposed by looking at the test output and providing it to the LLM to address in future patches.

Our Work. We propose conversational APR, a new paradigm of using LLMs for APR that directly leverages testcase validation information to provide feedback to the LLM in a conversational manner. In conversational APR, we interleave patch generation with validation: the LLM first generates a patch, we then validate it against the testsuite to provide feedback, and we prompt the LLM with the new feedback information to generate a new patch. While in this paper we consider simple testcase input/output/error validation feedback, one can apply conversational APR with a wide range of possible feedback information, such as human evaluation of the patch. We refer to the process of generating a patch followed by validation as a turn, where a conversation chain is made up of multiple turns in sequence.
At the start of the conversation chain, we begin with an initial prompt and sample the LLM to obtain a candidate patch. As we continue the conversation, the input given to the LLM in each turn is a concatenation of all previously incorrect patches along with their associated testcase feedback within the same conversation chain. A conversation chain is terminated once a patch that passes all the testcases is found or the maximum chain length (i.e., the maximum number of turns) is reached. In the latter case, we start a new conversation chain with the initial prompt again. Compared with prior LLM-for-APR tools, which only use the buggy code snippet as input, conversational APR incorporates patch validation in the form of validation feedback to help the model understand why previously generated patches are incorrect.
Such feedback can contain the incorrect and expected test outputs or indicate whether the generated patch contains compilation/runtime errors. Furthermore, while prior LLM-for-APR tools continuously sample from the same input, our approach iteratively builds the input by including previously incorrect patches. As such, the LLM, through its long context window, can recognize previous generations and avoid repeatedly generating an already-validated incorrect patch. We evaluate our conversational APR approach using 10 popular LLMs, where we find that it not only improves the number of bugs fixed but also arrives at the correct patch faster compared with the sampling-based baseline. Furthermore, we also evaluate the recently developed ChatGPT Schulman et al. (2022)¹, a dialogue-focused LLM trained using reinforcement learning, and highlight the performance of conversational APR when using an LLM designed for conversation/dialogue.

2 BACKGROUND & RELATED WORK
2.1 LLMS FOR APR

To combat the reliance on bug-fixing datasets for training learning-based APR tools based on NMT models, researchers have directly applied LLMs for APR without any fine-tuning. Xia & Zhang (2022) proposed AlphaRepair, the first cloze-style APR approach to directly leverage LLMs for APR in a zero-shot setting by removing the buggy line and replacing it with masked tokens. AlphaRepair then queries the CodeBERT Feng et al. (2020) model to fill in the masked tokens with the correct tokens to generate patches. Prenner et al. (2022) investigated the ability of Codex Chen et al. (2021) to repair bugs using a simple prompting method that generates a complete patched function given the original buggy function. Kolak et al.
(2022) evaluated the scaling effect of LLMs for APR by using 4 LLMs of different model sizes to generate a single-line fix given only the original buggy prefix (i.e., removing all lines after and including the buggy line of the buggy function). Recently, Xia et al. (2022) conducted an extensive study on directly applying LLMs for APR. In the study, they adopt several repair settings, including few-shot generation using a few examples of bug fixes, cloze-style APR, and single-line generation. The findings across these prior works are consistent in showing that directly using LLMs for APR achieves comparable if not better performance compared to prior APR tools.

¹ While we perform repair using ChatGPT, no part of this paper is written by ChatGPT. :)
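As a concrete illustration of the cloze-style setting described above, the buggy line can be replaced with mask tokens before querying a masked language model. The helper below is our own sketch, not AlphaRepair's actual implementation; the "<mask>" token format follows RoBERTa-style models such as CodeBERT, and the mask count is an arbitrary illustrative choice:

```python
def make_cloze_input(fn_lines, buggy_line_idx, num_masks=8):
    """Replace the buggy line with mask tokens, keeping surrounding context."""
    masked = list(fn_lines)
    masked[buggy_line_idx] = " ".join(["<mask>"] * num_masks)
    return "\n".join(masked)

buggy = [
    "def sieve(max):",
    "    primes = []",
    "    for n in range(2, max):",
    "        if any(n % p for p in primes):",  # the buggy line to be masked
    "            primes.append(n)",
    "    return primes",
]
print(make_cloze_input(buggy, 3))  # context preserved, buggy line masked out
```

The model is then asked to recover the masked line from the surrounding context alone, which is what makes the setting zero-shot.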
However, these proposed LLM-for-APR techniques almost exclusively use sampling, where patches are generated by sampling from the same input over and over again, leading to many repeated patches. Furthermore, the inputs to the LLMs are constructed only from the original buggy function, missing the rich information in testcases. In this work, our conversational APR approach aims to address these limitations by constructing new inputs based on prior incorrect patches, to avoid sampling repeated patches, and by providing validation feedback, adding another dimension of input beyond the original buggy code to help the model understand the semantic meaning of the program.

2.2 MULTI-STEP PROGRAM REASONING AND SYNTHESIS USING LLMS

A related research direction is applying multi-step reasoning for code understanding and synthesis. Nye et al.
(2021) train an LLM designed for program understanding by introducing the idea of a "scratchpad," in which the LLM predicts the intermediate states of a program along with the final execution result. Chen et al. (2022) extend the chain-of-thought Wei et al. (2022) prompting style in NLP to propose program-of-thoughts, where the prompt contains an explicit command to construct the program step-by-step. However, these works still generate a complete result (i.e., the final program execution or code), albeit with intermediate results, in one shot, whereas our conversational APR samples the LLM multiple times with different inputs to obtain one plausible output patch. Different from one-shot methods, Austin et al.
(2021) investigated the ability of LLMs to use human feedback in a conversational manner for program synthesis. The approach works by keeping a conversation of previously generated code and correcting any mistakes using natural-language feedback provided by human developers. Nijkamp et al. (2022) manually created a multi-step synthesis dataset where each target program is broken down into multiple smaller steps, each requiring only a few lines of code to be generated. They then sample the model multiple times to iteratively complete each smaller step and concatenate the results to form the final program. While these techniques iteratively sample from the model with new feedback in a conversation-like manner, our work creates this feedback automatically through testcase execution, without any human-in-the-loop.

3 CONVERSATIONAL APR

We propose a conversational APR approach that prompts LLM patch generation by combining previously generated patches and validation feedback in a conversational manner.
Contrasting with the classic Generate and Validate (G&V) APR approach, which first generates a large number of candidate patches and then validates each one to find a list of plausible patches, conversational APR interleaves generation and validation to provide immediate feedback for each new candidate patch. Different from previous APR tools that use LLMs by sampling given the same prefix/context for each bug, conversational APR incorporates feedback information after each generation (if the candidate patch fails to pass all tests) as new context for subsequent generations. Specifically, the feedback information includes both the incorrect generated patch and its associated failing-testcase information. Conversational APR iteratively obtains new candidate patches from the LLM by using previously generated patches and validation results as feedback. We refer to this process as a turn, where each turn includes three steps: 1) construct a new prompt based on prior feedback, 2) sample the model to produce a candidate output function, and 3) validate the candidate output function against the testcases to obtain validation feedback.
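The three steps of a turn can be sketched as a small driver loop. The code below is our own minimal illustration, not the paper's implementation: the LLM is replaced by a stub that replays the three sieve candidates from Figure 1, and the prompt strings follow the phrasing used there.

```python
def run_tests(fn, tests):
    """Step 3: validate a candidate against the testcases, producing feedback."""
    for args, expected in tests:
        got = fn(*args)
        if got != expected:
            call = f"sieve({', '.join(map(str, args))})"
            return False, f"{call} returns {got} but it should return {expected}"
    return True, "The fixed version is correct!"

def conversational_repair(llm, buggy_code, tests, max_turns=3):
    # Initial prompt I: the buggy function plus a natural-language instruction.
    prompt = f"The following code is buggy.\n{buggy_code}\nPlease provide a fixed version.\n"
    for _ in range(max_turns):
        patch_src = llm(prompt)          # step 2: sample the model
        scope = {}
        exec(patch_src, scope)           # materialize the candidate patch
        plausible, feedback = run_tests(scope["sieve"], tests)
        if plausible:
            return patch_src             # terminate the chain: plausible patch
        # Step 1 of the next turn: append the incorrect patch and its feedback.
        prompt += (f"{patch_src}\nThe fixed version is still not correct.\n"
                   f"{feedback}\nPlease provide another fixed version.\n")
    return None                          # chain exhausted; a new chain restarts from I

BUGGY = ("def sieve(max):\n    primes = []\n    for n in range(2, max):\n"
         "        if any(n % p for p in primes):\n            primes.append(n)\n"
         "    return primes")
# Stub standing in for the LLM: replays candidates S1, S2, S3 from Figure 1.
candidates = iter([
    BUGGY.replace("range(2, max)", "range(2, max + 1)").replace("if any", "if not any"),
    BUGGY.replace("if any", "if all"),
    BUGGY.replace("range(2, max)", "range(2, max + 1)").replace("if any", "if all"),
])
plausible = conversational_repair(lambda p: next(candidates),
                                  BUGGY, tests=[((2,), [2]), ((4,), [2, 3])])
assert plausible is not None  # the third candidate passes both testcases
```

In the real pipeline the stub is an actual LLM query; everything else (prompt construction, validation, feedback concatenation) is fully automatic.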
Multiple turns in sequence are defined as a chain. The terminating conditions are that the candidate output patch passes all testcases (i.e., a plausible patch is obtained) or the maximum number of turns (the length of the chain) is reached. Note that each turn (all three steps) is performed automatically, without any human-in-the-loop, which makes conversational APR a fully automatic approach for program repair.

Figure 1: Overview of conversational APR with an illustrative example in fixing the buggy sieve function

3.1 PIPELINE & EXAMPLE

Figure 1 shows an illustrative example of a conversation chain (multiple turns) and an overview of the pipeline of the conversational APR approach. We first take as input the original buggy function and a set of testcases, which contains some failing tests that expose the underlying bug. In the example, the buggy function (sieve) attempts to use the sieve algorithm to compute the list of prime numbers up to the integer input (max). The bug is on line 4, where the function incorrectly uses any instead of all. This bug is exposed by the testcase sieve(2) = [2], for which the buggy function incorrectly returns an empty array [].
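The buggy function, transcribed from Figure 1, can be checked directly: since primes starts empty, any(n % p for p in primes) is False on the very first iteration, so no number is ever appended and the function returns [] for every input.

```python
def sieve(max):
    primes = []
    for n in range(2, max):
        # Buggy: should be `all(...)`; `any()` over the (always empty)
        # `primes` list is False, so nothing is ever appended.
        if any(n % p for p in primes):
            primes.append(n)
    return primes

print(sieve(2))  # [] but should be [2]
```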
Turn 1: We first create an initial prompt I from the original buggy function, which contains a natural language description indicating that the function is buggy (The following code is buggy) and the task we want the LLM to solve (Please provide a fixed version). We then sample the model using the initial prompt I to obtain the first sample output function S1. The change is made to line 4, where S1 negates the original if condition. We then validate S1 against the list of tests and find that, while the new patch successfully passes the previously failing test of sieve(2) = [2], it returns [2, 4] for sieve(4) when the correct output should be [2, 3]. This validation information F1 is collected as feedback to use during the next conversation turn.

Turn 2: Different from turn 1, where the input to the LLM is just the initial prompt I, we now also provide the model with the previously generated patch and its failing testcase. In short, we construct the validation feedback F1 from the failing testcase and indicate to the model that the previous sample S1 is still not correct (The fixed version is still not correct) along with the new task (Please provide another fixed version). We then concatenate the initial prompt, the first sample output function, and the validation feedback {I, S1, F1} together as the input to the LLM. As such, the model can use not only the original buggy function but also the previously generated sample and its testcase feedback to generate a new patched function. Similar to turn 1, we obtain S2 and F2, where the correct line 4 is obtained (switching any to all) but the candidate patch incorrectly reduces the upper bound of the for loop by 1.

Turn 3: Similar to turn 2, we first construct the new validation feedback F2 from the previous failing testcase. We then concatenate all previously sampled outputs along with their validation feedback in sequence to produce {I, S1, F1, S2, F2}. Using this input, we sample the LLM again to produce the next candidate patch S3. We observe that this candidate patch correctly fixes the underlying bug, as indicated by its validation F3, where it passes all the testcases. The program repair process then terminates, as we have obtained a plausible patch S3.

Compared to prior LLM-based APR approaches, which simply sample from a pre-defined prompt/context, conversational APR leverages the previously missing key feedback information in the form of testcase results to guide future patch generations. The testcase feedback not only tells the LLM that the previous patches are incorrect (i.e., leading to more unique patches) but also provides input and output examples that help the model understand the underlying functionality of the function (i.e., leading to more correct patches).

3.2 DESIGN DECISIONS

The example in Figure 1 illustrates the overall pipeline of conversational APR. However, there are several design decisions that can impact the performance of the approach:

Prompt engineering. Prompting has been shown to be an effective way of leveraging LLMs on various downstream tasks without any explicit fine-tuning. In our conversational APR approach, we follow the style of prior work Xia et al. (2022) in providing a short and concise prompt describing the input and the task we want the model to solve. Additionally, we follow prior guidelines and keep the prompt open-ended rather than restricting the generation with a close-ended prompt.
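Concretely, the initial prompt I from Figure 1 can be assembled with a small helper. The two quoted phrases come from the paper; any wording beyond them is an assumption:

```python
def build_initial_prompt(buggy_code: str) -> str:
    """Construct the initial prompt I: a natural language line marking
    the function as buggy, the code itself, and the task description."""
    return (
        "The following code is buggy.\n"
        f"{buggy_code}\n"
        "Please provide a fixed version.\n"
    )

buggy = """def sieve(max):
    primes = []
    for n in range(2, max):
        if any(n % p for p in primes):
            primes.append(n)
    return primes"""

print(build_initial_prompt(buggy))
```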
One particularly important piece of prompt construction is the validation feedback, which provides the failing testcase to the LLM. In the Figure 1 example, we provide a prompt that directly invokes the function and highlights the discrepancy between the actual output and the expected testcase output. We refer to this as a functional prompt, since it directly calls the function with input parameters, similar to what one would do in code. In Section 6.2, we compare this style of validation prompting with other methods, including providing no testcase information at all, to demonstrate the benefit of including validation feedback.

Maximum chain length. Recall that a conversation chain refers to the continuous sequence of turns used to fix a bug. A chain is demonstrated in Figure 1 with a chain length of 3.
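The functional prompt style of validation feedback can be sketched as a formatter over a failing testcase; the template below is modeled on the phrases shown in Figure 1, and the exact formatting is an assumption:

```python
def build_feedback_prompt(call: str, actual, expected) -> str:
    """Construct validation feedback F in the 'functional prompt' style:
    state that the previous patch is wrong, show the failing invocation
    with its actual vs. expected output, and restate the task."""
    return (
        "The fixed version is still not correct.\n"
        f"{call} returns {actual!r} but it should return {expected!r}\n"
        "Please provide another fixed version.\n"
    )

# F1 from Figure 1: S1 passes sieve(2) but fails sieve(4).
print(build_feedback_prompt("sieve(4)", [2, 4], [2, 3]))
```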
Along with finding a plausible patch, a preset maximum chain length is also a terminating condition, since the LLM used has a bounded context window and cannot take in arbitrary-length inputs. Once this maximum chain length is reached, conversational APR restarts from the beginning (i.e., crafting the initial prompt again) with a new conversation chain. The maximum chain length is a parameter that controls how much history the LLM may receive. A maximum chain length of 1 is the base case of sampling from the initial prompt over and over again, meaning the model does not see any of the previously generated incorrect patches. A higher maximum chain length means the model can see multiple previously failed patches; however, this may not always be beneficial, as it can cause the LLM to repeat some of the earlier patches or get stuck on a particular implementation of the function.
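Putting the pieces together, the overall loop, including the restart once the maximum chain length is reached, can be sketched as below. Here `sample_model` and `run_tests` are hypothetical stand-ins for the LLM call and the testcase validator, not part of the paper's implementation:

```python
def conversational_apr(initial_prompt, sample_model, run_tests,
                       max_chain_length=3, max_samples=50):
    """Alternate patch generation and validation in a conversational loop.

    sample_model(prompt) -> candidate patch string (hypothetical LLM call)
    run_tests(patch) -> (passed, feedback), where feedback is a validation
    feedback string F built from the first failing testcase.
    """
    samples = 0
    while samples < max_samples:
        # Start a fresh conversation chain from the initial prompt I.
        context = initial_prompt
        for _turn in range(max_chain_length):
            patch = sample_model(context)        # S_i
            samples += 1
            passed, feedback = run_tests(patch)  # F_i
            if passed:
                return patch                     # plausible patch found
            # Concatenate {I, S1, F1, ..., S_i, F_i} for the next turn.
            context = context + patch + feedback
            if samples >= max_samples:
                break
        # Maximum chain length reached: restart with a new chain.
    return None
```

With a mocked `sample_model` that returns S1, S2, S3 in order, this loop reproduces the three-turn chain of Figure 1, returning S3 on the third try.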
In Section 6.2, we evaluate the effect the chain length has on repair performance.

4 DATASETS

In this section, we describe the LLMs used in our evaluation and the repair benchmark used to evaluate our proposed technique.

4.1 LLMS

In our work, we evaluate 10 different LLMs to not only demonstrate the effect of scaling behavior on our proposed conversational APR approach but also to evaluate how different pre-training and model design choices contribute to the overall effectiveness. Table 1 presents an overview of the studied LLMs. Column Model is the model name, #Parameters indicates the number of model parameters, Context Window represents the size of the context window, and Training Strategy refers to the training strategy used.
Table 1: Evaluation LLM overview

Model          | #Parameters    | Context Window | Training Strategy
CODEGEN-MONO   | 350M/2B/6B/16B | 2048           | Unsupervised CLM
CODEGEN-MULTI  | 350M/2B/6B/16B | 2048           | Unsupervised CLM
Codex          | 12B            | 4096           | Unsupervised CLM
ChatGPT        | ∼175B          | ∼4000          | Reinforcement Learning from Human Feedback + CLM

[Figure 2: Example bug (bitcount) in both Python (bitcount.py) and Java (bitcount.java) in QuixBugs, with the fixed line highlighted, along with the testcases.]

CODEGEN Nijkamp et al. (2022). A family of autoregressive LLMs trained using the Causal Language Modeling (CLM) objective (next-token prediction), ranging from 350M to 16B parameters. CODEGEN is first trained on the open-source ThePile Gao et al. (2020), containing 22 diverse text-based datasets. The models are then trained on BigQuery, a dataset of open-source code from 6 programming languages.
We refer to these models (trained on ThePile then BigQuery) as CODEGEN-MULTI. CODEGEN-MULTI is then further trained on a dataset containing large amounts of Python GitHub code to produce CODEGEN-MONO. In our experiments, we use CODEGEN-MONO for repair benchmarks in Python and CODEGEN-MULTI for repair benchmarks in other programming languages, and refer to both as CODEGEN for simplicity.

Codex Chen et al. (2021). A programming-language-focused autoregressive model based on the GPT-3 architecture Brown et al. (2020). Codex is first initialized with GPT-3 weights from training on a natural language corpus and then fine-tuned using next-token prediction on a large dataset of code files. While Codex also has a version that can take in suffix tokens (i.e., fill in code in the middle), in our experiments we only use Codex by providing the prefix context.

ChatGPT Schulman et al. (2022). A conversation-oriented LLM first initialized from the GPT-3.5 model and then fine-tuned using Reinforcement Learning from Human Feedback (RLHF) Ziegler et al. (2019). ChatGPT is first fine-tuned with supervised learning, where humans provide example responses to prompts in the dataset. Using this fine-tuned model, a reward model is then trained by sampling multiple outputs of the model for a given prompt and again using humans to rank the outputs.
The reward model is used in the reinforcement learning step, where Proximal Policy Optimization Schulman et al. (2017) is used to fine-tune ChatGPT. Different from Codex and CODEGEN, ChatGPT, through its use of RLHF and its fine-tuning data, is designed for conversation and encourages a dialogue format. Note that much of the ChatGPT model detail is unknown to the public; therefore, we can only provide approximate values for the number of parameters [2] and the context window size OpenAI (2022) according to verified sources.

4.2 BENCHMARKS

We use the QuixBugs Lin et al. (2017) repair benchmark to evaluate our proposed conversational APR approach. QuixBugs has been widely used to evaluate many repair tools, including both learning-based approaches Ye et al. (2022); Zhu et al. (2021); Jiang et al. (2021); Drain et al. (2021) and LLM-for-APR approaches Xia & Zhang (2022); Xia et al. (2022); Kolak et al. (2022); Prenner et al. (2022).
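QuixBugs bugs are typically single-line algorithmic mistakes; its bitcount program, shown in Figure 2, uses the ^ operator where & is needed. A sketch of the fixed Python version, which counts set bits by repeatedly clearing the lowest one:

```python
def bitcount(n):
    """Count the set bits in n using Kernighan's trick:
    n & (n - 1) clears the lowest set bit on each iteration."""
    count = 0
    while n:
        n &= n - 1   # the fixed line; the QuixBugs bug uses ^= here
        count += 1
    return count

print(bitcount(127))  # 7
print(bitcount(128))  # 1
```

With the buggy `n ^= n - 1` the loop can fail to terminate (e.g. for n = 1, 1 ^ 0 = 1 forever), which the testcases expose.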
QuixBugs contains the same 40 bugs and their associated correct patches in both Python and Java. These bugs are self-contained functions based on classic algorithms, and usually only a single-line change is needed to fix the underlying bug. Each bug comes with a set of testcases which the buggy function fails to pass and which can be used to evaluate any candidate patch. Figure 2 shows an example bug for the bitcount function in QuixBugs in both Java and Python. The bug occurs inside the while loop, where the code incorrectly uses the ^ operator instead of the & operator. We also show the example testcases for bitcount, which contain example inputs and the expected outputs of the function.

[2] As ChatGPT is fine-tuned on GPT-3.5, we assume a similar number of parameters as GPT-3.5.

Out of the 40 bugs in QuixBugs, we filter out 10 bugs whose testcases are difficult to represent with our validation feedback prompt. For example, the testcases for detect cycle involve a graph as an input to the function. In total, we use 60 bugs (30 each for Java and Python) in our evaluation.

5 EXPERIMENTAL SETUP

In this section, we describe the key research questions that our evaluation seeks to answer, the evaluation metrics used, and the implementation details.

5.1 RESEARCH QUESTIONS

We aim to investigate the following research questions:

RQ1: What is the effectiveness of applying conversational APR?
RQ2: How do different components of conversational APR affect performance?
In RQ1, we first compare the performance of conversational APR against the baseline approach used in prior LLM-for-APR work, where patches are generated by continuously sampling from the same initial prompt. We further evaluate the scaling effect as we increase the size of the LLM, and investigate the difference in performance between different pre-training strategies (e.g., ChatGPT vs. Codex). In RQ2, we dive deeper into the different parameters of conversational APR. Specifically, we evaluate how the length of the conversation chain and different validation feedback prompts affect performance.

5.2 EVALUATION METRICS

Our evaluation metrics consist of the standard metrics used to evaluate APR tools: the number of plausible patches (patches which pass all the testcases) and the number of correct patches (patches which are semantically equivalent to the reference developer patch). Additionally, since we are sampling from LLMs, we also define tries as the number of samples needed to obtain a plausible/correct patch. This metric is useful when comparing two approaches/models that fix a similar number of bugs: the one with fewer tries is preferred, as we want to limit the number of times we have to sample the LLM.

5.3 IMPLEMENTATION

We implemented the LLM generation pipeline in Python using the Hugging Face HuggingFace implementation of the CODEGEN models. We access Codex through the OpenAI API by querying the code-davinci-002 engine. Since ChatGPT is not open-sourced and does not provide an official API endpoint (unlike Codex), we manually input the prompts and extract the outputs.
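The plausible-patch criterion above (passing every testcase) amounts to a simple validation harness. The sketch below checks a candidate Python function against input/output pairs; it is an assumption about how such validation can be wired up, not the paper's exact harness:

```python
def is_plausible(candidate, testcases):
    """A patch is plausible when it passes all testcases.
    `testcases` is a list of (args, expected_output) pairs."""
    for args, expected in testcases:
        try:
            if candidate(*args) != expected:
                return False
        except Exception:
            # A crashing patch also fails validation.
            return False
    return True

def sieve(max):
    primes = []
    for n in range(2, max + 1):
        if all(n % p for p in primes):
            primes.append(n)
    return primes

tests = [((2,), [2]), ((4,), [2, 3])]
print(is_plausible(sieve, tests))  # True
```

Checking semantic equivalence to the developer patch (a correct patch) cannot be automated this way and is typically done by manual inspection.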
For all models apart from ChatGPT, we use a default generation setting of nucleus sampling with top p = 0.95, temperature = 1, and 50 samples per bug with a maximum chain length of 3. We generate and evaluate patches on a 32-core workstation with an AMD Ryzen Threadripper PRO 5975WX CPU, 256 GB RAM, and 3 NVIDIA GeForce RTX 3090 GPUs, running Ubuntu 22.04.1 LTS.

6 RESULTS

6.1 RQ1: CONVERSATIONAL APR EFFECTIVENESS

We first evaluate the effectiveness of applying conversational APR using validation feedback compared to the prior method of sampling from the same prompt without any feedback. Table 2 shows the results on QuixBugs-Python and QuixBugs-Java.
We observe that by applying our feedback-driven conversational APR, we improve the number of correct and plausible patches for all unsupervisedly trained LLMs across all model sizes. Additionally, conversational APR also decreases the number of tries (number of samples) needed before obtaining the first plausible/correct patch. Compared to the traditional sampling method of producing patches, conversational APR leverages the model's understanding of natural language feedback to indicate why a patch is incorrect. LLMs can use this validation feedback to generate new patches that try to pass the previously failed testcases. Furthermore, conversational APR also helps to reduce the number of repeated patches that come from sampling with the same prompt over and over again. By using the large context size of many state-of-the-art LLMs, conversational APR can use recently generated incorrect patches as previous context to prompt the model to generate a new patch that is different.

Table 2: Conversational APR performance on both QuixBugs-Python and QuixBugs-Java compared with the baseline sampling method. #c/#p refers to the number of correct / plausible patches.

                       QuixBugs-Python                    QuixBugs-Java
                 Sampling       Conversational      Sampling       Conversational
Models           #c/#p  #tries  #c/#p  #tries       #c/#p  #tries  #c/#p  #tries
CODEGEN-350M     7/10   20.5    8/11   18.4         4/4    24.2    5/5    23.5
CODEGEN-2B       22/23  16.6    25/26  14.3         12/14  18.8    15/16  16.4
CODEGEN-6B       22/24  14.0    27/28  12.1         18/20  19.8    22/22  13.5
CODEGEN-16B      29/29  5.6     30/30  4.8          24/25  14.5    28/29  13.2
Codex            29/30  4.6     30/30  3.8          28/30  7.2     29/30  5.7

Table 3: ChatGPT and Codex comparison on QuixBugs-Python and QuixBugs-Java, where each cell indicates the number of correct / plausible patches.

                 QuixBugs-Python                        QuixBugs-Java
Models     one try   two tries   three tries     one try   two tries   three tries
Codex      16/16     21/21       24/24           11/12     18/19       21/22
ChatGPT    24/24     27/28       28/29           24/24     26/26       26/26
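The feedback-driven loop described above can be sketched as follows. This is a minimal illustration under our own assumptions, not the paper's implementation: `generate_patch` stands in for the LLM call, `run_tests` for the test harness, and the prompt wording is hypothetical.

```python
def conversational_repair(buggy_code, tests, generate_patch, run_tests,
                          max_chain_length=3, max_tries=50):
    """Alternate patch generation and validation: each failed attempt and its
    validation feedback are concatenated onto the prompt for the next attempt.
    A conversation chain is restarted from the original prompt once it reaches
    `max_chain_length` turns."""
    initial_prompt = f"Fix the bug in the following function:\n{buggy_code}\n"
    prompt, chain_turns = initial_prompt, 0
    for _ in range(max_tries):
        patch = generate_patch(prompt)               # one turn: generate...
        passed, feedback = run_tests(patch, tests)   # ...and validate
        if passed:
            return patch  # plausible patch: passes all testcases
        chain_turns += 1
        if chain_turns >= max_chain_length:
            # Chain reached its maximum length: restart from the initial prompt.
            prompt, chain_turns = initial_prompt, 0
        else:
            # Concatenate the incorrect patch and its validation feedback.
            prompt += (f"\nThe following patch is incorrect:\n{patch}\n"
                       f"{feedback}\nPlease provide a different patch:\n")
    return None  # no plausible patch within the sampling budget
```

Each failed turn grows the context with the incorrect patch and its feedback, and the chain restarts once it reaches the maximum chain length, mirroring the budget of 50 samples per bug used in the evaluation.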
ChatGPT evaluation. We now evaluate the performance of ChatGPT when using conversational APR. Due to the requirement of manually inputting prompts and extracting outputs from ChatGPT, we only use a single conversation chain with at most 3 tries (i.e., a maximum chain length of 3). We compare against the best-performing LLM from the previous results, Codex, under the same setting in Table 3. We observe that, compared to Codex, which is trained in an unsupervised manner, ChatGPT, which is fine-tuned using Reinforcement Learning from Human Feedback (RLHF), performs much better across the two repair datasets. This improvement can be partially attributed to the increase in model parameter size, but we believe it is also due to the dialogue-based fine-tuning dataset used in ChatGPT.
Conversational APR relies on the model understanding the validation feedback to condition future generations toward a patch that passes the testcases. A more dialogue-oriented model such as ChatGPT is well suited for this task, as both its training data and training algorithm contain feedback-driven loops. As ChatGPT and other dialogue-based LLMs become more popular, we believe conversational APR can be further improved through greater usage of these LLMs.

6.2 RQ2: COMPONENT ANALYSIS

Maximum chain length. We first investigate the effect that different maximum chain lengths have on repair performance. Figure 3 shows the number of plausible patches when we vary the maximum chain length from 1 to 6 for the 4 CODEGEN models. Recall from Section 3 that chain length refers to the number of turns (each turn consists of generating and validating a new patch) in a conversation chain. A maximum chain length of 1 is the simple baseline of sampling from the same initial prompt (used in prior LLM for APR tools). As we increase the chain length, the model has to take in more and more previous context in the form of prior generations and feedback. We observe that performance increases as we start from a small chain length, reaches its maximum around 3 or 4, and then decreases as the chain length continues to increase. The decrease in the number of plausible patches at high chain lengths occurs because the context may become too much for the model to handle, since it can include multiple previously failed patches. We also observe that this decrease is more significant in smaller models, whereas larger models are less affected by longer chain lengths, showing the ability of larger models to better capture long-term context dependencies. This shows that the optimal chain length to use for conversational APR can depend on the individual LLM used.

Figure 3: Number of plausible patches for the 4 different CODEGEN models as we vary the maximum chain length on QuixBugs-Python.

Feedback prompting style. We now evaluate the effect of the feedback prompting style used in our conversational APR. Table 4 shows the number of plausible patches using different validation prompts on QuixBugs-Python. Column no testcase does not include any testcase feedback (it only states that the patch is not correct), natural language describes the failing testcase (e.g., when the input is 2, the patch incorrectly returns [] but it should return [2]), and functional is the default prompting style discussed in Section 3. We observe that the prompting style does have an effect on the final performance of conversational APR. Starting from the no testcase prompt, we can improve performance by adding specific testcase feedback on top of telling the LLM that the patch is not correct. We also observe that the functional prompting style, which uses the buggy/patched function name and passing parameters (see Figure 1), performs the best. The functional prompting style conveys the failing testcase in a more concise and natural way by phrasing the relationship between the testcase input and expected output as a function call.

Table 4: Prompting style evaluation on QuixBugs-Python, with each cell showing the number of plausible patches.

Models           no testcase   natural language   functional
CODEGEN-350M     9             11                 11
CODEGEN-2B       20            25                 26
CODEGEN-6B       24            27                 28
CODEGEN-16B      27            30                 30
Codex            29            30                 30
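The three feedback prompting styles compared above can be sketched as simple prompt builders. The exact wording below is our own approximation of the styles, not the paper's templates:

```python
def make_feedback(style, func_name, test_input, expected, actual):
    """Build validation feedback for a failed testcase in one of the three
    prompting styles: no testcase info, a natural-language description, or
    the failing testcase phrased as a function call."""
    if style == "no testcase":
        # Only states that the patch is incorrect, with no testcase details.
        return "The patch is not correct."
    if style == "natural language":
        # Describes the failing testcase in prose.
        return (f"The patch is not correct. When the input is {test_input!r}, "
                f"the patch incorrectly returns {actual!r} but it should "
                f"return {expected!r}.")
    if style == "functional":
        # Phrases the input/expected-output relationship as a function call.
        return (f"The patch is not correct. {func_name}({test_input!r}) "
                f"returns {actual!r} but should return {expected!r}.")
    raise ValueError(f"unknown prompting style: {style}")
```

The functional style packs the same information as the natural-language style into a shorter, code-like form, which is consistent with it performing best in Table 4.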
7 CONCLUSION

We propose conversational APR, a new paradigm for program repair that interleaves patch generation with validation to provide immediate feedback for LLMs to better prompt future generated patches. Compared to previous LLM for APR approaches that only sample from the same input, conversational APR iteratively builds the input by concatenating previously incorrect patches with validation feedback. This allows the model to avoid generating previously incorrect patches and also to understand the semantic meaning of the function through validation feedback. Our evaluation on 10 different LLMs shows the improvement of conversational APR over the baseline sampling method used in prior LLM for APR tools. Furthermore, we demonstrate for the first time the promising future of applying ChatGPT, a conversational/dialogue-driven LLM, to conversational APR, or to APR in general.

REFERENCES

Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. Program synthesis with large language models, 2021. arXiv:2108.07732.

BigQuery. BigQuery GitHub repos, 2022. https://console.cloud.google.com/marketplace/details/github/github-repos.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020. arXiv:2005.14165.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code, 2021. arXiv:2107.03374.

Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks, 2022. arXiv:2211.12588.

Dawn Drain, Colin B. Clement, Guillermo Serrato, and Neel Sundaresan. DeepDebug: Fixing Python bugs using stack traces, backtranslation, and code skeletons, 2021. arXiv:2105.09352.

Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. CodeBERT: A pre-trained model for programming and natural languages, 2020. arXiv:2002.08155.

Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The Pile: An 800GB dataset of diverse text for language modeling, 2020. arXiv:2101.00027.

Luca Gazzola, Daniela Micucci, and Leonardo Mariani.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Automatic software repair: A survey.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' IEEE Transactions on Software Engineering, 45(1):34–67, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Ali Ghanbari, Samuel Benton, and Lingming Zhang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Practical program repair via bytecode muta- tion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' In Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' 19–30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' ACM, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' ISBN 978-1-4503-6224-5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Mary Hanbury.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Investigators have reportedly found more evidence that could con- nect the ethiopian boeing 737 max crash to a deadly accident five months be- fore.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Business Insider, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content='businessinsider.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content='com/ potential-link-between-ethiopian-boeing-737-max-crash-lion-air-mishap-2019-3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' HuggingFace.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Hugging face, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' https://huggingface.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content='co.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Nan Jiang, Thibaud Lutellier, and Lin Tan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Cure: Code-aware neural machine translation for auto- matic program repair.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), May 2021.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Sophia D Kolak, Ruben Martins, Claire Le Goues, and Vincent Josua Hellendoorn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Patch generation with language models: Feasibility and scaling behavior.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' In Deep Learning for Code Workshop, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Derrick Lin, James Koppel, Angela Chen, and Armando Solar-Lezama.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Quixbugs: A multi- lingual program repair benchmark set based on the quixey challenge.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' SPLASH Companion 2017, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' 55–56, New York, NY, USA, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Association for Computing Machinery.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' ISBN 9781450355148.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Kui Liu, Anil Koyuncu, Dongsun Kim, and Tegawend´e F.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Bissyand´e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Tbar: Revisiting template- based automated program repair.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' In Proceedings of the 28th ACM SIGSOFT International Sym- posium on Software Testing and Analysis, ISSTA 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' 31–42, New York, NY, USA, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' ACM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' ISBN 9781450362245.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' 10 Yiling Lou, Ali Ghanbari, Xia Li, Lingming Zhang, Haotian Zhang, Dan Hao, and Lu Zhang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Can automated program repair refine fault localization?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' a unified debugging approach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' In Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' 75–87, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Scott Matteson.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Report: Software failure caused $1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content='7 trillion in financial losses in 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' TechRepublic, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content='techrepublic.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content='com/article/ report-software-failure-caused-1-7-trillion-in-financial-losses-in-2017/.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Codegen: An open large language model for code with multi-turn program synthesis, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' arXiv:2203.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content='13474.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Au- gustus Odena.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Show your work: Scratchpads for intermediate computation with language models, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' arXiv:2112.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content='00114.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Devon H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' O’Dell.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' The debugging mindset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' acmqueue, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' https://queue.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content='acm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content='org/ detail.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content='cfm?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content='id=3068754/.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' OpenAI.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Does chatgpt remember what happened earlier in the conver- sation?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' https://help.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content='openai.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content='com/en/articles/ 6787051-does-chatgpt-remember-what-happened-earlier-in-the-conversation/.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Julian Aron Prenner, Hlib Babii, and Romain Robbes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Can openai’s codex fix bugs?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' : An evaluation on quixbugs.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' In 2022 IEEE/ACM International Workshop on Automated Program Repair (APR), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' 69–75, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Proximal policy optimization algorithms, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' arXiv:1707.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content='06347.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' John Schulman,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Barret Zoph,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Jacob Hilton Christina Kim,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Jacob Menick,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Jiayi Weng,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Juan Fe- lipe Ceron Uribe,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Liam Fedus,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Luke Metz,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Michael Pokorny,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Rapha Gontijo Lopes,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Shengjia Zhao,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Arun Vijayvergiya,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Eric Sigler,' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Adam Perelman,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Chelsea Voss,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Mike Heaton,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Joel Parish,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Dave Cummings,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Rajeev Nayak,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Valerie Balcom,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' David Schnurr,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Tomer Kaftan,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Chris Hal- lacy,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Nicholas Turley,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Noah Deutsch,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Vik Goel,' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Jonathan Ward,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Aris Konstantinidis,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Woj- ciech Zaremba,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Long Ouyang,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Leonard Bogdonoff,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Joshua Gross,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' David Medina,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Sarah Yoo,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Teddy Lee,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Ryan Lowe,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Dan Mossing,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Joost Huizinga,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Roger Jiang,' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Carroll Wainwright,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Diogo Almeida,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Steph Lin,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Marvin Zhang,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Kai Xiao,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Katarina Slama,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Steven Bills,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Alex Gray,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Jan Leike,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Jakub Pachocki,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Phil Tillet,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Shantanu Jain,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Greg Brockman,' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' and Nick Ryder.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Chatgpt: Optimiz- ing language models for dialogue.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' https://openai.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content='com/blog/chatgpt/.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Chain-of-thought prompting elicits reasoning in large language models, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' arXiv:2201.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content='11903.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Chunqiu Steven Xia and Lingming Zhang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Less training, more repairing please: Revisiting auto- mated program repair via zero-shot learning, 2022.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' arXiv:2207.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content='08281.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Chunqiu Steven Xia, Yuxiang Wei, and Lingming Zhang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Practical program repair in the era of large pre-trained language models, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' arXiv:2210.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content='14179.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' He Ye, Matias Martinez, and Martin Monperrus.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Neural program repair with execution-based back- propagation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' In 2022 IEEE/ACM 44th International Conference on Software Engineering (ICSE), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' 1506–1518, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Qihao Zhu, Zeyu Sun, Yuan-an Xiao, Wenjie Zhang, Kang Yuan, Yingfei Xiong, and Lu Zhang.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' A syntax-guided edit decoder for neural program repair.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' In Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' 341–353, New York, NY, USA, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' ACM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' ISBN 9781450385626.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Daniel M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' Fine-tuning language models from human preferences, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content=' arXiv:1909.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dFQT4oBgHgl3EQfGzVw/content/2301.13246v1.pdf'} +page_content='08593.' 