arxiv_id (string, 9-12 chars) | paper (string, 2.65k-90.8k chars) | targets (sequence, length 4) | targets_idx (sequence, length 4) | cite_corpus_id_map (string, 108-31.6k chars)
---|---|---|---|---
2405.08839 | <|paper_start|> Title: PromptMind Team at EHRSQL-2024: Improving Reliability of SQL Generation using Ensemble LLMs
Abstract: PromptMind Team at EHRSQL-2024: Improving Reliability of SQL Generation using Ensemble LLMs: This paper presents our approach to the EHRSQL-2024 shared task, which aims to develop a reliable Text-to-SQL system for electronic health records. We propose two approaches that leverage large language models (LLMs) for prompting and fine-tuning to generate EHRSQL queries. In both techniques, we concentrate on bridging the gap between the real-world knowledge on which LLMs are trained and the domain specific knowledge required for the task. The paper provides the results of each approach individually, demonstrating that they achieve high execution accuracy. Additionally, we show that an ensemble approach further enhances generation reliability by reducing errors. This approach secured us 2nd place in the shared task competition. The methodologies outlined in this paper are designed to be transferable to domain-specific Text-to-SQL problems that emphasize both accuracy and reliability.
Introduction
Text-to-SQL technology translates natural language questions into executable SQL queries that can answer the questions using a provided database. A robust Text-to-SQL system could significantly increase productivity for anyone using databases by providing an easy-to-use natural language interface and reducing the need for expertise in different SQL dialects. These systems are particularly valuable in domains where SQL expertise is not part of users' core skill set, such as healthcare, where professionals like doctors, nurses, and hospital administrators spend a significant amount of time interacting with patient health records stored in databases.
In the era of Large Language Models (LLMs), the field of Text-to-SQL is gaining prominence as these models demonstrate impressive text generation capabilities without the need for fine-tuning. Introduced in 2017, WikiSQL <|cite_start|> (Reference: Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning: A significant amount of the world's knowledge is stored in relational databases. However, the ability for users to retrieve facts from a database is limited due to a lack of understanding of query languages such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model leverages the structure of SQL queries to significantly reduce the output space of generated queries. Moreover, we use rewards from in-the-loop query execution over the database to learn a policy to generate unordered parts of the query, which we show are less suitable for optimization via cross entropy loss. In addition, we will publish WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables from Wikipedia. This dataset is required to train our model and is an order of magnitude larger than comparable datasets. By applying policy-based reinforcement learning with a query execution environment to WikiSQL, our model Seq2SQL outperforms attentional sequence to sequence models, improving execution accuracy from 35.9% to 59.4% and logical form accuracy from 23.4% to 48.3%.) <|cite_end|> remains one of the largest datasets for Text-to-SQL and primarily caters to relatively simple queries. Subsequently, the SPIDER <|cite_start|> (Reference: Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task: We present Spider, a large-scale, complex and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 college students. It consists of 10,181 questions and 5,693 unique complex SQL queries on 200 databases with multiple tables, covering 138 different domains. We define a new complex and cross-domain semantic parsing and text-to-SQL task where different complex SQL queries and databases appear in train and test sets. In this way, the task requires the model to generalize well to both new SQL queries and new database schemas. Spider is distinct from most of the previous semantic parsing tasks because they all use a single database and the exact same programs in the train set and the test set. We experiment with various state-of-the-art models and the best model achieves only 12.4% exact matching accuracy on a database split setting. This shows that Spider presents a strong challenge for future research. Our dataset and task are publicly available at https://yale-lily.github.io/spider) <|cite_end|> and MULTI-SPIDER <|cite_start|> (Reference: MultiSpider: Towards Benchmarking Multilingual Text-to-SQL Semantic Parsing: Text-to-SQL semantic parsing is an important NLP task, which greatly facilitates the interaction between users and the database and becomes the key component in many human-computer interaction systems. Much recent progress in text-to-SQL has been driven by large-scale datasets, but most of them are centered on English. In this work, we present MultiSpider, the largest multilingual text-to-SQL dataset which covers seven languages (English, German, French, Spanish, Japanese, Chinese, and Vietnamese). 
Upon MultiSpider, we further identify the lexical and structural challenges of text-to-SQL (caused by specific language properties and dialect sayings) and their intensity across different languages. Experimental results under three typical settings (zero-shot, monolingual and multilingual) reveal a 6.1% absolute drop in accuracy in non-English languages. Qualitative and quantitative analyses are conducted to understand the reason for the performance drop of each language. Besides the dataset, we also propose a simple schema augmentation framework SAVe (Schema-Augmentation-with-Verification), which significantly boosts the overall performance by about 1.8% and closes the 29.5% performance gap across languages.) <|cite_end|> datasets were developed. These datasets posed challenges with complex queries that required an understanding of the database schema and support for various languages. BIRD-Bench was introduced to bridge the gap between research and real-world applications by providing large and imperfect databases <|cite_start|> (Reference: Can LLM Already Serve as A Database Interface? A BIg Bench for Large-Scale Database Grounded Text-to-SQLs: Text-to-SQL parsing, which aims at converting natural language instructions into executable SQLs, has gained increasing attention in recent years. In particular, Codex and ChatGPT have shown impressive results in this task. However, most of the prevalent benchmarks, i.e., Spider, and WikiSQL, focus on database schema with few rows of database contents leaving the gap between academic study and real-world applications. To mitigate this gap, we present Bird, a big benchmark for large-scale database grounded in text-to-SQL tasks, containing 12,751 pairs of text-to-SQL data and 95 databases with a total size of 33.4 GB, spanning 37 professional domains. Our emphasis on database values highlights the new challenges of dirty database contents, external knowledge between NL questions and database contents, and SQL efficiency, particularly in the context of massive databases. To solve these problems, text-to-SQL models must feature database value comprehension in addition to semantic parsing. The experimental results demonstrate the significance of database values in generating accurate text-to-SQLs for big databases. Furthermore, even the most effective text-to-SQL models, i.e. ChatGPT, only achieves 40.08% in execution accuracy, which is still far from the human result of 92.96%, proving that challenges still stand. Besides, we also provide an efficiency analysis to offer insights into generating text-to-efficient-SQLs that are beneficial to industries. We believe that BIRD will contribute to advancing real-world applications of text-to-SQL research. The leaderboard and source code are available: https://bird-bench.github.io/.) <|cite_end|>. These datasets are good representations of typical Text-to-SQL tasks. However, the healthcare domain differs from these generic datasets for the following reasons:
\begin{itemize}
\item The questions asked by users may be highly specialized and specific to the medical field.
\item To answer such questions, systems must also possess an understanding of clinical terminology.
\item Reliability is of paramount importance as errors can have serious consequences.
\end{itemize}
These differences present unique challenges for developing a reliable Text-to-SQL system for the healthcare domain. EHRSQL is the first dataset that closely captures the needs of hospital staff and serves appropriately for building and testing Text-to-SQL systems in the healthcare domain <|cite_start|> (Reference: EHRSQL: A Practical Text-to-SQL Benchmark for Electronic Health Records: We present a new text-to-SQL dataset for electronic health records (EHRs). The utterances were collected from 222 hospital staff members, including physicians, nurses, and insurance review and health records teams. To construct the QA dataset on structured EHR data, we conducted a poll at a university hospital and used the responses to create seed questions. We then manually linked these questions to two open-source EHR databases, MIMIC-III and eICU, and included various time expressions and held-out unanswerable questions in the dataset, which were also collected from the poll. Our dataset poses a unique set of challenges: the model needs to 1) generate SQL queries that reflect a wide range of needs in the hospital, including simple retrieval and complex operations such as calculating survival rate, 2) understand various time expressions to answer time-sensitive questions in healthcare, and 3) distinguish whether a given question is answerable or unanswerable. We believe our dataset, EHRSQL, can serve as a practical benchmark for developing and assessing QA models on structured EHR data and take a step further towards bridging the gap between text-to-SQL research and its real-life deployment in healthcare. EHRSQL is available at https://github.com/glee4810/EHRSQL.) <|cite_end|>.
Our solution aims to create a Text-to-SQL system that emphasizes both reliability and accuracy. To achieve this, we divide the task into two phases:
\begin{itemize}
\setlength\itemsep{0em}
\item SQL Generation
\item SQL Validation
\end{itemize}
In the first stage, we focus on SQL generation, employing techniques that include prompting and fine-tuning of LLMs. In both approaches, we use the same prompting strategy to provide the LLM with database information and question-related context. Specifically, we use table schemas combined with sample column values as the database context, and similar questions from the training data as the task context. To identify similar questions in the training data, we employ an embedding-based similarity technique. Our goal is to maximize the LLM's ability to generate highly accurate SQL statements using this approach.
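To make the retrieval step concrete, the sketch below shows one way such a prompt could be assembled: training questions are embedded, the most similar ones are retrieved by cosine similarity, and the schema (with sample column values) plus the retrieved question/SQL pairs form the context. The encoder choice, helper names, and prompt layout are illustrative assumptions, not the authors' exact implementation.
\begin{verbatim}
# Illustrative sketch only: embedding-based retrieval of similar training
# questions and assembly of a Text-to-SQL prompt. Model name and layout are
# assumptions, not the authors' implementation.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works

def top_k_similar(question, train_questions, k=5):
    """Return the k training questions most similar to `question`."""
    # In practice the training embeddings would be precomputed and cached.
    q_emb = encoder.encode([question], normalize_embeddings=True)
    t_emb = encoder.encode(train_questions, normalize_embeddings=True)
    scores = (t_emb @ q_emb.T).ravel()          # cosine similarity
    return [train_questions[i] for i in np.argsort(-scores)[:k]]

def build_prompt(question, schema_with_samples, train_pairs):
    """Combine database context (schemas + sample values) and task context
    (similar question/SQL pairs) into a single prompt."""
    similar = top_k_similar(question, [q for q, _ in train_pairs])
    examples = "\n".join(f"Q: {q}\nSQL: {sql}"
                         for q, sql in train_pairs if q in similar)
    return ("### Database schema and sample column values\n"
            f"{schema_with_samples}\n\n"
            "### Similar solved examples\n"
            f"{examples}\n\n"
            f"### Question\n{question}\nSQL:")
\end{verbatim}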
There are several reasons why LLMs may fail to generate correct SQL for a given question. Some common reasons include:
\begin{itemize}
\setlength\itemsep{0em}
\item Misinterpretation of the question's intent
\item Incorrect assumptions or hallucinations about the database's tables or columns
\item Inaccuracies or hallucinations in the generated SQL query
\end{itemize}
Unlike many text generation tasks, Text-to-SQL tasks have a limited number of correct answers but potentially infinitely many incorrect ones. Motivated by this, we develop a second stage that evaluates the accuracy of the generated SQL. To perform this evaluation, we propose an approach that combines the results of multiple robust LLMs. Stronger LLMs often produce consistent outputs despite variations in temperature or other parameters, while smaller LLMs show lower consistency and accuracy. By leveraging the strengths of several robust LLMs, our approach minimizes the number of incorrect SQL queries and enhances the overall robustness and reliability of the Text-to-SQL system.
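As a rough illustration of how agreement between models could be used for validation, the sketch below executes each candidate query and keeps a candidate only when enough models agree on the execution result, abstaining otherwise. The database interface and the vote threshold are assumptions made for illustration; the paper's ensemble procedure is not necessarily implemented this way.
\begin{verbatim}
# Hedged sketch of ensemble validation by execution-result agreement.
import sqlite3
from collections import Counter

def execute_sql(db_path, sql):
    """Run a query; return its result set, or None if it fails to execute."""
    try:
        with sqlite3.connect(db_path) as conn:
            return tuple(map(tuple, conn.execute(sql).fetchall()))
    except sqlite3.Error:
        return None

def validate_by_agreement(db_path, candidate_sqls, min_votes=2):
    """Keep a candidate only if at least `min_votes` models agree on its
    execution result; abstain (return None) otherwise, since abstaining is
    preferable to returning a wrong query in a reliability-focused setting."""
    results = {sql: execute_sql(db_path, sql) for sql in candidate_sqls}
    votes = Counter(r for r in results.values() if r is not None)
    if not votes:
        return None
    best_result, count = votes.most_common(1)[0]
    if count < min_votes:
        return None
    return next(sql for sql, r in results.items() if r == best_result)
\end{verbatim}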
In the remainder of this paper, we discuss related work, introduce the EHRSQL-2024 task and dataset, and present our two-stage approach. We then provide the results of our experiments and conclude with a summary of our findings.
Related Work
Prior to the advent of LLMs, the primary focus of research in natural language processing involved refining specialized models using innovative strategies <|cite_start|> (Reference: RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers: When translating natural language questions into SQL queries to answer questions from a database, contemporary semantic parsing models struggle to generalize to unseen database schemas. The generalization challenge lies in (a) encoding the database relations in an accessible way for the semantic parser, and (b) modeling alignment between database columns and their mentions in a given query. We present a unified framework, based on the relation-aware self-attention mechanism, to address schema encoding, schema linking, and feature representation within a text-to-SQL encoder. On the challenging Spider dataset this framework boosts the exact match accuracy to 57.2%, surpassing its best counterparts by 8.7% absolute improvement. Further augmented with BERT, it achieves the new state-of-the-art performance of 65.6% on the Spider leaderboard. In addition, we observe qualitative improvements in the model’s understanding of schema linking and alignment. Our implementation will be open-sourced at https://github.com/Microsoft/rat-sql.) <|cite_end|>. Additionally, substantial efforts were devoted to developing sophisticated pre-training methodologies, such as those proposed by STAR <|cite_start|> (Reference: {{STAR: 焦虑测验研究协会(Society for Test Anxiety Research)于1987年6月25—27日在挪威的卑尔根大学召开了第八届国际大会。本届大会的主席由卑尔根大学心理系教授KnutA.Hagtvet 担任。在开幕式上,STAR 的主席、美国加利福尼亚大学(伯克利)心理学教授Martin V.Covington 与卑尔根大学校长Magne Lerheim 博士致了开幕词。大会共分) <|cite_end|>, and exploring decoding strategies, as exemplified by PICARD. However, these approaches typically require substantial computational resources and novel techniques.
Large Language Models (LLMs) have been trained extensively on textual data, which has equipped them with vast knowledge. As a result, they exhibit exceptional probabilistic reasoning abilities and can excel at various tasks even without explicit training. Zero-shot prompting techniques, when used with LLMs, have not only narrowed the performance gap on Text-to-SQL but have also surpassed specialized pre-trained or fine-tuned models. Several prompt techniques have been developed based on this zero-shot approach for Text-to-SQL tasks, leading to remarkable achievements on datasets such as SPIDER <|cite_start|> (Reference: C3: Zero-shot Text-to-SQL with ChatGPT: This paper proposes a ChatGPT-based zero-shot Text-to-SQL method, dubbed C3, which achieves 82.3\% in terms of execution accuracy on the holdout test set of Spider and becomes the state-of-the-art zero-shot Text-to-SQL method on the Spider Challenge. C3 consists of three key components: Clear Prompting (CP), Calibration with Hints (CH), and Consistent Output (CO), which are corresponding to the model input, model bias and model output respectively. It provides a systematic treatment for zero-shot Text-to-SQL. Extensive experiments have been conducted to verify the effectiveness and efficiency of our proposed method.) <|cite_end|>, <|cite_start|> (Reference: A comprehensive evaluation of ChatGPT's zero-shot Text-to-SQL capability: This paper presents the first comprehensive analysis of ChatGPT's Text-to-SQL ability. Given the recent emergence of large-scale conversational language model ChatGPT and its impressive capabilities in both conversational abilities and code generation, we sought to evaluate its Text-to-SQL performance. We conducted experiments on 12 benchmark datasets with different languages, settings, or scenarios, and the results demonstrate that ChatGPT has strong text-to-SQL abilities. Although there is still a gap from the current state-of-the-art (SOTA) model performance, considering that the experiment was conducted in a zero-shot scenario, ChatGPT's performance is still impressive. Notably, in the ADVETA (RPL) scenario, the zero-shot ChatGPT even outperforms the SOTA model that requires fine-tuning on the Spider dataset by 4.1\%, demonstrating its potential for use in practical applications. To support further research in related fields, we have made the data generated by ChatGPT publicly available at https://github.com/THU-BPM/chatgpt-sql.) <|cite_end|>. Zero-shot generation capabilities can be further enhanced through techniques like in-context learning (ICL) and few-shot prompting.
DIN-SQL <|cite_start|> (Reference: DIN-SQL: Decomposed In-Context Learning of Text-to-SQL with Self-Correction: There is currently a significant gap between the performance of fine-tuned models and prompting approaches using Large Language Models (LLMs) on the challenging task of text-to-SQL, as evaluated on datasets such as Spider. To improve the performance of LLMs in the reasoning process, we study how decomposing the task into smaller sub-tasks can be effective. In particular, we show that breaking down the generation problem into sub-problems and feeding the solutions of those sub-problems into LLMs can be an effective approach for significantly improving their performance. Our experiments with three LLMs show that this approach consistently improves their simple few-shot performance by roughly 10%, pushing the accuracy of LLMs towards SOTA or surpassing it. On the holdout test set of Spider, the SOTA, in terms of execution accuracy, was 79.9 and the new SOTA at the time of this writing using our approach is 85.3. Our approach with in-context learning beats many heavily fine-tuned models by at least 5%. Additionally, when evaluated on the BIRD benchmark, our approach achieved an execution accuracy of 55.9%, setting a new SOTA on its holdout test set.) <|cite_end|> adopts an in-context learning approach to break down complex SQL generation into manageable sub-tasks, leading to improved performance on intricate queries. Another technique, retrieval-augmented generation, provides relevant and helpful examples as a few-shot to guide SQL generation <|cite_start|> (Reference: Retrieval-augmented gpt-3.5-based text-to-sql framework with sample-aware prompting and dynamic revision chain: Text-to-SQL aims at generating SQL queries for the given natural language questions and thus helping users to query databases. Prompt learning with large language models (LLMs) has emerged as a recent approach, which designs prompts to lead LLMs to understand the input question and generate the corresponding SQL. However, it faces challenges with strict SQL syntax requirements. Existing work prompts the LLMs with a list of demonstration examples (i.e. question-SQL pairs) to generate SQL, but the fixed prompts can hardly handle the scenario where the semantic gap between the retrieved demonstration and the input question is large. In this paper, we propose a retrieval-augmented prompting method for a LLM-based Text-to-SQL framework, involving sample-aware prompting and a dynamic revision chain. Our approach incorporates sample-aware demonstrations, which include the composition of SQL operators and fine-grained information related to the given question. To retrieve questions sharing similar intents with input questions, we propose two strategies for assisting retrieval. Firstly, we leverage LLMs to simplify the original questions, unifying the syntax and thereby clarifying the users' intentions. To generate executable and accurate SQLs without human intervention, we design a dynamic revision chain which iteratively adapts fine-grained feedback from the previously generated SQL. Experimental results on three Text-to-SQL benchmarks demonstrate the superiority of our method over strong baseline models.) <|cite_end|>. These approaches have proven effective on general Text-to-SQL tasks but they have not yet been studied rigorously on domain-specific Text-to-SQL problems. 
Retrieval Augmented Fine-tuning (RAFT) introduces a novel fine-tuning technique that improves the in-domain performance of RAG while integrating domain-specific knowledge <|cite_start|> (Reference: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks: Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre-trained models with a differentiable access mechanism to explicit non-parametric memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) -- models which combine pre-trained parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages across the whole generated sequence, the other can use different passages per token. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline.) <|cite_end|>.
Through our work, we delve into the application of these techniques for the EHRSQL-2024 task. <|paper_end|> | [
"<|reference_start|> Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning: A significant amount of the world's knowledge is stored in relational databases. However, the ability for users to retrieve facts from a database is limited due to a lack of understanding of query languages such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model leverages the structure of SQL queries to significantly reduce the output space of generated queries. Moreover, we use rewards from in-the-loop query execution over the database to learn a policy to generate unordered parts of the query, which we show are less suitable for optimization via cross entropy loss. In addition, we will publish WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables from Wikipedia. This dataset is required to train our model and is an order of magnitude larger than comparable datasets. By applying policy-based reinforcement learning with a query execution environment to WikiSQL, our model Seq2SQL outperforms attentional sequence to sequence models, improving execution accuracy from 35.9% to 59.4% and logical form accuracy from 23.4% to 48.3%. <|reference_end|>",
"<|reference_start|> EHRSQL: A Practical Text-to-SQL Benchmark for Electronic Health Records: We present a new text-to-SQL dataset for electronic health records (EHRs). The utterances were collected from 222 hospital staff members, including physicians, nurses, and insurance review and health records teams. To construct the QA dataset on structured EHR data, we conducted a poll at a university hospital and used the responses to create seed questions. We then manually linked these questions to two open-source EHR databases, MIMIC-III and eICU, and included various time expressions and held-out unanswerable questions in the dataset, which were also collected from the poll. Our dataset poses a unique set of challenges: the model needs to 1) generate SQL queries that reflect a wide range of needs in the hospital, including simple retrieval and complex operations such as calculating survival rate, 2) understand various time expressions to answer time-sensitive questions in healthcare, and 3) distinguish whether a given question is answerable or unanswerable. We believe our dataset, EHRSQL, can serve as a practical benchmark for developing and assessing QA models on structured EHR data and take a step further towards bridging the gap between text-to-SQL research and its real-life deployment in healthcare. EHRSQL is available at https://github.com/glee4810/EHRSQL. <|reference_end|>",
"<|reference_start|> RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers: When translating natural language questions into SQL queries to answer questions from a database, contemporary semantic parsing models struggle to generalize to unseen database schemas. The generalization challenge lies in (a) encoding the database relations in an accessible way for the semantic parser, and (b) modeling alignment between database columns and their mentions in a given query. We present a unified framework, based on the relation-aware self-attention mechanism, to address schema encoding, schema linking, and feature representation within a text-to-SQL encoder. On the challenging Spider dataset this framework boosts the exact match accuracy to 57.2%, surpassing its best counterparts by 8.7% absolute improvement. Further augmented with BERT, it achieves the new state-of-the-art performance of 65.6% on the Spider leaderboard. In addition, we observe qualitative improvements in the model’s understanding of schema linking and alignment. Our implementation will be open-sourced at https://github.com/Microsoft/rat-sql. <|reference_end|>",
"<|reference_start|> {{STAR: 焦虑测验研究协会(Society for Test Anxiety Research)于1987年6月25—27日在挪威的卑尔根大学召开了第八届国际大会。本届大会的主席由卑尔根大学心理系教授KnutA.Hagtvet 担任。在开幕式上,STAR 的主席、美国加利福尼亚大学(伯克利)心理学教授Martin V.Covington 与卑尔根大学校长Magne Lerheim 博士致了开幕词。大会共分 <|reference_end|>"
] | [
0,
4,
5,
6
] | {"<|cite_11|>": "arxiv-133344", "<|cite_12|>": "arxiv-173798", "<|cite_13|>": "arxiv-471954", "<|cite_1|>": "arxiv-502274", "<|cite_2|>": "arxiv-475465", "<|cite_3|>": "ss-735584", "<|cite_4|>": "ss-944738", "<|cite_6|>": "arxiv-523229", "<|cite_7|>": "arxiv-491508", "<|cite_8|>": "arxiv-498879", "<|cite_9|>": "ss-1600919", "<|cite_10|>": "ss-759091"} |
2311.15838 | <|paper_start|> Title: Utilizing Explainability Techniques for Reinforcement Learning Model Assurance
Abstract: Utilizing Explainability Techniques for Reinforcement Learning Model Assurance: Explainable Reinforcement Learning (XRL) can provide transparency into the decision-making process of a Deep Reinforcement Learning (DRL) model and increase user trust and adoption in real-world use cases. By utilizing XRL techniques, researchers can identify potential vulnerabilities within a trained DRL model prior to deployment, therefore limiting the potential for mission failure or mistakes by the system. This paper introduces the ARLIN (Assured RL Model Interrogation) Toolkit, an open-source Python library that identifies potential vulnerabilities and critical points within trained DRL models through detailed, human-interpretable explainability outputs. To illustrate ARLIN's effectiveness, we provide explainability visualizations and vulnerability analysis for a publicly available DRL model. The open-source code repository is available for download at https://github.com/mitre/arlin.
Introduction
Over the last decade, reinforcement learning has increased in popularity due to its ability to achieve superhuman performance on a variety of classic board <|cite_start|> (Reference: Mastering the game of Go with deep neural networks and tree search: ) <|cite_end|> and video game <|cite_start|> (Reference: Playing Atari with Deep Reinforcement Learning: We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.) <|cite_end|> environments. This gain in popularity has sparked an interest in using DRL for both decision support and autonomous operation within safety-critical scenarios such as air-to-air combat <|cite_start|> (Reference: Hierarchical Reinforcement Learning for Air-to-Air Combat: Artificial Intelligence (AI) is becoming a critical component in the defense industry, as recently demonstrated by DARPA`s AlphaDogfight Trials (ADT). ADT sought to vet the feasibility of AI algorithms capable of piloting an F-16 in simulated air-to-air combat. As a participant in ADT, Lockheed Martin`s (LM) approach combines a hierarchical architecture with maximum-entropy reinforcement learning (RL), integrates expert knowledge through reward shaping, and supports modularity of policies. This approach achieved a $2^{nd}$ place finish in the final ADT event (among eight total competitors) and defeated a graduate of the US Air Force's (USAF) F-16 Weapons Instructor Course in match play.) <|cite_end|>, nuclear power plant optimization <|cite_start|> (Reference: Magnetic control of tokamak plasmas through deep reinforcement learning: ) <|cite_end|>, and ballistic missile guidance <|cite_start|> (Reference: Terminal Adaptive Guidance for Autonomous Hypersonic Strike Weapons via Reinforcement Learning: An adaptive guidance system suitable for the terminal phase trajectory of a hypersonic strike weapon is optimized using reinforcement meta learning. The guidance system maps observations directly to commanded bank angle, angle of attack, and sideslip angle rates. Importantly, the observations are directly measurable from radar seeker outputs with minimal processing. The optimization framework implements a shaping reward that minimizes the line of sight rotation rate, with a terminal reward given if the agent satisfies path constraints and meets terminal accuracy and speed criteria. We show that the guidance system can adapt to off-nominal flight conditions including perturbation of aerodynamic coefficient parameters, actuator failure scenarios, sensor scale factor errors, and actuator lag, while satisfying heating rate, dynamic pressure, and load path constraints, as well as a minimum impact speed constraint. We demonstrate precision strike capability against a maneuvering ground target and the ability to divert to a new target, the latter being important to maximize strike effectiveness for a group of hypersonic strike weapons. 
Moreover, we demonstrate a threat evasion strategy against interceptors with limited midcourse correction capability, where the hypersonic strike weapon implements multiple diverts to alternate targets, with the last divert to the actual target. Finally, we include preliminary results for an integrated guidance and control system in a six degrees-of-freedom environment.) <|cite_end|>. These use-cases are considered high-risk as even small mistakes can result in large losses of monetary value, equipment, and life. Before DRL models can safely be deployed within real-world safety critical environments, their associated vulnerabilities need to be identified and understood so effective training enhancements and verification guardrails can be implemented.
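To give a rough, library-agnostic picture of the kind of explainability output discussed for the toolkit introduced in the next paragraph, the following sketch clusters latent states collected from policy rollouts and draws the transitions between clusters as a graph. It uses only scikit-learn, networkx, and matplotlib; it is not ARLIN's actual API, and all names and parameters are illustrative assumptions.
\begin{verbatim}
# Generic illustration of a cluster-level policy transition graph.
# This is NOT the ARLIN API; names and parameters are assumptions.
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
from sklearn.cluster import KMeans

def plot_policy_graph(latent_states, n_clusters=5):
    """Cluster latent activations from a rollout and plot the transitions
    between clusters as a directed, weighted graph."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(latent_states)
    graph = nx.DiGraph()
    for a, b in zip(labels[:-1], labels[1:]):   # consecutive states in the rollout
        if a != b:
            w = graph.get_edge_data(a, b, {"weight": 0})["weight"]
            graph.add_edge(a, b, weight=w + 1)
    nx.draw_networkx(graph, with_labels=True, node_color="lightblue")
    plt.title("Cluster-level transition graph of the trained policy")
    plt.show()

# Example call with placeholder data (real usage would pass latent vectors
# collected from the trained model's rollouts):
# plot_policy_graph(np.random.rand(500, 32))
\end{verbatim}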
In this paper, we present the ARLIN Toolkit, an open-source research library written in Python that provides explainability outputs and vulnerability detection for DRL models, specifically designed to increase model assurance and identify potential points of failure within a trained model. To our knowledge, ARLIN is the first open-sourced Python toolkit focused on utilizing explainability techniques to assure RL models prior to deployment. ARLIN utilizes \textit{matplotlib} <|cite_start|> (Reference: Matplotlib: A 2d Graphics Environment: Matplotlib is a 2D graphics package used for Python for application development, interactive scripting,and publication-quality image generation across user interfaces and operating systems) <|cite_end|> and \textit{networkx} <|cite_start|> (Reference: Exploring Network Structure, Dynamics, and Function using NetworkX: NetworkX is a Python language package for exploration and analysis of networks and network algorithms. The core package provides data structures for representing many types of networks, or graphs, including simple graphs, directed graphs, and graphs with parallel edges and self-loops. The nodes in NetworkX graphs can be any (hashable) Python object and edges can contain arbitrary data; this flexibility makes NetworkX ideal for representing networks found in many different scientific fields. In addition to the basic data structures many graph algorithms are implemented for calculating network properties and structure measures: shortest paths, betweenness centrality, clustering, and degree distribution and many more. NetworkX can read and write various graph formats for easy exchange with existing data, and provides generators for many classic graphs and popular graph models, such as the Erdos-Renyi, Small World, and Barabasi-Albert models. The ease-of-use and flexibility of the Python programming language together with connection to the SciPy tools make NetworkX a powerful tool for scientific computations. We discuss some of our recent work studying synchronization of coupled oscillators to demonstrate how NetworkX enables research in the field of computational networks.) <|cite_end|> to visualize a trained DRL model's decision making process and provide meaningful vulnerability identification and analysis to researchers. The modular library is structured to support custom architectures, algorithms, DRL frameworks, and analytics; and provides a well-documented and tested API for XRL research development and model assurance. The ARLIN repository is available for download at \texttt{https://github.com/mitre/arlin}. <|paper_end|> | [
"<|reference_start|> Mastering the game of Go with deep neural networks and tree search: <|reference_end|>",
"<|reference_start|> Magnetic control of tokamak plasmas through deep reinforcement learning: <|reference_end|>",
"<|reference_start|> Terminal Adaptive Guidance for Autonomous Hypersonic Strike Weapons via Reinforcement Learning: An adaptive guidance system suitable for the terminal phase trajectory of a hypersonic strike weapon is optimized using reinforcement meta learning. The guidance system maps observations directly to commanded bank angle, angle of attack, and sideslip angle rates. Importantly, the observations are directly measurable from radar seeker outputs with minimal processing. The optimization framework implements a shaping reward that minimizes the line of sight rotation rate, with a terminal reward given if the agent satisfies path constraints and meets terminal accuracy and speed criteria. We show that the guidance system can adapt to off-nominal flight conditions including perturbation of aerodynamic coefficient parameters, actuator failure scenarios, sensor scale factor errors, and actuator lag, while satisfying heating rate, dynamic pressure, and load path constraints, as well as a minimum impact speed constraint. We demonstrate precision strike capability against a maneuvering ground target and the ability to divert to a new target, the latter being important to maximize strike effectiveness for a group of hypersonic strike weapons. Moreover, we demonstrate a threat evasion strategy against interceptors with limited midcourse correction capability, where the hypersonic strike weapon implements multiple diverts to alternate targets, with the last divert to the actual target. Finally, we include preliminary results for an integrated guidance and control system in a six degrees-of-freedom environment. <|reference_end|>",
"<|reference_start|> Exploring Network Structure, Dynamics, and Function using NetworkX: NetworkX is a Python language package for exploration and analysis of networks and network algorithms. The core package provides data structures for representing many types of networks, or graphs, including simple graphs, directed graphs, and graphs with parallel edges and self-loops. The nodes in NetworkX graphs can be any (hashable) Python object and edges can contain arbitrary data; this flexibility makes NetworkX ideal for representing networks found in many different scientific fields. In addition to the basic data structures many graph algorithms are implemented for calculating network properties and structure measures: shortest paths, betweenness centrality, clustering, and degree distribution and many more. NetworkX can read and write various graph formats for easy exchange with existing data, and provides generators for many classic graphs and popular graph models, such as the Erdos-Renyi, Small World, and Barabasi-Albert models. The ease-of-use and flexibility of the Python programming language together with connection to the SciPy tools make NetworkX a powerful tool for scientific computations. We discuss some of our recent work studying synchronization of coupled oscillators to demonstrate how NetworkX enables research in the field of computational networks. <|reference_end|>"
] | [
0,
3,
4,
6
] | {"<|cite_1|>": "ss-805362", "<|cite_2|>": "arxiv-54263", "<|cite_3|>": "arxiv-338526", "<|cite_4|>": "ss-737262", "<|cite_5|>": "arxiv-370952", "<|cite_6|>": "ss-972587", "<|cite_7|>": "ss-817053"} |
2001.11973 | <|paper_start|> Title: Unsatisfiability Proofs for Weight 16 Codewords in Lam's Problem
Abstract: Unsatisfiability Proofs for Weight 16 Codewords in Lam's Problem: In the 1970s and 1980s, searches performed by L. Carter, C. Lam, L. Thiel, and S. Swiercz showed that projective planes of order ten with weight 16 codewords do not exist. These searches required highly specialized and optimized computer programs and required about 2,000 hours of computing time on mainframe and supermini computers. In 2011, these searches were verified by D. Roy using an optimized C program and 16,000 hours on a cluster of desktop machines. We performed a verification of these searches by reducing the problem to the Boolean satisfiability problem (SAT). Our verification uses the cube-and-conquer SAT solving paradigm, symmetry breaking techniques using the computer algebra system Maple, and a result of Carter that there are ten nonisomorphic cases to check. Our searches completed in about 30 hours on a desktop machine and produced nonexistence proofs of about 1 terabyte in the DRAT (deletion resolution asymmetric tautology) format.
Introduction
Geometry is one of the oldest branches of mathematics, being first
axiomatically studied by Euclid in the 3rd century BC.
Given a line and a point not on it, Euclid's ``parallel postulate''
implies that there exists exactly one line through the point and
parallel to the given line.
For 2000 years mathematicians tried in vain to prove this axiom but
eventually geometries that did not satisfy the parallel
postulate were discovered.
For example, in the early seventeenth century G. Desargues
studied \emph{projective geometry} where parallel lines do not exist.
Projective geometry became widely studied
in the nineteenth century, leading to the discovery of projective
geometries containing a finite number of points.
Despite a huge amount of study for over 200 years, some basic questions about finite projective
geometries remain open---for example, how many points can a finite projective
plane contain?
It is known that this number
must be of the form $n^2+n+1$ for some natural
number~$n$ (known as the \emph{order} of the plane)
and certain orders such as $n=6$ have been ruled out by theoretical arguments.
For every other~$n$ up to ten a finite projective plane of order~$n$ can be
shown to exist through an explicit construction.
No theoretical argument is known that settles whether
a projective plane of order ten exists, and
this question has since become known
as \emph{Lam's problem}.
In the 1970s and 1980s an enormous amount of computing was
used to show that no such plane exists <|cite_start|> (Reference: The Search for a Finite Projective Plane of Order 10: When I was a graduate student looking for a thesis topic, Herbert Ryser advised me not to work on the projective plane of order 10. Even though he was extremely interested in this subject, he believed that it was too difficult and that I might get nowhere with it. I took his advice and chose another problem. Somehow, this problem has a beauty that fascinates me as well as many other mathematicians. Finally in 1980, I succumbed to the temptation and started working on it with some of my colleagues. We eventually managed to get somewhere, but unfortunately, Dr. Ryser is no longer with us to hear of the final result. This is an expository article describing the evolution of the problem and how computers were used to solve it.) <|cite_end|>.
The computations were based on the existence of codewords in the error-correcting
code generated by a projective plane of order ten.
It was shown
that such a code must contain codewords of weights 15, 16, or~19---but
exhaustive searches
showed that such codewords do not
exist.
Each search required more advanced search techniques
and orders of magnitude more computational power than the previous search---
the weight~15 search being the easiest and the weight~19 search being the
most challenging.
In this paper we focus on the weight~16 search that
originally required about 2,000 hours on supercomputers and a VAX-11 supermini machine.
Additionally, in 2011, using an optimized C implementation
the weight~16 search was verified
in 16,000 core hours split across fifteen desktop machines.
We provide a reduction of the weight~16 codeword existence problem
to the Boolean satisfiability problem (SAT) and a SAT certification
that the resulting instances are unsatisfiable.
This is done using the
cube-and-conquer SAT solving paradigm <|cite_start|> (Reference: Cube and Conquer: Guiding CDCL SAT Solvers by Lookaheads: ) <|cite_end|> and uses functionality from
the computer algebra system Maple for the purposes of symmetry breaking.
See Section~\ref{sec:background} for background on the cube-and-conquer paradigm
and Section~\ref{sec:sat} for a description
of our SAT encoding and symmetry breaking methods.
Our search completed in about 30 hours on a desktop machine,
significantly faster than any previous search.
Furthermore, no previous search was able to
provide any kind of a certificate following a successful completion.
Thus, an independent party had to take on faith that the searches
did in fact complete.
In contrast, our search produces unsatisfiability certificates
that an independent party can use to verify that our searches were
successfully run to completion.
The proofs of nonexistence generated by the SAT solver amounted
to about 1 terabyte in the uncompressed DRAT (deletion resolution asymmetric tautology) format <|cite_start|> (Reference: DRAT-trim: Efficient Checking and Trimming Using Expressive Clausal Proofs: ) <|cite_end|>.
See Section~\ref{sec:results} for details on our implementation and results.
We do not claim our search is a formal verification
because our encoding relies on many mathematical properties that were not derived
in a computer-verifiable form, such as the result
that there are ten nonisomorphic cases that need to be
considered~\cite{carter1974existence}, in addition to the correctness
of our encoding and implementation.
However, we now have a potential method for
producing a formal proof: by formally deriving
our SAT encoding from the projective plane axioms. This would require expertise
in both projective geometry and a formal proof system and would be a
significant undertaking. Nevertheless,
the tools to do this already exist and have been used to formally verify
other results derived using SAT certificates <|cite_start|> (Reference: Formally Verifying the Solution to the Boolean Pythagorean Triples Problem: ) <|cite_end|> <|cite_start|> (Reference: SMTCoq: Mixing Automatic and Interactive Proof Technologies: ) <|cite_end|>.
Related Work
\label{sec:background}
We now describe the background necessary to understand the nonexistence
results of this paper, including the method
that we used to solve the SAT instances
and the mathematical background on projective planes
and their symmetry groups that is necessary to understand our SAT reduction.
\paragraph{The cube-and-conquer paradigm.}
The cube-and-conquer paradigm was first developed by Heule, Kullmann,
Wieringa, and Biere <|cite_start|> (Reference: Cube and Conquer: Guiding CDCL SAT Solvers by Lookaheads: ) <|cite_end|> for computing
van der Waerden numbers, a notoriously difficult computational
problem from combinatorics.
In recent years the cube-and-conquer method has
been used to resolve long-standing combinatorial problems
such as the Boolean Pythagorean triples problem <|cite_start|> (Reference: Solving Very Hard Problems: Cube-and-Conquer, a Hybrid SAT Solving Method: A recent success of SAT solving has been the solution of the boolean Pythagorean Triples problem [Heule et al., 2016], delivering the largest proof yet, of 200 terabytes in size. We present this and the underlying paradigm Cube-and-Conquer, a powerful general method to solve big SAT problems, based on integrating the “old” and “new” methods of SAT solving.) <|cite_end|>
and computing the fifth Schur number <|cite_start|> (Reference: Schur Number Five: We present the solution of a century-old problem known as Schur Number Five: What is the largest (natural) number $n$ such that there exists a five-coloring of the positive numbers up to $n$ without a monochromatic solution of the equation $a + b = c$? We obtained the solution, $n = 160$, by encoding the problem into propositional logic and applying massively parallel satisfiability solving techniques on the resulting formula. We constructed and validated a proof of the solution to increase trust in the correctness of the multi-CPU-year computations. The proof is two petabytes in size and was certified using a formally verified proof checker, demonstrating that any result by satisfiability solvers---no matter how large---can now be validated using highly trustworthy systems.) <|cite_end|>.
The idea behind the cube-and-conquer method is to split a SAT instance into
subproblems defined by \emph{cubes} (propositional formulae of the form
$l_1\land\dotsb\land l_n$ where $l_i$ are literals). Each cube defines a
single subproblem---generated by assuming the cube is true---and each subproblem
is then solved or ``conquered'' either in parallel or in sequence.
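To make the splitting step concrete, the sketch below passes each cube to a CDCL solver as a set of assumptions and solves the resulting subproblems one after another (in practice they can be solved in parallel). It uses the python-sat package purely for illustration; the CNF instances and cubes in the actual plane searches are, of course, far larger and are generated quite differently.
\begin{verbatim}
# Minimal illustration of the "conquer" step with the python-sat package:
# each cube becomes a set of assumptions for a CDCL solver.
from pysat.formula import CNF
from pysat.solvers import Glucose3

def conquer(cnf_file, cubes):
    """Return True iff some cube leads to a satisfying assignment."""
    formula = CNF(from_file=cnf_file)
    with Glucose3(bootstrap_with=formula.clauses) as solver:
        for cube in cubes:                      # cube = list of literals, e.g. [3, -7, 12]
            if solver.solve(assumptions=cube):  # subproblem: formula AND cube
                return True
    return False

# Example: three cubes that split on variables 1 and 2.
# conquer("plane.cnf", [[1, 2], [1, -2], [-1]])
\end{verbatim}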
\paragraph{Projective planes.}
A projective plane is a collection of points and lines that satisfy certain axioms;
for example, in a projective plane any two lines intersect at a unique point.
Finite projective planes can be defined in terms of
\emph{incidence matrices} that have a $1$ in the $(i,j)$th entry exactly when
the $j$th point is on the $i$th line. In this framework, a projective plane
of order~$n$ is a square $\{0,1\}$-matrix of order $n^2+n+1$ where
any two rows or any two columns intersect exactly once (where two rows or columns
\emph{intersect} when they share a $1$ in the same position).
To avoid degenerate cases we also require that each row contains at least
two zeros or equivalently that each row contains exactly $n+1$ ones.
Two projective planes are said to be \emph{isomorphic} if one can
be transformed into the other via a series of row or column permutations.
Projective planes are known to exist in all orders that are primes or
prime powers and the \emph{prime power conjecture}
is that they exist in no other orders.
Some orders such as $n=6$ have been ruled out on theoretical grounds,
making $n=10$ the first uncertain case.
This stimulated a massive computer
search for such a plane <|cite_start|> (Reference: The Search for a Finite Projective Plane of Order 10: When I was a graduate student looking for a thesis topic, Herbert Ryser advised me not to work on the projective plane of order 10. Even though he was extremely interested in this subject, he believed that it was too difficult and that I might get nowhere with it. I took his advice and chose another problem. Somehow, this problem has a beauty that fascinates me as well as many other mathematicians. Finally in 1980, I succumbed to the temptation and started working on it with some of my colleagues. We eventually managed to get somewhere, but unfortunately, Dr. Ryser is no longer with us to hear of the final result. This is an expository article describing the evolution of the problem and how computers were used to solve it.) <|cite_end|> based on the form such
a plane must have assuming certain codewords exist.
A \emph{codeword} is a $\{0,1\}$-vector in the rowspace (mod 2)
of a $\{0,1\}$-matrix and the \emph{weight} of a codeword is the number
of $1$s that it contains.
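The following worked example (an illustration added here, not part of the original searches) instantiates these definitions for the Fano plane, the projective plane of order $n=2$: it builds the $7\times7$ incidence matrix, checks that any two rows and any two columns intersect exactly once and that every row contains $n+1$ ones, and then enumerates the weights of the codewords generated by the rows.
\begin{verbatim}
# Worked example with the Fano plane (projective plane of order n = 2).
import itertools
import numpy as np

n = 2
lines = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
         (1, 4, 6), (2, 3, 6), (2, 4, 5)]
A = np.zeros((7, 7), dtype=int)
for i, line in enumerate(lines):
    A[i, list(line)] = 1

# Any two rows (and, dually, any two columns) intersect exactly once,
# and every row contains exactly n + 1 ones.
pairs = list(itertools.combinations(range(7), 2))
assert all(np.dot(A[i], A[j]) == 1 for i, j in pairs)
assert all(np.dot(A[:, i], A[:, j]) == 1 for i, j in pairs)
assert (A.sum(axis=1) == n + 1).all()

# Codewords are sums (mod 2) of subsets of rows; the weight is the number of ones.
weights = {int(np.mod(A[list(rows)].sum(axis=0), 2).sum())
           for r in range(1, 8)
           for rows in itertools.combinations(range(7), r)}
print(sorted(weights))   # -> [0, 3, 4, 7], the weights of the code it generates
\end{verbatim}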
\begin{table}
\centering
\begin{tabular}{cccc}
Case & Symmetries & Group Size & Initial Cols. \\
1a & $S_4\wr S_2$ & 1152 & 28 \\
1b & $S_4\times S_4$ & \0576 & 23 \\
1c & $S_4\wr S_2$ & 1152 & 18 \\
2 & $S_4\times S_2$ & \0\048 & 28 \\
3 & $D_8$ & \0\016 & 28 \\
4 & $D_4\times S_2$ & \0\016 & 28 \\
5 & $S_3\times S_2$ & \0\012 & 28 \\
6a & $S_2\times S_2$ & \0\0\04 & 28 \\
6b & $S_2$ & \0\0\02 & 26 \\
6c & $S_2\times S_2$ & \0\0\04 & 24
\end{tabular}
\caption{The ten possible cases for the first eight rows of a projective plane
of order ten generating a weight 16 codeword
and the symmetries in the initial columns (see below).
Here $S_n$ denotes the symmetric group of order~$n!$, $D_n$ denotes
the dihedral group of order~$2n$, and $\wr$ denotes the wreath product.}\label{tbl:cases}
\end{table}
It is known <|cite_start|> (Reference: Configurations in a Plane of Order Ten: ) <|cite_end|>
that a projective plane of order ten must generate codewords of weight
15, 16, or~19, thus dramatically shrinking the search space
and naturally splitting the search into three cases.
As shown by, up to isomorphism
there are ten possibilities for the first
eight rows of the planes that generate weight 16 codewords.
Five of these possibilities (cases~2 to~6a in Table~\ref{tbl:cases}) were
eliminated by the searches of and the other five were
eliminated by the searches of <|cite_start|> (Reference: The nonexistence of code words of weight 16 in a projective plane of order 10: ) <|cite_end|>.
\paragraph{Incidence matrix structure.}
Carter derived numerous properties that the structure of a projective plane
generating a weight 16 codeword must satisfy. In particular, the projective plane can be
decomposed into a $3\times2$ grid of submatrices as follows:
\[
\begin{matrix}
& & 16 & 95 \\
\phantom{0}8\!\!\!\! & \multirow{3}{*}{\rlap{$\left(\rule{0pt}{16pt}\right.$}} & 2 & k & \multirow{3}{*}{\llap{$\left.\rule{0pt}{16pt}\right)$}} \\
72\!\!\!\! & & 9 & 8-2k \\
31\!\!\!\! & & 0 & k+3
\end{matrix}
\]
Here the numbers outside the matrix denote the number of rows or columns in that part of the submatrix.
The numbers inside the matrix denote how many $1$s there are in each column in that part of the
submatrix; certain columns depend on a parameter~$k$ that differs between columns.
Note that each column sum is $11=n+1$ regardless of~$k$ (for instance, $k+(8-2k)+(k+3)=11$),
consistent with every column of the plane containing exactly eleven ones.
Additionally, Carter showed that every entry in the first 16 columns
is uniquely specified by the starting case.
We call the columns incident with at least two of the first eight rows the \emph{initial} columns
and the columns incident with at least one of the first eight rows the \emph{inside} columns.
Full starting matrices
for each case are available
at \href{https://uwaterloo.ca/mathcheck/w16}{uwaterloo.ca/mathcheck/w16}.
\paragraph{Symmetry groups.}
A projective plane (or partial projective plane) may be symmetric in nontrivial ways;
in other words, there may exist row or column permutations that fix the entries of the plane.
Such symmetries are important to detect because they can dramatically reduce the search space
---and therefore the running time---of any search that makes use of them <|cite_start|> (Reference: Efficient Symmetry Breaking for Boolean Satisfiability: Identifying and breaking the symmetries of conjunctive normal form (CNF) formulae has been shown to lead to significant reductions in search times. Symmetries in the search space are broken by adding appropriate symmetry-breaking predicates (SBPs) to an SAT instance in CNF. The SBPs prune the search space by acting as a filter that confines the search to nonsymmetric regions of the space without affecting the satisfiability of the CNF formula. For symmetry breaking to be effective in practice, the computational overhead of generating and manipulating SBPs must be significantly less than the runtime savings they yield due to search space pruning. In this paper, we describe a more systematic and efficient construction of SBPs. In particular, we use the cycle structure of symmetry generators, which typically involve very few variables, to drastically reduce the size of SBPs. Furthermore, our new SBP construction grows linearly with the number of relevant variables as opposed to the previous quadratic constructions. Our empirical data suggest that these improvements reduce search runtimes by one to two orders of magnitude on a wide variety of benchmarks with symmetries.) <|cite_end|> <|cite_start|> (Reference: Expressing Symmetry Breaking in DRAT Proofs: ) <|cite_end|>.
\begin{figure}
\input 1c.tikz
\caption{The upper-left $8\times18$ submatrix from case 1c.
}\label{fig:tetrahedrons}
\end{figure}
For example, Figure~\ref{fig:tetrahedrons} shows the
\emph{initial configuration}
(the first eight rows and initial columns) from case 1c.
This matrix is fixed by the permutation that swaps the first two
rows and column~$k$ with column~$k+4$ for $1\leq k\leq 4$.
The set of all row and column permutations that fix the entries of a matrix
forms a group known as the \emph{symmetry group} of the matrix.
In the matrix of Figure~\ref{fig:tetrahedrons}
any permutation of the first four rows, any permutation of the last four rows,
and the permutation that swaps row~$i$ and row~$i+4$ for $1\leq i\leq4$ occur
(with appropriate column permutations) in the symmetry group.
The size of this permutation group is $4!^2\cdot2=1152$ and the group is isomorphic
to the group of symmetries of a pair of tetrahedrons.
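For very small matrices the symmetry group can be computed by brute force, which makes the notion concrete even though the actual initial configurations are far too large for this approach. The sketch below is an added illustration only, not the method used in the searches; it counts the pairs of row and column permutations that fix a matrix entrywise.
\begin{verbatim}
# Toy brute-force computation of a symmetry group's size.
import itertools
import numpy as np

def symmetry_group_size(A):
    """Number of (row permutation, column permutation) pairs fixing A entrywise."""
    m, n = A.shape
    count = 0
    for rp in itertools.permutations(range(m)):
        B = A[list(rp), :]
        for cp in itertools.permutations(range(n)):
            if np.array_equal(B[:, list(cp)], A):
                count += 1
    return count

# A 2x4 example fixed by swapping the rows together with columns {1,2} <-> {3,4}.
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1]])
print(symmetry_group_size(A))   # -> 8
\end{verbatim}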
Up to isomorphism, the symmetry groups for each of the ten possible
initial configurations are given in Table~\ref{tbl:cases}. <|paper_end|> | [
"<|reference_start|> The Search for a Finite Projective Plane of Order 10: When I was a graduate student looking for a thesis topic, Herbert Ryser advised me not to work on the projective plane of order 10. Even though he was extremely interested in this subject, he believed that it was too difficult and that I might get nowhere with it. I took his advice and chose another problem. Somehow, this problem has a beauty that fascinates me as well as many other mathematicians. Finally in 1980, I succumbed to the temptation and started working on it with some of my colleagues. We eventually managed to get somewhere, but unfortunately, Dr. Ryser is no longer with us to hear of the final result. This is an expository article describing the evolution of the problem and how computers were used to solve it. <|reference_end|>",
"<|reference_start|> DRAT-trim: Efficient Checking and Trimming Using Expressive Clausal Proofs: <|reference_end|>",
"<|reference_start|> Solving Very Hard Problems: Cube-and-Conquer, a Hybrid SAT Solving Method: A recent success of SAT solving has been the solution of the boolean Pythagorean Triples problem [Heule et al., 2016], delivering the largest proof yet, of 200 terabytes in size. We present this and the underlying paradigm Cube-and-Conquer, a powerful general method to solve big SAT problems, based on integrating the “old” and “new” methods of SAT solving. <|reference_end|>",
"<|reference_start|> The nonexistence of code words of weight 16 in a projective plane of order 10: <|reference_end|>"
] | [
0,
2,
6,
10
] | {"<|cite_2|>": "ss-709493", "<|cite_5|>": "ss-758514", "<|cite_6|>": "ss-960736", "<|multi_cite_7_1|>": "ss-1587052", "<|multi_cite_7_2|>": "ss-1051487", "<|cite_8|>": "ss-758514", "<|cite_9|>": "ss-1051484", "<|cite_10|>": "arxiv-140877", "<|cite_11|>": "ss-709493", "<|multi_cite_12_2|>": "ss-2384318", "<|cite_15|>": "ss-2384319", "<|multi_cite_16_1|>": "ss-1841775", "<|multi_cite_16_2|>": "ss-1360006"} |
2406.15977 | <|paper_start|> Title: A Bayesian framework for spectral reprojection
Abstract: A Bayesian framework for spectral reprojection: Fourier partial sum approximations yield exponential accuracy for smooth and periodic functions, but produce the infamous Gibbs phenomenon for non-periodic ones. Spectral reprojection resolves the Gibbs phenomenon by projecting the Fourier partial sum onto a Gibbs complementary basis, often prescribed as the Gegenbauer polynomials. Noise in the Fourier data and the Runge phenomenon both degrade the quality of the Gegenbauer reconstruction solution, however. Motivated by its theoretical convergence properties, this paper proposes a new Bayesian framework for spectral reprojection, which allows a greater understanding of the impact of noise on the reprojection method from a statistical point of view. We are also able to improve the robustness with respect to the Gegenbauer polynomials parameters. Finally, the framework provides a mechanism to quantify the uncertainty of the solution estimate.
Introduction
\label{sec:introduction}
Fourier samples model data acquisitions in applications such as magnetic resonance imaging (MRI) and synthetic aperture radar (SAR). Indeed, recovering images from either MRI $k$-space data or SAR phase history data most often involves recasting the problem as a linear inverse problem with the forward operator given by the discrete (non-uniform) Fourier transform (DFT) matrix (see e.g. <|cite_start|> (Reference: Improving tissue segmentation of human brain MRI through preprocessing by the Gegenbauer reconstruction method: ) <|cite_end|> <|cite_start|> (Reference: {SAR: SAR영상의 가장 큰 문제점은 경계선 부근에서 스패클(Speckle)잡음을 어떻게 줄이느냐 하는 것이다. 본 논문에서는 제안한 방법을 이용하여 경계선을 보존할 수 있는 효과적인 필터를 개발하고자 한다. 스패클 잡음을 줄이면서 에지 영역에 대한 블러링 없는 영상을 추출하기 위하여 웨이브렛 기반의 sigma 필터를 적용하였다. 실험 결과 에지정보에 대한 블러링을 줄인 출력 영상을 구성하였다. 제안한 방법을 미디언 필터와 비교한 결과, 스패클 잡음을 효과적으로 제거한 우수한 영상을 얻을 수 있었다. 【Any classification process using SAR images presupposes the reduction of multiplicative speckle noise, since the variations caused by speckle make it extremely difficult to distinguish between neighboring classes within the feature space. This paper focus an argument of effective filter for preserving the weak boundaries by using the proposed method. To reduce speckle noise without blurring the edges of reconstructed image use wavelet-based sigma filter. As a result, the edge information of reconstructed image reduce blurring. Simulation results show that proposed method gives a better subjective quality than conventional methods for the speckle noise.】) <|cite_end|> <|cite_start|> (Reference: Spotlight-Mode Synthetic Aperture Radar: A Signal Processing Approach: ) <|cite_end|> <|cite_start|> (Reference: Sampling density compensation in MRI: Rationale and an iterative numerical solution: Data collection of MRI which is sampled nonuniformly in k‐space is often interpolated onto a Cartesian grid for fast reconstruction. The collected data must be properly weighted before interpolation, for accurate reconstruction. We propose a criterion for choosing the weighting function necessary to compensate for nonuniform sampling density. A numerical iterative method to find a weighting function that meets that criterion is also given. This method uses only the coordinates of the sampled data; unlike previous methods, it does not require knowledge of the trajectories and can easily handle trajectories that “cross” in k‐space. Moreover, the method can handle sampling patterns that are undersampled in some regions of k‐space and does not require a post‐gridding density correction. Weighting functions for various data collection strategies are shown. Synthesized and collected in vivo data also illustrate aspects of this method. Magn Reson Med 41:179–186, 1999. © 1999 Wiley‐Liss, Inc.) <|cite_end|>). Compressive sensing (CS) algorithms that promote sparse solutions in a known sparse domain <|cite_start|> (Reference: Image Reconstruction from Undersampled Fourier Data Using the Polynomial Annihilation Transform: ) <|cite_end|> <|cite_start|> (Reference: {SAR: SAR영상의 가장 큰 문제점은 경계선 부근에서 스패클(Speckle)잡음을 어떻게 줄이느냐 하는 것이다. 본 논문에서는 제안한 방법을 이용하여 경계선을 보존할 수 있는 효과적인 필터를 개발하고자 한다. 스패클 잡음을 줄이면서 에지 영역에 대한 블러링 없는 영상을 추출하기 위하여 웨이브렛 기반의 sigma 필터를 적용하였다. 실험 결과 에지정보에 대한 블러링을 줄인 출력 영상을 구성하였다. 제안한 방법을 미디언 필터와 비교한 결과, 스패클 잡음을 효과적으로 제거한 우수한 영상을 얻을 수 있었다. 【Any classification process using SAR images presupposes the reduction of multiplicative speckle noise, since the variations caused by speckle make it extremely difficult to distinguish between neighboring classes within the feature space. 
This paper focus an argument of effective filter for preserving the weak boundaries by using the proposed method. To reduce speckle noise without blurring the edges of reconstructed image use wavelet-based sigma filter. As a result, the edge information of reconstructed image reduce blurring. Simulation results show that proposed method gives a better subjective quality than conventional methods for the speckle noise.】) <|cite_end|> <|cite_start|> (Reference: Sparse MRI: the application of compressed sensing for rapid MR imaging: The sparsity which is implicit in MR images is exploited to significantly undersample k‐space. Some MR images such as angiograms are already sparse in the pixel representation; other, more complicated images have a sparse representation in some transform domain–for example, in terms of spatial finite‐differences or their wavelet coefficients. According to the recently developed mathematical theory of compressed‐sensing, images with a sparse representation can be recovered from randomly undersampled k‐space data, provided an appropriate nonlinear recovery scheme is used. Intuitively, artifacts due to random undersampling add as noise‐like interference. In the sparse transform domain the significant coefficients stand out above the interference. A nonlinear thresholding scheme can recover the sparse coefficients, effectively recovering the image itself. In this article, practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference. Incoherence is introduced by pseudo‐random variable‐density undersampling of phase‐encodes. The reconstruction is performed by minimizing the ℓ1 norm of a transformed image, subject to data fidelity constraints. Examples demonstrate improved spatial resolution and accelerated acquisition for multislice fast spin‐echo brain imaging and 3D contrast enhanced angiography. Magn Reson Med, 2007. © 2007 Wiley‐Liss, Inc.) <|cite_end|> <|cite_start|> (Reference: Wide-area wide-angle SAR focusing: This study started with a data set that leveraged the latest autofocusing methods to obtain the cleanest radar data set appropriate for generating large SAR imagery over a 5-km spot. The authors intended to spotlight individual smaller nonmoving targets within the larger area; however, the images appeared blurred and varied greatly when generated by different passes of the circular SAR radar system. This study concentrated on using widely dispersed QTs combined with an algorithm to correct for both range and phase errors to improve imaging. The wide-angle QT imaging and vehicle identification experiments showed a significant improvement over all orbits and provided higher quality imagery to more robustly perform image registration. Focusing showed significant improvement in visualizations quad-trihedrals and a vehicle.) <|cite_end|> have become increasingly widespread in providing point estimate image recoveries. More recently, Bayesian inference methods have been developed to also quantify the uncertainty of the estimate.
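As a minimal illustration of this data model (our own toy discretization on $[-1,1]$ with assumed grid sizes and noise level, not the exact setup used in later sections), the forward operator can be taken as a discretized Fourier-transform matrix applied to a smooth, non-periodic function, with complex Gaussian noise added to the samples:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, M = 32, 256                                # Fourier modes |k| <= N, spatial grid size
x = np.linspace(-1, 1, M, endpoint=False)
f = np.exp(x) * np.sin(2 * x)                 # smooth but non-periodic test function

k = np.arange(-N, N + 1)
# Row k approximates the Fourier coefficient (1/2) * int_{-1}^{1} f(x) exp(-i*pi*k*x) dx
F = 0.5 * np.exp(-1j * np.pi * np.outer(k, x)) * (2.0 / M)

sigma = 0.05                                  # assumed noise level
noise = sigma * (rng.standard_normal(k.size) + 1j * rng.standard_normal(k.size)) / np.sqrt(2)
d = F @ f + noise                             # observable noisy Fourier data
\end{verbatim}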
This investigation develops a new Bayesian framework for recovering smooth but non-periodic functions from given noisy Fourier data. Uncertainty quantification can also be achieved when the hyperparameters in the posterior density function are fixed. Importantly, however, rather than use the often employed {\em sparse prior} (or sparse penalty term in CS), here we construct a prior based on {\em spectral reprojection}.
The spectral reprojection method is a {\em forward} approach designed to {\em reproject} the observable Fourier data onto a Gibbs complementary basis. It is sometimes referred to as Gegenbauer reconstruction when the Gibbs complementary basis is comprised of Gegenbauer polynomials. The reprojection eliminates the Gibbs phenomenon and restores the exponential convergence (hence the use of {\em spectral} in its name) in the maximum norm. For self-containment, we summarize spectral reprojection in \cref{sec:spectralreprojection}.
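To make the reprojection step concrete, the sketch below (our own illustrative implementation, with ad hoc choices of the Gegenbauer parameter $\lambda$ and reprojection order $m$; the theory ties both to the number of Fourier modes) projects a Fourier partial sum of a smooth, non-periodic function onto the first few Gegenbauer polynomials:
\begin{verbatim}
import numpy as np
from scipy.special import eval_gegenbauer, gamma

N, lam, m = 32, 4.0, 8                         # Fourier modes, Gegenbauer parameter, order
nq = 4000
xq = -1 + (np.arange(nq) + 0.5) * (2.0 / nq)   # midpoint quadrature nodes on [-1, 1]
dx = 2.0 / nq
f = np.exp(xq) * np.sin(2 * xq)                # smooth but non-periodic test function

k = np.arange(-N, N + 1)
fhat = 0.5 * (f * np.exp(-1j * np.pi * np.outer(k, xq))).sum(axis=1) * dx  # Fourier coefficients
fN = np.real(np.exp(1j * np.pi * np.outer(xq, k)) @ fhat)                  # partial sum (Gibbs)

w = (1 - xq**2) ** (lam - 0.5)                 # Gegenbauer weight function
g = np.zeros(m + 1)
for l in range(m + 1):
    Cl = eval_gegenbauer(l, lam, xq)
    h_l = np.sqrt(np.pi) * eval_gegenbauer(l, lam, 1.0) * gamma(lam + 0.5) \
          / (gamma(lam) * (l + lam))
    g[l] = (w * fN * Cl).sum() * dx / h_l      # reproject the partial sum onto C_l^lam

f_reproj = sum(g[l] * eval_gegenbauer(l, lam, xq) for l in range(m + 1))
print(np.max(np.abs(f_reproj - f)))            # max-norm error of the reprojected approximation
\end{verbatim}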
Although Gegenbauer reconstruction has been successfully used in applications where the observable Fourier data have complex additive Gaussian noise of mean zero <|cite_start|> (Reference: Improving tissue segmentation of human brain MRI through preprocessing by the Gegenbauer reconstruction method: ) <|cite_end|> <|cite_start|> (Reference: On Reconstruction from Non-uniform Spectral Data: ) <|cite_end|>, it was also demonstrated in <|cite_start|> (Reference: Reducing the Effects of Noise in Image Reconstruction: ) <|cite_end|> that while the estimator is unbiased, its variance is spatially dependent. Nevertheless, theoretical results in the seminal work provide key insights that inspire us to develop a new Bayesian inference method for the corresponding {\em inverse} problem. Namely, the derivation of the error terms in the exponential convergence proof naturally motivates the choices of the likelihood and prior terms in the Bayesian method. In particular, the prior used for the construction of the posterior should be designed to favor solutions whose orthogonal polynomial partial sum expansion yields good approximations. Such an assumption is consistent for recovering (discretized) functions that are smooth but not periodic, and is arguably more appropriate than using a sparsifying operator such as first order differencing, which by design assumes that the underlying function is piecewise constant, or Tikhonov regularization, which is even more restrictive. Moreover, when coupled with the likelihood term, the common kernel condition <|cite_start|> (Reference: {Statistical and Computational Inverse Problems: Classification Without Interaction”), and 13 (“Two-Way Crossed Classification With Interaction”). Every chapter contains two or more numerical example with the exception of Chapters 14 (“Three-Way and Higher-Order Crossed Classifications”) and 17 (“General r-Way Nested Classification”), which only contain one example each. Examples appear in the estimation, confidence interval, and hypothesis testing sections. Distribution of estimators is only discussed for the models in Chapters 11 and 15 (“Two-Way Nested Classification”). Chapters 11, 13, 15, and 16 (“Three-Way Nested Classification”) contain information on design considerations involving unbalanced experiments. The appendixes contain basic theoretical and methodological results useful in the development of unbalanced random models as well as information on the capabilities of widely available software. Packages discussed are SAS, SPSS, BMDP, S–PLUS, GENSTAT, and BUGS. The book is well organized and focused. It contains extensive coverage on crossed and nested unbalanced models. Because of the number of topics, the depth of coverage is occasionally limited. This is only a minor issue, since there are always a substantial number of references given. The organization of the book and the presentation of the material make difficult subject matter easier to follow. The main drawback to the book is that it deals only with completely random univariate models. Given the volume of information in the book, however, this is understandable. The authors point out this shortcoming in the Preface and suggest that a future work covering these topics may be forthcoming. For the application-oriented practitioner, a small disadvantage is that a number of the estimation approaches discussed, while interesting, cannot be found in the more commonly used statistical software packages. 
Regardless, the book makes an excellent resource for anyone working with unbalanced random models.) <|cite_end|> is automatically satisfied so that a unique minimum for the corresponding \emph{maximum a posteriori} (MAP) estimate may be obtained. We call this approach the {\em Bayesian spectral reprojection} (BSR) method, and its point estimate solution, which is consistent with Gegenbauer reconstruction, is the MAP estimate of the corresponding posterior density and is determined through optimization. As already noted, we are also able to provide uncertainty quantification for fixed hyperparameters.
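To fix ideas, a generic schematic of such a MAP estimate is the following; we emphasize that this is only a plausible illustration of the structure (with our own notation $A$, $P_m$, $\alpha$), not necessarily the exact likelihood and prior constructed in \Cref{sec:Bayesspectral}. With a linear forward map $A$, noisy data $\mathbf{d}$, noise variance $\sigma^2$, and a Gaussian prior penalizing the part of the solution not captured by its degree-$m$ Gegenbauer partial sum, the MAP estimate solves
\[
\hat{\mathbf{g}}_{\mathrm{MAP}} \;=\; \mathop{\mathrm{arg\,min}}_{\mathbf{g}} \;\; \frac{1}{2\sigma^2}\,\big\|A\mathbf{g}-\mathbf{d}\big\|_2^2 \;+\; \frac{\alpha}{2}\,\big\|(I-P_m)\,\mathbf{g}\big\|_2^2,
\]
where $P_m$ denotes the (discretized) orthogonal projection onto the span of the first $m+1$ Gegenbauer polynomials and $\alpha>0$ is a prior precision parameter. A unique minimizer exists whenever $\ker(A)\cap\ker(I-P_m)=\{\mathbf{0}\}$, which is precisely the kind of common kernel condition referenced above.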
We further propose a {\em generalized} Bayesian spectral reprojection (GBSR) method, which modifies the BSR by formulating the likelihood so that the observables are not first transformed to the Gibbs complementary basis. Rather, we directly use the observable Fourier data for this purpose. Removing this restriction allows us to explore a larger space that still assumes that the function is well-approximated by the Gegenbauer partial sum expansion, but also seeks a good fit to the observable data. As our numerical results demonstrate, the point estimate obtained for GBSR is more robust to different parameter choices in low SNR environments. It is also possible to quantify the uncertainty when the posterior density function uses fixed hyperparameters.
\subsection*{Paper organization}
The rest of this paper is organized as follows. \Cref{sec:preliminaries} describes the underlying problem and summarizes the spectral reprojection method. The problem is then discretized in \Cref{sec:MVformulation} to enable the Bayesian approach used in \Cref{sec:Bayesspectral}. There we describe how the theoretical results proving exponential convergence for spectral reprojection inspire the construction of a new prior, leading to the Bayesian spectral reprojection (BSR) method. These ideas are then modified in \Cref{sec:Bayes} for the {\em generalized} Bayesian spectral reprojection (GBSR) method. Numerical examples in \Cref{sec:numerical} show the efficacy of our new methods for recovering 1D smooth functions from noisy Fourier data. \Cref{sec:summary} provides some concluding remarks. <|paper_end|>
"<|reference_start|> {SAR: SAR영상의 가장 큰 문제점은 경계선 부근에서 스패클(Speckle)잡음을 어떻게 줄이느냐 하는 것이다. 본 논문에서는 제안한 방법을 이용하여 경계선을 보존할 수 있는 효과적인 필터를 개발하고자 한다. 스패클 잡음을 줄이면서 에지 영역에 대한 블러링 없는 영상을 추출하기 위하여 웨이브렛 기반의 sigma 필터를 적용하였다. 실험 결과 에지정보에 대한 블러링을 줄인 출력 영상을 구성하였다. 제안한 방법을 미디언 필터와 비교한 결과, 스패클 잡음을 효과적으로 제거한 우수한 영상을 얻을 수 있었다. 【Any classification process using SAR images presupposes the reduction of multiplicative speckle noise, since the variations caused by speckle make it extremely difficult to distinguish between neighboring classes within the feature space. This paper focus an argument of effective filter for preserving the weak boundaries by using the proposed method. To reduce speckle noise without blurring the edges of reconstructed image use wavelet-based sigma filter. As a result, the edge information of reconstructed image reduce blurring. Simulation results show that proposed method gives a better subjective quality than conventional methods for the speckle noise.】 <|reference_end|>",
"<|reference_start|> Image Reconstruction from Undersampled Fourier Data Using the Polynomial Annihilation Transform: <|reference_end|>",
"<|reference_start|> Improving tissue segmentation of human brain MRI through preprocessing by the Gegenbauer reconstruction method: <|reference_end|>",
"<|reference_start|> {Statistical and Computational Inverse Problems: Classification Without Interaction”), and 13 (“Two-Way Crossed Classification With Interaction”). Every chapter contains two or more numerical example with the exception of Chapters 14 (“Three-Way and Higher-Order Crossed Classifications”) and 17 (“General r-Way Nested Classification”), which only contain one example each. Examples appear in the estimation, confidence interval, and hypothesis testing sections. Distribution of estimators is only discussed for the models in Chapters 11 and 15 (“Two-Way Nested Classification”). Chapters 11, 13, 15, and 16 (“Three-Way Nested Classification”) contain information on design considerations involving unbalanced experiments. The appendixes contain basic theoretical and methodological results useful in the development of unbalanced random models as well as information on the capabilities of widely available software. Packages discussed are SAS, SPSS, BMDP, S–PLUS, GENSTAT, and BUGS. The book is well organized and focused. It contains extensive coverage on crossed and nested unbalanced models. Because of the number of topics, the depth of coverage is occasionally limited. This is only a minor issue, since there are always a substantial number of references given. The organization of the book and the presentation of the material make difficult subject matter easier to follow. The main drawback to the book is that it deals only with completely random univariate models. Given the volume of information in the book, however, this is understandable. The authors point out this shortcoming in the Preface and suggest that a future work covering these topics may be forthcoming. For the application-oriented practitioner, a small disadvantage is that a number of the estimation approaches discussed, while interesting, cannot be found in the more commonly used statistical software packages. Regardless, the book makes an excellent resource for anyone working with unbalanced random models. <|reference_end|>"
] | [
1,
4,
8,
11
] | {"<|multi_cite_1_1|>": "ss-1369515", "<|multi_cite_1_3|>": "ss-1263914", "<|multi_cite_1_4|>": "ss-802764", "<|multi_cite_1_5|>": "ss-1369516", "<|multi_cite_2_1|>": "ss-714880", "<|multi_cite_2_2|>": "ss-1263914", "<|multi_cite_2_3|>": "ss-840165", "<|multi_cite_2_4|>": "ss-1369517", "<|multi_cite_4_1|>": "ss-1369515", "<|multi_cite_4_2|>": "ss-867076", "<|cite_5|>": "ss-1369518", "<|cite_7|>": "ss-1099780"} |
1705.07051 | <|paper_start|> Title: Speeding up Memory-based Collaborative Filtering with Landmarks
Abstract: Speeding up Memory-based Collaborative Filtering with Landmarks: Recommender systems play an important role in many scenarios where users are overwhelmed with too many choices to make. In this context, Collaborative Filtering (CF) arises by providing a simple and widely used approach for personalized recommendation. Memory-based CF algorithms mostly rely on similarities between pairs of users or items, which are subsequently employed in classifiers like k-Nearest Neighbor (kNN) to generalize to unknown ratings. A major issue regarding this approach is to build the similarity matrix. Depending on the dimensionality of the rating matrix, the similarity computations may become computationally intractable. To overcome this issue, we propose to represent users by their distances to preselected users, namely landmarks. This procedure drastically reduces the computational cost associated with the similarity matrix. We evaluated our proposal on two distinct databases, and the results showed our method has consistently and considerably outperformed eight CF algorithms (including both memory-based and model-based) in terms of computational performance.
Introduction
The continuously improving network technology and the exponential growth of social networks have been connecting the whole world, making available a huge volume of content, media, goods, services, and many other kinds of items on the Internet <|cite_start|> (Reference: Active Learning Applied to Rating Elicitation for Incentive Purposes: ) <|cite_end|>. However, this phenomenon leads to the paradox of choice: people overwhelmed with too many choices tend to become more anxious, and may eventually give up on completing the purchase.
To tackle this issue, a massive effort has been made towards the development of data mining methods for recommender systems <|cite_start|> (Reference: Introduction to Recommender Systems Handbook: ) <|cite_end|>. This promising technology aims at helping users search and find items that are likely to be consumed, alleviating the burden of choice.
In this context, many recommender systems have been designed to provide users with suggested items in a personalized manner. A well-known and widely used approach for this kind of recommendation is Collaborative Filtering (CF) <|cite_start|> (Reference: Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions: This paper presents an overview of the field of recommender systems and describes the current generation of recommendation methods that are usually classified into the following three main categories: content-based, collaborative, and hybrid recommendation approaches. This paper also describes various limitations of current recommendation methods and discusses possible extensions that can improve recommendation capabilities and make recommender systems applicable to an even broader range of applications. These extensions include, among others, an improvement of understanding of users and items, incorporation of the contextual information into the recommendation process, support for multicriteria ratings, and a provision of more flexible and less intrusive types of recommendations.) <|cite_end|>. It consists in considering the history of purchases and users' tastes to identify items that are likely to be acquired. In general, this data is represented by a rating matrix, where each row corresponds to a user, each column is assigned to an item, and each cell holds a rating given by the corresponding user and item. Thus, CF algorithms aim at predicting the missing ratings of the matrix, which are posteriorly used for personalized item recommendations.
CF algorithms may be divided into two main classes: \textit{memory-based} and \textit{model-based} algorithms. The former class uses k-Nearest Neighbors (kNN) methods for rating predictions, and therefore relies on computing similarities between pairs of users or items according to their ratings <|cite_start|> (Reference: Advances in Collaborative Filtering: ) <|cite_end|>. The latter class employs matrix factorization techniques to obtain an approximation of the rating matrix, in which the unknown cells are filled with rating predictions <|cite_start|> (Reference: MATRIX FACTORIZATION TECHNIQUES FOR RECOMMENDER SYSTEMS: As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.) <|cite_end|>. Both memory-based and model-based algorithms have their own advantages and disadvantages.
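For concreteness, a commonly used memory-based prediction rule (one standard variant; implementations differ in the choice of similarity measure and normalization) estimates the rating of user $u$ for item $v$ as a mean-centered weighted average over the $k$ most similar users who rated $v$:
\[
\hat{r}_{uv} \;=\; \bar{r}_u \;+\; \frac{\sum_{u' \in N_k(u,v)} \mathrm{sim}(u,u')\,\bigl(r_{u'v}-\bar{r}_{u'}\bigr)}{\sum_{u' \in N_k(u,v)} \bigl|\mathrm{sim}(u,u')\bigr|},
\]
where $\bar{r}_u$ is the average rating of user $u$ and $N_k(u,v)$ is the set of the $k$ users most similar to $u$ that have rated item $v$.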
In this work, we are interested in memory-based algorithms. This class of CF algorithms remains widely used in many real systems due to its simplicity. It provides an elegant way of integrating information about users and items beyond the ratings to refine similarities. In addition, memory-based CF algorithms allow \textit{online} recommendations, something required in many practical applications as data is arriving constantly, new users are signing up, and new products are being offered <|cite_start|> (Reference: Google news personalization: scalable online collaborative filtering: Several approaches to collaborative filtering have been studied but seldom have studies been reported for large (several millionusers and items) and dynamic (the underlying item set is continually changing) settings. In this paper we describe our approach to collaborative filtering for generating personalized recommendations for users of Google News. We generate recommendations using three approaches: collaborative filtering using MinHash clustering, Probabilistic Latent Semantic Indexing (PLSI), and covisitation counts. We combine recommendations from different algorithms using a linear model. Our approach is content agnostic and consequently domain independent, making it easily adaptable for other applications and languages with minimal effort. This paper will describe our algorithms and system setup in detail, and report results of running the recommendations engine on Google News.) <|cite_end|>. Incorporating such information in an \textit{online} fashion is therefore highly desirable, making up-to-date predictions on the fly and avoiding re-optimization from scratch with each new piece of data.
The major issue with memory-based CF algorithms lies in their computational scalability as the rating matrix grows. As users are often represented by vectors of items (\textit{i.e.} rows of the rating matrix), the larger the number of items, the higher the computational cost of computing similarities between users. Consequently, memory-based CF may become computationally intractable for a large number of users or items.
In this paper, we propose an alternative to improve the computational scalability of memory-based CF algorithms. Our proposal consists in representing users by their distances to preselected users, namely landmarks. Thus, instead of computing similarities between users represented by large (often sparse) vectors of ratings, our method calculates similarities through vectors of distances to fixed landmarks, obtaining an approximate similarity matrix for subsequent rating predictions. As the number of landmarks required for a good approximation is typically much smaller than the number of items, the proposed method drastically alleviates the cost associated with the similarity matrix computation.
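The sketch below illustrates the core idea with our own simplified choices (random landmark selection, cosine similarity over co-rated items, and synthetic data); it is meant only to show where the savings come from, not to reproduce the exact procedure evaluated in Section 4.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
num_users, num_items, n_landmarks = 300, 2000, 20
# Synthetic rating matrix: 0 marks a missing rating, 1-5 are observed ratings.
R = rng.integers(1, 6, size=(num_users, num_items)) \
    * (rng.random((num_users, num_items)) < 0.05)

landmark_idx = rng.choice(num_users, n_landmarks, replace=False)  # e.g. random selection

def similarity(a, b):
    """Cosine similarity restricted to co-rated items; 0 if there are none."""
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    va, vb = a[mask].astype(float), b[mask].astype(float)
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return float(va @ vb / denom) if denom > 0 else 0.0

# Each user becomes a short vector of similarities to the landmarks, so the
# user-user similarity matrix is computed in a space of dimension n_landmarks
# instead of num_items.
L = np.array([[similarity(R[u], R[l]) for l in landmark_idx] for u in range(num_users)])
norms = np.linalg.norm(L, axis=1, keepdims=True) + 1e-12
user_sim = (L / norms) @ (L / norms).T   # approximate similarity matrix for the kNN step
\end{verbatim}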
The results show that our proposal consistently and considerably outperforms the evaluated CF algorithms (including both memory-based and model-based) in terms of computational performance. Interestingly, it achieves better accuracy than the original memory-based CF algorithms even with few landmarks.
The main contributions of this work are the following:
\begin{itemize}
\item A rating matrix reduction method to speed up memory-based CF algorithms.
\item The proposal and investigation of 5 landmark selection strategies.
\item An extensive comparison between our proposal and 8 CF algorithms, including both memory-based and model-based classes.
\end{itemize}
The work is organized in five sections, of which this is the first. Section 2 reviews the literature and presents the related work. Section 3 describes the recommendation problem definitions. It also introduces our proposal and presents the landmark selection strategies. Section 4 starts with the description of the databases and metrics employed in the experiments, continues by detailing the parameter tuning of the proposed method, and finishes by comparing our proposal against other CF algorithms. Finally, Section 5 points out conclusions and future work.
Related Work
The Collaborative Filtering (CF) approach consists in predicting whether a specific user would prefer an item over others, based on the ratings given by users <|cite_start|> (Reference: Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions: This paper presents an overview of the field of recommender systems and describes the current generation of recommendation methods that are usually classified into the following three main categories: content-based, collaborative, and hybrid recommendation approaches. This paper also describes various limitations of current recommendation methods and discusses possible extensions that can improve recommendation capabilities and make recommender systems applicable to an even broader range of applications. These extensions include, among others, an improvement of understanding of users and items, incorporation of the contextual information into the recommendation process, support for multicriteria ratings, and a provision of more flexible and less intrusive types of recommendations.) <|cite_end|>. For this purpose, CF uses only a rating matrix $R$, where rows correspond to users, columns correspond to items, and each cell holds the rating value $r_{uv}$ given by user $u$ to item $v$. Thus, the recommendation problem lies in predicting the missing ratings of $R$, which is often very sparse.
Interestingly, although there are many Supervised Learning (SL) algorithms for data classification and regression, these are not directly applicable to CF, since ratings are not represented in a shared vector space $\mathbb{R}^{d}$. This happens because most users do not consume the same items, which prevents their representation in the same vector space $\mathbb{R}^{d}$. Consequently, the CF problem is slightly different from SL.
To overcome this issue, Braida et al. propose to build a vector space of latent factors to represent all item ratings given by users, and then apply SL techniques to predict unknown ratings. The authors use Singular Value Decomposition (SVD) to obtain user and item latent factors, and then build a vector space which contains all item ratings given by users. Their scheme consistently outperforms many state-of-the-art algorithms <|cite_start|> (Reference: Transforming collaborative filtering into supervised learning: ) <|cite_end|>.
Sarwar et al. also apply SVD on the rating matrix to reduce its dimensionality and transform it in a new feature vector space. Thus, predictions are generated by operations between latent factor matrices of users and items <|cite_start|> (Reference: Application of dimensionality reduction in recommender system--a case study: Abstract : We investigate the use of dimensionality reduction to improve performance for a new class of data analysis software called "recommender systems" Recommender systems apply knowledge discovery techniques to the problem of making product recommendations during a live customer interaction. These systems are achieving widespread success in E-commerce nowadays, especially with the advent of the Internet. The tremendous growth of customers and products poses three key challenges for recommender systems in the E-commerce domain. These are: producing high quality recommendations, performing many recommendations per second for millions of customers and products, and achieving high coverage in the face of data sparsity. One successful recommender system technology is collaborative filtering, which works by matching customer preferences to other customers in making recommendations. Collaborative filtering has been shown to produce high quality recommendations, but the performance degrades with the number of customers and products. New recommender system technologies are needed that can quickly produce high quality recommendations, even for very largescale problems. This paper presents two different experiments where we have explored one technology called Singular Value Decomposition (SVD) to reduce the dimensionality of recommender system databases. Each experiment compares the quality of a recommender system using SVD with the quality of a recommender system using collaborative filtering. The first experiment compares the effectiveness of the two recommender systems at predicting consumer preferences based on a database of explicit ratings of products. The second experiment compares the effectiveness of the two recommender systems at producing Top-N lists based on a real-life customer purchase database from an E-Commerce site. Our experience suggests that SVD has the potential to meet many of the challenges of recommender systems, under certain conditions.) <|cite_end|>.
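A minimal sketch of this kind of SVD-based reduction (our own toy version with naive mean imputation of missing entries; the cited works use more careful procedures) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
R = rng.integers(1, 6, size=(500, 800)) * (rng.random((500, 800)) < 0.05)  # 0 = missing

row_means = R.sum(axis=1) / np.maximum((R > 0).sum(axis=1), 1)
R_filled = np.where(R > 0, R, row_means[:, None])        # naive mean imputation

U, s, Vt = np.linalg.svd(R_filled, full_matrices=False)
f = 20                                                    # number of latent factors
R_hat = (U[:, :f] * s[:f]) @ Vt[:f, :]                    # rank-f approximation -> predictions
\end{verbatim}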
Generally, dimensionality reduction techniques based on Matrix Factorization (MF) for CF are more efficient than other techniques, for instance Regularized SVD <|cite_start|> (Reference: Improving regularized singular value decomposition for collaborative filtering: A key part of a recommender system is a collaborative filtering algorithm predicting users’ preferences for items. In this paper we describe different efficient collaborative filtering techniques and a framework for combining them to obtain a good prediction. The methods described in this paper are the most important parts of a solution predicting users’ preferences for movies with error rate 7.04% better on the Netflix Prize dataset than the reference algorithm Netflix Cinematch. The set of predictors used includes algorithms suggested by Netflix Prize contestants: regularized singular value decomposition of data with missing values, K-means, postprocessing SVD with KNN. We propose extending the set of predictors with the following methods: addition of biases to the regularized SVD, postprocessing SVD with kernel ridge regression, using a separate linear model for each movie, and using methods similar to the regularized SVD, but with fewer parameters. All predictors and selected 2-way interactions between them are combined using linear regression on a holdout set.) <|cite_end|>, Improved Regularized SVD <|cite_start|> (Reference: Improving regularized singular value decomposition for collaborative filtering: A key part of a recommender system is a collaborative filtering algorithm predicting users’ preferences for items. In this paper we describe different efficient collaborative filtering techniques and a framework for combining them to obtain a good prediction. The methods described in this paper are the most important parts of a solution predicting users’ preferences for movies with error rate 7.04% better on the Netflix Prize dataset than the reference algorithm Netflix Cinematch. The set of predictors used includes algorithms suggested by Netflix Prize contestants: regularized singular value decomposition of data with missing values, K-means, postprocessing SVD with KNN. We propose extending the set of predictors with the following methods: addition of biases to the regularized SVD, postprocessing SVD with kernel ridge regression, using a separate linear model for each movie, and using methods similar to the regularized SVD, but with fewer parameters. All predictors and selected 2-way interactions between them are combined using linear regression on a holdout set.) <|cite_end|>, Probabilistic MF <|cite_start|> (Reference: Probabilistic {Matrix} {Factorization}: Many existing approaches to collaborative filtering can neither handle very large datasets nor easily deal with users who have very few ratings. In this paper we present the Probabilistic Matrix Factorization (PMF) model which scales linearly with the number of observations and, more importantly, performs well on the large, sparse, and very imbalanced Netflix dataset. We further extend the PMF model to include an adaptive prior on the model parameters and show how the model capacity can be controlled automatically. Finally, we introduce a constrained version of the PMF model that is based on the assumption that users who have rated similar sets of movies are likely to have similar preferences. The resulting model is able to generalize considerably better for users with very few ratings. 
When the predictions of multiple PMF models are linearly combined with the predictions of Restricted Boltzmann Machines models, we achieve an error rate of 0.8861, that is nearly 7% better than the score of Netflix's own system.) <|cite_end|> and Bayesian Probabilistic MF <|cite_start|> (Reference: Bayesian probabilistic matrix factorization using {Markov chain Monte Carlo: Low-rank matrix approximation methods provide one of the simplest and most effective approaches to collaborative filtering. Such models are usually fitted to data by finding a MAP estimate of the model parameters, a procedure that can be performed efficiently even on very large datasets. However, unless the regularization parameters are tuned carefully, this approach is prone to overfitting because it finds a single point estimate of the parameters. In this paper we present a fully Bayesian treatment of the Probabilistic Matrix Factorization (PMF) model in which model capacity is controlled automatically by integrating over all model parameters and hyperparameters. We show that Bayesian PMF models can be efficiently trained using Markov chain Monte Carlo methods by applying them to the Netflix dataset, which consists of over 100 million movie ratings. The resulting models achieve significantly higher prediction accuracy than PMF models trained using MAP estimation.) <|cite_end|>. They have received great attention after Netflix Prize and are known as model-based CF algorithms <|cite_start|> (Reference: Empirical Analysis of Predictive Algorithms for Collaborative Filtering: Collaborative filtering or recommender systems use a database about user preferences to predict additional topics or products a new user might like. In this paper we describe several algorithms designed for this task, including techniques based on correlation coefficients, vector-based similarity calculations, and statistical Bayesian methods. We compare the predictive accuracy of the various methods in a set of representative problem domains. We use two basic classes of evaluation metrics. The first characterizes accuracy over a set of individual predictions in terms of average absolute deviation. The second estimates the utility of a ranked list of suggested items. This metric uses an estimate of the probability that a user will see a recommendation in an ordered list. Experiments were run for datasets associated with 3 application areas, 4 experimental protocols, and the 2 evaluation metrics for the various algorithms. Results indicate that for a wide range of conditions, Bayesian networks with decision trees at each node and correlation methods outperform Bayesian-clustering and vector-similarity methods. Between correlation and Bayesian networks, the preferred method depends on the nature of the dataset, nature of the application (ranked versus one-by-one presentation), and the availability of votes with which to make predictions. Other considerations include the size of database, speed of predictions, and learning time.) <|cite_end|>.
In contrast, memory-based CF algorithms are an adapted k-Nearest Neighbors (kNN) method, in which similarity is computed considering only co-rated items between users, \textit{i.e.} the similarity between users are computed only for the vectors of co-rated items <|cite_start|> (Reference: Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions: This paper presents an overview of the field of recommender systems and describes the current generation of recommendation methods that are usually classified into the following three main categories: content-based, collaborative, and hybrid recommendation approaches. This paper also describes various limitations of current recommendation methods and discusses possible extensions that can improve recommendation capabilities and make recommender systems applicable to an even broader range of applications. These extensions include, among others, an improvement of understanding of users and items, incorporation of the contextual information into the recommendation process, support for multicriteria ratings, and a provision of more flexible and less intrusive types of recommendations.) <|cite_end|>. Although model-based CF algorithms usually provide higher accuracy than the memory-based ones, the latter has been widely used <|cite_start|> (Reference: Recommender Systems for Product Bundling: ) <|cite_end|> <|cite_start|> (Reference: User-Specific Feature-Based Similarity Models for Top-n Recommendation of New Items: Recommending new items for suitable users is an important yet challenging problem due to the lack of preference history for the new items. Noncollaborative user modeling techniques that rely on the item features can be used to recommend new items. However, they only use the past preferences of each user to provide recommendations for that user. They do not utilize information from the past preferences of other users, which can potentially be ignoring useful information. More recent factor models transfer knowledge across users using their preference information in order to provide more accurate recommendations. These methods learn a low-rank approximation for the preference matrix, which can lead to loss of information. Moreover, they might not be able to learn useful patterns given very sparse datasets. In this work, we present UFSM, a method for top-n recommendation of new items given binary user preferences. UFSM learns User-specific Feature-based item-Similarity Models, and its strength lies in combining two points: (1) exploiting preference information across all users to learn multiple global item similarity functions and (2) learning user-specific weights that determine the contribution of each global similarity function in generating recommendations for each user. UFSM can be considered as a sparse high-dimensional factor model where the previous preferences of each user are incorporated within his or her latent representation. This way, UFSM combines the merits of item similarity models that capture local relations among items and factor models that learn global preference patterns. A comprehensive set of experiments was conduced to compare UFSM against state-of-the-art collaborative factor models and noncollaborative user modeling techniques. Results show that UFSM outperforms other techniques in terms of recommendation quality. UFSM manages to yield better recommendations even with very sparse datasets. 
Results also show that UFSM can efficiently handle high-dimensional as well as low-dimensional item feature spaces.) <|cite_end|> <|cite_start|> (Reference: Ranking-order case-based reasoning for financial distress prediction: ) <|cite_end|> <|cite_start|> (Reference: CenKNN: a scalable and effective text classifier: ) <|cite_end|> <|cite_start|> (Reference: Promoting the performance of vertical recommendation systems by applying new classification techniques: ) <|cite_end|>. This is due to its simplicity in providing an elegant way for integrating information of users and items beyond the ratings for refining similarities. Additionally, memory-based algorithms allow \textit{online} recommendations, making up-to-date predictions on the fly, which avoids to re-optimize from scratch with each new piece of data <|cite_start|> (Reference: Google news personalization: scalable online collaborative filtering: Several approaches to collaborative filtering have been studied but seldom have studies been reported for large (several millionusers and items) and dynamic (the underlying item set is continually changing) settings. In this paper we describe our approach to collaborative filtering for generating personalized recommendations for users of Google News. We generate recommendations using three approaches: collaborative filtering using MinHash clustering, Probabilistic Latent Semantic Indexing (PLSI), and covisitation counts. We combine recommendations from different algorithms using a linear model. Our approach is content agnostic and consequently domain independent, making it easily adaptable for other applications and languages with minimal effort. This paper will describe our algorithms and system setup in detail, and report results of running the recommendations engine on Google News.) <|cite_end|>. For these reasons, many authors seek to improve memory-based CF accuracy and performance, for example in <|cite_start|> (Reference: A similarity metric designed to speed up, using hardware, the recommender systems k-nearest neighbors algorithm: ) <|cite_end|> <|cite_start|> (Reference: A novel two-level nearest neighbor classification algorithm using an adaptive distance metric: ) <|cite_end|> <|cite_start|> (Reference: Boosting the K-Nearest-Neighborhood based incremental collaborative filtering: ) <|cite_end|>.
A well-known problem of memory-based CF algorithms lies in applying distance functions to users in order to calculate their similarities, which is computationally expensive. Often, the algorithm runtime increases with the number of users/items, making it prohibitive to apply these algorithms to very large databases. Furthermore, finding a sub-matrix of $R$ which contains all users and is not empty might be impossible due to data sparsity, \textit{i.e.} it is difficult to find an item vector subspace in which all users are represented.
To tackle these issues, we propose a method to reduce the size of the rating matrix via landmarks. It consists in selecting $n$ users as landmarks, and then representing all users by their similarities to these landmarks. Thus, instead of representing users in the item vector space, we propose to locate users in a landmark vector space whose dimensionality is much smaller.
The landmark technique is useful for improving algorithm runtime and was proposed by Silva and Tenenbaum in the Multidimensional Scaling (MDS) context <|cite_start|> (Reference: Global versus Local Methods in Nonlinear Dimensionality Reduction: Recently proposed algorithms for nonlinear dimensionality reduction fall broadly into two categories which have different advantages and disadvantages: global (Isomap [1]), and local (Locally Linear Embedding [2], Laplacian Eigenmaps [3]). We present two variants of Isomap which combine the advantages of the global approach with what have previously been exclusive advantages of local methods: computational sparsity and the ability to invert conformal maps.) <|cite_end|>. In this case, the authors propose a Landmark MDS (LMDS) algorithm, which uses landmarks to reduce the computational costs of traditional MDS. LMDS builds a landmark set by selecting a few observations from the data -- the landmark set represents all observations. Then, it computes the similarity matrix for this set to obtain a suitable landmark representation in a d-dimensional vector space. Finally, the other observations are mapped to this new space, considering their similarities to the landmarks.
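The following sketch outlines the LMDS pipeline on synthetic data (our own condensed implementation; landmark selection here is random and the embedding dimension is fixed by hand):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 20))          # toy high-dimensional data
n_land, dim = 30, 2                          # number of landmarks, target dimension

land = rng.choice(len(X), n_land, replace=False)
D2_ll = ((X[land, None, :] - X[None, land, :]) ** 2).sum(-1)  # squared landmark distances

# Classical MDS on the landmark set (double centering + eigendecomposition)
J = np.eye(n_land) - np.ones((n_land, n_land)) / n_land
B = -0.5 * J @ D2_ll @ J
evals, evecs = np.linalg.eigh(B)
top = np.argsort(evals)[::-1][:dim]
L_emb = evecs[:, top] * np.sqrt(evals[top])  # embedding of the landmarks (n_land x dim)

# Distance-based triangulation of every observation onto the landmark embedding
D2_all = ((X[:, None, :] - X[None, land, :]) ** 2).sum(-1)    # squared distances to landmarks
pseudo = evecs[:, top] / np.sqrt(evals[top])                  # pseudo-inverse factor
Y = -0.5 * (D2_all - D2_ll.mean(axis=0)) @ pseudo             # embedding of all points
# Y[land] approximately recovers L_emb.
\end{verbatim}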
The main advantage of using LMDS instead of other techniques is to adjust accuracy and runtime. If one needs to decrease runtime, it is possible to sacrifice accuracy by reducing the size of the landmark set. Otherwise, if one needs to improve the algorithm's accuracy, it is also possible to increase the number of landmarks up to the database limit. Therefore, a good LMDS characteristic is to manage this trade off between runtime and accuracy <|cite_start|> (Reference: Fast embedding of sparse music similarity graphs: This paper applies fast sparse multidimensional scaling (MDS) to a large graph of music similarity, with 267K vertices that represent artists, albums, and tracks; and 3.22M edges that represent similarity between those entities. Once vertices are assigned locations in a Euclidean space, the locations can be used to browse music and to generate playlists.
MDS on very large sparse graphs can be effectively performed by a family of algorithms called Rectangular Dijsktra (RD) MDS algorithms. These RD algorithms operate on a dense rectangular slice of the distance matrix, created by calling Dijsktra a constant number of times. Two RD algorithms are compared: Landmark MDS, which uses the Nystrom approximation to perform MDS; and a new algorithm called Fast Sparse Embedding, which uses FastMap. These algorithms compare favorably to Laplacian Eigenmaps, both in terms of speed and embedding quality.) <|cite_end|>.
Lee and Choi <|cite_start|> (Reference: Landmark MDS ensemble: ) <|cite_end|> argue that noise in the database harms LMDS accuracy, and propose an adaptation of this algorithm, namely Landmark MDS Ensemble (LMDS Ensemble). They propose applying LMDS to different data partitions, and then combining the individual solutions in the same coordinate system. Their algorithm is less noise-sensitive but maintains the computational performance of LMDS.
Another pitfall of landmark approach is to choose the most representative observation as landmarks, once the data representation depends on the similarity to these points. Several selection strategies are proposed in literature <|cite_start|> (Reference: Improved nonlinear manifold learning for land cover classification via intelligent landmark selection: Nonlinear manifold learning algorithms, mainly isometric feature mapping (Isomap) and local linear embedding (LLE), determine the low-dimensional embedding of the original high dimensional data by finding the geometric distances between samples. Researchers in the remote sensing community have successfully applied Isomap to hyperspectral data to extract useful information. Although results are promising, computational requirements of the local search process are exhorbitant. Landmark-Isomap, which utilizes randomly selected sample points to perform the search, mitigates these problems, but samples of some classes are located in spatially disjointed clusters in the embedded space. We propose an alternative approach to selecting landmark points which focuses on the boundaries of the clusters, rather than randomly selected points or cluster centers. The unique Isomap is evaluated by SStress, a good- of-fit measure, and reconstructed with reduced computation, which makes implementation with other classifiers plausible for large data sets. The new method is implemented and applied to Hyperion hyperspectral data collected over the Okavango Delta of Botswana.) <|cite_end|> <|cite_start|> (Reference: Selection of landmark points on nonlinear manifolds for spectral unmixing using local homogeneity: Endmember extraction and unmixing methods that exploit nonlinearity in hyperspectral data are receiving increased attention, but they have significant challenges. Global feature extraction methods such as isometric feature mapping have significant computational overhead, which is often addressed for the classification problem via landmark-based methods. Because landmark approaches are approximation methods, experimental results are often highly variable. We propose a new robust landmark selection method for the purpose of pixel unmixing that exploits spectral and spatial homogeneity in a local window kernel. We compare the performance of the method to several landmark selection methods in terms of reconstruction error and processing time.) <|cite_end|> <|cite_start|> (Reference: Active landmark sampling for manifold learning based spectral unmixing: Nonlinear manifold learning based spectral unmixing provides an alternative to direct nonlinear unmixing methods for accommodating nonlinearities inherent in hyperspectral data. Although manifolds can effectively capture nonlinear features in the dimensionality reduction stage of unmixing, the computational overhead is excessive for large remotely sensed data sets. Manifold approximation using a set of distinguishing points is commonly utilized to mitigate the computational burden, but selection of these landmark points is important for adequately representing the topology of the manifold. This study proposes an active landmark sampling framework for manifold learning based spectral unmixing using a small initial landmark set and a computationally efficient backbone-based strategy for constructing the manifold. The active landmark sampling strategy selects the best additional landmarks to develop a more representative manifold and to increase unmixing accuracy.) 
<|cite_end|> <|cite_start|> (Reference: An improved set covering problem for Isomap supervised landmark selection: ) <|cite_end|> <|cite_start|> (Reference: A novel landmark point selection method for l-isomap: Isometric feature mapping (ISOMAP) presents remarkable performance for nonlinear dimensionality reduction in diversified research domains. Landmark-ISOMAP(L-ISOMAP) has been proposed to improve the scalability of ISOMAP by performing the most complicated computations on a subset of points referred as to landmarks. In this paper, we present a novel landmark point selection method for L-ISOMAP. The approach first attempts to find a minimum set cover of the neighbourhood sets and get the corresponding data points, referred as to landmark candidate points. After that, it removes the points which belong to neighbour sets of other points from the candidate point set and then the remaining candidate points are the landmarks. We run several experiments on synthetic and physical data sets and the experiment results validate the effectiveness of our proposed method.) <|cite_end|> <|cite_start|> (Reference: A landmark selection method for l-isomap based on greedy algorithm and its application: Isometric feature mapping (Isomap) is a widely-used nonlinear dimensionality reduction method, but it suffers from high computational complexity. L-Isomap is a variant of Isomap which is faster than Isomap. In this algorithm, a subset of points are chosen out of the total data points as landmark points so as to simplify the embedding computation. In this paper, we propose a novel landmark selection method for L-Isomap based on a greedy algorithm. Experiments performed on synthetic and physical data sets validate the effectiveness of the proposed method. Internet traffic matrix has been an effective model to analyzing the Internet. However, the Internet traffic matrix data usually possesses high dimensionality. In this paper, we apply the improved L-Isomap to the real Internet traffic matrix data to investigate its low-dimensional features. The experiment results show that the Internet traffic matrix has a small intrinsic dimension and there indeed exists a low-dimensional manifold structure.) <|cite_end|> <|cite_start|> (Reference: Selecting landmark points for sparse manifold learning: There has been a surge of interest in learning non-linear manifold models to approximate high-dimensional data. Both for computational complexity reasons and for generalization capability, sparsity is a desired feature in such models. This usually means dimensionality reduction, which naturally implies estimating the intrinsic dimension, but it can also mean selecting a subset of the data to use as landmarks, which is especially important because many existing algorithms have quadratic complexity in the number of observations. This paper presents an algorithm for selecting landmarks, based on LASSO regression, which is well known to favor sparse approximations because it uses regularization with an l1 norm. As an added benefit, a continuous manifold parameterization, based on the landmarks, is also found. Experimental results with synthetic and real data illustrate the algorithm.) 
<|cite_end|>, most of them related to select landmarks for Landmark Isomap, which is a nonlinear reduction method variation to improve scalability <|cite_start|> (Reference: Nonlinear subspace clustering using curvature constrained distances: ) <|cite_end|> <|cite_start|> (Reference: Robust Positive semidefinite L-Isomap Ensemble: ) <|cite_end|> <|cite_start|> (Reference: Global versus Local Methods in Nonlinear Dimensionality Reduction: Recently proposed algorithms for nonlinear dimensionality reduction fall broadly into two categories which have different advantages and disadvantages: global (Isomap [1]), and local (Locally Linear Embedding [2], Laplacian Eigenmaps [3]). We present two variants of Isomap which combine the advantages of the global approach with what have previously been exclusive advantages of local methods: computational sparsity and the ability to invert conformal maps.) <|cite_end|> <|cite_start|> (Reference: UL-Isomap based nonlinear dimensionality reduction for hyperspectral imagery classification: ) <|cite_end|>.
Finally, Hu et al. <|cite_start|> (Reference: An incremental dimensionality reduction method on discriminant information for pattern classification: ) <|cite_end|> tackle the problem of applying Linear Discriminant Analysis (LDA) to databases where the number of samples is smaller than the data dimensionality. They propose combining MDS and LDA in an algorithm named Discriminant Multidimensional Mapping (DMM), and also employ landmarks in DMM (LDMM) to improve scalability and make it feasible for very large databases. <|paper_end|>
"<|reference_start|> Fast embedding of sparse music similarity graphs: This paper applies fast sparse multidimensional scaling (MDS) to a large graph of music similarity, with 267K vertices that represent artists, albums, and tracks; and 3.22M edges that represent similarity between those entities. Once vertices are assigned locations in a Euclidean space, the locations can be used to browse music and to generate playlists. \n \nMDS on very large sparse graphs can be effectively performed by a family of algorithms called Rectangular Dijsktra (RD) MDS algorithms. These RD algorithms operate on a dense rectangular slice of the distance matrix, created by calling Dijsktra a constant number of times. Two RD algorithms are compared: Landmark MDS, which uses the Nystrom approximation to perform MDS; and a new algorithm called Fast Sparse Embedding, which uses FastMap. These algorithms compare favorably to Laplacian Eigenmaps, both in terms of speed and embedding quality. <|reference_end|>",
"<|reference_start|> Active landmark sampling for manifold learning based spectral unmixing: Nonlinear manifold learning based spectral unmixing provides an alternative to direct nonlinear unmixing methods for accommodating nonlinearities inherent in hyperspectral data. Although manifolds can effectively capture nonlinear features in the dimensionality reduction stage of unmixing, the computational overhead is excessive for large remotely sensed data sets. Manifold approximation using a set of distinguishing points is commonly utilized to mitigate the computational burden, but selection of these landmark points is important for adequately representing the topology of the manifold. This study proposes an active landmark sampling framework for manifold learning based spectral unmixing using a small initial landmark set and a computationally efficient backbone-based strategy for constructing the manifold. The active landmark sampling strategy selects the best additional landmarks to develop a more representative manifold and to increase unmixing accuracy. <|reference_end|>",
"<|reference_start|> Selecting landmark points for sparse manifold learning: There has been a surge of interest in learning non-linear manifold models to approximate high-dimensional data. Both for computational complexity reasons and for generalization capability, sparsity is a desired feature in such models. This usually means dimensionality reduction, which naturally implies estimating the intrinsic dimension, but it can also mean selecting a subset of the data to use as landmarks, which is especially important because many existing algorithms have quadratic complexity in the number of observations. This paper presents an algorithm for selecting landmarks, based on LASSO regression, which is well known to favor sparse approximations because it uses regularization with an l1 norm. As an added benefit, a continuous manifold parameterization, based on the landmarks, is also found. Experimental results with synthetic and real data illustrate the algorithm. <|reference_end|>",
"<|reference_start|> An incremental dimensionality reduction method on discriminant information for pattern classification: <|reference_end|>"
] | [
25,
29,
33,
38
] | {"<|cite_1|>": "ss-1704554", "<|cite_3|>": "ss-692526", "<|cite_4|>": "ss-1230149", "<|cite_5|>": "ss-1262630", "<|cite_6|>": "ss-678252", "<|cite_8|>": "ss-1051886", "<|cite_10|>": "ss-1230149", "<|cite_11|>": "ss-1704555", "<|cite_12|>": "ss-1148490", "<|cite_13|>": "ss-1266104", "<|cite_14|>": "ss-1266104", "<|cite_15|>": "ss-1062039", "<|cite_16|>": "ss-772742", "<|cite_17|>": "arxiv-41054", "<|cite_18|>": "ss-1230149", "<|multi_cite_19_1|>": "ss-967647", "<|multi_cite_19_2|>": "ss-1037377", "<|multi_cite_19_3|>": "ss-1110689", "<|multi_cite_19_4|>": "ss-1704556", "<|multi_cite_19_5|>": "ss-1704557", "<|cite_21|>": "ss-1051886", "<|multi_cite_22_1|>": "ss-1704558", "<|multi_cite_22_2|>": "ss-1704559", "<|multi_cite_22_3|>": "ss-1704560", "<|cite_23|>": "ss-805978", "<|cite_25|>": "ss-1704561", "<|cite_26|>": "ss-1704562", "<|multi_cite_27_1|>": "ss-1704563", "<|multi_cite_27_2|>": "ss-1704564", "<|multi_cite_27_3|>": "ss-1704565", "<|multi_cite_27_4|>": "ss-1704566", "<|multi_cite_27_5|>": "ss-1704567", "<|multi_cite_27_6|>": "ss-1704568", "<|multi_cite_27_7|>": "ss-1066747", "<|multi_cite_28_1|>": "ss-1704569", "<|multi_cite_28_2|>": "ss-1704570", "<|multi_cite_28_3|>": "ss-805978", "<|multi_cite_28_4|>": "ss-1124869", "<|cite_29|>": "ss-1704571"} |
2209.13822 | <|paper_start|> Title: TokenFlow: Rethinking Fine-grained Cross-modal Alignment in Vision-Language Retrieval
Abstract: TokenFlow: Rethinking Fine-grained Cross-modal Alignment in Vision-Language Retrieval: Most existing methods in vision-language retrieval match two modalities by either comparing their global feature vectors, which misses sufficient information and lacks interpretability; detecting objects in images or videos and aligning the text with fine-grained features, which relies on complicated model designs; or modeling fine-grained interaction via cross-attention upon visual and textual tokens, which suffers from inferior efficiency. To address these limitations, some recent works simply aggregate the token-wise similarities to achieve fine-grained alignment, but they lack intuitive explanations and neglect the relationships between token-level features and global representations with high-level semantics. In this work, we rethink fine-grained cross-modal alignment and devise a new model-agnostic formulation for it. We additionally demystify the recent popular works and subsume them into our scheme. Furthermore, inspired by optimal transport theory, we introduce TokenFlow, an instantiation of the proposed scheme. By modifying only the similarity function, the performance of our method is comparable to that of SoTA algorithms with heavy model designs on major video-text retrieval benchmarks. The visualization further indicates that TokenFlow successfully leverages the fine-grained information and achieves better interpretability.
Introduction
\begin{figure}
\centering
\includegraphics[width=0.34\textwidth]{imgs/compare.pdf}
\caption{A comparison of (a) the coarse-grained methods aligning global representations, (b) the methods relying on object detectors, (c) the methods using cross-attention layers for cross-modal interaction, and (d) our \emph{TokenFlow}.}
\label{fig:compare}
\end{figure}
Cross-modal retrieval between images (or videos) and text has become a fundamental downstream task for vision-language understanding, which aims at retrieving semantically similar images or videos for a given textual query. With the rapid growth of multimedia data on the internet, vision-language retrieval has attracted increasing attention and poses great challenges, since both visual media and text contain rich and structured details.
A variety of methods have been proposed and have shown strong superiority in learning similarities between generalizable visual and textual representations across many benchmarks. The main idea of them is to encode visual and textual inputs into the shared feature space, followed by the cross-modal alignment with global features <|cite_start|> (Reference: A Joint Sequence Fusion Model for Video Question Answering and Retrieval: We present an approach named JSFusion (Joint Sequence Fusion) that can measure semantic similarity between any pairs of multimodal sequence data (e.g. a video clip and a language sentence). Our multimodal matching network consists of two key components. First, the Joint Semantic Tensor composes a dense pairwise representation of two sequence data into a 3D tensor. Then, the Convolutional Hierarchical Decoder computes their similarity score by discovering hidden hierarchical matches between the two sequence modalities. Both modules leverage hierarchical attention mechanisms that learn to promote well-matched representation patterns while prune out misaligned ones in a bottom-up manner. Although the JSFusion is a universal model to be applicable to any multimodal sequence data, this work focuses on video-language tasks including multimodal retrieval and video QA. We evaluate the JSFusion model in three retrieval and VQA tasks in LSMDC, for which our model achieves the best performance reported so far. We also perform multiple-choice and movie retrieval tasks for the MSR-VTT dataset, on which our approach outperforms many state-of-the-art methods.) <|cite_end|> <|cite_start|> (Reference: Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval: Our objective in this work is video-text retrieval - in particular a joint embedding that enables efficient text-to-video retrieval. The challenges in this area include the design of the visual architecture and the nature of the training data, in that the available large scale video-text training datasets, such as HowTo100M, are noisy and hence competitive performance is achieved only at scale through large amounts of compute. We address both these challenges in this paper. We propose an end-to-end trainable model that is designed to take advantage of both large-scale image and video captioning datasets. Our model is an adaptation and extension of the recent ViT and Timesformer architectures, and consists of attention in both space and time. The model is flexible and can be trained on both image and video text datasets, either independently or in conjunction. It is trained with a curriculum learning schedule that begins by treating images as 'frozen' snapshots of video, and then gradually learns to attend to increasing temporal context when trained on video datasets. We also provide a new video-text pretraining dataset WebVid-2M, comprised of over two million videos with weak captions scraped from the internet. Despite training on datasets that are an order of magnitude smaller, we show that this approach yields state-of-the-art results on standard downstream video-retrieval benchmarks including MSR-VTT, MSVD, DiDeMo and LSMDC.) <|cite_end|> <|cite_start|> (Reference: CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval: Video-text retrieval plays an essential role in multi-modal research and has been widely used in many real-world web applications. 
The CLIP (Contrastive Language-Image Pre-training), an image-language pre-training model, has demonstrated the power of visual concepts learning from web collected image-text datasets. In this paper, we propose a CLIP4Clip model to transfer the knowledge of the CLIP model to video-language retrieval in an end-to-end manner. Several questions are investigated via empirical studies: 1) Whether image feature is enough for video-text retrieval? 2) How a post-pretraining on a large-scale video-text dataset based on the CLIP affect the performance? 3) What is the practical mechanism to model temporal dependency between video frames? And 4) The Hyper-parameters sensitivity of the model on video-text retrieval task. Extensive experimental results present that the CLIP4Clip model transferred from the CLIP can achieve SOTA results on various video-text retrieval datasets, including MSR-VTT, MSVC, LSMDC, ActivityNet, and DiDeMo. We release our code at https://github.com/ArrowLuo/CLIP4Clip.) <|cite_end|>. Despite the superior performance in matching visual and textual features, such a line of work lacks the ability to leverage fine-level information like the relationship between visual objects and textual words.
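To make the coarse-grained alternative concrete, the sketch below illustrates the generic recipe these methods share: encode each modality into a single global vector, compare the vectors by cosine similarity, and train with a symmetric contrastive objective. It is a minimal illustration with our own variable names and a standard InfoNCE-style loss, not the code of any particular cited model.
\begin{verbatim}
import numpy as np

def logsumexp(x, axis):
    m = x.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

def global_contrastive_loss(visual_global, text_global, temperature=0.07):
    """Coarse-grained alignment: cosine similarity between global embeddings,
    trained with a symmetric InfoNCE objective (matched pairs on the diagonal)."""
    v = visual_global / np.linalg.norm(visual_global, axis=-1, keepdims=True)
    t = text_global / np.linalg.norm(text_global, axis=-1, keepdims=True)
    logits = (v @ t.T) / temperature                 # (batch, batch) similarities
    diag = np.arange(len(v))
    log_p_v2t = logits - logsumexp(logits, axis=1)   # visual -> text direction
    log_p_t2v = logits - logsumexp(logits, axis=0)   # text -> visual direction
    return -(log_p_v2t[diag, diag].mean() + log_p_t2v[diag, diag].mean()) / 2

# toy usage: a batch of 4 pairs with 512-d global features
rng = np.random.default_rng(0)
print(global_contrastive_loss(rng.normal(size=(4, 512)), rng.normal(size=(4, 512))))
\end{verbatim}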
To model fine-grained cross-modal interaction, existing methods fall into three kinds of approaches, as illustrated in Figure \ref{fig:compare}. 1) Some of them utilize pre-trained object detectors to extract region-based visual features and then fuse them with text embeddings for cross-modal training <|cite_start|> (Reference: Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks: Large-scale pre-training methods of learning cross-modal representations on image-text pairs are becoming popular for vision-language tasks. While existing methods simply concatenate image region features and text features as input to the model to be pre-trained and use self-attention to learn image-text semantic alignments in a brute force manner, in this paper, we propose a new learning method Oscar (Object-Semantics Aligned Pre-training), which uses object tags detected in images as anchor points to significantly ease the learning of alignments. Our method is motivated by the observation that the salient objects in an image can be accurately detected, and are often mentioned in the paired text. We pre-train an Oscar model on the public corpus of 6.5 million text-image pairs, and fine-tune it on downstream tasks, creating new state-of-the-arts on six well-established vision-language understanding and generation tasks.) <|cite_end|> <|cite_start|> (Reference: Large-Scale Adversarial Training for Vision-and-Language Representation Learning: We present VILLA, the first known effort on large-scale adversarial training for vision-and-language (V+L) representation learning. VILLA consists of two training stages: (i) task-agnostic adversarial pre-training; followed by (ii) task-specific adversarial finetuning. Instead of adding adversarial perturbations on image pixels and textual tokens, we propose to perform adversarial training in the embedding space of each modality. To enable large-scale training, we adopt the "free" adversarial training strategy, and combine it with KL-divergence-based regularization to promote higher invariance in the embedding space. We apply VILLA to current best-performing V+L models, and achieve new state of the art on a wide range of tasks, including Visual Question Answering, Visual Commonsense Reasoning, Image-Text Retrieval, Referring Expression Comprehension, Visual Entailment, and NLVR2.) <|cite_end|> <|cite_start|> (Reference: UNITER: UNiversal Image-TExt Representation Learning: Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodality inputs are simultaneously processed for joint visual and textual understanding. In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. We design four pre-training tasks: Masked Language Modeling (MLM), Masked Region Modeling (MRM, with three variants), Image-Text Matching (ITM), and Word-Region Alignment (WRA). Different from previous work that applies joint random masking to both modalities, we use conditional masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text). In addition to ITM for global image-text alignment, we also propose WRA via the use of Optimal Transport (OT) to explicitly encourage fine-grained alignment between words and image regions during pre-training. 
Comprehensive analysis shows that both conditional masking and OT-based WRA contribute to better pre-training. We also conduct a thorough ablation study to find an optimal combination of pre-training tasks. Extensive experiments show that UNITER achieves new state of the art across six V+L tasks (over nine datasets), including Visual Question Answering, Image-Text Retrieval, Referring Expression Comprehension, Visual Commonsense Reasoning, Visual Entailment, and NLVR$^2$. Code is available at https://github.com/ChenRocks/UNITER.) <|cite_end|>. These works usually suffer from time-consuming region features extracting stage and require complicated architecture designs and training processes. Moreover, their ability may be limited when the object detection model fails to capture certain important information in the downstream tasks. 2) Some other works investigate fine-grained cross-modal interaction methods based on different attention mechanisms, to align the semantic space between token-wise or patch-wise representations from both modalities <|cite_start|> (Reference: Align before Fuse: Vision and Language Representation Learning with Momentum Distillation: Large-scale vision and language representation learning has shown promising improvements on various vision-language tasks. Most existing methods employ a transformer-based multimodal encoder to jointly model visual tokens (region-based image features) and word tokens. Because the visual tokens and word tokens are unaligned, it is challenging for the multimodal encoder to learn image-text interactions. In this paper, we introduce a contrastive loss to ALign the image and text representations BEfore Fusing (ALBEF) them through cross-modal attention, which enables more grounded vision and language representation learning. Unlike most existing methods, our method does not require bounding box annotations nor high-resolution images. In order to improve learning from noisy web data, we propose momentum distillation, a self-training method which learns from pseudo-targets produced by a momentum model. We provide a theoretical analysis of ALBEF from a mutual information maximization perspective, showing that different training tasks can be interpreted as different ways to generate views for an image-text pair. ALBEF achieves state-of-the-art performance on multiple downstream vision-language tasks. On image-text retrieval, ALBEF outperforms methods that are pre-trained on orders of magnitude larger datasets. On VQA and NLVR$^2$, ALBEF achieves absolute improvements of 2.37% and 3.84% compared to the state-of-the-art, while enjoying faster inference speed. Code and pre-trained models are available at https://github.com/salesforce/ALBEF/.) <|cite_end|> <|cite_start|> (Reference: ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision: Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision (e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive power of the visual embedder and its predefined visual vocabulary. 
In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet with competitive or better downstream task performance. Our code and pre-trained weights are available at https://github.com/dandelin/vilt.) <|cite_end|>. This line of work usually requires the cross-attention to be performed in an encoder-decoder architecture in both the training and inference stage and thus becomes less efficient in practice. 3) A few works achieve fine-grained cross-modal interaction by leveraging token-wise or region-word similarities in the contrastive loss, instead of using cross-attention <|cite_start|> (Reference: Stacked Cross Attention for Image-Text Matching: In this paper, we study the problem of image-text matching. Inferring the latent semantic alignment between objects or other salient stuff (e.g. snow, sky, lawn) and the corresponding words in sentences allows to capture fine-grained interplay between vision and language, and makes image-text matching more interpretable. Prior work either simply aggregates the similarity of all possible pairs of regions and words without attending differentially to more and less important words or regions, or uses a multi-step attentional process to capture limited number of semantic alignments which is less interpretable. In this paper, we present Stacked Cross Attention to discover the full latent alignments using both image regions and words in a sentence as context and infer image-text similarity. Our approach achieves the state-of-the-art results on the MS-COCO and Flickr30K datasets. On Flickr30K, our approach outperforms the current best methods by 22.1% relatively in text retrieval from image query, and 18.2% relatively in image retrieval with text query (based on Recall@1). On MS-COCO, our approach improves sentence retrieval by 17.8% relatively and image retrieval by 16.6% relatively (based on Recall@1 using the 5K test set). Code has been made available at: https://github.com/kuanghuei/SCAN.) <|cite_end|> <|cite_start|> (Reference: Fine-grained Visual Textual Alignment for Cross-Modal Retrieval using Transformer Encoders: Despite the evolution of deep-learning-based visual-textual processing systems, precise multi-modal matching remains a challenging task. In this work, we tackle the task of cross-modal retrieval through image-sentence matching based on word-region alignments, using supervision only at the global image-sentence level. Specifically, we present a novel approach called Transformer Encoder Reasoning and Alignment Network (TERAN). TERAN enforces a fine-grained match between the underlying components of images and sentences, i.e., image regions and words, respectively, in order to preserve the informative richness of both modalities. TERAN obtains state-of-the-art results on the image retrieval task on both MS-COCO and Flickr30k datasets. Moreover, on MS-COCO, it also outperforms current approaches on the sentence retrieval task. Focusing on scalable cross-modal information retrieval, TERAN is designed to keep the visual and textual data pipelines well separated. Cross-attention links invalidate any chance to separately extract visual and textual features needed for the online search and the offline indexing steps in large-scale retrieval systems. 
In this respect, TERAN merges the information from the two domains only during the final alignment phase, immediately before the loss computation. We argue that the fine-grained alignments produced by TERAN pave the way towards the research for effective and efficient methods for large-scale cross-modal information retrieval. We compare the effectiveness of our approach against relevant state-of-the-art methods. On the MS-COCO 1K test set, we obtain an improvement of 5.7% and 3.5% respectively on the image and the sentence retrieval tasks on the Recall@1 metric. The code used for the experiments is publicly available on GitHub at https://github.com/mesnico/TERAN.) <|cite_end|> <|cite_start|> (Reference: FILIP: Fine-grained Interactive Language-Image Pre-Training: Unsupervised large-scale vision-language pre-training has shown promising advances on various downstream tasks. Existing methods often model the cross-modal interaction either via the similarity of the global feature of each modality which misses sufficient information, or finer-grained interactions using cross/self-attention upon visual and textual tokens. However, cross/self-attention suffers from inferior efficiency in both training and inference. In this paper, we introduce a large-scale Fine-grained Interactive Language-Image Pre-training (FILIP) to achieve finer-level alignment through a cross-modal late interaction mechanism, which uses a token-wise maximum similarity between visual and textual tokens to guide the contrastive objective. FILIP successfully leverages the finer-grained expressiveness between image patches and textual words by modifying only contrastive loss, while simultaneously gaining the ability to pre-compute image and text representations offline at inference, keeping both large-scale training and inference efficient. Furthermore, we construct a new large-scale image-text pair dataset called FILIP300M for pre-training. Experiments show that FILIP achieves state-of-the-art performance on multiple downstream vision-language tasks including zero-shot image classification and image-text retrieval. The visualization on word-patch alignment further shows that FILIP can learn meaningful fine-grained features with promising localization ability.) <|cite_end|>.
Although these methods based on fine-level similarities are shown to be capable of learning fine-grained representations, they directly drop the global representations that contain sufficient information, and this makes them hard to adapt to downstream tasks with methods that use a pre-trained transformer backbone like CLIP <|cite_start|> (Reference: Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.) <|cite_end|>, whose pre-training objective is to align the global representations (classification tokens) rather than patch representations. They also neglect the fact that the relationships between single tokens and global statistics also contribute to the overall similarity. Moreover, these approaches are described in a less intuitive way that does not clearly explain how they work.
In this paper, we first rethink cross-modal fine-grained alignment and introduce a universal formulation for it. Then, we subsume the recent popular works that learn fine-grained interaction through token-wise or region-word similarities into our scheme and explain how they work in a clearer way. Furthermore, based on the proposed scheme, we model the matching problem as an optimal transport problem and define the distance between two modalities as the Earth Mover's Distance (EMD) <|cite_start|> (Reference: The Earth Mover's Distance as a Metric for Image Retrieval: ) <|cite_end|> between their structured representations. Specifically, we use the spatial cross-correlation between an image (or a video) and a text as the marginal distributions when computing the optimal transport plan, so that elements with larger weights generate more matching flows and thus contribute more to the overall similarity, which alleviates the issues mentioned above. However, solving the optimal transport problem is costly in both time and memory. Moreover, most existing EMD implementations do not guarantee correctness and convergence, which hurts model performance.
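To make the optimal transport view concrete, the sketch below computes an entropy-regularized transport plan between text tokens and visual patches with Sinkhorn iterations, using each token's cross-correlation with the other modality's mean feature as its marginal weight. It is only a rough illustration of this formulation under our own simplifying assumptions (entropic regularization, mean features as global statistics), not the exact procedure evaluated in this paper.
\begin{verbatim}
import numpy as np

def sinkhorn(cost, r, c, eps=0.05, n_iters=100):
    """Entropy-regularized OT: transport plan between marginals r and c."""
    K = np.exp(-cost / eps)                      # kernel derived from the cost matrix
    u = np.ones_like(r)
    for _ in range(n_iters):
        u = r / (K @ (c / (K.T @ u)))            # alternating marginal scaling
    v = c / (K.T @ u)
    return u[:, None] * K * v[None, :]           # transport plan

def emd_similarity(text_tokens, visual_tokens):
    """Token-level similarity read off from an optimal-transport matching."""
    t = text_tokens / np.linalg.norm(text_tokens, axis=-1, keepdims=True)
    v = visual_tokens / np.linalg.norm(visual_tokens, axis=-1, keepdims=True)
    sim = t @ v.T                                # token-wise cosine similarities
    cost = 1.0 - sim                             # turn similarity into a cost
    # marginal weights: cross-correlation with the other modality's mean feature
    r = np.maximum(t @ v.mean(axis=0), 0) + 1e-8
    c = np.maximum(v @ t.mean(axis=0), 0) + 1e-8
    r, c = r / r.sum(), c / c.sum()
    plan = sinkhorn(cost, r, c)
    return float((plan * sim).sum())             # flow-weighted overall similarity

# toy usage: 6 text tokens vs. 49 visual patches, 512-d features
rng = np.random.default_rng(0)
print(emd_similarity(rng.normal(size=(6, 512)), rng.normal(size=(49, 512))))
\end{verbatim}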
To address the aforementioned issues, inspired by optimal transport theory, we present \emph{TokenFlow}, a more efficient and effective instantiation of the proposed scheme that achieves promising performance. \emph{TokenFlow} computes a matching flow between the token-level features and decomposes the overall similarity into several token-wise similarities with different contributions. \emph{TokenFlow} relies on a very simple alignment mechanism built of dot products and summations, without complex object detectors or cross-attention layers. We conduct extensive experiments on multiple benchmarks to compare with other instantiations and demonstrate the effectiveness of our algorithm.
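As a rough sketch of what such a dot-product-and-summation matching flow could look like, the snippet below scores each text token against its best-matching visual patch and combines the per-token scores with weights derived from cross-correlation with the global visual feature. The weighting and aggregation choices here are our own illustrative assumptions, not the exact \emph{TokenFlow} similarity function.
\begin{verbatim}
import numpy as np

def tokenflow_like_similarity(text_tokens, visual_tokens):
    """Illustrative weighted token-wise aggregation (assumed design): each word
    matches its best visual patch, and the per-word scores are combined with
    weights from the word's correlation with the global visual feature."""
    t = text_tokens / np.linalg.norm(text_tokens, axis=-1, keepdims=True)
    v = visual_tokens / np.linalg.norm(visual_tokens, axis=-1, keepdims=True)
    sim = t @ v.T                                   # (num_words, num_patches)
    per_token_best = sim.max(axis=1)                # best patch for each word
    weights = np.exp(t @ v.mean(axis=0))            # cross-correlation weights
    weights = weights / weights.sum()               # normalized "flow" weights
    return float((weights * per_token_best).sum())  # weighted sum of matches

rng = np.random.default_rng(0)
text = rng.normal(size=(8, 512))      # e.g. 8 word tokens
patches = rng.normal(size=(49, 512))  # e.g. 7x7 visual patches
print(tokenflow_like_similarity(text, patches))
\end{verbatim}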
Our main contributions are summarized
as follows:
\begin{itemize}
\item We introduce a new perspective of fine-grained cross-modal alignment with a model-agnostic formulation.
\item We subsume the recent popular works into our formulation and demystify them in a clearer way.
\item We propose \emph{TokenFlow}, a novel fine-grained alignment function. Experimental results show that, by only altering the similarity function to learn fine-grained alignment, \emph{TokenFlow} achieves performance comparable to SoTA algorithms with heavy model designs on major video-text retrieval benchmarks. Visualizations further illustrate that \emph{TokenFlow} learns meaningful fine-grained representations with promising matching ability.
\end{itemize}
Related Work
\subsection{Vision-Language Retrieval}
Existing representative works on vision-language retrieval follow the trend of learning a joint embedding space to measure the distance between visual and textual representations; they can be divided into two categories: coarse-grained and fine-grained.
Coarse-grained methods typically encode images <|cite_start|> (Reference: An Empirical Study of Training End-to-End Vision-and-Language Transformers: Vision-and-language (VL) pre-training has proven to be highly effective on various VL downstream tasks. While recent work has shown that fully transformer-based VL models can be more efficient than previous region-feature-based methods, their performance on downstream tasks often degrades significantly. In this paper, we present METER, a Multimodal End-to-end TransformER framework, through which we investigate how to design and pre-train a fully transformer-based VL model in an end-to-end manner. Specifically, we dissect the model designs along multiple dimensions: vision encoders (e.g., CLIP-ViT, Swin transformer), text encoders (e.g., RoBERTa, DeBERTa), multimodal fusion module (e.g., merged attention vs. co-attention), architectural design (e.g., encoder-only vs. encoder-decoder), and pre-training objectives (e.g., masked image modeling). We conduct comprehensive experiments and provide insights on how to train a performant VL transformer. METER achieves an accuracy of 77.64% on the VQAv2 test-std set using only 4M images for pre-training, surpassing the state-of-the-art region-feature-based model by 1.04%, and outperforming the previous best fully transformer-based model by 1.6%. Notably, when further scaled up, our best VQA model achieves an accuracy of 80.54%. Code and pre-trained models are released at https://github.com/zdou0830/METER.) <|cite_end|> <|cite_start|> (Reference: Align before Fuse: Vision and Language Representation Learning with Momentum Distillation: Large-scale vision and language representation learning has shown promising improvements on various vision-language tasks. Most existing methods employ a transformer-based multimodal encoder to jointly model visual tokens (region-based image features) and word tokens. Because the visual tokens and word tokens are unaligned, it is challenging for the multimodal encoder to learn image-text interactions. In this paper, we introduce a contrastive loss to ALign the image and text representations BEfore Fusing (ALBEF) them through cross-modal attention, which enables more grounded vision and language representation learning. Unlike most existing methods, our method does not require bounding box annotations nor high-resolution images. In order to improve learning from noisy web data, we propose momentum distillation, a self-training method which learns from pseudo-targets produced by a momentum model. We provide a theoretical analysis of ALBEF from a mutual information maximization perspective, showing that different training tasks can be interpreted as different ways to generate views for an image-text pair. ALBEF achieves state-of-the-art performance on multiple downstream vision-language tasks. On image-text retrieval, ALBEF outperforms methods that are pre-trained on orders of magnitude larger datasets. On VQA and NLVR$^2$, ALBEF achieves absolute improvements of 2.37% and 3.84% compared to the state-of-the-art, while enjoying faster inference speed. Code and pre-trained models are available at https://github.com/salesforce/ALBEF/.) <|cite_end|> <|cite_start|> (Reference: Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision: Pre-trained representations are becoming crucial for many NLP and perception tasks. 
While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without expensive filtering or post-processing steps in the Conceptual Captions dataset. A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to classification tasks such as ImageNet and VTAB. The aligned visual and language representations enables zero-shot image classification and also set new state-of-the-art results on Flickr30K and MSCOCO image-text retrieval benchmarks, even when compared with more sophisticated cross-attention models. The representations also enable cross-modality search with complex text and text + image queries.) <|cite_end|> or videos <|cite_start|> (Reference: Less is More: ClipBERT for Video-and-Language Learning via Sparse Sampling: The canonical approach to video-and-language learning (e.g., video question answering) dictates a neural model to learn from offline-extracted dense video features from vision models and text features from language models. These feature extractors are trained independently and usually on tasks different from the target domains, rendering these fixed features sub-optimal for downstream tasks. Moreover, due to the high computational overload of dense video features, it is often difficult (or infeasible) to plug feature extractors directly into existing approaches for easy finetuning. To provide a remedy to this dilemma, we propose a generic framework ClipBERT that enables affordable end-to-end learning for video-and-language tasks, by employing sparse sampling, where only a single or a few sparsely sampled short clips from a video are used at each training step. Experiments on text-to-video retrieval and video question answering on six datasets demonstrate that ClipBERT outperforms (or is on par with) existing methods that exploit full-length videos, suggesting that end-to-end learning with just a few sparsely sampled clips is often more accurate than using densely extracted offline features from full-length videos, proving the proverbial less-is-more principle. Videos in the datasets are from considerably different domains and lengths, ranging from 3-second generic domain GIF videos to 180-second YouTube human activity videos, showing the generalization ability of our approach. Comprehensive ablation studies and thorough analyses are provided to dissect what factors lead to this success. 
Our code is publicly available at https://github.com/jayleicn/ClipBERT) <|cite_end|> <|cite_start|> (Reference: CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval: Video-text retrieval plays an essential role in multi-modal research and has been widely used in many real-world web applications. The CLIP (Contrastive Language-Image Pre-training), an image-language pre-training model, has demonstrated the power of visual concepts learning from web collected image-text datasets. In this paper, we propose a CLIP4Clip model to transfer the knowledge of the CLIP model to video-language retrieval in an end-to-end manner. Several questions are investigated via empirical studies: 1) Whether image feature is enough for video-text retrieval? 2) How a post-pretraining on a large-scale video-text dataset based on the CLIP affect the performance? 3) What is the practical mechanism to model temporal dependency between video frames? And 4) The Hyper-parameters sensitivity of the model on video-text retrieval task. Extensive experimental results present that the CLIP4Clip model transferred from the CLIP can achieve SOTA results on various video-text retrieval datasets, including MSR-VTT, MSVC, LSMDC, ActivityNet, and DiDeMo. We release our code at https://github.com/ArrowLuo/CLIP4Clip.) <|cite_end|> <|cite_start|> (Reference: CLIP2Video: Mastering Video-Text Retrieval via Image CLIP: We present CLIP2Video network to transfer the image-language pre-training model to video-text retrieval in an end-to-end manner. Leading approaches in the domain of video-and-language learning try to distill the spatio-temporal video features and multi-modal interaction between videos and languages from a large-scale video-text dataset. Different from them, we leverage pretrained image-language model, simplify it as a two-stage framework with co-learning of image-text and enhancing temporal relations between video frames and video-text respectively, make it able to train on comparatively small datasets. Specifically, based on the spatial semantics captured by Contrastive Language-Image Pretraining (CLIP) model, our model involves a Temporal Difference Block to capture motions at fine temporal video frames, and a Temporal Alignment Block to re-align the tokens of video clips and phrases and enhance the multi-modal correlation. We conduct thorough ablation studies, and achieve state-of-the-art performance on major text-to-video and video-to-text retrieval benchmarks, including new records of retrieval accuracy on MSR-VTT, MSVD and VATEX.) <|cite_end|> <|cite_start|> (Reference: Improving Video-Text Retrieval by Multi-Stream Corpus Alignment and Dual Softmax Loss: Employing large-scale pre-trained model CLIP to conduct video-text retrieval task (VTR) has become a new trend, which exceeds previous VTR methods. Though, due to the heterogeneity of structures and contents between video and text, previous CLIP-based models are prone to overfitting in the training phase, resulting in relatively poor retrieval performance. In this paper, we propose a multi-stream Corpus Alignment network with single gate Mixture-of-Experts (CAMoE) and a novel Dual Softmax Loss (DSL) to solve the two heterogeneity. The CAMoE employs Mixture-of-Experts (MoE) to extract multi-perspective video representations, including action, entity, scene, etc., then align them with the corresponding part of the text. In this stage, we conduct massive explorations towards the feature extraction module and feature alignment module. 
DSL is proposed to avoid the one-way optimum-match which occurs in previous contrastive methods. Introducing the intrinsic prior of each pair in a batch, DSL serves as a reviser to correct the similarity matrix and achieves the dual optimal match. DSL is easy to implement with only one-line code but improves significantly. The results show that the proposed CAMoE and DSL are of strong efficiency, and each of them is capable of achieving State-of-The-Art (SOTA) individually on various benchmarks such as MSR-VTT, MSVD, and LSMDC. Further, with both of them, the performance is advanced to a big extend, surpassing the previous SOTA methods for around 4.6\% R@1 in MSR-VTT.) <|cite_end|> and textual queries to global features and accordingly map them into a common latent space, where the similarity can be measured directly with ranking loss variants. For video-text retrieval, recent methods based on pre-trained transformer CLIP <|cite_start|> (Reference: Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.) <|cite_end|> have achieved noticeable results and drawn increasing attention. CLIP4Clip <|cite_start|> (Reference: CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval: Video-text retrieval plays an essential role in multi-modal research and has been widely used in many real-world web applications. The CLIP (Contrastive Language-Image Pre-training), an image-language pre-training model, has demonstrated the power of visual concepts learning from web collected image-text datasets. In this paper, we propose a CLIP4Clip model to transfer the knowledge of the CLIP model to video-language retrieval in an end-to-end manner. Several questions are investigated via empirical studies: 1) Whether image feature is enough for video-text retrieval? 2) How a post-pretraining on a large-scale video-text dataset based on the CLIP affect the performance? 3) What is the practical mechanism to model temporal dependency between video frames? And 4) The Hyper-parameters sensitivity of the model on video-text retrieval task. 
Extensive experimental results present that the CLIP4Clip model transferred from the CLIP can achieve SOTA results on various video-text retrieval datasets, including MSR-VTT, MSVC, LSMDC, ActivityNet, and DiDeMo. We release our code at https://github.com/ArrowLuo/CLIP4Clip.) <|cite_end|> is the first to apply CLIP for video-text retrieval which also proposes three different ways of aggregating video frames. CLIP2Video <|cite_start|> (Reference: CLIP2Video: Mastering Video-Text Retrieval via Image CLIP: We present CLIP2Video network to transfer the image-language pre-training model to video-text retrieval in an end-to-end manner. Leading approaches in the domain of video-and-language learning try to distill the spatio-temporal video features and multi-modal interaction between videos and languages from a large-scale video-text dataset. Different from them, we leverage pretrained image-language model, simplify it as a two-stage framework with co-learning of image-text and enhancing temporal relations between video frames and video-text respectively, make it able to train on comparatively small datasets. Specifically, based on the spatial semantics captured by Contrastive Language-Image Pretraining (CLIP) model, our model involves a Temporal Difference Block to capture motions at fine temporal video frames, and a Temporal Alignment Block to re-align the tokens of video clips and phrases and enhance the multi-modal correlation. We conduct thorough ablation studies, and achieve state-of-the-art performance on major text-to-video and video-to-text retrieval benchmarks, including new records of retrieval accuracy on MSR-VTT, MSVD and VATEX.) <|cite_end|> captures temporal relationships of video frames and re-aligns the tokens of video clips and phrases. CAMoE <|cite_start|> (Reference: Improving Video-Text Retrieval by Multi-Stream Corpus Alignment and Dual Softmax Loss: Employing large-scale pre-trained model CLIP to conduct video-text retrieval task (VTR) has become a new trend, which exceeds previous VTR methods. Though, due to the heterogeneity of structures and contents between video and text, previous CLIP-based models are prone to overfitting in the training phase, resulting in relatively poor retrieval performance. In this paper, we propose a multi-stream Corpus Alignment network with single gate Mixture-of-Experts (CAMoE) and a novel Dual Softmax Loss (DSL) to solve the two heterogeneity. The CAMoE employs Mixture-of-Experts (MoE) to extract multi-perspective video representations, including action, entity, scene, etc., then align them with the corresponding part of the text. In this stage, we conduct massive explorations towards the feature extraction module and feature alignment module. DSL is proposed to avoid the one-way optimum-match which occurs in previous contrastive methods. Introducing the intrinsic prior of each pair in a batch, DSL serves as a reviser to correct the similarity matrix and achieves the dual optimal match. DSL is easy to implement with only one-line code but improves significantly. The results show that the proposed CAMoE and DSL are of strong efficiency, and each of them is capable of achieving State-of-The-Art (SOTA) individually on various benchmarks such as MSR-VTT, MSVD, and LSMDC. Further, with both of them, the performance is advanced to a big extend, surpassing the previous SOTA methods for around 4.6\% R@1 in MSR-VTT.) <|cite_end|> extracts multi-perspective video representations including action, entity, and scene.
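As an example of the coarse-grained video side, the simplest parameter-free frame-aggregation strategy in this line of work is mean pooling of per-frame embeddings before a single cosine comparison with the text embedding; the sketch below illustrates that idea with our own names and random features.
\begin{verbatim}
import numpy as np

def video_text_similarity_mean_pool(frame_embs, text_emb):
    """Parameter-free video representation: mean-pool per-frame embeddings,
    then compare with the text embedding by cosine similarity."""
    video_emb = frame_embs.mean(axis=0)                   # (dim,)
    video_emb = video_emb / np.linalg.norm(video_emb)
    text_emb = text_emb / np.linalg.norm(text_emb)
    return float(video_emb @ text_emb)

rng = np.random.default_rng(0)
frames = rng.normal(size=(12, 512))   # 12 sampled frames, 512-d each
query = rng.normal(size=512)          # a single text query embedding
print(video_text_similarity_mean_pool(frames, query))
\end{verbatim}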
Extremely summarized global visual and textual descriptions may lose a lot of useful fine-grained information. For this reason, many works try to utilize fine-level features and achieve fine-grained alignments between modalities. One line of work relies on object detection to represent the visual input by dozens of object-centric features and then combines them with the paired text as the input of the model <|cite_start|> (Reference: Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks: Large-scale pre-training methods of learning cross-modal representations on image-text pairs are becoming popular for vision-language tasks. While existing methods simply concatenate image region features and text features as input to the model to be pre-trained and use self-attention to learn image-text semantic alignments in a brute force manner, in this paper, we propose a new learning method Oscar (Object-Semantics Aligned Pre-training), which uses object tags detected in images as anchor points to significantly ease the learning of alignments. Our method is motivated by the observation that the salient objects in an image can be accurately detected, and are often mentioned in the paired text. We pre-train an Oscar model on the public corpus of 6.5 million text-image pairs, and fine-tune it on downstream tasks, creating new state-of-the-arts on six well-established vision-language understanding and generation tasks.) <|cite_end|> <|cite_start|> (Reference: Large-Scale Adversarial Training for Vision-and-Language Representation Learning: We present VILLA, the first known effort on large-scale adversarial training for vision-and-language (V+L) representation learning. VILLA consists of two training stages: (i) task-agnostic adversarial pre-training; followed by (ii) task-specific adversarial finetuning. Instead of adding adversarial perturbations on image pixels and textual tokens, we propose to perform adversarial training in the embedding space of each modality. To enable large-scale training, we adopt the "free" adversarial training strategy, and combine it with KL-divergence-based regularization to promote higher invariance in the embedding space. We apply VILLA to current best-performing V+L models, and achieve new state of the art on a wide range of tasks, including Visual Question Answering, Visual Commonsense Reasoning, Image-Text Retrieval, Referring Expression Comprehension, Visual Entailment, and NLVR2.) <|cite_end|> <|cite_start|> (Reference: UNITER: UNiversal Image-TExt Representation Learning: Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodality inputs are simultaneously processed for joint visual and textual understanding. In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. We design four pre-training tasks: Masked Language Modeling (MLM), Masked Region Modeling (MRM, with three variants), Image-Text Matching (ITM), and Word-Region Alignment (WRA). Different from previous work that applies joint random masking to both modalities, we use conditional masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text).
In addition to ITM for global image-text alignment, we also propose WRA via the use of Optimal Transport (OT) to explicitly encourage fine-grained alignment between words and image regions during pre-training. Comprehensive analysis shows that both conditional masking and OT-based WRA contribute to better pre-training. We also conduct a thorough ablation study to find an optimal combination of pre-training tasks. Extensive experiments show that UNITER achieves new state of the art across six V+L tasks (over nine datasets), including Visual Question Answering, Image-Text Retrieval, Referring Expression Comprehension, Visual Commonsense Reasoning, Visual Entailment, and NLVR$^2$. Code is available at https://github.com/ChenRocks/UNITER.) <|cite_end|>. Another line of work utilizes a bunch of cross-modal transformers to learn fine-grained interaction between token-wise representations of two modalities <|cite_start|> (Reference: UNITER: UNiversal Image-TExt Representation Learning: Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodality inputs are simultaneously processed for joint visual and textual understanding. In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. We design four pre-training tasks: Masked Language Modeling (MLM), Masked Region Modeling (MRM, with three variants), Image-Text Matching (ITM), and Word-Region Alignment (WRA). Different from previous work that applies joint random masking to both modalities, we use conditional masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text). In addition to ITM for global image-text alignment, we also propose WRA via the use of Optimal Transport (OT) to explicitly encourage fine-grained alignment between words and image regions during pre-training. Comprehensive analysis shows that both conditional masking and OT-based WRA contribute to better pre-training. We also conduct a thorough ablation study to find an optimal combination of pre-training tasks. Extensive experiments show that UNITER achieves new state of the art across six V+L tasks (over nine datasets), including Visual Question Answering, Image-Text Retrieval, Referring Expression Comprehension, Visual Commonsense Reasoning, Visual Entailment, and NLVR$^2$. Code is available at https://github.com/ChenRocks/UNITER.) <|cite_end|> <|cite_start|> (Reference: ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision: Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision (e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive power of the visual embedder and its predefined visual vocabulary. 
In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet with competitive or better downstream task performance. Our code and pre-trained weights are available at https://github.com/dandelin/vilt.) <|cite_end|>. These methods
either require a pre-trained object detector to perform time-consuming region feature extraction or cross-modal transformer layers to align the features, which significantly hinders their efficiency and scalability. In contrast, we employ a simple but effective way to align the representations of the two modalities via token-level similarity matrices.
\subsection{Token-Wise/Region-Word Cross-modal Alignment}
Some efforts have been made to learn fine-grained cross-modal interaction between two modalities by leveraging token-wise or region-word similarities in the contrastive loss. TERAN <|cite_start|> (Reference: Fine-grained Visual Textual Alignment for Cross-Modal Retrieval using Transformer Encoders: Despite the evolution of deep-learning-based visual-textual processing systems, precise multi-modal matching remains a challenging task. In this work, we tackle the task of cross-modal retrieval through image-sentence matching based on word-region alignments, using supervision only at the global image-sentence level. Specifically, we present a novel approach called Transformer Encoder Reasoning and Alignment Network (TERAN). TERAN enforces a fine-grained match between the underlying components of images and sentences, i.e., image regions and words, respectively, in order to preserve the informative richness of both modalities. TERAN obtains state-of-the-art results on the image retrieval task on both MS-COCO and Flickr30k datasets. Moreover, on MS-COCO, it also outperforms current approaches on the sentence retrieval task. Focusing on scalable cross-modal information retrieval, TERAN is designed to keep the visual and textual data pipelines well separated. Cross-attention links invalidate any chance to separately extract visual and textual features needed for the online search and the offline indexing steps in large-scale retrieval systems. In this respect, TERAN merges the information from the two domains only during the final alignment phase, immediately before the loss computation. We argue that the fine-grained alignments produced by TERAN pave the way towards the research for effective and efficient methods for large-scale cross-modal information retrieval. We compare the effectiveness of our approach against relevant state-of-the-art methods. On the MS-COCO 1K test set, we obtain an improvement of 5.7% and 3.5% respectively on the image and the sentence retrieval tasks on the Recall@1 metric. The code used for the experiments is publicly available on GitHub at https://github.com/mesnico/TERAN.) <|cite_end|> detects and encodes image regions at the object level with Faster-RCNN <|cite_start|> (Reference: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks: State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. 
In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.) <|cite_end|> and sums the maximum of the region-word similarity scores with respect to each word or region. Similar to TERAN, FILIP <|cite_start|> (Reference: FILIP: Fine-grained Interactive Language-Image Pre-Training: Unsupervised large-scale vision-language pre-training has shown promising advances on various downstream tasks. Existing methods often model the cross-modal interaction either via the similarity of the global feature of each modality which misses sufficient information, or finer-grained interactions using cross/self-attention upon visual and textual tokens. However, cross/self-attention suffers from inferior efficiency in both training and inference. In this paper, we introduce a large-scale Fine-grained Interactive Language-Image Pre-training (FILIP) to achieve finer-level alignment through a cross-modal late interaction mechanism, which uses a token-wise maximum similarity between visual and textual tokens to guide the contrastive objective. FILIP successfully leverages the finer-grained expressiveness between image patches and textual words by modifying only contrastive loss, while simultaneously gaining the ability to pre-compute image and text representations offline at inference, keeping both large-scale training and inference efficient. Furthermore, we construct a new large-scale image-text pair dataset called FILIP300M for pre-training. Experiments show that FILIP achieves state-of-the-art performance on multiple downstream vision-language tasks including zero-shot image classification and image-text retrieval. The visualization on word-patch alignment further shows that FILIP can learn meaningful fine-grained features with promising localization ability.) <|cite_end|> also aggregates the maximum token-wise similarity scores according to every single feature, but it tries to directly localize fine-grained objects from visual patches, instead of using object detectors. SCAN <|cite_start|> (Reference: Stacked Cross Attention for Image-Text Matching: In this paper, we study the problem of image-text matching. Inferring the latent semantic alignment between objects or other salient stuff (e.g. snow, sky, lawn) and the corresponding words in sentences allows to capture fine-grained interplay between vision and language, and makes image-text matching more interpretable. Prior work either simply aggregates the similarity of all possible pairs of regions and words without attending differentially to more and less important words or regions, or uses a multi-step attentional process to capture limited number of semantic alignments which is less interpretable. In this paper, we present Stacked Cross Attention to discover the full latent alignments using both image regions and words in a sentence as context and infer image-text similarity. Our approach achieves the state-of-the-art results on the MS-COCO and Flickr30K datasets. On Flickr30K, our approach outperforms the current best methods by 22.1% relatively in text retrieval from image query, and 18.2% relatively in image retrieval with text query (based on Recall@1). On MS-COCO, our approach improves sentence retrieval by 17.8% relatively and image retrieval by 16.6% relatively (based on Recall@1 using the 5K test set). Code has been made available at: https://github.com/kuanghuei/SCAN.) <|cite_end|> attends differentially to important words or regions. 
All of these works, however, discard the global representations, which contain rich information, and neglect the relationship between fine-grained features and global statistics. <|paper_end|> | [
"<|reference_start|> Large-Scale Adversarial Training for Vision-and-Language Representation Learning: We present VILLA, the first known effort on large-scale adversarial training for vision-and-language (V+L) representation learning. VILLA consists of two training stages: (i) task-agnostic adversarial pre-training; followed by (ii) task-specific adversarial finetuning. Instead of adding adversarial perturbations on image pixels and textual tokens, we propose to perform adversarial training in the embedding space of each modality. To enable large-scale training, we adopt the \"free\" adversarial training strategy, and combine it with KL-divergence-based regularization to promote higher invariance in the embedding space. We apply VILLA to current best-performing V+L models, and achieve new state of the art on a wide range of tasks, including Visual Question Answering, Visual Commonsense Reasoning, Image-Text Retrieval, Referring Expression Comprehension, Visual Entailment, and NLVR2. <|reference_end|>",
"<|reference_start|> Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP. <|reference_end|>",
"<|reference_start|> Improving Video-Text Retrieval by Multi-Stream Corpus Alignment and Dual Softmax Loss: Employing large-scale pre-trained model CLIP to conduct video-text retrieval task (VTR) has become a new trend, which exceeds previous VTR methods. Though, due to the heterogeneity of structures and contents between video and text, previous CLIP-based models are prone to overfitting in the training phase, resulting in relatively poor retrieval performance. In this paper, we propose a multi-stream Corpus Alignment network with single gate Mixture-of-Experts (CAMoE) and a novel Dual Softmax Loss (DSL) to solve the two heterogeneity. The CAMoE employs Mixture-of-Experts (MoE) to extract multi-perspective video representations, including action, entity, scene, etc., then align them with the corresponding part of the text. In this stage, we conduct massive explorations towards the feature extraction module and feature alignment module. DSL is proposed to avoid the one-way optimum-match which occurs in previous contrastive methods. Introducing the intrinsic prior of each pair in a batch, DSL serves as a reviser to correct the similarity matrix and achieves the dual optimal match. DSL is easy to implement with only one-line code but improves significantly. The results show that the proposed CAMoE and DSL are of strong efficiency, and each of them is capable of achieving State-of-The-Art (SOTA) individually on various benchmarks such as MSR-VTT, MSVD, and LSMDC. Further, with both of them, the performance is advanced to a big extend, surpassing the previous SOTA methods for around 4.6\\% R@1 in MSR-VTT. <|reference_end|>",
"<|reference_start|> ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision: Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision (e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet with competitive or better downstream task performance. Our code and pre-trained weights are available at https://github.com/dandelin/vilt. <|reference_end|>"
] | [
4, 20, 23, 28
] | {"<|multi_cite_1_1|>": "arxiv-168581", "<|multi_cite_1_2|>": "arxiv-331663", "<|multi_cite_1_3|>": "arxiv-335405", "<|multi_cite_2_1|>": "arxiv-259146", "<|multi_cite_2_2|>": "arxiv-270990", "<|multi_cite_2_3|>": "arxiv-225610", "<|multi_cite_3_1|>": "arxiv-355417", "<|multi_cite_3_2|>": "arxiv-319372", "<|multi_cite_4_1|>": "arxiv-152364", "<|multi_cite_4_2|>": "arxiv-284127", "<|multi_cite_4_3|>": "arxiv-381075", "<|cite_5|>": "arxiv-323919", "<|cite_6|>": "ss-792770", "<|multi_cite_7_1|>": "arxiv-378909", "<|multi_cite_7_2|>": "arxiv-355417", "<|multi_cite_7_3|>": "arxiv-320496", "<|multi_cite_8_1|>": "arxiv-320605", "<|multi_cite_8_2|>": "arxiv-335405", "<|multi_cite_8_3|>": "arxiv-349895", "<|multi_cite_8_4|>": "arxiv-365810", "<|cite_9|>": "arxiv-323919", "<|cite_10|>": "arxiv-335405", "<|cite_11|>": "arxiv-349895", "<|cite_12|>": "arxiv-365810", "<|multi_cite_13_1|>": "arxiv-259146", "<|multi_cite_13_2|>": "arxiv-270990", "<|multi_cite_13_3|>": "arxiv-225610", "<|multi_cite_14_1|>": "arxiv-225610", "<|multi_cite_14_2|>": "arxiv-319372", "<|cite_15|>": "arxiv-284127", "<|cite_16|>": "arxiv-78819", "<|cite_17|>": "arxiv-381075", "<|cite_18|>": "arxiv-152364"} |
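As an aside for readers skimming the excerpt in the row above: the token-wise late-interaction scoring it describes (TERAN summing, and FILIP averaging, the maximum region-word similarities) can be sketched in a few lines. The snippet below is a minimal illustration only, assuming L2-normalised embeddings and a symmetric mean aggregation; it is not code from TERAN, FILIP, or SCAN.

```python
import numpy as np

def late_interaction_score(image_tokens: np.ndarray, text_tokens: np.ndarray) -> float:
    """Aggregate token-wise max similarities between image regions/patches and words.

    Assumes both inputs are L2-normalised: image_tokens has shape (R, D),
    text_tokens has shape (W, D).
    """
    sim = image_tokens @ text_tokens.T          # (R, W) cosine similarities
    word_to_region = sim.max(axis=0).mean()     # best-matching region for each word
    region_to_word = sim.max(axis=1).mean()     # best-matching word for each region
    return float(0.5 * (word_to_region + region_to_word))

# Toy usage with random embeddings (5 regions, 7 words, 64 dims).
rng = np.random.default_rng(0)
img = rng.normal(size=(5, 64))
txt = rng.normal(size=(7, 64))
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt /= np.linalg.norm(txt, axis=1, keepdims=True)
print(late_interaction_score(img, txt))
```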
1311.6647 | "<|paper_start|> Title: DoF Analysis of the K-user MISO Broadcast Channel with Alternating CSIT\nAbs(...TRUNCATED) | ["<|reference_start|> Degrees of Freedom of Time Correlated MISO Broadcast Channel with Delayed CSIT(...TRUNCATED) | [
1, 3, 7, 8
] | "{\"<|cite_2|>\": \"arxiv-16539\", \"<|cite_3|>\": \"arxiv-29681\", \"<|cite_4|>\": \"ss-1436014\", (...TRUNCATED) |
1712.09708-1 | " <|cite_start|> (Reference: The developing visual brain: 1. Background context 2. Paediatric vision(...TRUNCATED) | ["<|reference_start|> Supervised Learning of Universal Sentence Representations from Natural Languag(...TRUNCATED) | [
3, 5, 9, 10
] | "{\"<|cite_1|>\": \"ss-972908\", \"<|multi_cite_2_1|>\": \"ss-1016684\", \"<|multi_cite_2_2|>\": \"s(...TRUNCATED) |
1902.02823 | "<|paper_start|> Title: Compatible Natural Gradient Policy Search\nAbstract: Compatible Natural Grad(...TRUNCATED) | ["<|reference_start|> Trust Region Policy Optimization: We describe an iterative procedure for optim(...TRUNCATED) | [
7, 8, 12, 23
] | "{\"<|cite_1|>\": \"ss-690072\", \"<|multi_cite_2_1|>\": \"ss-1516973\", \"<|multi_cite_2_2|>\": \"s(...TRUNCATED) |
2309.04862 | "<|paper_start|> Title: Distributional Data Augmentation Methods for Low Resource Language\nAbstract(...TRUNCATED) | ["<|reference_start|> Text Data Augmentation for Deep Learning: <|reference_end|>","<|reference_sta(...TRUNCATED) | [
0, 6, 12, 13
] | "{\"<|cite_1|>\": \"ss-1202156\", \"<|cite_2|>\": \"arxiv-353520\", \"<|cite_3|>\": \"arxiv-353520\"(...TRUNCATED) |
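The rows above share a common structure: paper text annotated with <|cite_start|>/<|cite_end|> spans, a list of held-out reference texts wrapped in <|reference_start|>/<|reference_end|>, a short list of integer indices, and a map from citation markers to corpus ids. The following is a minimal parsing sketch of that structure. The dictionary keys and the exact meaning of the integer indices are assumptions made for illustration, not confirmed field names or semantics; consult the dataset's own documentation for the authoritative schema.

```python
import json
import re

# Illustrative row; the key names below are assumptions for this sketch,
# not confirmed column names of the dataset.
row = {
    "paper": "... SCAN <|cite_start|> (Reference: Stacked Cross Attention for "
             "Image-Text Matching: ...) <|cite_end|> attends differentially ...",
    "targets": ["<|reference_start|> ViLT: Vision-and-Language Transformer "
                "Without Convolution or Region Supervision: ... <|reference_end|>"],
    "targets_idx": [28],
    "cite_corpus_id_map": '{"<|cite_18|>": "arxiv-152364"}',
}

# Inline citation spans embedded in the paper text.
cite_spans = re.findall(r"<\|cite_start\|>(.*?)<\|cite_end\|>", row["paper"], flags=re.S)

# Held-out reference texts with their sentinel tokens stripped.
targets = [re.sub(r"<\|reference_(?:start|end)\|>", "", t).strip() for t in row["targets"]]

# Map from citation markers to corpus ids. Some preview rows show this field as a
# JSON string; if it is already a dict, json.loads is unnecessary.
corpus_ids = json.loads(row["cite_corpus_id_map"])

print(f"{len(cite_spans)} inline citation span(s) found")
print(list(zip(row["targets_idx"], targets)))   # assumed index/reference pairing
print(corpus_ids)
```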