metadata (dict) | paper (dict) | review (dict) | citation_count (int64) | normalized_citation_count (int64) | cited_papers (list) | citing_papers (list)
---|---|---|---|---|---|---
{
"id": "j0T6OQkyu-",
"year": null,
"venue": "EANN 2022",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=j0T6OQkyu-",
"arxiv_id": null,
"doi": null
}
|
{
"title": "On Forecasting Project Activity Durations with Neural Networks",
"authors": [
"Peter Zachares",
"Vahan Hovhannisyan",
"Carlos Ledezma",
"Joao Gante",
"Alan Mosca"
],
"abstract": "Accurately forecasting project end dates is an incredibly valuable and equally challenging task. In recent years it has gained added attention from the machine learning community. However, state of the art methods both in academia and in industry still rely on expert opinions and Monte-Carlo simulations. In this paper, we formulate the problem of activity duration forecasting as a classification task using a domain specific binning strategy. Our experiments on a data set of real construction projects suggest that our proposed method offers several orders of magnitude improvement over more traditional approaches where activity duration forecasting is treated as a regression task. Our results suggest that posing the forecasting problem as a classification task with carefully designed classes is crucial for high quality forecasts both at an activity and a project levels.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "w3Id07Fmee",
"year": null,
"venue": "EANN 2017",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=w3Id07Fmee",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Deep Convolutional Neural Networks for Fire Detection in Images",
"authors": [
"Jivitesh Sharma",
"Ole-Christoffer Granmo",
"Morten Goodwin",
"Jahn Thomas Fidje"
],
"abstract": "Detecting fire in images using image processing and computer vision techniques has gained a lot of attention from researchers during the past few years. Indeed, with sufficient accuracy, such systems may outperform traditional fire detection equipment. One of the most promising techniques used in this area is Convolutional Neural Networks (CNNs). However, the previous research on fire detection with CNNs has only been evaluated on balanced datasets, which may give misleading information on real-world performance, where fire is a rare event. Actually, as demonstrated in this paper, it turns out that a traditional CNN performs relatively poorly when evaluated on the more realistically balanced benchmark dataset provided in this paper. We therefore propose to use even deeper Convolutional Neural Networks for fire detection in images, and enhancing these with fine tuning based on a fully connected layer. We use two pretrained state-of-the-art Deep CNNs, VGG16 and Resnet50, to develop our fire detection system. The Deep CNNs are tested on our imbalanced dataset, which we have assembled to replicate real world scenarios. It includes images that are particularly difficult to classify and that are deliberately unbalanced by including significantly more non-fire images than fire images. The dataset has been made available online. Our results show that adding fully connected layers for fine tuning indeed does increase accuracy, however, this also increases training time. Overall, we found that our deeper CNNs give good performance on a more challenging dataset, with Resnet50 slightly outperforming VGG16. These results may thus lead to more successful fire detection systems in practice.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "-N-XGOyH6f_",
"year": null,
"venue": "EANN 2020",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=-N-XGOyH6f_",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Compact Sequence Encoding Scheme for Online Human Activity Recognition in HRI Applications",
"authors": [
"Georgios Tsatiris",
"Kostas Karpouzis",
"Stefanos D. Kollias"
],
"abstract": "Human activity recognition and analysis has always been one of the most active areas of pattern recognition and machine intelligence, with applications in various fields, including but not limited to exertion games, surveillance, sports analytics and healthcare. Especially in Human-Robot Interaction, human activity understanding plays a crucial role as household robotic assistants are a trend of the near future. However, state-of-the-art infrastructures that can support complex machine intelligence tasks are not always available, and may not be for the average consumer, as robotic hardware is expensive. In this paper we propose a novel action sequence encoding scheme which efficiently transforms spatio-temporal action sequences into compact representations, using Mahalanobis distance-based shape features and the Radon transform. This representation can be used as input for a lightweight convolutional neural network. Experiments show that the proposed pipeline, when based on state-of-the-art human pose estimation techniques, can provide a robust end-to-end online action recognition scheme, deployable on hardware lacking extreme computing capabilities.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "F_Hj7s8qmN-",
"year": null,
"venue": "EANN 2022",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=F_Hj7s8qmN-",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Learning Image Captioning as a Structured Transduction Task",
"authors": [
"Davide Bacciu",
"Davide Serramazza"
],
"abstract": "Image captioning is a task typically approached by deep encoder-decoder architectures, where the encoder component works on a flat representation of the image while the decoder considers a sequential representation of natural language sentences. As such, these encoder-decoder architectures implement a simple and very specific form of structured transduction, that is a generalization of a predictive problem where the input data and output predictions might have substantially different structures and topologies. In this paper, we explore a generalization of such an approach by addressing the problem as a general structured transduction problem. In particular, we provide a framework that allows considering input and output information with a tree-structured representation. This allows taking into account the hierarchical nature underlying both images and sentences. To this end, we introduce an approach to generate tree-structured representations from images along with an autoencoder working with this kind of data. We empirically assess our approach on both synthetic and realistic tasks.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "6RGfddltVO",
"year": null,
"venue": "EANN 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=6RGfddltVO",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Robust Deep Ensemble Classifier for Figurative Language Detection",
"authors": [
"Rolandos Alexandros Potamias",
"Georgios Siolas",
"Andreas Stafylopatis"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "k3clKDU2mDM",
"year": null,
"venue": "EANN 2014",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=k3clKDU2mDM",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Neural Trade-Offs among Specialist and Generalist Neurons in Pattern Recognition",
"authors": [
"Aaron Montero",
"Ramón Huerta",
"Francisco B. Rodríguez"
],
"abstract": "The olfactory system of insects has two types of neurons based on the conditional response to odorants. Neurons that respond to a few odor classes are called specialists, while generalist neurons code for a wide range of input classes. The function of these neurons is intriguing. Specialist neurons are perhaps essential for odor discrimination, while generalist neurons may extract general properties of the odor space to be able to generalize to new odor spaces. Our goal is to shed light on this issue by analyzing the relevance of these neurons for pattern recognition purposes. The computational model is based on the olfactory system of insects. The model contains an approximation to the antennal lobe (AL) and mushroom body (MB) using a single-hidden-layer neural network. To determine the optimal balance between specialists and generalists we measure the classification error of the pattern recognition task. The mechanism to achieve the optimal balance is synaptic pruning to select the optimal synaptic configuration. The results show that specialists play an important role in odor classification, which is not observed for generalists. Furthermore, proper classification requires low neural activity in Kenyon cells, KC, which is consistent with the sparseness condition observed in MB neurons. Moreover, we also observe that the model is robust against noise to input patterns showing better resilience for low connection probabilities between AL and MB.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "D-VCOUUk59",
"year": null,
"venue": "EANN 2014",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=D-VCOUUk59",
"arxiv_id": null,
"doi": null
}
|
{
"title": "An Iterative Feature Filter for Sensor Timeseries in Pervasive Computing Applications",
"authors": [
"Davide Bacciu"
],
"abstract": "The paper discusses an efficient feature selection approach for multivariate timeseries of heterogeneous sensor data within a pervasive computing scenario. An iterative filtering procedure is devised to reduce information redundancy measured in terms of timeseries cross-correlation. The algorithm is capable of identifying non-redundant sensor sources in an unsupervised fashion even in presence of a large proportion of noisy features. A comparative experimental analysis on real-world data from pervasive computing applications is provided, showing that the algorithm addresses major limitations of unsupervised filters in literature when dealing with sensor timeseries.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "9jdCsHJeG9n",
"year": null,
"venue": "EANN 2009",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=9jdCsHJeG9n",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Relating Halftone Dot Quality to Paper Surface Topography",
"authors": [
"Pekka Kumpulainen",
"Marja Mettänen",
"Mikko Lauri",
"Heimo Ihalainen"
],
"abstract": "Most printed material is produced by printing halftone dot patterns. One of the key issues that determine the attainable print quality is the structure of the paper surface but the relation is non-deterministic in nature. We examine the halftone print quality and study the statistical dependence between the defects in printed dots and the topography measurement of the unprinted paper. The work concerns SC paper samples printed by an IGT gravure test printer. We have small-scale 2D measurements of the unprinted paper surface topography and the reflectance of the print result. The measurements before and after printing are aligned with subpixel resolution and individual printed dots are detected. First, the quality of the printed dots is studied using Self Organizing Map and clustering and the properties of the corresponding areas in the unprinted topography are examined. The printed dots are divided into high and low print quality. Features from the unprinted paper surface topography are then used to classify the corresponding paper areas using Support Vector Machine classification. The results show that the topography of the paper can explain some of the print defects. However, there are many other factors that affect the print quality and the topography alone is not adequate to predict the print quality.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "JHqVq_45nms",
"year": null,
"venue": "EANN 2014",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=JHqVq_45nms",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A New User Similarity Computation Method for Collaborative Filtering Using Artificial Neural Network",
"authors": [
"Noman Bin Mannan",
"Sheikh Muhammad Sarwar",
"Najeeb Elahi"
],
"abstract": "A User-User Collaborative Filtering (CF) algorithm predicts the rating of a particular item for a given user based on the judgment of other users, who are similar to the given user. Hence, measuring similarity between two users turns out to be a crucial and challenging task as the similarity function is the core component of the item rating prediction function for a particular user. In this paper, we investigate the effectiveness of a multilayer feed-forward artificial neural network as a similarity measurement function. We model similarity between two users as a function that consists of a set of adaptive weights and attempt to train a neural network to optimize the weights. Specifically, our contribution lies in designing an error function for the neural network, which optimizes the network and sets weights in such a way that enables the neural network to produce a reasonable similarity value between two users as its output. Through experimentation on Movielens dataset, we conclude that neural network, as a similarity function, gains more accuracy and coverage compared to the Genetic Algorithm (GA) based similarity architecture proposed by Bobadilla et al.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "B0MWnEyBYy9",
"year": null,
"venue": "EANN 2017",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=B0MWnEyBYy9",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Package Recommendation Framework Based on Collaborative Filtering and Preference Score Maximization",
"authors": [
"Panagiotis Kouris",
"Iraklis Varlamis",
"Georgios Alexandridis"
],
"abstract": "The popularity of recommendation systems has made them a substantial component of many applications and projects. This work proposes a framework for package recommendations that try to meet users’ preferences as much as possible through the satisfaction of several criteria. This is achieved by modeling the relation between the items and the categories these items belong to aiming to recommend to each user the top-k packages which cover their preferred categories and the restriction of a maximum package cost. Our contribution includes an optimal and a greedy solution. The novelty of the optimal solution is that it combines the collaborative filtering predictions with a graph based model to produce recommendations. The problem is expressed through a minimum cost flow network and is solved by integer linear programming. The greedy solution performs with a low computational complexity and provides recommendations which are close to the optimal solution. We have evaluated and compared our framework with a baseline method by using two popular recommendation datasets and we have obtained promising results on a set of widely accepted evaluation metrics.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Bg-WStdGHbc",
"year": null,
"venue": "EANN 2009",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=Bg-WStdGHbc",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Pareto-Based Multi-output Metamodeling with Active Learning",
"authors": [
"Dirk Gorissen",
"Ivo Couckuyt",
"Eric Laermans",
"Tom Dhaene"
],
"abstract": "When dealing with computationally expensive simulation codes or process measurement data, global surrogate modeling methods are firmly established as facilitators for design space exploration, sensitivity analysis, visualization and optimization. Popular surrogate model types include neural networks, support vector machines, and splines. In addition, the cost of each simulation mandates the use of active learning strategies where data points (simulations) are selected intelligently and incrementally. When applying surrogate models to multi-output systems, the hyperparameter optimization problem is typically formulated in a single objective way. The different response outputs are modeled separately by independent models. Instead, a multi-objective approach would benefit the domain expert by giving information about output correlation, facilitate the generation of diverse ensembles, and enable automatic model type selection for each output on the fly. This paper outlines a multi-objective approach to surrogate model generation including its application to two problems.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "B4gbrYdGrWq",
"year": null,
"venue": "EANN 2018",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=B4gbrYdGrWq",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Structured Inference Networks Using High-Dimensional Sensors for Surveillance Purposes",
"authors": [
"Vincent Polfliet",
"Nicolas Knudde",
"Baptist Vandersmissen",
"Ivo Couckuyt",
"Tom Dhaene"
],
"abstract": "Video cameras are arguably the world’s most used sensors for surveillance systems. They give a highly detailed representation of a situation that is easily interpreted by both humans and computers. However, these representations can lose part of their representational value when being recorded in less than ideal circumstances. Bad weather conditions, low-light illumination or concealing objects can make the representation more opaque. A radar sensor is a potential solution for these situations, since it is unaffected by the light intensity and can sense through most concealing objects. In this paper, we investigate the performance of a structured inference network on data of a low-power radar device. A structured inference network applies automated feature extraction by creating a latent space out of which the observations can be reconstructed. A classification model can then be trained on this latent space. This methodology allows us to perform experiments for both person identification and action recognition, resulting in competitive error rates ranging from 0% to 6.5% for actions recognition and 10% to 12% for person identification. Furthermore, the possibility of a radar sensor being used as a complement to a camera sensor is investigated.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "1IorcBwsETn",
"year": null,
"venue": "SLPAT@Interspeech 2015",
"pdf_link": "https://aclanthology.org/W15-5116.pdf",
"forum_link": "https://openreview.net/forum?id=1IorcBwsETn",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Generating acceptable Arabic Core Vocabularies and Symbols for AAC users",
"authors": [
"E. A. Draffan",
"Mike Wald",
"Nawar Halabi",
"Ouadie Sabia",
"Wajdi Zaghouani",
"Amatullah Kadous",
"Amal Idris",
"Nadine Zeinoun",
"David Banes",
"Dana Lawand"
],
"abstract": "E.A. Draffan, Mike Wald, Nawar Halabi, Ouadie Sabia, Wajdi Zaghouani, Amatullah Kadous, Amal Idris, Nadine Zeinoun, David Banes, Dana Lawand. Proceedings of SLPAT 2015: 6th Workshop on Speech and Language Processing for Assistive Technologies. 2015.",
"keywords": [],
"raw_extracted_content": "Generating acceptable Arabic Core Vocabularies and Symbols for AAC \nusers \nE.A. Draffan, Mike Wald, Nawar Halabi, Ouadie Sabia1, Wajdi Zaghouani 2 \nAmatullah Kadous, Amal Idris ,3 Nadine Zeinoun, David Banes, Dana Lawand4 \n \n1University of Southampton , UK \n 2Carnegie Mellon University, Qatar \n 3Hamad Medical Corporation , Qatar \n 4Mada Assistive Technology Center, Qatar \n \[email protected], [email protected], [email protected], [email protected], \[email protected], [email protected], [email protected], [email protected], \[email protected], [email protected] \n \n \nAbstract \nThis paper discusses the development of an Arabic Symbol \nDictionary for Augmentative and Alternative Communication \n(AAC) users, their families, carers, therapists and teachers as \nwell as those who may benefit from the use of symbols to \nenhance literacy skills. With a requirem ent for a bi -lingual \ndictionary , a vocabulary list analyzer has been developed to \nevaluate similarities and differences in word frequencies from \na range of word lists in order to collect suitable AAC lexical \nentries. An online bespoke symbol management has been \ncreated to hold the lexical entries alongside specifically \ndesigned symbols which are then accepted via a voting system \nusing a series of criteria. Results to date have highlighted how \nsuccessful these systems can be when encouraging \nparticipation along with the need for further research into the \ndevelopment of personalised context sensitive core \nvocabularies. \nIndex Terms : symbols, Augmentative and Alternative \nCommunication , AAC, core vocabularies \n1. Introduction \nIn the last few years it has become clear that many therapists \nand teachers working with individuals who have speech and \nlanguage difficulties in the Arabic speaking Gulf area, are \ndepending on westernized symbols and English core \nvocabularies. Issues arou nd limited Arabic language \nknowledge and depend ency on translations or work ing in \nEnglish can cause difficulties for those who need \nAugmentative and Alternative forms of Communic ation \n(AAC) due to disabilities. Huer [1] reports that “observations \nof communication across cultures reveal that non -symbolic as \nwell as symbolic forms of communication are culturally \ndependent” and her later work “suggests that consumers, \nfamilies, and clinicians from some cultural backgrounds may \nnot perceive symbols in the same way as they are perceived \nwithin the dominant European -American culture” [2]. \nWith this in mind the Arabic Symbol Dictionary research \nteam were determined to take a participatory approach to the ir project, involving AAC users and those supporting them as \nwell as other researchers working in the field of Arabic \nlinguistics and graphic design. \n2. Background \nMuch has been written by speech and language therapists \nabout the necessity for core vocabular ies that have been \nadapted to suit symbol users who need to enhance their \nlanguage skills [3], [4], [5] and [6]. Research has shown that \nwith a few hundred of the most frequently used words 80% of \none’s communication needs can be accommodated [7]. More \nrecently concept coding [8] with the idea of mapping different \nsymbol vocabularies along with a focus on psychosocial and \nenvironmental factors [9] to improve outcomes have been \nadded to the mix. However, there is very little research that \nhas been undertaken to provide therapists with suitable \nvocabularies for Arabic AAC users [10]. 
In English these vocabularies tend to be lists of frequently used words from spoken and written language across all age groups and some from AAC users. Despite considerable searching, there are very few of these vocabularies available in Arabic, with most coming from language learning or frequently used word lists with no specified ages or Arabic AAC users. \nIn some areas there is also a lack of understanding regarding the complexities of Arabic spoken and written language that disproportionately affect those who may have communication and reading difficulties [11], [12] and [13]. Uziel-Karl et al. [13] cite several researchers in the course of their study concerning Arabic and Hebrew linguistic frameworks and discuss the “critical importance of morphology as the main organizing principle both of the lexicon and of numerous grammatical inflections”. The authors go on to point out the diglossic [two variations of a language in different social situations] nature of Arabic, which means there is a ‘phonological distance [in grapheme-to-phoneme mapping] that has a negative impact on the acquisition of basic literacy skills in young Arabic children…” Words or word phrases (referents) may also be presented above or below a corresponding symbol, with changing forms depending on grammatical status, gender and/or number, plus many letters will change their shape depending on their position within a word. \nThe authors of this research and others have also found there are key cultural and family values/orientations that should be considered in order to increase the effectiveness of symbol-referent vocabulary interventions [14] with individuals who use AAC within Arab communities. To this end research has concentrated not only on word frequency lists and collating an AAC user core vocabulary, but also on instigating a voting system for symbol acceptance, so that words or multiword/word phrases are represented by symbols that are suitable culturally, linguistically and for the settings in which they will be used. \n3. Methodology for Building a Core Vocabulary \nThe building of an Arabic AAC core vocabulary is ongoing, but began with the collection of word lists used by AAC users, their families, carers, speech and language therapists and teachers in Doha (Qatar) (List a). Sixty-three of these individuals joined an AAC forum and these participants have continued to work with the team as symbols for the vocabularies have been developed. \nThe initial aim was to collect around 100 localised Arabic most frequently used words and multiwords to compare with those already in use that were in English or translated into Arabic based on English core vocabularies. Participating therapists felt a further 400 words/multiwords would be the maximum the majority of their users would have in their communication books or devices. Most English-speaking three year olds use over a thousand words [15], so it was essential that the fringe vocabulary should be enlarged with words specific to the environment and personal needs, including Qatari colloquial words and place names, as well as to be relevant to all ages.
\nSurveys of core vocabularies in Arabic have revealed that few are freely available [16] and even fewer make good companions when thinking of basic language and literacy learning for AAC users. In order to expand the list of 500 words, a comparison was carried out against five other Arabic word frequency lists. Those for general conversation included the Kelly Project [17], the 101languages.net 1000 most common spoken Arabic words, and Aljazeera comments often using colloquial language [18]. The Supreme Education Council (SEC) literacy lists for Grades 1, 2 and 3 and Lebanese reading lists [19] have been used for literacy skill building in Modern Standard Arabic (MSA). \n3.1. Building a vocabulary list analyser \nAn automatic system was developed that took as input two main pieces of information: \nList a: the list to be analyzed as a basis for the new core vocabulary list. This list could optionally have the frequency of each entry included; if no frequency is available then a default value should be added to all the entries before running the program. Frequency in this case equated to how often a word was used, and does not have to correspond to an actual frequency of occurrence in a text somewhere. \nLists b: lists combining existing vocabularies from a number of sources with the same structure as List a. Multiple vocabularies are used in Lists b in an attempt to weight the occurrence of individual words. These vocabularies are ideally from different sources and should be large enough so that the frequencies of the entries listed are reliable. \nThe system produced three lists, shown in Figure 1: \nList 1: the initial list, containing the words in List a (the input list to be analyzed) that did not occur in any of Lists b. This output only contained the words, with no frequency scores. \nList 2: the coverage list, containing the words that occurred in List a and at least once in a source vocabulary in Lists b. This output also contained scores for each word by source vocabulary list (each word was given several scores, one for each list in Lists b). Each score equals the frequency with which each word appeared in the list from Lists b, normalized by dividing the frequencies of each word by the sum of all frequencies in that list. The score was set to 0 if the word did not occur in that list. \nFigure 1. Input lists (List a and Lists b) \nList 3: the remaining word list, containing all the words that were in Lists b but were not contained in List a. This output also contained the scores for each word and is the example of the system in use (Figure 2). This is the list on which the comparison in Section 3.2 is based. \nFigure 2. Example output from lists viewed in Excel \nFigure 2 shows that frequencies are normalized to allow source vocabularies to be compared (column one); this process can be problematic if the list is too small, as the numbers may become too high and significantly affect results. Even if there is sufficient data, it is still imperative that an expert goes through the different output lists to inspect the results, correct errors and choose the set of words to be added or removed from the input list. The scores given only act as a guide to assist the expert in the process. \nIn practical terms, words with high scores in List 3 could be deemed suitable for inclusion in the Arabic Symbol Dictionary and added to List a.
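[Editorial note: the following sketch is not part of the original paper. It is a minimal Python rendering of the list analysis described in Section 3.1, assuming each vocabulary is represented as a word-to-frequency dictionary; the function name `analyse` and all variable names are illustrative.]

```python
from typing import Dict, List, Tuple

def analyse(list_a: Dict[str, float],
            lists_b: List[Dict[str, float]]) -> Tuple[list, dict, dict]:
    # Normalize each source vocabulary: divide every frequency by that
    # list's total, so scores are comparable across sources of different sizes.
    normalized = []
    for vocab in lists_b:
        total = sum(vocab.values()) or 1.0
        normalized.append({w: f / total for w, f in vocab.items()})

    def scores(word: str) -> List[float]:
        # One score per source list; 0 if the word is absent from that list.
        return [n.get(word, 0.0) for n in normalized]

    # List 1: words of List a that occur in none of the source lists.
    list1 = [w for w in list_a if all(w not in n for n in normalized)]
    # List 2 (coverage list): words of List a seen in at least one source.
    list2 = {w: scores(w) for w in list_a if any(w in n for n in normalized)}
    # List 3 (remaining words): in the sources but missing from List a;
    # high-scoring entries are candidates for adding to List a.
    words_b = set().union(*normalized) if normalized else set()
    list3 = {w: scores(w) for w in words_b if w not in list_a}
    return list1, list2, list3
```

For example, `analyse({"go": 5, "ball": 3}, [{"the": 100, "go": 2}])` returns `["ball"]` (List 1), `{"go": [0.0196...]}` (List 2) and `{"the": [0.9803...]}` (List 3).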
The system has been run repeatedly as lists have been added, so that results become more robust. \n3.2. Results of the core vocabulary building \nWhen comparing the list provided by participants as examples of AAC users’ vocabularies (List a), there were very small overlaps with those words most frequently found, where the top words were based on very high frequency scores for those most commonly used (Lists b). \nTo provide an instant comparison between Outputs 1 and 3, the top 20 words translated from Arabic are listed below. \nOutput from 1 (List a), ordered by those most often used in AAC lists: “I/me (am), go, ball, car, banana, on/to, thing/something, to, chair, clock/watch, want, in, sit, was, eat, bike, flower/rose, play, cup, door” \nOutput from 3 (Lists b), ordered by frequency: “the, God, about, oh, to, which (masculine), and not, people, no, which (feminine), in, even, or, on, against, only, however, Arabs, must, order” \nFurther analysis of the Lists b that were about spoken and colloquial language shows that nouns only made up 5% of the total list from the Kelly Project and 25 to 30% of the Aljazeera and Oweini-Hazoury lists, but 50% of the AAC lists. A concrete noun, even if it is considered part of a fringe vocabulary, is a much easier concept to illustrate with a symbol and may be seen as one of the early building blocks of language acquisition. Verbs, however, are more complex and have low frequency rates, between 5 and 20%; the Aljazeera list has the lowest and the AAC lists have the highest. The other parts of speech, equally pertinent in communication, such as adjectives, adverbs, prepositions, pronouns and conjunctions, were found to be variably frequent from one list to another. The Aljazeera list has a quarter of its frequencies made up of prepositions, whereas Kelly’s list, the SEC lists and the AAC user list have only 5%. Conjunctions also show low frequencies across the lists in question, between 1% and 15%. It is worth mentioning that pronouns are totally nonexistent in Kelly’s project list, either under their detached or attached form. It should also be noted that therapists may choose nouns rather than pronouns for the purpose of symbol transparency. The other lists had less than 20% of pronouns, all types combined. Arabic pronouns, and also some prepositions, combine with nouns or with other parts of speech as single words; this morphological aspect could be the reason why their frequencies are rather undermined. Adverbs are also rarely listed: the Oweini-Hazoury list has none, and the highest adverb frequency is found in the 1000 most common Arabic words list (4%). In Arabic most adverbs of time and space are prepositional groups, typically a structure made of a preposition followed by a noun. This structural definition of adverbs explains the low number or even the lack of adverbs in some of the core vocabulary lists. The users would frame appropriate phrases to express adverbs by using existing prepositions combined with nouns. \nFurther confirmation of these differences in the frequency of various parts of speech was sought for the literacy skill vocabularies. The conversational based lists were replaced with reading lists forming Lists b, Arabic lists such as those used by the SEC and Arabic sight words [19]. It was found that in their top 100 frequently used words, 30 and 38 respectively were nouns.
3.3. Discussion about the core vocabulary data collection \nAs can be seen from the top 20 words in List a and Lists b, both show nouns that would not be found in the top twenty frequently used words in an English core vocabulary and in reality would be considered fringe words. However, the lists do illustrate that in Arabic there are elements of the grammar that are equally as important, such as conjunctions and prepositions. \nThere are considerable issues with the fact that root words in Arabic clearly appear within other words, and this can affect the results, as can the fact that the lists collected from AAC users are based on popular use rather than large scale frequency levels within a huge corpus. There will always be the need to improve outcomes by collecting more lists from AAC users in the future to improve the balance between words used for symbol communication and those based on frequency of use, although the latter informs vocabulary development. \nBy using this system the combined AAC word lists from the Doha schools and clinics making up ‘List a’, once translated into English, could be compared to the Prentke Romich 100 Frequently Used Core Words [20], [21] (as Lists b). It was noted that the Doha Arabic AAC user list (List a) contained 38 nouns in the top 100 words, compared to none appearing in the English core vocabulary. It has been said that in English the use of nouns goes from 7% in the top 100 words to 20% in the top 300 [22], whereas in MSA the corresponding frequency levels are 26% and 45% according to one of the largest frequency lists [23]. \nThese results highlight the need for further exploration into this aspect of vocabulary building. In particular there is a need to collect more wide-ranging conversations to evaluate the differences in the types of words and multiwords required to successfully build Arabic AAC personalised and context sensitive vocabularies. There is also the need to be aware of the differences in lists used for enhancing reading skills, where MSA is used rather than the colloquial dialects of the area. A further distinction may be needed between adult and children’s vocabularies, where religious and social language requirements may impact on AAC use. The speech and language therapists attending meetings with the team also noted the importance of vocabularies sensitive to users’ characters, interests and social settings, commenting on dress and gender issues, as well as being aware of the issues of using lists from AAC users of school age due to the lack of available adult AAC users in the region at the time of writing. \n4. Methodology for Symbol Management \nJust as it was found that there was a paucity of core AAC vocabulary lists in Arabic, the same could be said about the symbols provided for AAC devices. Some centres in Doha were providing specifically designed symbols for the Arabic culture, environment, and social and personalised linguistic needs, but there were no adapted symbol sets that were freely available for sharing. Nor had any symbols been evaluated for transparency or cultural sensitivity by local AAC users, their supporting professionals and families. \nA bespoke Symbol Management system was developed that allowed the team to store symbols.
The system also offered participants the chance to take an active role in the decisions made around the development and evaluation of appropriate symbols, as they could see and vote on uploaded symbols representing the core vocabularies previously collected. \nThe online database was based on a Model-View-Controller (MVC) framework using MongoDB with JavaScript (NodeJS and an Express JS plugin). The code is open source and available on Bitbucket. View templates which generated the HTML pages were built using the Jade templating engine. The only other plugins used were for authentication and list filtering; the latter will provide the basis for browse and search features in the final Arabic Symbol Dictionary website. \n4.1. Building a symbol acceptance system \nAs part of the online management system, a simple voting set-up was created using the filters developed for batches of symbols. During voting sessions participants have been presented with a series of around 60-65 images of newly designed symbols, with the referent in MSA, Qatari (where applicable) and English. The voting criteria are presented with large selection areas on a scale of 1 to 5, where 5 is completely acceptable (see Figure 3), so that different visual displays can be used. The four criteria are listed with a free text box for comments: \n• Feelings about the symbol as a whole \n• Represents the word or phrase \n• Color contrast \n• Cultural sensitivity \nFigure 3. Voting system with criteria for acceptance on a scale of 1-5, where 5 is completely acceptable \n4.2. Results from voting sessions \nThe initial batch of symbols had 63 voters logging into the Symbol Manager, resulting in 2341 votes for 65 symbols. Overwhelmingly the decisions were very favourable, with all mean ratings significantly greater than a rating of 3.5; the average was 4.0 (see Table 1). All voting data was anonymized and comments collated to inform the graphic designer. \nTwo AAC users were also able to vote on the symbols via an adapted system using their own Sensory Software Grid 2 systems with the symbols added, plus a 1-5 or 1-3 ‘thumbs up’ to ‘thumbs down’ scoring depending on their ability. This produced equally good results, and comments were captured via recordings. More AAC users are being encouraged to join the forum, and as further batches of symbols are developed it is hoped that voting sessions will continue to occur both during face-to-face meetings and remotely. \nTable 1. One-sample t-test for difference of mean ratings from 3.5 \nCriterion | Number of voters | Mean rating | Two-tail p-value for difference from 3.5 \n1 | 63 | 3.94 | <0.0001 \n2 | 63 | 3.90 | <0.0001 \n3 | 63 | 4.07 | <0.0001 \n4 | 63 | 4.10 | <0.0001 \n4.3. Discussion about the Symbol Management system \nThe initial development of the Symbol Management system was purely for the team to upload lexical entries and symbols with a set of filter systems based on parts of speech, gender, number and symbol descriptions. However, as the participation by AAC users, their families, therapists and teachers grew, it became essential to offer a voting system that quickly produced results, because specialists wanted to use the symbols as they were developed.
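[Editorial note: the following is not part of the original paper. It is a minimal sketch of the one-sample t-test reported in Table 1, which compares each criterion's mean rating against the neutral threshold of 3.5; the `ratings` array here is hypothetical.]

```python
import numpy as np
from scipy import stats

# Hypothetical 1-5 ratings for one criterion from the 63 voters.
rng = np.random.default_rng(0)
ratings = rng.integers(low=1, high=6, size=63)

# Two-tailed one-sample t-test of the mean rating against 3.5,
# matching the analysis behind Table 1.
t_stat, p_value = stats.ttest_1samp(ratings, popmean=3.5)
print(f"mean={ratings.mean():.2f}  t={t_stat:.2f}  p={p_value:.4g}")
```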
As all the speech therapists and teachers involved had worked for several years with AAC users but were mainly from countries other than Qatar, it was felt that there should be a method to check acceptability within the community before releasing the symbols for download, not just depending on the team’s opinions. The team had already set up a Google+ method for initially evaluating iconicity and transparency [22]. \nThose therapists working in the Doha area were very willing to express their opinions about symbol suitability and the links with the corresponding word lists collected. It was noted that there was a general understanding that the lexical entries in Modern Standard Arabic and those entries in Qatari colloquial Arabic may share the same symbol for similar meaning words or multiword phrases, but there may need to be additional symbols and/or changes in symbol labels to represent different parts of speech, gender and number, and to take into account the bilingual nature of the dictionary to aid those who were not fluent Arabic speakers. \n5. Conclusion \nThe core vocabulary and symbol management systems have provided the research team with quick and easy ways to analyse data as well as a platform for user participation. Having a selection of MSA and Qatari core and fringe vocabularies has been essential for ongoing symbol development, but there is still a need to continually update the collection of local vocabularies to ensure that colloquial as well as written language is captured. The present frequency levels of the words collected in Doha (List a) are low in comparison to global lists (Lists b). They are also subjective, based on the AAC forum input rather than a wide base of Arabic AAC users and carers. However, it has been shown that where suitable core vocabularies are implemented alongside appropriate symbols, AAC users who have the capacity can enhance their communication and improve their readiness for reading [24], and already in this project AAC users have greeted the newly developed symbols with much appreciation; but there remains the need to ‘focus on long-term outcomes’ [9]. \nThere remains the debate as to the differences in parts of speech seen in English core vocabulary lists compared to some Arabic lists with high levels of noun use. It is important to appreciate the limitations of the collection procedures as well as the problems of automated comparisons between lists that require normalization and have different methods for showing root words, different parts of speech and verb declensions. \nThere is much research still to be carried out to ensure that an appropriate vocabulary list suitable for Arabic AAC users and the development of literacy skills can be collated in a diglossia situation. But as an increasing number of word lists are provided by participants, set against the further analysis of the frequency lists already gathered, it is felt that this can be achieved. \n6. Acknowledgements \nThis research was made possible by the NPRP award [NPRP 6 - 1046 - 2 - 427] from the Qatar National Research Fund (a member of The Qatar Foundation) and thanks must go to all those participants in Doha who have contributed to the work of the Arabic Symbol Dictionary team. Grateful thanks are also expressed to the ARASAAC team for allowing their symbols to be used with participants.
The statements made herein are solely the responsibility of the authors. \n7. References \n[1] M. B. Huer, “Culturally inclusive assessments for children using augmentative and alternative communication (AAC),” Journal of Children’s Communication Development, 19(1), 23–34, 1997. \n[2] M. B. Huer, “Examining perceptions of graphic symbols across cultures: Preliminary study of the impact of culture/ethnicity,” Augmentative and Alternative Communication, 16(3), 180–185, 2000. doi:10.1080/07434610012331279034 \n[3] S. Balandin and T. Iacono, “A few well-chosen words,” Augmentative and Alternative Communication, 14(September), 147–161, 1998. \n[4] M. Banajee, C. Dicarlo, and S. Buras Stricklin, “Core vocabulary determination for toddlers,” Augmentative and Alternative Communication, 19(2), 67–73, 2003. \n[5] M. Lahey and L. Bloom, “Planning a first lexicon: Which words to teach first,” Journal of Speech and Hearing Disorders, 340–351, 1975. \n[6] G. M. Van Tatenhove, “Building language competence with students using AAC devices: Six challenges,” Perspectives on Augmentative and Alternative Communication, 18(2), 38–47, 2009. \n[7] G. C. Vanderheiden and D. P. Kelso, “Comparative analysis of fixed-vocabulary communication acceleration techniques,” AAC Augmentative and Alternative Communication, 3, 196–206, 1987. \n[8] M. Lundälv and S. Derbring, “AAC vocabulary standardisation and harmonisation,” Springer-Verlag Berlin Heidelberg, pp. 303–310, 2012. \n[9] J. Light and D. McNaughton, “Designing AAC research and intervention to improve outcomes for individuals with complex communication needs,” Augmentative and Alternative Communication, (ahead-of-print), 1–12, 2015. \n[10] R. Patel and R. Dakwar-Khamis, “An AAC training program for special education teachers: A case study of Palestinian Arab teachers in Israel,” Journal of Augmentative and Alternative Communication, 21(3), 205–217, 2005. \n[11] S. Abu-Rabia, “Learning to read in Arabic: Reading, syntactic, orthographic and working memory skills in normally achieving and poor Arabic readers,” Reading Psychology: An International Quarterly, 16, 351–394, 1995. \n[12] S. Abu-Rabia, D. Share and S. M. Mansour, “Word recognition and basic cognitive processes among reading-disabled and normal readers of Arabic,” Reading and Writing: An Interdisciplinary Journal, 16, 423–442, 2003. doi:10.1023/A:1024237415143 \n[13] S. Uziel-Karl, F. Kanaan, R. Yifat, I. Meir, N. Abugov, and D. Ravid, “Hebrew and Palestinian Arabic in Israel: Linguistic frameworks and speech-language pathology services,” Topics in Language Disorders, 34(2), 133–154, 2014. \n[14] B. Woll and S. Barnett, “Toward a sociolinguistic perspective on augmentative and alternative communication,” AAC Augmentative and Alternative Communication, 14(December), 200–211, 1998. \n[15] K. J. Hill and C. Dollaghan, “Conversations of three-year olds: Implications for AAC outcomes,” American Speech-Language-Hearing Association (ASHA) Convention, San Francisco, CA, November 1999. \n[16] W. Zaghouani, “Critical survey of the freely available Arabic corpora,” in Proceedings of the International Conference on Language Resources and Evaluation (LREC 2014), OSACT Workshop, Reykjavik, Iceland, 26–31 May 2014. \n[17] A. Kilgarriff, F. Charalabopoulou, M. Gavrilidou, J. B. Johannessen, S. Khalil, S. J. Kokkinakis and E. Volodina,
“Corpus-based vocabulary lists for language learners for nine languages,” Language Resources and Evaluation, 1–43, 2013. \n[18] W. Zaghouani, B. Mohit, N. Habash, O. Obeid, N. Tomeh, and K. Oflazer, “Large-scale Arabic error annotation: Guidelines and framework,” in Proceedings of the International Conference on Language Resources and Evaluation (LREC 2014), Reykjavik, Iceland, 26–31 May 2014. \n[19] A. Oweini and K. Hazoury, “Towards a sight word list in Arabic,” International Review of Education, 56(4), 457–478, 2010. \n[20] K. Hill and B. Romich, 100 Frequently Used Core Words, accessed May 2015. https://aaclanguagelab.com/files/100highfrequencycorewords2.pdf \n[21] K. Hill and B. Romich, “A summary measure clinical report for characterizing AAC performance,” in Proceedings of the RESNA ’01 Annual Conference, Reno, NV, pp. 55–57, 2001. \n[22] J. Boenisch and G. Soto, “The oral core vocabulary of typically developing English-speaking school-aged children: Implications for AAC practice,” Augmentative and Alternative Communication, pp. 77–84, 2015. \n[23] T. Buckwalter and D. Parkinson, A Frequency Dictionary of Arabic: Core Vocabulary for Learners, Routledge, 2014. \n[24] D. Evans, L. Bowick, M. Johnson and P. Blenkhorn, “Using iconicity to evaluate symbol use,” in Proceedings of the 10th International Conference on Computers Helping People, Linz, Austria, pp. 874–881, 2006. \n[25] P. Hatch, L. Geist, and K. Erickson, “Teaching core vocabulary words and symbols to students with complex communication needs,” presented at Assistive Technology Industry Association, 2015. Retrieved from http://www.med.unc.edu/ahs/clds/files/conference-hand-outs/atia_2015.pdf (accessed 14 June 2015).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "WewLqxoRSu3F",
"year": null,
"venue": "e-Science 2015",
"pdf_link": "https://ieeexplore.ieee.org/iel7/7303998/7304061/07304319.pdf",
"forum_link": "https://openreview.net/forum?id=WewLqxoRSu3F",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Cloud-based E-Infrastructure for Scheduling Astronomical Observations",
"authors": [
"James Wetter",
"Ozgur Akgun",
"Adam Barker",
"Martin Dominik",
"Ian Miguel",
"Blesson Varghese"
],
"abstract": "Gravitational microlensing exploits a transient phenomenon where an observed star is brightened due to deflection of its light by the gravity of an intervening foreground star. It is conjectured that this technique can be used to measure the abundance of planets throughout the Milky Way. In order to undertake efficient gravitational microlensing an observation schedule must be constructed such that various targets are observed while undergoing a microlensing event. In this paper, we propose a cloud-based e-Infrastructure that currently supports four methods to compute candidate schedules via the application of local search and probabilistic meta-heuristics. We then validate the feasibility of the e-Infrastructure by evaluating the methods on historic data. The experiments demonstrate that the use of on-demand cloud resources for the e-Infrastructure can allow better schedules to be found more rapidly.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "7LLv_pbo3z",
"year": null,
"venue": "EC 2019",
"pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3328526.3329603",
"forum_link": "https://openreview.net/forum?id=7LLv_pbo3z",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Optimal Auctions vs. Anonymous Pricing: Beyond Linear Utility",
"authors": [
"Yiding Feng",
"Jason D. Hartline",
"Yingkai Li"
],
"abstract": "The revenue optimal mechanism for selling a single item to agents with independent but non-identically distributed values is complex for agents with linear utility (Myerson,1981) and has no closed-form characterization for agents with non-linear utility (cf. Alaei et al., 2012). Nonetheless, for linear utility agents satisfying a natural regularity property, Alaei et al. (2018) showed that simply posting an anonymous price is an e-approximation. We give a parameterization of the regularity property that extends to agents with non-linear utility and show that the approximation bound of anonymous pricing for regular agents approximately extends to agents that satisfy this approximate regularity property. We apply this approximation framework to prove that anonymous pricing is a constant approximation to the revenue optimal single-item auction for agents with public-budget utility, private-budget utility, and (a special case of) risk-averse utility.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "iJz8FBQfmbp",
"year": null,
"venue": "ECIR 2009",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=iJz8FBQfmbp",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E-Mail Classification for Phishing Defense",
"authors": [
"Wilfried N. Gansterer",
"David Pölz"
],
"abstract": "We discuss a classification-based approach for filtering phishing messages in an e-mail stream. Upon arrival, various features of every e-mail are extracted. This forms the basis of a classification process which detects potentially harmful phishing messages. We introduce various new features for identifying phishing messages and rank established as well as newly introduced features according to their significance for this classification problem. Moreover, in contrast to classical binary classification approaches (spam vs. not spam), a more refined ternary classification approach for filtering e-mail data is investigated which automatically distinguishes three message types: ham (solicited e-mail), spam, and phishing. Experiments with representative data sets illustrate that our approach yields better classification results than existing phishing detection methods. Moreover, the direct ternary classification proposed is compared to a sequence of two binary classification processes. Direct one-step ternary classification is not only more efficient, but is also shown to achieve better accuracy than repeated binary classification.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "KFBiGPwB-dp",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=KFBiGPwB-dp",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Pacing Equilibrium in First-Price Auction Markets",
"authors": [
"Vincent Conitzer",
"Christian Kroer",
"Debmalya Panigrahi",
"Okke Schrijvers",
"Eric Sodomka",
"Nicolás E. Stier Moses",
"Chris Wilkens"
],
"abstract": "In the isolated auction of a single item, second price is often preferable to first price in properties of theoretical interest. Unfortunately, single items are rarely sold in true isolation, so considering the broader context is critical when adopting a pricing strategy. In this paper, we show that this context is important in a model centrally relevant to Internet advertising: when items (ad impressions) are individually auctioned within the context of a larger system that is managing budgets, theory offers surprising support for using a first price auction to sell each individual item. In particular, first price auctions offer theoretical guarantees of equilibrium uniqueness, monotonicity, and other desirable properties, as well as efficient computability as the solution to the well-studied Eisenberg-Gale convex program. We also use simulations to demonstrate that while there are incentives to misreport in thin markets (where budgets aren't constraining), a bidder's incentive to deviate vanishes in thick markets.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "n83XEUJ49-8",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=n83XEUJ49-8",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Pacing Equilibrium in First-Price Auction Markets",
"authors": [
"Vincent Conitzer",
"Christian Kroer",
"Debmalya Panigrahi",
"Okke Schrijvers",
"Eric Sodomka",
"Nicolás E. Stier Moses",
"Chris Wilkens"
],
"abstract": "In the isolated auction of a single item, second price is often preferable to first price in properties of theoretical interest. Unfortunately, single items are rarely sold in true isolation, so considering the broader context is critical when adopting a pricing strategy. In this paper, we show that this context is important in a model centrally relevant to Internet advertising: when items (ad impressions) are individually auctioned within the context of a larger system that is managing budgets, theory offers surprising support for using a first price auction to sell each individual item. In particular, first price auctions offer theoretical guarantees of equilibrium uniqueness, monotonicity, and other desirable properties, as well as efficient computability as the solution to the well-studied Eisenberg-Gale convex program. We also use simulations to demonstrate that while there are incentives to misreport in thin markets (where budgets aren't constraining), a bidder's incentive to deviate vanishes in thick markets.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "m7-cDGVRVyro",
"year": null,
"venue": "EAMT (Projects/Products) 2016",
"pdf_link": "https://aclanthology.org/2016.eamt-2.6.pdf",
"forum_link": "https://openreview.net/forum?id=m7-cDGVRVyro",
"arxiv_id": null,
"doi": null
}
|
{
"title": "SCATE - smart computer aided translation environment",
"authors": [
"Vincent Vandeghinste",
"Tom Vanallemeersch",
"Liesbeth Augustinus",
"Joris Pelemans",
"G. Heymans",
"Iulianna Van der Lek-Ciudin",
"Arda Tezcan",
"Donald Degraen",
"Jan Van den Bergh",
"Lieve Macken",
"Els Lefever",
"Marie-Francine Moens",
"Patrick Wambacq",
"Frieda Steurs",
"Karin Coninx",
"Frank Van Eynde"
],
"abstract": "V. Vandeghinste, T. Vanallemeersch, L. Augustinus, J. Pelemans, G. Heymans, I. Van der Lek-Ciudin, A. Tezcan, D. Degraen, J. Van den Bergh, L. Macken, E. Lefever, M. Moens, P. Wambacq, F. Steurs, K. Coninx, F. Van Eynde. Proceedings of the 19th Annual Conference of the European Association for Machine Translation: Projects/Products. 2016.",
"keywords": [],
"raw_extracted_content": "382 Proceedings of the 19th Annual Conference of the EAMT: Projects/ Products \n \n SCATE – Smart Computer Aided Translation \nEnvironment \nV. V ANDEGHINSTE1, T. V ANALLEMEERSCH1, L. A UGUSTINUS1, \nJ. PELEMANS1, G. HEYMANS1, I. VAN DER LEK-CIUDIN1, \nA. TEZCAN2, D. D EGRAEN3, J. V AN DEN BERGH3, L. M ACKEN2, \nE. LEFEVER2, M. M OENS1, P. W AMBACQ1, F. S TEURS1, \nK. C ONINX3, F. V AN EYNDE1 \n1University of Leuven – Departments of Linguistics, Computer Science, and Electronical \nEngineering; 2Ghent University – LT3; 3Hasselt University - EDM \[email protected] \nAbstract . The SCATE project aims at improving translators' efficiency through improvements in \ntranslation technology, evaluation of computer -aided translation, terminology extraction from \ncomparable corpora, speech recognition accuracy, and work flows and personalised user \ninterfaces. It is funded by IWT -SBO, project nr. 130041. h ttp://www.ccl.kuleuven.be/scate/ \nEnvisaged Project Results \nWe present the envisaged results of SCATE, now the project is mid -term, with two more \nyears to go. \nWe have surveyed and observed translators with respect to the following aspects: \nhuman -machine interaction in post -editing, human acquisition of domain knowledge and \nterminology, and workflow usage and interface personalization. We are researching \ndifferen t computer -aided translation (CAT) technologies, such as syntax -based fuzzy \nmatching and concordancing, tools for speedier and more consistent collaborative \ntranslation, automated term extraction methods from comparable corpora, and integrated \nmodels and d omain adaptation for speech as a post -editing method. Concerning MT \nTechnology, we are working on syntax -based transduction, taxonomy -based confidence \nestimation metrics and speech translation. For these purposes, we have developed the \nfollowing resources: a taxonomy of MT errors and manually annotated corpus of MT \nerrors. Concerning the user interface, we are developing new approaches towards \nvisualisation of translation features and towards flexible user interfaces. By the end of \nthe project, we intend to integrate most of these aspects in a demonstration system that \ntranslates from English to Dutch. \nWe are interested in feedback from language service providers and translators: what do \nyou consider useful and interesting – how can we improve your translati on environment? \n \n \n The Flemish Agency for Innovation through Science and Technology, Strategic Basic Research.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "ynrGSzg-Q2E",
"year": null,
"venue": "ECAI 2016",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-61499-672-9-338",
"forum_link": "https://openreview.net/forum?id=ynrGSzg-Q2E",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Joint Model for Sentiment-Aware Topic Detection on Social Media",
"authors": [
"Kang Xu",
"Guilin Qi",
"Junheng Huang",
"Tianxing Wu"
],
"abstract": "Joint sentiment/topic models are widely applied in detecting sentiment-aware topics on the lengthy review data and they are achieved with Latent Dirichlet Allocation (LDA) based model. Nowadays plenty of user-generated posts, e.g., tweets and E-commerce short reviews, are published on the social media and the posts imply the public's sentiments (i.e., positive and negative) towards various topics. However, the existing sentiment/topic models are not applicable to detect sentiment-aware topics on the posts, i.e., short texts, because applying the models to the short texts directly will suffer from the context sparsity problem. In this paper, we propose a Time-User Sentiment/Topic Latent Dirichlet Allocation (TUS-LDA) which aggregates posts in the same timeslice or user as a pseudo-document to alleviate the context sparsity problem. Moreover, we design approaches for parameter inference and incorporating prior knowledge into TUS-LDA. Experiments on the Sentiment140 and tweets of electronic products from Twitter7 show that TUS-LDA outperforms previous models in the tasks of sentiment classification and sentiment-aware topic extraction. Finally, we visualize the sentiment-aware topics discovered by TUS-LDA.",
"keywords": [],
"raw_extracted_content": "A Joint Model for Sentiment-Aware Topic Detection on\nSocial Media\nKang Xu and Guilin Qi and Junheng Huang and Tianxing Wu1\nAbstract. Joint sentiment/topic models are widely applied in\ndetecting sentiment-aware topics on the lengthy review data and\nthey are achieved with Latent Dirichlet Allocation (LDA) basedmodel. Nowadays plenty of user-generated posts, e.g., tweets andE-commerce short reviews, are published on the social media andthe posts imply the public’s sentiments (i.e., positive and negative)towards various topics. However, the existing sentiment/topic mod-els are not applicable to detect sentiment-aware topics on the posts,i.e., short texts, because applying the models to the short texts di-rectly will suffer from the context sparsity problem. In this paper,we propose a Time-User Sentiment/Topic Latent Dirichlet Alloca-tion (TUS-LDA) which aggregates posts in the same timeslice oruser as a pseudo-document to alleviate the context sparsity prob-lem. Moreover, we design approaches for parameter inference andincorporating prior knowledge into TUS-LDA. Experiments on theSentiment140 and tweets of electronic products from Twitter7 showthat TUS-LDA outperforms previous models in the tasks of senti-ment classification and sentiment-aware topic extraction. Finally, wevisualize the sentiment-aware topics discovered by TUS-LDA.\n1 Introduction\nWith the rapid growth of Web 2.0, a mass of user-generated posts,e.g., tweets and E-commerce short reviews, which capture people’sinterests, thoughts, sentiments and actions. The posts have been accu-mulating on the social media with each passing day. Sentiment anal-ysis attempts to find user preference, likes and dislikes from the postson social media, such as reviews, blogs and microblogs [21] and topicmodeling attempts to discover the topics or aspects from from re-views, blogs and microblogs etc [3]. Topic modeling and sentimentanalysis on the posts are two significant tasks which can benefit manypeople. For example, we can discover a topic about “Apple Inc.” andthe overall sentiment of the topic. The sentiment of the topic about“Apple Inc.” is implicitly associated with the stock trading of “AppleInc.”, because negative sentiments towards the company on socialmedia can fall sales and financial gains but positive sentiments canimprove sales [2]. Topic modeling [1] focuses on extracting word-level or document-level topics, while sentiment analysis [23] is toanalyze the sentiments of words or documents.\nTopic modeling and sentiment analysis on the social media are\ncomplementary where sentiments on the social media often changeover different topics and topics on the social media are always re-lated to public sentiments. So jointly modeling topics and sentimentson the social media is a feasible and significative task and it can re-flect people’s sentiment on different topics. However, unlike the nor-\n1Southeast University, Nanjing, China\nEmail: {kxu,gqi,jhhuang,wutianxing}@seu.edu.cnmal documents (e.g., news and long reviews), the short and informalcharacteristic of the posts, e.g., tweets and short reviews, on the so-cial media makes the tasks of topic modeling and sentiment analysismore challenging.\nBy jointly modeling topics and sentiments on social media, we\nwant to obtain sentiment-aware topics from the posts, e.g., a topicabout “Apple Inc.” (‘ipad’, ‘iphone’, ‘itouch’, ‘imac’, ‘beautiful’ and‘popular’) with the overall sentiment polarity “positive”. 
Topic models, e.g., LDA [1] and pLSA [10], originally focus on mining topics from texts, but the models can also be extended to extract an extra aspect of texts, i.e., sentiment. Conventional sentiment-aware topic models, like the Joint Sentiment/Topic Model (JST) [15] and the Aspect/Sentiment Unification Model (ASUM) [11], are utilized for uncovering the hidden topics and sentiments from a text corpus where each document is a mixture of sentiment/topics and each sentiment/topic is a mixture of words. Thereinto, each sentiment label in the models is viewed as a special kind of topic where topics are unknown and data-driven but sentiments are known and specified. However, given the short and informal characteristic of the posts, applying the models to the short posts on the social media directly always suffers from the context sparsity problem. So the models fail to recognize the accurate sentiments and senses of words in the posts.

One simple and effective way to alleviate the sparsity problem is to aggregate short posts into lengthy pseudo-documents [5, 31]. Here we assume that the posts on the social media are a mixture of two kinds of topics: temporal topics which are related to current events (e.g., tweets about the topic "Announcement of iPhone SE" in Fig 1(a), which are produced in a timeslice) and stable topics which are related to personal interests (e.g., tweets about the topic "Apple products" in Fig 1(b), which are produced by a user). Thereinto, temporal topics are sensitive to time. If posts belong to temporal topics, we aggregate the posts in the same timeslice as a single document. We assume each timeslice is a mixture of sentiment-aware topics, i.e., each sentiment in the timeslice corresponds to several topics. Similar to temporal topics, stable topics are related to specific users and each user is a mixture of sentiment-aware topics. If a post belongs to a temporal topic, the post is assigned to a sentiment-aware topic in its publishing timeslice; otherwise, it is assigned to a sentiment-aware topic in its publishing user.

Figure 1. (a) A temporal topic ("Announcement of iPhone SE") (b) A stable topic ("Apple products")

Moreover, based on the analysis of the characteristics of topics and sentiments, we exploit an important observation about topics: a single post always talks about a single topic [31]. Although a post usually talks about a single topic, a post may talk about multiple aspects of the topic with different sentiment polarities [12, 18].

For example, while the following short review of a Canon camera from Amazon.com expresses the overall sentiment polarity of Camera, which corresponds to the part in italics, as positive, it additionally expresses a negative opinion towards the camera's lenses, which corresponds to the part in bold.

Camera is great, but lenses are crap and cheap and don't work on auto focus. Buy body and lenses separately.

For a tweet, it can express a positive, a negative or a neutral sentiment, and it can also express both positive and negative sentiments [24].

So, for sentiment polarities, we exploit the observation that words in a single post may correspond to multiple sentiment polarities [12, 18].
A post can talk about the same topic with different sentiments. For better modeling of topics and sentiments respectively, we follow the assumption that words in the same post should belong to the same topic, but they can have different sentiments.

Moreover, we add a sentiment label for each post. The sentiment label represents the overall sentiment polarity of the post and is determined by the sentiment polarities of the words in the post. If the words of a post express both positive and negative sentiments, the overall sentiment polarity of the post should be judged as the stronger one [24]. The sentiment label is utilized to model the association between sentiments and topics.

In this paper, we propose a novel Time-User Sentiment/Topic Latent Dirichlet Allocation (TUS-LDA) to mine sentiment-aware topics from the user-generated posts on social media.

There exist four main contributions of TUS-LDA:
1) TUS-LDA aggregates posts in the same timeslice or user as a single document to alleviate the context sparsity problem.
2) We design different ways to model topics and sentiments based on the characteristics of topics and sentiments. Thereinto, the sentiment of a post and of the words in the post are all drawn from a document-level sentiment distribution. Within the chosen sentiment of the post, the topic of the post is drawn from a user-level or timeslice-level sentiment/topic distribution.
3) We design approaches for parameter inference and for incorporating prior sentiment knowledge into TUS-LDA.
4) We implement experiments on two datasets to evaluate the effectiveness of sentiment classification and topic extraction in TUS-LDA and visualize the sentiment-aware topics discovered by TUS-LDA.

The rest of the paper is organized as follows: in Section 2, we introduce the related work about topic models on short texts and joint sentiment/topic models; in Section 3, we give the definitions of the basic terminologies we will use in our paper; in Section 4 we present our proposed model Time-User Sentiment/Topic Latent Dirichlet Allocation (TUS-LDA); experimental settings and results are shown in Section 5. Finally, in Section 6, we conclude this paper and list the future work.

2 Related Work

2.1 Topic Models on Short Texts

LDA [1] and pLSA [10] originally focus on mining topics from lengthy documents. Recently, topic modeling of the posts on social media has become popular; however, it also suffers from the context sparsity problem of the posts. To overcome the sparsity problem of posts on the social media, there exists some work on aggregating posts into pseudo-documents. In [31], Twitter-LDA aggregated posts published by a user into one lengthy pseudo-document and made words in the same post belong to the same topic. In [5], posts in TimeUserLDA were aggregated by timeslices or users for finding bursty topics, where posts belong to two kinds of topics: personal topics and temporal topics. Similar to TimeUserLDA, posts in TUK-TTM [29] were also aggregated by timeslices or users, and TUK-TTM was utilized for time-aware personalized hashtag recommendation. Although these models can alleviate the problem of the context sparsity of posts on social media, they did not model an extra aspect of posts, i.e., sentiment.

2.2 Joint Sentiment/Topic Models

Recently, some topic models have been extended to model topics and sentiments jointly. The first work on topic and sentiment modeling is the Topic-Sentiment Mixture model TSM [19]. In TSM, a sentiment is a special kind of topic and each word is generated from either a sentiment or a topic.
The relation between sentiments and topics cannot be mined by TSM. At the same time, TSM is based on pLSA and suffers from the problems of inference on new documents and overfitting the data. To overcome these shortcomings, the Joint Sentiment-Topic model (JST) [15], which is a two-level sentiment-topic model based on Latent Dirichlet Allocation (LDA), was proposed. In JST, sentiment labels are associated with documents, under which topics are associated with sentiment labels and words are associated with both sentiment labels and topics. Reverse-JST (RJST) [16] is a variant of JST where the position of the sentiment and topic layers is swapped. In JST, topics were generated conditioned on a sentiment polarity, while in RJST sentiments were generated conditioned on a topic. The Aspect/Sentiment Unification Model (ASUM) [11] is similar to JST. In ASUM, words in the same sentence belong to the same sentiment and topic. The Sentiment Topic Model with Decomposed Prior (STDP) [32] is another variant of JST. STDP first determined whether a word is used as a sentiment word or an ordinary topic word and then chose the accurate sentiments for sentiment words. The Time-aware Topic-Sentiment Model (TTS) [4] extracted the hidden topics from texts, modeled the association between topics and sentiments and tracked the strength of the topic-sentiment association over time. In TTS, time is viewed as a special word to bias the topic-sentiment distributions. But in our model, we use time to aggregate short texts and generate pseudo-documents for modeling topics and sentiments. JST, RJST, ASUM, STDP and TTS are designed for normal texts where each piece of text has rich context to infer topics and sentiments, but our work models posts (i.e., short and informal texts) on social media, and all of these models lose efficacy on the short and informal texts. MaxEnt-LDA [30] jointly discovers both aspects and aspect-specific opinion words by integrating a supervised maximum entropy algorithm to separate opinion words from objective ones. However, it does not further discover aspect-aware sentiment polarities of opinion words, which are very useful for sentiment analysis.

In our model, we focus on short and informal texts on social media. There exists some work about LDA-based sentiment analysis on social media. The Twitter Opinion Topic Model (TOTM) [14] aggregated or summarized opinions of a product from tweets, which can discover target-specific opinion words and improve opinion prediction. Topic Sentiment Latent Dirichlet Allocation (TSLDA) [22] utilized sentiments on social media for predicting stock price movement.
TSLDA distinguished topic words and opinion words, where topic words were drawn from the topic-word distribution and opinion words were drawn from the sentiment-topic-word distribution. Although these two works focus on posts on social media, they do not consider and solve the context sparsity problem of posts.

3 Problem Definition

In this section, we define the basic terminologies we will use in this paper.

• Post: A post contains a sequence of words which express the opinions and thoughts of people towards different things (e.g., a tweet or a review).
• User: Each user-generated post has a user identification that specifies who publishes the post.
• Timeslice: Each user-generated post has a timeslice that specifies when the user publishes the post; in this paper, the length of a timeslice is a day.
• Topic: A topic is a discrete piece of content that is about a specific subject and has an identifiable purpose (e.g., an event, a current hot problem or a product). Here, a topic is represented as a list of words.
• Aspect: An aspect refers to a distinct ratable facet of an entity. For a product, an aspect is an attribute or a component of the product that has been commented on in a review, e.g., "screen" for a digital camera. For an event or other kinds of topics, an aspect can be a participant of the topic [25], e.g., "Obama" in the event of "Obama's visit to Cuba".
• Sentiment: Sentiment is a label which refers to the polarity in which a concept or opinion is interpreted [17], i.e., "positive" and "negative". For example, "positive" is the sentiment of the post "Tom was glad to visit his friends.".
• Sentiment-aware topic: A sentiment-aware topic is a topic labeled with a sentiment polarity. For example, the overall sentiment of the topic "Obama's visit to Cuba" is positive, so the topic "Obama's visit to Cuba" is a positive topic.

4 The Proposed Models

In this section, we firstly introduce the notation and formally formulate our problem. Then, we describe the method utilized for learning the parameters. Finally, we present the method of incorporating prior knowledge into our model.

4.1 The Generation Process

It is assumed that there exists a stream of M posts, denoted as d_1, d_2, ..., d_M. Each post d_m is generated by a user u_m within a timeslice t_m and the post d_m contains a bag of words, denoted as {w_{m,1}, w_{m,2}, ..., w_{m,N_m}}.

In LDA, a document is viewed as a multinomial distribution over topics and a topic is a multinomial distribution over words. In JST, each document is associated with a sentiment/topic distribution, i.e., each sentiment in the document has a topic distribution; the document also has a sentiment distribution for document-level sentiment classification, and a sentiment/topic is a multinomial distribution over words. LDA and JST only work well for lengthy documents, because lengthy documents have rich contexts. Based on the analysis of posts on the social media, words in the same post tend to be about a single topic [31]. However, the sentiment polarities of words in the same post can be different [12]. At the same time, to model the association between sentiments and topics, we also add a sentiment label for each post, which is determined by the overall sentiment of all the words in the post.

On social media, one part of the posts talks about stable topics which are related to users' personal interests with certain sentiments, so we introduce a global sentiment/topic distribution δ for each user to capture personal long-term topical interests and sentiment preferences.
Another part of the posts is about temporal topics which are related to current events with the corresponding sentiments, so we add a time-dependent sentiment/topic distribution θ for each timeslice to capture temporal topics and the sentiments towards the topics.

Here, we construct the generative process of all the posts in the stream. When a user u_m publishes a post d_m within a timeslice t_m, the user first utilizes the variable y_m, which is drawn from the global user-timeslice switch distribution ε, to decide whether the post talks about a stable topic or a temporal topic. Then the user chooses a sentiment label l_m for the post from the document-sentiment distribution π_m. If the user chooses a stable topic (user u_m) and a sentiment label l_m, the user then selects a topic z_m from δ_{u_m, l_m}; otherwise, the user selects a topic z_m from θ_{t_m, l_m}. For each word w_{m,i} in the post d_m, the user first chooses a sentiment label l_{m,i}; with the chosen topic z_m and sentiment label l_{m,i}, the word is drawn from the sentiment-topic word distribution φ_{l_{m,i}, z_m}.

The notation used in this paper is summarized in Table 1. Fig 2(c) shows the graphical representation of the generation process.

Figure 2. The graphical representation of the proposed models (TS-LDA (a), US-LDA (b), TUS-LDA (c)). Shaded circles are observations or constants; unshaded ones are hidden variables.

Formally, the generative story for each post is as follows:

1. Draw ε ∼ Beta(γ)
2. For each timeslice t = 1, ..., T:
   i. For each sentiment label s = 0, 1, 2:
      a. Draw θ_{t,s} ∼ Dir(α)
3. For each user u = 1, ..., U:
   i. For each sentiment label s = 0, 1, 2:
      a. Draw δ_{u,s} ∼ Dir(α)
4. For each sentiment label s = 0, 1, 2:
   i. For each topic k = 1, ..., K:
      a. Draw φ_{s,k} ∼ Dir(β)
5. For each post d_m, m = 1, ..., M:
   i. Draw π_m ∼ Dir(λ)
   ii. Draw l_m ∼ Multi(π_m)
   iii. Draw y_m ∼ Bernoulli(ε)
   iv. If y_m = 0, draw z_m ∼ Multi(δ_{u_m, l_m}); if y_m = 1, draw z_m ∼ Multi(θ_{t_m, l_m})
   v. For each word w_i, i = 1, ..., N_m:
      a. Draw l_{m,i} ∼ Multi(π_m)
      b. Draw w_{m,i} ∼ Multi(φ_{z_m, l_{m,i}})

There are two degenerate variations of our model which are shown in the experiments. The first one is depicted in Fig 2(a), which considers the temporal topic-sentiment distribution. The second one is depicted in Fig 2(b), which only considers the stable topic-sentiment distribution. We refer to our complete model as TUS-LDA, the model in Fig 2(a) as TS-LDA and the model in Fig 2(b) as US-LDA.
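To make the generative story concrete, the following forward simulation samples a tiny corpus from it. This is a sanity-check sketch, not inference: all dimensions and the data layout are arbitrary choices of ours.

```python
# Forward simulation of the TUS-LDA generative story with numpy.
import numpy as np

rng = np.random.default_rng(0)
T, U, S, K, V, M = 4, 3, 3, 5, 50, 10            # toy sizes
alpha, beta, gamma, lam = 0.5, 0.01, 0.01, 0.01  # symmetric priors

eps = rng.beta(gamma, gamma)                      # P(temporal topic)
theta = rng.dirichlet([alpha] * K, size=(T, S))   # timeslice-sentiment topics
delta = rng.dirichlet([alpha] * K, size=(U, S))   # user-sentiment topics
phi = rng.dirichlet([beta] * V, size=(S, K))      # sentiment-topic over words

corpus = []
for m in range(M):
    t, u = rng.integers(T), rng.integers(U)
    pi_m = rng.dirichlet([lam] * S)               # post sentiment distribution
    l_m = rng.choice(S, p=pi_m)                   # post-level sentiment label
    y_m = rng.random() < eps                      # temporal (1) vs. stable (0)
    z_m = rng.choice(K, p=theta[t, l_m] if y_m else delta[u, l_m])
    words = [int(rng.choice(V, p=phi[rng.choice(S, p=pi_m), z_m]))
             for _ in range(rng.integers(3, 8))]  # per-word sentiment labels
    corpus.append({"user": u, "time": t, "topic": z_m, "words": words})
```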
4.2 Parameter Inference

Like LDA, exact inference is intractable in our models. Hence approximate estimation approaches, such as Gibbs sampling [9], are utilized to solve the problem. Gibbs sampling, a special case of Markov Chain Monte Carlo (MCMC) [6], is a relatively simple algorithm of approximate inference for our models. Due to space limitations, only the final formulas are given here.

Table 1. Notation used in the TUS-LDA model

Symbol                 Description
M, K                   number of documents, topics
V, U, T                number of vocabulary words, users, timeslices
Z, W, Y                all the topics, words, user-timeslice switches
T, U                   all the timeslices and users
L, L̄                   all the sentiments of posts and of words
N_m                    number of word tokens in post d_m
u_m, t_m, y_m, l_m     user, timeslice, user-timeslice switch and sentiment of post d_m
l_{m,i}                sentiment of word w_{m,i}
ε                      Beta distribution over stable vs. temporal topics
π_m                    document-sentiment distribution, Ω = {π_m}_{m=1}^{M}
θ_{t,s}                timeslice-sentiment topic distribution, Θ = {θ_{t,s}}_{t=1,s=1}^{T,S}
δ_{u,s}                user-sentiment topic distribution, Φ = {δ_{u,s}}_{u=1,s=1}^{U,S}
φ_{s,k}                sentiment-topic word distribution, Ψ = {φ_{s,k}}_{s=1,k=1}^{S,K}
α                      hyperparameter of θ_{t,s} and δ_{u,s}
β, λ                   hyperparameters of φ_{s,k}, π_m
γ                      hyperparameter of ε
ω_s                    prior knowledge of φ_{s,k}

4.2.1 Joint Distribution

The joint probability of words, users, timeslices, timeslice-user switches, topics and sentiments can be factored as in Eq. 1, where ε, π, φ, δ and θ are integrated out and \vec{n}_m counts the sentiment labels of a post and of the words in the post (all notations are listed in Table 1):

\[
P_{TUS\text{-}LDA}(Z, W, T, U, Y, L, \bar{L} \mid \alpha, \gamma, \lambda, \beta, \omega)
= P(Y \mid \gamma)\, P(L \mid \lambda)\, P(Z \mid Y, L, \alpha)\, P(\bar{L} \mid \lambda)\, P(W \mid Z, \bar{L}, \beta, \omega)
\]
\[
= \frac{\Delta(\vec{n}_y + \vec{\gamma})}{\Delta(\vec{\gamma})}
\cdot \prod_{m=1}^{M} \frac{\Delta(\vec{n}_m + \vec{\lambda})}{\Delta(\vec{\lambda})}
\cdot \prod_{u=1}^{U} \prod_{s=1}^{S} \frac{\Delta(\vec{n}_{u,s} + \vec{\alpha})}{\Delta(\vec{\alpha})}
\cdot \prod_{t=1}^{T} \prod_{s=1}^{S} \frac{\Delta(\vec{n}_{t,s} + \vec{\alpha})}{\Delta(\vec{\alpha})}
\cdot \prod_{s=1}^{S} \prod_{k=1}^{K} \frac{\Delta(\vec{n}_{s,k} + \vec{\beta})}{\Delta(\vec{\beta})}
\tag{1}
\]

where

\[
\Delta(\vec{x}) = \frac{\prod_{k=1}^{\dim \vec{x}} \Gamma(x_k)}{\Gamma\!\left(\sum_{k=1}^{\dim \vec{x}} x_k\right)}, \quad
\vec{n}_y = \{n_y^0, n_y^1\}, \quad
\vec{n}_m = \{n_m^{pos}, n_m^{neg}\},
\]
\[
\vec{n}_{u,s} = \{n_{u,s}^k\}_{k=1}^{K}, \quad
\vec{n}_{t,s} = \{n_{t,s}^k\}_{k=1}^{K}, \quad
\vec{n}_{s,k} = \{n_{s,k}^v\}_{v=1}^{V}.
\]

4.2.2 Posterior Distribution

The posterior distribution is estimated as follows: for the i-th post, the user u_i and the timeslice t_i are known. y_i, z_i and l_i can be jointly sampled given all other variables. Here, we use y to denote all the hidden variables y and y_{-i} to denote all the other y except y_i. All the hyperparameters are omitted.

\[
P(y_i = 0, z_i = k, l_i = s \mid y_{-i}, z_{-i}, l_{-i}, \bar{L}, W) \propto
\frac{\gamma_0 + n^0_{y,-i}}{\sum_{p \in \{0,1\}} \left(\gamma_p + n^p_{y,-i}\right)}
\times \frac{\lambda_s + n^s_{m,-i}}{\sum_{s'=1}^{S} \left(\lambda_{s'} + n^{s'}_{m,-i}\right)}
\times \frac{\alpha_k + n^k_{u,s,-i}}{\sum_{k'=1}^{K} \left(\alpha_{k'} + n^{k'}_{u,s,-i}\right)}
\times \frac{\prod_{v=1}^{V} \prod_{n_v=0}^{N(v)-1} \left(\beta_{s,k} + n^v_{s,k,-i} + n_v\right)}
{\prod_{n=0}^{N-1} \left(\sum_{v'=1}^{V} \left(\beta_{s,k} + n^{v'}_{s,k,-i}\right) + n\right)}
\tag{2}
\]

where N is the number of word tokens in post i and N(v) is the number of occurrences of word v in post i. If y_i = 0, the i-th post talks about a stable topic and the sampling formula is shown in Eq. 2; otherwise, the i-th post talks about a temporal topic and the sampling formula is shown in Eq. 3.
\[
P(y_i = 1, z_i = k, l_i = s \mid y_{-i}, z_{-i}, l_{-i}, \bar{L}, W) \propto
\frac{\gamma_1 + n^1_{y,-i}}{\sum_{p \in \{0,1\}} \left(\gamma_p + n^p_{y,-i}\right)}
\times \frac{\lambda_s + n^s_{m,-i}}{\sum_{s'=1}^{S} \left(\lambda_{s'} + n^{s'}_{m,-i}\right)}
\times \frac{\alpha_k + n^k_{t,s,-i}}{\sum_{k'=1}^{K} \left(\alpha_{k'} + n^{k'}_{t,s,-i}\right)}
\times \frac{\prod_{v=1}^{V} \prod_{n_v=0}^{N(v)-1} \left(\beta_{s,k} + n^v_{s,k,-i} + n_v\right)}
{\prod_{n=0}^{N-1} \left(\sum_{v'=1}^{V} \left(\beta_{s,k} + n^{v'}_{s,k,-i}\right) + n\right)}
\tag{3}
\]

For the j-th word in the i-th post, the sampling formula is shown in Eq. 4:

\[
P(l_{ij} = s \mid Z, \bar{L}_{-ij}, W, Y, L) \propto
\frac{\lambda_s + n^s_{m,-ij}}{\sum_{s'=1}^{S} \left(\lambda_{s'} + n^{s'}_{m,-ij}\right)}
\times \frac{\beta^v_{s,k} + n^v_{s,k,-ij}}{\sum_{v'=1}^{V} \left(\beta^{v'}_{s,k} + n^{v'}_{s,k,-ij}\right)}
\tag{4}
\]

Samples obtained from MCMC are then utilized for estimating the distributions π (Eq. 5), δ (Eq. 6), θ (Eq. 7) and φ (Eq. 8):

\[
\pi^s_m = \frac{\lambda_s + n^s_m}{\sum_{s'=1}^{S} \left(\lambda_{s'} + n^{s'}_m\right)} \tag{5}
\]
\[
\delta^k_{u,s} = \frac{\alpha_k + n^k_{u,s}}{\sum_{k'=1}^{K} \left(\alpha_{k'} + n^{k'}_{u,s}\right)} \tag{6}
\]
\[
\theta^k_{t,s} = \frac{\alpha_k + n^k_{t,s}}{\sum_{k'=1}^{K} \left(\alpha_{k'} + n^{k'}_{t,s}\right)} \tag{7}
\]
\[
\varphi^v_{s,k} = \frac{\beta^v_{s,k} + n^v_{s,k}}{\sum_{v'=1}^{V} \left(\beta^{v'}_{s,k} + n^{v'}_{s,k}\right)} \tag{8}
\]

4.2.3 Gibbs Sampling Algorithm

A complete overview of the Gibbs sampling procedure is given in Algorithm 1 (all the notations are listed in Table 1).

Algorithm 1: Inference on TUS-LDA
Input: α, γ, λ, β, ω
1   Initialize matrices Ω, Θ, Φ, Ψ and ε.
2   for iteration c = 1 to numIterations do
3       for post m = 1 to M do
4           Exclude post m and update count variables.
5           Sample a timeslice-user switch, topic and sentiment label for post m:
6           if y = 0 then
7               use Eq. 2
8           if y = 1 then
9               use Eq. 3
10          Update count variables with the new timeslice-user switch, topic and sentiment label.
11          for n = 1 to n_m do
12              Exclude word w_n and update count variables.
13              Sample the sentiment label for word w_n using Eq. 4.
14              Update count variables with the new sentiment label.
15  Update matrices Ω, Φ, Θ, Ψ using Eqs. 5, 6, 7 and 8.

4.3 Incorporating Prior Knowledge

Drawing on the experience of JST and RJST [16], we also add an additional dependency link of φ on the matrix ω of size S × V, which is utilized for encoding word prior sentiment information into TUS-LDA and its variants. To incorporate prior knowledge into TUS-LDA and its variants, we first set all the values of ω to 1. Then the matrix ω is updated with a sentiment lexicon which contains words with the corresponding sentiment labels, i.e., positive and negative. For each term w ∈ {1, ..., V} in the corpus, if w is found in the sentiment lexicon with the sentiment label l ∈ {1, ..., S}, the element ω_{lw} is set to 1 and the other elements of the word w are set to 0. The element ω_{lw} is updated as follows:

\[
\omega_{lw} =
\begin{cases}
1 & \text{if } S(w) = l \\
0 & \text{otherwise}
\end{cases}
\]

The Dirichlet priors β of size S × K × V are multiplied by the matrix ω (a transformation matrix) to capture the word prior sentiment polarities.
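A compact, runnable sketch of the sweep in Algorithm 1 on a toy corpus is given below. This is not the authors' implementation, and it makes two simplifications worth flagging: the ascending-factorial block terms in Eqs. 2 and 3 are approximated by a per-token product, and there is no burn-in or convergence handling.

```python
# Collapsed Gibbs sweep for TUS-LDA on synthetic data (simplified sketch).
import numpy as np

rng = np.random.default_rng(0)
S, K, V, U, T = 3, 4, 30, 2, 2
alpha, beta, gamma, lam = 0.5, 0.01, 0.01, 0.01

posts = [{"u": rng.integers(U), "t": rng.integers(T),
          "w": rng.integers(V, size=rng.integers(3, 8))} for _ in range(20)]
M = len(posts)

y = rng.integers(2, size=M)                    # 0 = stable, 1 = temporal
l = rng.integers(S, size=M)                    # post-level sentiment
z = rng.integers(K, size=M)                    # post topic
wl = [rng.integers(S, size=len(p["w"])) for p in posts]  # word sentiments

n_y = np.bincount(y, minlength=2).astype(float)
nds = np.zeros((M, S))                         # doc-sentiment counts
nus = np.zeros((U, S, K)); nts = np.zeros((T, S, K))
nsw = np.zeros((S, K, V))                      # sentiment-topic-word counts
for m, p in enumerate(posts):
    nds[m, l[m]] += 1
    (nus if y[m] == 0 else nts)[p["u"] if y[m] == 0 else p["t"], l[m], z[m]] += 1
    for i, w in enumerate(p["w"]):
        nds[m, wl[m][i]] += 1
        nsw[wl[m][i], z[m], w] += 1

for sweep in range(50):
    for m, p in enumerate(posts):
        # Remove post m from all counts.
        n_y[y[m]] -= 1; nds[m, l[m]] -= 1
        agg_old = nus[p["u"]] if y[m] == 0 else nts[p["t"]]
        agg_old[l[m], z[m]] -= 1
        for i, w in enumerate(p["w"]): nsw[wl[m][i], z[m], w] -= 1
        # Score every (switch, sentiment, topic) triple (Eqs. 2-3, approx.).
        scores = np.zeros((2, S, K))
        for yy in (0, 1):
            agg = nus[p["u"]] if yy == 0 else nts[p["t"]]
            base = (gamma + n_y[yy]) * (lam + nds[m])[:, None] \
                   * (alpha + agg) / (K * alpha + agg.sum(axis=1, keepdims=True))
            tok = np.ones(K)
            for i, w in enumerate(p["w"]):
                s_i = wl[m][i]
                tok *= (beta + nsw[s_i, :, w]) / (V * beta + nsw[s_i].sum(axis=1))
            scores[yy] = base * tok[None, :]
        flat = (scores / scores.sum()).ravel()
        y[m], l[m], z[m] = np.unravel_index(rng.choice(flat.size, p=flat),
                                            scores.shape)
        # Add post m back with its new assignment.
        n_y[y[m]] += 1; nds[m, l[m]] += 1
        agg_new = nus[p["u"]] if y[m] == 0 else nts[p["t"]]
        agg_new[l[m], z[m]] += 1
        for i, w in enumerate(p["w"]): nsw[wl[m][i], z[m], w] += 1
        # Resample each word's sentiment label (Eq. 4).
        for i, w in enumerate(p["w"]):
            nds[m, wl[m][i]] -= 1; nsw[wl[m][i], z[m], w] -= 1
            pw = (lam + nds[m]) * (beta + nsw[:, z[m], w]) \
                 / (V * beta + nsw[:, z[m]].sum(axis=1))
            wl[m][i] = rng.choice(S, p=pw / pw.sum())
            nds[m, wl[m][i]] += 1; nsw[wl[m][i], z[m], w] += 1
```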
5 Experiment Analysis

5.1 Dataset Description and Preprocessing

For the experiments, we performed sentiment-aware topic discovery and sentiment classification on tweets, which are characterized by their 140-character limit. We selected tweets related to electronic products, such as cameras and mobile phones, from Twitter7 (2); all the query words are listed in Table 2. These tweets contain descriptions and reviews of various electronic products and correspond to multiple sentiment-aware topics. Besides, each tweet contains the content, the release timeslice and the user information.

Table 2. Selected words for extracting tweets related to electronic products

iphone, blackberry, nokia, palmpre, sony, motorola, canon, nikon, dell, lenovo, toshiba, acer, asus, macbook, hp, alienware, camera, laptop, tablet, netbook, ipad, ipod, xbox, playstation, wii, phone, nintendo, printer, panasonic, epson, samsung, kyocera, ibm, sony, microsoft, lg, hitachi, scanner, computer, fujitsu, kodak, gameboy, sega, squareenix, android, ios, windows, operatingsystem, apple

Due to the lack of sentiment labels on Twitter7, we utilized Sentiment140 (3) [8], which contains 1.6 million tweets, for the sentiment classification evaluation. Each tweet in Sentiment140 has the content, a release timeslice, a user and the overall polarity label (positive or negative). The numbers of positive and negative tweets are nearly identical.

We followed the preprocessing steps in BTM [28]. To improve the quality of our model, we added two extra steps: (1) part-of-speech tagging of tweet contents using the part-of-speech tagger (4) specially trained on tweets [7], retaining the words tagged as nouns, verbs or adjectives; (2) lemmatizing words tagged as noun or verb, which was used to reduce inflectional forms and sometimes derivationally related forms of a word to a common base form. After preprocessing, as is shown in Table 3, we were left with 2,766,325 valid tweets, 80,083 distinct words, 174 timeslices (days) and 572,238 users in Twitter7, and with 258,268 valid tweets, 29,486 distinct words, 48 timeslices (days) and 21,815 users in Sentiment140.

Table 3. Corpus statistics

                    Electronic    Senti140
Number of tweets    2,766,325     258,268
Users               572,238       21,815
Timeslices          174           48

5.2 Sentiment Lexicon

In JST [15] and our models, each sentiment label is viewed as a special kind of topic that we know in advance. To improve the accuracy of sentiment detection, we need to incorporate prior knowledge from a subjectivity lexicon (i.e., words with positive or negative polarity). Here, we chose PARADIGM [26], which consists of a set of positive and negative words, e.g., happy and sad. It defines the positive and negative semantic orientation of words. Moreover, emoticons are also strong emotion indicators on social media. The entire list of emoticons is taken from Wikipedia (5). To adjust to our scenario on social media, we chose just a subset of the emoticons, shown in Table 4.

Table 4. Emoticons

Positive: :-) :o) :] :3 :c) :> =] 8) :} :-D ;-D :D 8-D \o/ ^^ :} (^o^)/ (^ ^)/
Negative: >:-( >:[ :-( :c :@ >:( ;( ;-( :'-( :'( D; (T T) (; ;) (;:) T.T !!

5.3 Parameter Settings

To optimize the number of topics K, we empirically ran the models with four values of K (10, 20, 50 and 100) on Sentiment140 and with three values of K (10, 20 and 50) on Twitter7 (the Twitter7 tweets only contain a small number of electronic-product-related topics). In our model, we simply selected symmetric Dirichlet prior vectors, as is empirically done in JST and ASUM. For JST and ASUM, α = 50/K, β = 0.01 and γ = 0.01. For TUS-LDA, we set α = 0.5, γ = 0.01, λ = 0.01 and β = 0.01. These LDA-based models are not sensitive to the hyperparameters [27].

(2) https://snap.stanford.edu/data/twitter7.html
(3) http://help.sentiment140.com/for-students/
(4) http://www.ark.cs.cmu.edu/TweetNLP/
(5) https://en.wikipedia.org/wiki/List_of_emoticons
In all the methods, Gibbs sampling was run for 1,000 iterations with a 200-iteration burn-in period.

5.4 Quantitative Evaluations

5.4.1 Sentiment Classification

In this section, we performed a sentiment classification task to predict the sentiment labels of the test data in Sentiment140. Note that the Sentiment140 tweets do not contain neutral tweets. We determined the polarity of a tweet m by selecting the polarity s that has the higher probability in π^s_m (π_m is the sentiment distribution of the m-th post); the decision function is shown in Eq. 9:

\[
polarity(m) = \operatorname*{argmax}_{s \in \{neg,\, pos\}} \pi^s_m \tag{9}
\]

We present the results of sentiment classification with Accuracy, Precision, Recall and F1, which are defined in the following. Accuracy is the proportion of true results (both true positives and true negatives) among the total number of cases examined in the binary classification. Precision is the proportion of the true positives against all the predicted positive results (both true positives and false positives). Recall is the proportion of the true positives against all the actual positive results (both true positives and false negatives). F1 is the harmonic mean of Precision and Recall.

Figure 3. (a) Accuracy (b) Precision (c) Recall (d) F1 of sentiment classification

Based on the results of sentiment classification, we can see that TUS-LDA outperformed JST, ASUM, TS-LDA and US-LDA in F1 (Fig 3(d)). For Recall (Fig 3(c)), ASUM, TS-LDA, US-LDA and TUS-LDA performed equally well and JST performed worst. For Accuracy (Fig 3(a)) and Precision (Fig 3(b)), TUS-LDA performed best and TS-LDA performed better than US-LDA. There are 48 timeslices and 21,815 users; since the number of users is far greater than that of timeslices, modeling tweets aggregated in timeslices performed better than tweets aggregated in users. Aggregating tweets in timeslices or users (i.e., TUS-LDA) with K = 10 performed best on Sentiment140.
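The decision of Eq. 9 and the four reported metrics can be computed as follows (a sketch assuming π is available as an (M, 2) array with columns ordered negative, positive, and positive treated as the target class, per the standard definitions above):

```python
# Post-level polarity via Eq. 9 plus accuracy, precision, recall and F1.
import numpy as np

def classify_and_score(pi, y_true):
    y_pred = pi.argmax(axis=1)                 # Eq. 9: argmax_s pi_m^s
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = np.mean(y_pred == y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1
```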
In or-der to quantify the overall coherence of the discovered topics, the av-erage coherence score,\n1\nK/summationtextkC(zk;V(zk)), was utilized. We con-\nducted and evaluated the topic extraction experiments on the tweetsof electronic products. Here we also compared TUS-LDA with foursentiment-topic models: JST, ASUM, TS-LDA and US-LDA. In thiscollection, we set the number of topics K=1 0,20,50for all the\nmethods. The result is listed in Table 5. From the topic coherent re-sults, it is clear that aggregating tweets in timeslices or users (TUS-LDA) directly leads to significant improvement of topic coherent.Note that TUS-LDA also performed best in the topic coherent andthe performance of TS-LDA (aggregating tweets in timeslices) wassimilar to US-LDA (aggregating tweets in users).\nC(t;V\n(t))=M/summationdisplay\nm=2m−1/summationdisplay\nl=1logD(v(t)\nm,v(t)\nl)+1\nD(v(t)\nl)(10)\n5.4.3 Human Evaluation\nAs our objective is to discover more coherent sentiment-aware top-ics, so we chose to evaluate the topics manually which is based onhuman judgement. Without enough knowledge, the annotation willnot be credible. Following [20], we asked two human judges, whoare familiar with common knowledge and skilled in looking up thetest tweet dataset, to annotate the discovered sentiment-aware topicsmanually. To ensure the annotation reliable, we labeled the generatedtopics by all the baseline models and our proposed model at learningiteration 10.\nTopic Labeling: Following [20], we asked the judges to label each\nsentiment-aware topic as coherent orincoherent . Each sentiment-\naware topic is represented as a list of 20 most probable words inword distribution ϕof the topic. Here they annotated a sentiment-\naware topic as coherent when at least half of top 20 words wereK. Xu et al. / A Joint Model for Sentiment-Aware Topic Detection on Social Media 344\nTable 6. Example of topics extracted by TUS-LDA\nPositive sentiment label Negative sentiment label\nTopic 1 Topic 2 Topic 3 Topic 1 Topic 2 Topic 3\ncamera ipod xbox printer window phone\ndigit song game ink vista problem\ncanon phone live print us information\nnikon listen sale cartridge microsoft security\nnew music console toner install strange\nlen love plai laser download risk\nphotograph new playstat color software finiance\nreview play ps3 laserjet file mobile\npanason shuffle microsoft paper free digit\nslr good new scanner server on-line\nrelated to the same semantic-coherent concept (e.g., an event, a hot\ntopic) and the sentiment polarities of the words are accurate, otherswereincoherent ..\nWord Labeling: Then we chose coherent sentiment-aware top-\nics which were judged before and asked judges to label each wordof the top 20 words among these coherent sentiment-aware topics.\nWhen a word was in accordance with the main semantic-coherentconcept that represents the topic, the word was annotated as correct\nand others were incorrect. After topic labeling, the judges had\nknown the concept of each sentiment-aware topic and the overallsentiment of the topic, it is easy to label words of each sentiment-aware topic. As is shown in Table 7, the annotation of both judgesinPrecision @20 (orp@20) also have good agreements (Cohen’s\nKappa score is greater than 0.8 [13]).\nTable 7. Cohen’s Kappa for pairwise inter-rater agreements\nTopic LabelingWord Labeling\np@5 p@10 p@20\nKappa 0.820 0.911 0.821 0.816\nFigure 4(a) shows that TUS-LDA can discover more coherent\ntopics than JST, ASUM, TS-LDA and US-LDA. 
5.4.3 Human Evaluation

As our objective is to discover more coherent sentiment-aware topics, we chose to also evaluate the topics manually, based on human judgement. Without enough knowledge, the annotation would not be credible. Following [20], we asked two human judges, who are familiar with common knowledge and skilled in looking up the test tweet dataset, to annotate the discovered sentiment-aware topics manually. To ensure the annotation is reliable, we labeled the topics generated by all the baseline models and our proposed model at learning iteration 10.

Topic Labeling: Following [20], we asked the judges to label each sentiment-aware topic as coherent or incoherent. Each sentiment-aware topic is represented as a list of the 20 most probable words in the word distribution φ of the topic. The judges annotated a sentiment-aware topic as coherent when at least half of the top 20 words were related to the same semantically coherent concept (e.g., an event or a hot topic) and the sentiment polarities of the words were accurate; the others were incoherent.

Word Labeling: Then we chose the coherent sentiment-aware topics judged before and asked the judges to label each of the top 20 words among these coherent sentiment-aware topics. When a word was in accordance with the main semantically coherent concept that represents the topic, the word was annotated as correct; the others were incorrect. After topic labeling, the judges knew the concept of each sentiment-aware topic and the overall sentiment of the topic, so it is easy to label the words of each sentiment-aware topic. As is shown in Table 7, the annotations of both judges in Precision@20 (or p@20) also have good agreement (the Cohen's Kappa score is greater than 0.8 [13]).

Table 7. Cohen's Kappa for pairwise inter-rater agreements

        Topic Labeling    Word Labeling
                          p@5     p@10    p@20
Kappa   0.820             0.911   0.821   0.816

Figure 4. (a) Proportion of coherent topics generated by each model for K = 10, 20, 50 (b) Average Precision@20 (p@20) of words in coherent topics generated by each model for K = 10, 20, 50

Figure 4(a) shows that TUS-LDA can discover more coherent topics than JST, ASUM, TS-LDA and US-LDA. Thereinto, TUS-LDA can discover a nearly equal number of positive and negative topics. Figure 4(b) gives the average Precision@20 of all coherent topics. TUS-LDA performed better than the other four models and performed best at K = 10.

From the above, we can observe that aggregating posts in the same timeslice or user as a single document can indeed improve the performance of sentiment classification and sentiment-aware topic extraction on user-generated posts, as TUS-LDA consistently outperformed the baseline models except at K = 50 (Negative). The empirical results also reveal that the most likely number of topics for tweets of electronic products in Twitter7 is 10.

5.5 Qualitative Analysis

To investigate the quality of the topics discovered by TUS-LDA, we randomly chose some topics for visualization. We randomly selected six topics, i.e., three positive topics and three negative topics. For each topic, we chose the top 10 words which best represent the topic.

Table 6. Example of topics extracted by TUS-LDA

Positive sentiment label               Negative sentiment label
Topic 1      Topic 2    Topic 3        Topic 1     Topic 2      Topic 3
camera       ipod       xbox           printer     window       phone
digit        song       game           ink         vista        problem
canon        phone      live           print       us           information
nikon        listen     sale           cartridge   microsoft    security
new          music      console        toner       install      strange
len          love       plai           laser       download     risk
photograph   new        playstat       color       software     finiance
review       play       ps3            laserjet    file         mobile
panason      shuffle    microsoft      paper       free         digit
slr          good       new            scanner     server       on-line

Table 6 presents the top words of the selected topics. The three topics with a positive sentiment label respectively talk about "cameras", "Apple music products" and "games"; these topics are listed in the left columns of Table 6. The three negative topics, related to "printers", "Windows products" and "phones", are listed in the right columns of Table 6. As we can see clearly from Table 6, the six topics are quite explicit and coherent, where each of them tries to capture the topic of a kind of electronic product. In terms of topic sentiment, by checking each of the topics in Table 6, it is clear that Topic 2 under the positive sentiment label and Topic 3 under the negative sentiment label indeed bear positive and negative sentiment labels respectively. However, the other topics under the positive and negative sentiment labels carry fewer sentiment words than the above two topics. By manually examining the tweet data, we observe that the sentiment labels of these topics are accurate. The analysis of these topics shows that TUS-LDA can indeed discover coherent sentiment-aware topics.

6 Conclusion and Future Work

In this paper, we studied the problem of sentiment-aware topic detection from user-generated posts on the social media. As the existing work is not suitable for short and informal posts, we proposed a new sentiment/topic model that considers the time and user information of posts to jointly model topics and sentiments. Based on the different characteristics of sentiments and topics, we required that words in the same post belong to the same topic, but they can belong to different sentiments. We compared our model with JST and ASUM as well as with two degenerate variations of our model on two Twitter datasets. Our quantitative evaluation showed that our model outperformed the other models both in sentiment classification and in topic coherence. At the same time, we asked two judges to evaluate our model and the baseline methods, and the result also showed that our model TUS-LDA performed best in sentiment-aware topic extraction. Moreover, we used six examples to visualize some sentiment-aware topics. In future work, we want to further mine sentiment-aware events in the posts, which can monitor the sentiment variation of each event over time. Moreover, we can also utilize the users' topic and sentiment information to cluster similar users. We will also consider expanding our model for aspect-based opinion mining.

ACKNOWLEDGEMENTS

We would like to thank the reviewers for their comments, which helped improve this paper considerably.
This work is supported in part by the National Natural Science Foundation of China (NSFC) under Grant No. 61272378 and the 863 Program under Grant No. 2015AA015406.

REFERENCES

[1] David M. Blei, Andrew Y. Ng, and Michael I. Jordan, 'Latent dirichlet allocation', Journal of Machine Learning Research, 3, 993–1022, (2003).
[2] Johan Bollen, Huina Mao, and Xiaojun Zeng, 'Twitter mood predicts the stock market', Journal of Computational Science, 2(1), 1–8, (2011).
[3] Zhiyuan Chen, Arjun Mukherjee, Bing Liu, Meichun Hsu, Malu Castellanos, and Riddhiman Ghosh, 'Leveraging multi-domain prior knowledge in topic models', in Proc. of IJCAI, pp. 2071–2077. AAAI, (2013).
[4] Mohamed Dermouche, Julien Velcin, Leila Khouas, and Sabine Loudcher, 'A joint model for topic-sentiment evolution over time', in Proc. of ICDM, pp. 773–778. IEEE, (2014).
[5] Qiming Diao, Jing Jiang, Feida Zhu, and Ee-Peng Lim, 'Finding bursty topics from microblogs', in Proc. of ACL, pp. 536–544. ACL, (2012).
[6] Charles J. Geyer, 'Practical markov chain monte carlo', Statistical Science, 473–483, (1992).
[7] Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith, 'Part-of-speech tagging for twitter: Annotation, features, and experiments', in Proc. of ACL, pp. 42–47. ACL, (2011).
[8] Alec Go, Richa Bhayani, and Lei Huang, 'Twitter sentiment classification using distant supervision', CS224N Project Report, Stanford, 1, 12, (2009).
[9] Gregor Heinrich, 'Parameter estimation for text analysis', Technical report, (2005).
[10] Thomas Hofmann, 'Probabilistic latent semantic indexing', in Proc. of SIGIR, pp. 50–57. ACM, (1999).
[11] Yohan Jo and Alice H. Oh, 'Aspect and sentiment unification model for online review analysis', in Proc. of WSDM, pp. 815–824. ACM, (2011).
[12] Svetlana Kiritchenko, Xiaodan Zhu, Colin Cherry, and Saif Mohammad, 'Nrc-canada-2014: Detecting aspects and sentiment in customer reviews', in SemEval, pp. 437–442. ACL, (2014).
[13] J. R. Landis and G. G. Koch, 'The measurement of observer agreement for categorical data', Biometrics, 33, 159–174, (1977).
[14] Kar Wai Lim and Wray Buntine, 'Twitter opinion topic model: Extracting product opinions from tweets by leveraging hashtags and sentiment lexicon', in Proc. of CIKM, pp. 1319–1328. ACM, (2014).
[15] Chenghua Lin and Yulan He, 'Joint sentiment/topic model for sentiment analysis', in Proc. of CIKM, pp. 375–384. ACM, (2009).
[16] Chenghua Lin, Yulan He, Richard Everson, and Stefan Rüger, 'Weakly supervised joint sentiment-topic detection from text', IEEE Transactions on Knowledge and Data Engineering, 24(6), 1134–1145, (2012).
[17] Bing Liu, Web data mining: exploring hyperlinks, contents, and usage data, Springer Science & Business Media, 2007.
[18] Bin Lu, Myle Ott, Claire Cardie, and Benjamin K. Tsou, 'Multi-aspect sentiment analysis with topic models', in Proc. of ICDMW, pp. 81–88. IEEE, (2011).
[19] Qiaozhu Mei, Xu Ling, Matthew Wondra, Hang Su, and ChengXiang Zhai, 'Topic sentiment mixture: modeling facets and opinions in weblogs', in Proc. of WWW, pp. 171–180. ACM, (2007).
[20] David Mimno, Hanna M. Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum, 'Optimizing semantic coherence in topic models', in Proc. of EMNLP, pp. 262–272. ACL, (2011).
[21] Subhabrata Mukherjee, Gaurab Basu, and Sachindra Joshi, 'Joint author sentiment topic model', in SDM, pp. 370–378. SIAM, (2014).
[22] Thien Hai Nguyen and Kiyoaki Shirai, 'Topic modeling based sentiment analysis on social media for stock market prediction', in Proc. of ACL, pp. 1354–1364. ACL, (2015).
[23] Alexander Pak and Patrick Paroubek, 'Twitter as a corpus for sentiment analysis and opinion mining', in Proc. of LREC, volume 10, pp. 1320–1326, (2010).
[24] Sara Rosenthal, Preslav Nakov, Svetlana Kiritchenko, Saif M. Mohammad, Alan Ritter, and Veselin Stoyanov, 'Semeval-2015 task 10: Sentiment analysis in twitter', SemEval, (2015).
[25] Kim Schouten and Flavius Frasincar, 'Survey on aspect-level sentiment analysis', IEEE Transactions on Knowledge & Data Engineering, 28(3), 813–830, (2016).
[26] Peter D. Turney and Michael L. Littman, 'Measuring praise and criticism: Inference of semantic orientation from association', ACM Transactions on Information Systems, 21(4), 315–346, (2003).
[27] Hanna M. Wallach, David M. Mimno, and Andrew McCallum, 'Rethinking lda: Why priors matter', in NIPS, pp. 1973–1981, (2009).
[28] Xiaohui Yan, Jiafeng Guo, Yanyan Lan, and Xueqi Cheng, 'A biterm topic model for short texts', in Proc. of WWW, pp. 1445–1456. Springer, (2013).
[29] Qi Zhang, Yeyun Gong, Xuyang Sun, and Xuanjing Huang, 'Time-aware personalized hashtag recommendation on social media', in Proc. of COLING, pp. 203–212. ACL, (2014).
[30] Wayne Xin Zhao, Jing Jiang, Hongfei Yan, and Xiaoming Li, 'Jointly modeling aspects and opinions with a maxent-lda hybrid', in Proc. of EMNLP, pp. 56–65. ACL, (2010).
[31] Wayne Xin Zhao, Jing Jiang, Jianshu Weng, Jing He, Ee-Peng Lim, Hongfei Yan, and Xiaoming Li, 'Comparing twitter and traditional media using topic models', in Proc. of ECIR, pp. 338–349. Springer, (2011).
[32] Zheng Chen, Chengtao Li, Jian-Tao Sun, and Jianwen Zhang, 'Sentiment topic model with decomposed prior', in Proc. of SDM, pp. 767–775. SIAM, (2013).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "890r2AbHXVN",
"year": null,
"venue": "EAMT (Projects/Products) 2016",
"pdf_link": "https://aclanthology.org/2016.eamt-2.21.pdf",
"forum_link": "https://openreview.net/forum?id=890r2AbHXVN",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Modern MT: a new open-source machine translation platform for the translation industry",
"authors": [
"Ulrich Germann",
"Eduard Barbu",
"Luisa Bentivogli",
"Nicola Bertoldi",
"Nikolay Bogoychev",
"Christian Buck",
"Davide Caroselli",
"Luis Carvalho",
"Alessandro Cattelan",
"Mauro Cettolo",
"Marcello Federico",
"Barry Haddow",
"David Madl",
"Luca Mastrostefano",
"Prashant Mathur",
"Achim Ruopp",
"Anna Samiotou",
"Vinod Sudharshan",
"Marco Trombetti",
"Jan van der Meer"
],
"abstract": "U. Germann, E. Barbu, L. Bentivogli, N. Bertoldi, N. Bogoychev, C. Buck, D. Caroselli, L. Carvalho, A. Cattelan, R. Cettolo, M. Federico, B. Haddow, D. Madl, L. Mastrostefano, P. Mathur, A. Ruopp, A. Samiotou, V. Sudharshan, M. Trombetti, Jan van der Meer. Proceedings of the 19th Annual Conference of the European Association for Machine Translation: Projects/Products. 2016.",
"keywords": [],
"raw_extracted_content": "Proceedings of the 19th Annual Conference of the EAMT: Projects /Products 397 \n \n Modern MT : A New Open -Source Machine \nTranslation Platform for the Translation Industry \n \nU. G ERMANN1, E. BARBU2, L. BENTIVOGLI3, N. B ERTOLDI3, \n N. B OGOYCHEV1, C. BUCK1, D. C AROSELLI2, L. CARVALHO4, \n A. C ATTELAN2, R. CATTONI3, M. C ETTOLO3, M. F EDERICO3, \nB. H ADDOW1, D. M ADL1, L. M ASTROSTEFANO2, P. M ATHUR3, \nA. R UOPP4, A. SAMIOTOU4, V. SUDHARSHAN4, \nM. T ROMBETTI2, J. van der M EER4 \n \n1 University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, United Kingdom \n2 Translated srl, Via Nepal, 29, 00144 Rome, Italy \n3 Fondazione Bruno Kessler, Via Sommarive, 18, 38123 Povo, Italy \n4 TAUS B.V., Oosteinde 9, 1483 AB De Rijp, Netherlands \n \[email protected] \n \nAbstract. Modern MT (www.modernmt.eu ) is a three -year Horizon 2020 innovation action \n(2015 –2017) to develop new open -source machine translation technology for use in translation \nproduction environments, both fully automatic and as a back -end in interactive post -editing \nscenarios. Led by Translated srl, the project consortium also includes the Fondazione Bruno \nKessler (FBK), the University of Edinb urgh, and TAUS B.V. Modern MT has received funding \nfrom the European Union’s Horizon 2020 research and innovation programme under Grant \nAgreement No645487 (call ICT -17-2014). \n \nProject Description \n \nModern MT aims to improve the state of the art in open source machine translation \nsoftware by developing cloud -ready software that offers \n– A simple installation procedure for a ready -to-go, REST -based translation service. \n– Very fast set -up times for systems built from scratch using existing parallel \ncorpora (e.g., translation memories). The goal is to process incoming data at \napproximately the speed at which it is uploaded. \n– Immediate integration of new data (e.g., from newly post -edited MT output). \nRebuilding or retuning the system will not be necessary. \n– Instant domain adaptation by considering translation context beyond the \nindividual sentence, without the need for domain -specific custom engines. \n– High scalability with respect to throughput, concurrent users, and the amount of \ndata the system can handle. \nA first version of the software is available at https://github.com/ModernMT/MMT . \n Modern MT is also actively collecting and curating parallel data for internal use \nand public release from web crawls and contributions from translation stakeholders, to \nimprove MT quality for everyone.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "CTgHTsEt9M1",
"year": null,
"venue": "ECAL (2)2009",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=CTgHTsEt9M1",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Origins of Scaling in Genetic Code.",
"authors": [
"Oliver Obst",
"Daniel Polani",
"Mikhail Prokopenko"
],
"abstract": "The principle of least effort in communications has been shown, by Ferrer i Cancho and Solé, to explain emergence of power laws (e.g., Zipf’s law) in human languages. This paper applies the principle and the information-theoretic model of Ferrer i Cancho and Solé to genetic coding. The application of the principle is achieved via equating the ambiguity of signals used by “speakers” with codon usage, on the one hand, and the effort of “hearers” with needs of amino acid translation mechanics, on the other hand. The re-interpreted model captures the case of the typical (vertical) gene transfer, and confirms that Zipf’s law can be found in the transition between referentially useless systems (i.e., ambiguous genetic coding) and indexical reference systems (i.e., zero-redundancy genetic coding). As with linguistic symbols, arranging genetic codes according to Zipf’s law is observed to be the optimal solution for maximising the referential power under the effort constraints. Thus, the model identifies the origins of scaling in genetic coding — via a trade-off between codon usage and needs of amino acid translation. Furthermore, the paper extends the model to multiple inputs, reaching out toward the case of horizontal gene transfer (HGT) where multiple contributors may share the same genetic coding. Importantly, the extended model also leads to a sharp transition between ambiguous HGT and zero-redundancy HGT. Zipf’s law is also observed to be the optimal solution in the HGT case.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "DKMG7xWK-s",
"year": null,
"venue": "EC 2019",
"pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3328526.3329626",
"forum_link": "https://openreview.net/forum?id=DKMG7xWK-s",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Pandora's Problem with Nonobligatory Inspection",
"authors": [
"Hedyeh Beyhaghi",
"Robert Kleinberg"
],
"abstract": "Martin Weitzman's \"Pandora's problem\" furnishes the mathematical basis for optimal search theory in economics. Nearly 40 years later, Laura Doval introduced a version of the problem in which the searcher is not obligated to pay the cost of inspecting an alternative's value before selecting it. Unlike the original Pandora's problem, the version with nonobligatory inspection cannot be solved optimally by any simple ranking-based policy, and it is unknown whether there exists any polynomial-time algorithm to compute the optimal policy. This motivates the study of approximately optimal policies that are simple and computationally efficient. In this work we provide the first non-trivial approximation guarantees for this problem. We introduce a family of \"committing policies\" such that it is computationally easy to find and implement the optimal committing policy. We prove that the optimal committing policy is guaranteed to approximate the fully optimal policy within a 1-1/e = 0.63... factor, and for the special case of two boxes we improve this factor to 4/5 and show that this approximation is tight for the class of committing policies.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "K8_TwawBAwp",
"year": null,
"venue": "ECAI 2008",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-58603-891-5-147",
"forum_link": "https://openreview.net/forum?id=K8_TwawBAwp",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Method for Classifying Vertices of Labeled Graphs Applied to Knowledge Discovery from Molecules",
"authors": [
"Frédéric Pennerath",
"Géraldine Polaillon",
"Amedeo Napoli"
],
"abstract": "The article proposes a generic method to classify vertices or edges of a labeled graph. More precisely the method computes a confidence index for each vertex v or edge e to be a member of a target class by mining the topological environments of v or e. The method contributes to knowledge discovery since it exhibits for each edge or vertex an informative environnement that explains the found confidence. When applied to the problem of discovering strategic bonds in molecules, the method correctly classifies most of the bonds while providing relevant explanations to chemists. The developed algorithm GemsBond outperforms both speed and scalability of the learning method that has previously been applied to the same application while giving similar results.",
"keywords": [],
"raw_extracted_content": "A Method for Classifying Vertices of\nLabeled Graphs Applied to\nKnowledge Discovery from Molecules\nFr´ed´eric Pennerath1,3,G ´eraldine Polaillon2and Amedeo Napoli3\nAbstract. The article proposes a generic method to clas-\nsify vertices or edges of a labeled graph. More precisely the\nmethod computes a confidence index for each vertex vor edge\neto be a member of a target class by mining the topological\nenvironments of vore. The method contributes to knowl-\nedge discovery since it exhibits for each edge or vertex an in-\nformative environnement that explains the found confidence.When applied to the problem of discovering strategic bonds inmolecules, the method correctly classifies most of the bondswhile providing relevant explanations to chemists. The devel-oped algorithm GemsBond outperforms both speed and scala-\nbility of the learning method that has previously been appliedto the same application while giving similar results.\n1 Introduction\nLabeled graphs constitute one of the most widely used mod-\nels to represent symbolic data, thanks to their simplicity andgenerality. If a vertex (or an edge) is obviously characterizedby the label it carries, the most interesting information abouta vertex generally comes from its relations with its topological\nenvironment. The general question raised by the present arti-cle is about this topological information: what can be learntabout a vertex knowing its environment in a graph ? In partic-ular can a vertex be classified into a target class by compar-ing its environments with those of classified examples ? Tosolve such problems, most approaches of relational learningface the combinatorics explosion of possible graph patternsby projecting graphs into simpler representation models (cfSect.6). The problem gets tractable to the detriment of ac-\ncuracy as model reduction induces inevitably some loss inavailable topological information. By contrast, the proposedmethod called GemsBond directly works on graph patterns\nincluded in data, addressing more specifically vertex classi-fication problems that resist to topological reduction. Thisis particularly true in organic chemistry where changing thechemical element of a single atom amay radically change its\ninfluence over atoms at three or even more bonds (i.e atomconnections) away from a.\nIndeed the context that has originally motivated the design\nofGemsBond is chemical synthesis: experts of this field build\n1Supelec, France, email: [email protected]\n2Supelec, France, email: [email protected]\n3Loria, France, Nancy, email: [email protected] synthesis plans of target molecules thanks to an analyt-\nical method called retrosynthesis [3]. Every step of this re-cursive method consists in inferring from the molecular graphof the current target molecule Ma chemical reaction that\nbuilds Mfrom simpler molecules called precursors. The de-\ncomposition into subproblems is iterated, precursors serving\nas new targets, until subsequent precursors are readily avail-able molecules. The expert starts each step by identifying thestrategic bonds in the target molecular graph [3]. Strategic\nbonds are the best or the easiest candidate bonds of a moleculeMto be created by chemical reactions that synthesize M.F i g -\nure 1 illustrates a retrosynthesis step where the breaking ofa strategic bond produces two precursors fragments. Because\nFigure 1. 
chemical reactions follow common patterns, reactions produce specific topological environments around created bonds. As a consequence, the strategic character of a bond can often be inferred from its topological environment. However, the discovery of strategic bonds requires knowledge of thousands of reaction patterns whose conditions of applicability are not clearly known. Discovering strategic bonds automatically by mining existing reaction databases will thus help experts improve the quality of their strategic bond analysis.

The article has a twofold contribution: it presents a generic method that classifies vertices or edges of a graph by mining topological environments occurring frequently in a set of example graphs. Then it presents a successful application of this method to the problem of discovering strategic bonds in molecular graphs. To this end, Section 2 introduces the problem of vertex classification based on vertex environment in a formal, application-independent framework. Section 3 presents our method GemsBond and gives some details about its implementation. Section 4 explains how the previous method has addressed the problem of discovering strategic bonds. Section 5 describes results obtained by GemsBond in predicting strategic bonds, while Section 6 compares the proposed method to other related works.

2 Problem statement

An L-vertex-labeled graph g = (V, E, L, λ_g) is defined by a set of vertices V(g) = V, a set of pairs of said adjacent vertices E(g) = E called edges, a set of vertex types L, and a labeling function λ_g : V → L that labels every vertex v with a type λ_g(v). A graph g1 is a subgraph of g2 (i.e., g1 ⊆ g2) if V(g1) and E(g1) are respectively subsets of V(g2) and E(g2) and the vertices of V(g1) are identically labeled in g1 and g2. A graph is connected if every pair of vertices can be linked by a sequence of adjacent vertices. Two graphs are isomorphic if it is possible to rename the vertices of one graph so that it becomes equal to the second. For the sake of conciseness, the problem statement only considers vertex-labeled graphs, even if the problem can be generalized to labeled graphs where edges carry types. Therefore the term graph refers hereafter to an L-vertex-labeled graph.

The considered problem of vertex classification based on vertex environment consists in predicting whether a given input vertex v of an input graph g is a member of a target class C by comparing the environments of v in g to environments of already classified vertices. An environment E of v in g is formally defined as any connected subgraph of g containing v. This supervised classification problem assumes the existence of a set E of example graphs where the members of C are known vertices. Figure 2 provides two example graphs, an input graph g and an environment E of an input vertex v of g, all referred to in subsequent examples.

Figure 2. Two example graphs (a), an input graph and input vertex (b) and an environment (c) of the input vertex.

In order to be meaningful, the problem assumes that, for the considered application, the hypothesis v ∈ C statistically depends on the environment of v in g and that the dependency is the same whether graph g is an example or an input graph.

3 The GemsBond algorithm

The principles of GemsBond rely on a confidence index c(E) for the hypothesis v ∈ C to be true knowing only one particular environment E of v.
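Before turning to the definition of c(E), here is a minimal Python sketch of the kind of labeled-graph representation the definitions of Section 2 assume; class and field names are illustrative assumptions, not the authors' implementation.

```python
# Minimal labeled-graph model of Section 2 (illustrative, not the paper's code):
# vertices carry labels via the function lambda_g; edges are unordered pairs.
from dataclasses import dataclass, field

@dataclass
class LabeledGraph:
    labels: dict                              # vertex -> label (lambda_g)
    edges: set = field(default_factory=set)   # set of frozenset({u, v})

    def add_edge(self, u, v):
        self.edges.add(frozenset((u, v)))

    def is_subgraph_of(self, other):
        # g1 is a subgraph of g2 if vertices, labels and edges are included
        return (all(other.labels.get(v) == l for v, l in self.labels.items())
                and self.edges <= other.edges)
```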
The definition of c(E), in turn, relies on the number of occurrences of E in a set E of example graphs. An occurrence of a graph g in a graph g′ is defined by an injective application (or morphism) μ : V(g) → V(g′) that preserves both vertex adjacency and vertex labeling:

∀ {v1, v2} ∈ E(g), {μ(v1), μ(v2)} ∈ E(g′)   (1)
∀ v ∈ V(g), λ_g′(μ(v)) = λ_g(v)   (2)

Given an environment E of v in a graph g, an occurrence of E in E specified by a morphism μ is positive (resp. negative) if the image μ(v) of v is (resp. is not) in the target class C. The number occ⁺(E) (resp. occ⁻(E)) of positive (resp. negative) occurrences of environment E is defined as the total number of positive (resp. negative) occurrences of E in all graphs of E. Figure 3 shows the three positive and two negative occurrences of the environment of Fig. 2(c) in the example graphs of Fig. 2(a).

Figure 3. Positive and negative occurrences

Contrary to the frequency of itemsets, the number of occurrences is not monotonic with respect to subgraph inclusion: given two environments E1 and E2, both properties E1 ⊂ E2 and occ(E1) < occ(E2) can hold simultaneously. For instance, the single vertex of type c has one less (i.e., two) positive occurrences in the examples of Fig. 2(a) than the larger environment of Fig. 2(c). Whereas the absolute values of occ⁺(E) and occ⁻(E) can fluctuate unpredictably when the environment E of v grows, the fraction c(E) of positive occurrences of E, called confidence, can approach the probability for the hypothesis v ∈ C to be true given E:

c(E) = occ⁺(E) / (occ⁺(E) + occ⁻(E))   (3)

In the special case where both occ⁺(E) and occ⁻(E) are null, the method adopts a conservative position, assuming the confidence c(E) is zero. Contrary to numbers of occurrences, this ratio is bounded between 0 and 1 and is consistent with a probability: a value of 0 (resp. 1) states that every occurrence of E in the example set is negative (resp. positive).

The confidence c(v) in the hypothesis v ∈ C is presumably independent of any particular environment, so that the whole set E(v) of environments of v should contribute to the value of c(v). However, mining every environment of v would require an unacceptable amount of processing time. A compromise solution consists in considering only the few maximal environments of v in g occurring in at least n_min examples of E. However, the external parameter n_min cannot be tuned easily: if the value of n_min is too low, maximal environments get large and require long processing times, whereas a too-large value of n_min stops the environment growth too early and produces non-discriminative environments of average confidence. Instead, GemsBond empirically defines the confidence c(v) as equal to the highest confidence reached by any environment E of v:

c(v) = c(E_max) with E_max = argmax over E ∈ E(v) of c(E)   (4)

This choice is legitimate for so-called asymmetric problems, where the fact v ∈ C is triggered by the presence around v of any environment from a restricted set of (unknown) specific environments. As the negation of an existential disjunction is not another existential disjunction, symmetry with the dual problem is broken. Consequently, if an input vertex has two disjoint environments E_h and E_l with respectively a high and a low confidence, E_h is preponderant over E_l, so that the confidence c(v) remains high (but lower than c(E_h)).
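A brute-force rendering of the occurrence counting behind Eq. (3), reusing the LabeledGraph sketch above: it enumerates injective label-preserving mappings directly, which is only viable for tiny graphs and is not the paper's optimized embedding-list machinery.

```python
# Count positive/negative occurrences of `pattern` (anchored at vertex
# `anchor`) in `target`, then score confidence as in Eq. (3).
from itertools import permutations

def occurrences(pattern, target, anchor, positives):
    pv = sorted(pattern.labels)
    pos = neg = 0
    for image in permutations(target.labels, len(pv)):
        mu = dict(zip(pv, image))                       # injective by construction
        if any(target.labels[mu[v]] != pattern.labels[v] for v in pv):
            continue                                    # condition (2) violated
        if any(frozenset({mu[u], mu[w]}) not in target.edges
               for u, w in (tuple(e) for e in pattern.edges)):
            continue                                    # condition (1) violated
        if mu[anchor] in positives:
            pos += 1
        else:
            neg += 1
    return pos, neg

def confidence(env, anchor, examples):
    # examples: iterable of (graph, set_of_positive_vertices) pairs
    pos = neg = 0
    for g, positive_vertices in examples:
        p, n = occurrences(env, g, anchor, positive_vertices)
        pos, neg = pos + p, neg + n
    return pos / (pos + neg) if pos + neg else 0.0      # conservative zero
```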
This preponderance property explains the choice in formula (4) of converging in priority towards a single environment of maximal confidence.

Finally, if the confidence of v is greater than a minimum decision threshold c_min, the vertex v is classified positively:

c(v) ≥ c_min ⇔ v is believed to be in C   (5)

The optimal value of c_min can be learnt from E. The method provides an easy-to-use analysis tool for the expert, as the value of c(v) is justified by a single environment E_max. This environment, hereafter called the explanation, has been shown to carry relevant chemical information to explain strategic bonds. In this sense, the method serves not only classification problems but knowledge extraction as well.

Given an input vertex v of an input graph g, the mining procedure of GemsBond consists in searching for the environment E_max that maximises the confidence c(E) relative to a set of examples E. The depth-first search of E_max is implemented by a recursive procedure that develops the current environment E, set initially to the vertex-graph of v. Again, depending on the desired tradeoff between result accuracy and processing speed, multiple search strategies are possible. For the specific problem of strategic bond discovery, the locally greedy search (cf. Algorithm 1) has been shown to be approximately as accurate as other, more exhaustive searches while being significantly faster.

Algorithm 1: The greedy procedure findEMax()
  Data: input graph g, example set E
  Input: current environment E_current and its confidence c_current
  Result: explanation E_max and confidence c_max are global variables
  Set C ← ∅; c_local_max ← 0
  forall extensions e of E_current in g do                      (1)
      c ← conf(e(E_current), E)                                 (2)
      if c ≥ c_local_max and c > c_current then                 (3)
          if c > c_local_max then
              c_local_max ← c; C ← ∅
          C ← C ∪ {e}
  if C = ∅ then
      if c_current > c_max then
          c_max ← c_current; E_max ← E_current
  else
      forall e ∈ C do
          findEMax(e(E_current), c_local_max)                   (4)

At each step of the recursive search, every extension of the current environment E_current compatible with the input graph g is enumerated (cf. line 1) before the confidence of the extended environment is evaluated (cf. line 2). Extending E_current simply consists in adding to E_current one of the edges in E(g) \ E(E_current) incident to a vertex of V(E_current). Only the environments that have a (locally) maximal confidence are further developed by recursive calls (cf. line 4). Figure 4 illustrates the greedy search of E_max relative to the input graph and input vertex of Fig. 2(b) and the two example graphs of Fig. 2(a). Bold edges represent edge extensions.

Figure 4. Example of a greedy search

Computing the confidence of E_current (cf. line 2) requires counting all positive and negative occurrences of E_current in E. Graph mining algorithms using a depth-first search can efficiently compute the number of occurrences of the current graph pattern by using a fast and compact data structure called an embedding list [9]. This structure has been upgraded to count positive and negative occurrences simultaneously in a single pass over the examples, so that computing the confidence does not require more time than computing a number of occurrences. A caching mechanism has also been added to Algorithm 1 to remember the confidence of already mined environment graphs, so that the confidence of every mined environment is computed only once. This cache is made of a trie that maps encodings of graphs to confidences in a way that makes encodings invariant to vertex index permutations.
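The same greedy procedure in hypothetical Python, building on the confidence() sketch above; extensions() stands in for the one-edge extension step described in the text and is an assumed helper, as is the best dict replacing the pseudocode's global E_max/c_max.

```python
# Sketch of Algorithm 1's greedy search (illustrative names, not the paper's
# code); extensions(env, g) is assumed to yield env grown by one incident edge.
def find_e_max(env, c_current, g, anchor, examples, best):
    c_local_max, ties = 0.0, []
    for ext in extensions(env, g):                 # line (1)
        c = confidence(ext, anchor, examples)      # line (2)
        if c >= c_local_max and c > c_current:     # line (3)
            if c > c_local_max:
                c_local_max, ties = c, []
            ties.append(ext)
    if not ties:                                   # local maximum reached
        if c_current > best["c_max"]:
            best["c_max"], best["E_max"] = c_current, env
    else:
        for ext in ties:
            find_e_max(ext, c_local_max, g, anchor, examples, best)  # line (4)

# Initial call: find_e_max(vertex_graph_of(v), 0.0, g, v, examples,
#                          best={"c_max": 0.0, "E_max": None})
```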
In addition, extensions that produce a null confidence are black-listed so that they are not applied later during the greedy search (cf. Fig. 4). Finally, in order to improve the quality of the results, the greedy selection (cf. line 3 of Algorithm 1) is disabled while the size of E_current is below some threshold s_min. This condition protects GemsBond from an early convergence toward a local maximum of confidence that is not globally optimal. When tested on chemical data, most suboptimal maxima appear for small environments of two or three bonds, so that a good value of s_min is 3 labeled edges for that particular application.

4 Application to strategic bond discovery

The GemsBond algorithm has been applied to the asymmetric problem of discovering strategic bonds as introduced in Section 1, where example and input graphs are molecular graphs. In a molecular graph, as illustrated in Fig. 1, vertices represent atoms labeled by their chemical elements (C for carbon, ...) and edges represent covalent bonds labeled by their type (single, double, triple or aromatic). The set E contains examples of molecule synthesis specified by molecular graphs where the bonds created or modified by the underlying synthesis are specially annotated. Molecular graphs are directly imported from reaction databases without any additional annotation but bond aromaticity. Output confidences produced by GemsBond are efficiently conveyed to experts by modulating the thickness of every bond with its confidence: the more strategic a bond is, the thicker the bond is drawn. Figure 5 represents an example of output as displayed to the expert. Bonds created (resp. modified) by the considered synthesis are crossed twice (resp. once), whereas strategic bonds appear thicker, to various extents, than other bonds. The output illustrates the four classes of bonds for c_min set to 0.7: for instance, the created bond a of confidence 0.92 is a true positive.

Figure 5. The four classes of bonds and confidences

The respective explanations of the four bonds are given in Fig. 6. As expected, bonds of high confidence have more sophisticated environments than bonds of low confidence. These explanations are minimal, as the greedy algorithm only extends the environment if doing so makes the confidence strictly grow. As a consequence, all atoms and bonds of an explanation play some role in the found confidence.

Figure 6. Explanations

The uncreated bond b, symmetric to the created bond a, is necessarily strategic. This suggests a problem already observed in [11]: whereas a created bond is strategic, a bond not created by a given synthesis may be created by another, unconsidered synthesis and thus actually be strategic. This in turn induces noise in the data and a persistent classification error. Another difficulty is that created bonds are 9 times less frequent than non-created bonds, so that classifiers are pushed to make positive predictions only for the most obviously strategic bonds.

5 Evaluation

Classification tests have been carried out on 6600 examples⁴ of molecule synthesis. In order for the experts to better focus their analysis, only the strategic character of single bonds has been computed. A cross-validation test partitioned the first 6000 examples into subsets of 100 elements. In each subset, the confidence of single bonds to be strategic has been evaluated from the 5900 remaining examples.
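A schematic of this leave-one-subset-out protocol; single_bonds() and the per-bond confidence call are assumed helpers standing in for the machinery sketched earlier, not the paper's code.

```python
# Leave-one-subset-out evaluation over folds of 100 examples, as described
# above (illustrative helpers only).
def cross_validate(examples, fold_size=100):
    scored = []   # (is_created, confidence) pairs for later thresholding
    for start in range(0, len(examples), fold_size):
        held_out = examples[start:start + fold_size]
        train = examples[:start] + examples[start + fold_size:]
        for graph, created_bonds in held_out:
            for bond in single_bonds(graph):             # assumed helper
                c = bond_confidence(bond, graph, train)  # greedy search above
                scored.append((bond in created_bonds, c))
    return scored
```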
Figure 7(a) shows the stacked histograms of both created and non-created bonds depending on the value of the confidence. Because of the unbalanced numbers of created/non-created bonds, each distribution has been normalized to a total sum of 1. Most bonds with a confidence higher than 0.4 appear to be created. Peaks are caused by recurrent explanations.

Figure 7. Distribution of created/not created bonds (a), ROC curve (b) and error rates (c)

The distributions of Fig. 7(a) determine the prediction error as a function of the threshold c_min in Fig. 7(c) and the ROC curve in Fig. 7(b), with an AUC of 0.92. The minimal error of 6%, reached for c_min = 0.7, is biased by the over-representation of non-created bonds and is to be compared with the 10% error rate obtained when systematically rejecting the hypothesis. The corrected error in Fig. 7(c) is the prediction error if both classes are assumed to be equally represented. The optimal value for c_min is then 0.3, for an unbiased error of 16%. These thresholds have been validated on the 600 remaining examples and no gap has been observed with either predicted error rate. The learning method CNN [11], already applied to the same problem, is not available, so that no comparison test between CNN and GemsBond could be performed. Régin reports in [11] a slightly better error rate of 4% on their own tests. However, the 2% difference must be mitigated, as CNN was tested on data cautiously selected by hand and whose atoms and bonds were annotated with additional relevant chemical information. Considering algorithm efficiency, Régin reports that a jack-knife test of CNN over 694 single bonds from 75 molecular graphs required 72 hours on a SPARC 2. In comparison, GemsBond processes the cross-validation test of about 190,000 single bonds from 6000 molecules in about 50 minutes on an Opteron 250. The performance gap is so large (about 20,000 times faster while mining 80 times more data) that it cannot be explained by hardware or implementation issues alone. Since the complexities of CNN and GemsBond with the size of E are respectively quadratic and linear, the performance gap should even increase for larger sets E.

⁴ The dataset cannot be distributed but can be retrieved from the Symyx®–MDL® ChemInform and Reflib databases by selecting only mono-product and mono-step reactions with at least one created C–C bond, with a yield of at least 90% and with atoms only of type H, B, C, N, O, F, Si, P, S, Cl, Br or I. Only the first 6600 products are considered in the resulting dataset of 6743 reactions.

6 Related work

Some computer-assisted synthesis systems like [6, 13] already search for strategic bonds in molecular graphs. However, these methods rely on either hard-coded heuristics or deductive rule systems. To the best of our knowledge, the only attempt to learn strategic bonds from examples is described in [12, 11]. This complex inductive graph-learning method iteratively generalizes graph patterns from examples. The method is robust against noise in the data but requires a large amount of processing time to compute maximal common subgraphs of graph patterns. Moreover, the method does not scale properly with the data, since its complexity is a quadratic function of the number of examples.
The method GemsBond aims at solving the same bond classification problem but finds its inspiration in pattern searching, as a subfield of artificial intelligence, and more specifically in graph-based data mining, whose general principle is the extraction of subgraphs occurring frequently in a set of labeled graphs. The first algorithm of this type is Subdue [2], which uses a beam search strategy to extract from a set of graphs the subgraphs maximizing a scoring function. More recently, algorithms have efficiently extracted subgraphs frequent in a graph dataset [7, 8, 15, 9]. These methods have found applications in chemistry to predict the biological activity of molecules based on the frequency of molecular substructures [4, 14]. Our method differs from these approaches, as the classification is local to a vertex or edge and as the mined patterns are subgraphs of one particular graph, so that the search space is much smaller than the whole graph order. The problem addressed by GemsBond is more related to statistical or logical relational learning [10] and graph labeling [1]. However, these methods reduce the topological information using either a priori Bayesian or logical relational models [10], information diffusion models along edges [16] or, more generally, injection into high-dimensional Euclidean spaces using vertex kernels [5]. In comparison, our method directly works in the ordered space of graph patterns.

7 Conclusion

This article has described an original graph-mining method to classify vertices or edges based on their environment. The GemsBond algorithm has proved to be a fast, scalable and accurate solution to the strategic bond discovery problem. In future work, the study of variations on the search algorithm should relax the assumption on problem asymmetry, so that GemsBond becomes applicable to a wider spectrum of applications and benchmarks.

ACKNOWLEDGEMENTS

The authors wish to thank the chemists C. Laurenço and G. Niel from ENSC, Montpellier, France, for their support and feedback.

REFERENCES

[1] Graph Labelling Workshop of ECML/PKDD 2007, the 11th European Conference on Principles and Practice of Knowledge Discovery in Databases, Warsaw, Poland, 2007.
[2] D. J. Cook and L. B. Holder, ‘Substructure discovery using minimum description length and background knowledge’, Journal of Artificial Intelligence Research, 1, 231–255, (1994).
[3] E. J. Corey and X. M. Cheng, The Logic of Chemical Synthesis, John Wiley & Sons, New York, 1989.
[4] M. Deshpande, M. Kuramochi, and G. Karypis, ‘Frequent sub-structure-based approaches for classifying chemical compounds’, ICDM, 00, 35, (2003).
[5] T. Gärtner, T. Horvath, Q. V. Le, A. J. Smola, and S. Wrobel, Mining Graph Data, chapter 11, Wiley-Interscience, 2006.
[6] J. Gasteiger, M. Pförtner, M. Sitzmann, R. Höllering, O. Sacher, T. Kostka, and N. Karg, ‘Computer-assisted synthesis and reaction planning in combinatorial chemistry’, Perspectives in Drug Discovery and Design, 20, 245–264, (2000).
[7] A. Inokuchi, T. Washio, and H. Motoda, ‘An apriori-based algorithm for mining frequent substructures from graph data’, in PKDD ’00: Proceedings of the 4th European Conference on Principles of Data Mining and Knowledge Discovery, pp. 13–23, London, UK, (2000). Springer-Verlag.
[8] M. Kuramochi and G. Karypis, ‘Frequent subgraph discovery’, in ICDM ’01: Proceedings of the 2001 IEEE International Conference on Data Mining, pp. 313–320, (2001).
[9] S. Nijssen and J. N.
Kok, ‘A quickstart in frequent structure mining can make a difference’, in KDD ’04: Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 647–652, New York, NY, USA, (2004). ACM Press.
[10] Luc De Raedt, Thomas G. Dietterich, Lise Getoor, Kristian Kersting, and Stephen Muggleton, eds. Probabilistic, Logical and Relational Learning – A Further Synthesis, 15.04.–20.04.2007, volume 07161 of Dagstuhl Seminar Proceedings. Internationales Begegnungs- und Forschungszentrum fuer Informatik (IBFI), Schloss Dagstuhl, Germany, 2008.
[11] J.-C. Régin. Développement d’outils algorithmiques pour l’intelligence artificielle. Application à la chimie organique. Thèse de l’Université des Sciences et Techniques du Languedoc, Montpellier, 1995.
[12] J.-C. Régin, O. Gascuel, and C. Laurenço, ‘Machine learning of strategic knowledge in organic synthesis from reaction databases’, in Proceedings of E.C.C.C-1, Computational Chemistry, eds., F. Bernardi and J.-L. Rivail, pp. 618–623, Woodbury, NY, (1995). AIP Press.
[13] H. Satoh and T. Nakata, ‘Knowledge discovery on chemical reactivity from experimental reaction information’, in Discovery Science, eds., Gunter Grieser, Yuzuru Tanaka, and Akihiro Yamamoto, volume 2843 of Lecture Notes in Computer Science, pp. 470–477. Springer, (2003).
[14] R. M. H. Ting and J. Bailey, ‘Mining minimal contrast subgraph patterns’, in SDM, eds., J. Ghosh, D. Lambert, D. B. Skillicorn, and J. Srivastava. SIAM, (2006).
[15] X. Yan and J. Han, ‘gSpan: Graph-based substructure pattern mining’, in ICDM ’02: Proceedings of the 2002 IEEE International Conference on Data Mining, p. 721, Washington, DC, USA, (2002). IEEE Computer Society.
[16] Dengyong Zhou, Jiayuan Huang, and Bernhard Schölkopf, ‘Learning from labeled and unlabeled data on a directed graph’, in ICML, eds., Luc De Raedt and Stefan Wrobel, volume 119 of ACM International Conference Proceeding Series, pp. 1036–1043. ACM, (2005).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "OWPx9gVXhdd",
"year": null,
"venue": "CGW@ECAI2014",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=OWPx9gVXhdd",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Job-Level Algorithms for Connect6 Opening Position Analysis.",
"authors": [
"Ting-Han Wei",
"I-Chen Wu",
"Chao-Chin Liang",
"Bing-Tsung Chiang",
"Wen-Jie Tseng",
"Shi-Jim Yen",
"Chang-Shing Lee"
],
"abstract": "This paper investigates job-level (JL) algorithms to analyze opening positions for Connect6. The opening position analysis is intended for opening book construction, which is not covered by this paper. In the past, JL proof-number search (JL-PNS) was successfully used to solve Connect6 positions. Using JL-PNS, many opening plays that lead to losses can be eliminated from consideration during the opening game. However, it is unclear how the information of unsolved positions can be exploited for opening book construction. For this issue, this paper first proposes four heuristic metrics when using JL-PNS to estimate move quality. This paper then proposes a JL upper confidence tree (JL-UCT) algorithm and some heuristic metrics, one of which is the number of nodes in each candidate move’s subtree. In order to compare these metrics objectively, we proposed two kinds of measurement methods to analyze the suitability of these metrics when choosing best moves for a set of benchmark positions. The results show that for both metrics this node count heuristic metric for JL-UCT outperforms all the others, including the four for JL-PNS.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "98WisvKfJd",
"year": null,
"venue": "EC 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=98WisvKfJd",
"arxiv_id": null,
"doi": null
}
|
{
"title": "LP-based Approximation for Personalized Reserve Prices",
"authors": [
"Mahsa Derakhshan",
"Negin Golrezaei",
"Renato Paes Leme"
],
"abstract": "We study the problem of computing personalized reserve prices in eager second price auctions without having any assumption on valuation distributions. Here, the input is a dataset that contains the submitted bids of n buyers in a set of auctions and the goal is to return personalized reserve prices r that maximize the revenue earned on these auctions by running eager second price auctions with reserve r . We present a novel LP formulation to this problem and a rounding procedure which achieves a (1+2(√2-1)e √2-2 ) -1 ≅0.684-approximation. This improves over the 1/2-approximation Algorithm due to Roughgarden and Wang. We show that our analysis is tight for this rounding procedure. We also bound the integrality gap of the LP, which bounds the performance of any algorithm based on this LP. Supplemental Material Available for Download mp4 p589-derakhshan.mp4 (1.2 GB) Index Terms LP-based Approximation for Personalized Reserve Prices Mathematics of computing Mathematical analysis Mathematical optimization Continuous optimization Linear programming Theory of computation Design and analysis of algorithms Approximation algorithms analysis Rounding techniques Theory and algorithms for application domains Algorithmic game theory and mechanism design Computational pricing and auctions Comments var disqus_config ",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "EJSl66H1M5y",
"year": null,
"venue": "ISIT 2019",
"pdf_link": "https://ieeexplore.ieee.org/iel7/8827389/8849208/08849575.pdf",
"forum_link": "https://openreview.net/forum?id=EJSl66H1M5y",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Being correct eventually almost surely",
"authors": [
"Changlong Wu",
"Narayana Santhanam"
],
"abstract": "We study the problem of predicting upper bounds on the next draw of an unknown probability distribution after observing a sample generated by it. The unknown distribution is modeled as belonging to a class P of distributions over natural numbers. The goal is to err only finitely many times even though the game proceeds over an infinite horizon, and though there is no upper bound on what the next sample can be. If a universal prediction scheme exists that makes only finitely many errors regardless of what model in P generated the data, we say P is eventually almost surely (e.a.s.) predictable. In this paper, we fully characterize when P can be e.a.s.-predictable.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "WIlFLFfDuWM",
"year": null,
"venue": "EACL 1999",
"pdf_link": "https://aclanthology.org/E99-1021.pdf",
"forum_link": "https://openreview.net/forum?id=WIlFLFfDuWM",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Automatic Authorship Attribution",
"authors": [
"Efstathios Stamatatos",
"Nikos Fakotakis",
"George K. Kokkinakis"
],
"abstract": "E. Stamatatos, N. Fakotakis, G. Kokkinakis. Ninth Conference of the European Chapter of the Association for Computational Linguistics. 1999.",
"keywords": [],
"raw_extracted_content": "Proceedings of EACL '99 \nAutomatic Authorship Attribution \nE. Stamatatos, N. Fakotakis and G. Kokkinakis \nDept. of Electrical and Computer Engineering \nUniversity of Patras \n26500 - Patras \nGREECE \[email protected] \nAbstract \nIn this paper we present an approach to \nautomatic authorship attribution dealing \nwith real-world (or unrestricted) text. \nOur method is based on the \ncomputational analysis of the input text \nusing a text-processing tool. Besides the \nstyle markers relevant to the output of \nthis tool we also use analysis-dependent \nstyle markers, that is, measures that \nrepresent the way in which the text has \nbeen processed. No word frequency \ncounts, nor other lexically-based \nmeasures are taken into account. We \nshow that the proposed set of style \nmarkers is able to distinguish texts of \nvarious authors of a weekly newspaper \nusing multiple regression. All the \nexperiments we present were performed \nusing real-world text downloaded from \nthe World Wide Web. Our approach is \neasily trainable and fully-automated \nrequiring no manual text preprocessing \nnor sampling. \n1 Introduction \nThe vast majority of the attempts to computer- \nassisted authorship attribution has been focused \non literary texts. In particular, a lot of attention \nhas been paid to the establishment of the \nauthorship of anonymous or doubtful texts. A \ntypical paradigm is the case of the Federalist \npapers twelve of which are of disputed \nauthorship (Mosteller and Wallace, 1984; \nHolmes and Forsyth, 1995). Moreover, the lack \nof a generic and formal definition of the \nidiosyncratic style of an author has led to the \nemployment of statistical methods (e.g., discriminant analysis, principal components, \netc.). Nowadays, the wealth of text available in \nthe World Wide Web in electronic form for a \nwide variety of genres and languages, as well as \nthe development of reliable text-processing tools \nopen the way for the solution of the authorship \nattribution problem as regards real-world text. \nThe most important approaches to authorship \nattribution involve lexically based measures. A \nlot of style markers have been proposed for \nmeasuring the richness of the vocabulary used \nby the author. For example, the type-token ratio, \nthe hapax legomena (i.e., once-occurring \nwords), the hapax dislegomena (i.e., twice- \noccurring words), etc. There are also functions \nthat make use of these measures such as Yule's \nK (Yule, 1944), Honore's R (Honore, 1979), etc. \nA review of this metrics can be found in \n(Holmes, 1994). In (Holmes and Forsyth, 1994) \nfive vocabulary richness functions were used in \nthe framework of a multivariate statistical \nanalysis of the Federalist papers and a principal \ncomponents analysis was performed. All the \ndisputed papers lie in the side of James Madison \n(rather than Alexander Hamilton) in the space of \nthe first two principal components. However, \nsuch measures require the development of large \nlexicons with specialized information in order to \ndetect the various forms of the lexical units that \nconstitute an author's vocabulary. For languages \nwith a rich morphology, i.e. Modem Greek, this \nis an important shortcoming. \nInstead of counting how many words occur \ncertain number of times, Burrows (1987) \nproposed the use of a set of common function \n(or context-free) word frequencies in the sample \ntext. 
This method, combined with a principal components analysis, achieved remarkable results when applied to a wide variety of authors (Burrows, 1992). On the other hand, a lot of effort is required regarding the selection of the most appropriate set of words that best distinguish a given set of authors (Holmes and Forsyth, 1995). Moreover, all the lexically based style markers are highly author- and language-dependent. The results of a work using such measures, therefore, cannot be applied to a different group of authors nor to another language.

In order to avoid the problems of lexically based measures, Baayen et al. (1996) proposed the use of syntax-based ones. This approach is based on the frequencies of the rewrite rules as they appear in a syntactically annotated corpus. Both high-frequency and low-frequency rewrite rules give accuracy results comparable to lexically based methods. However, the computational analysis is considered a significant limitation of this method, since the required syntactic annotation scheme is very complicated and current text-processing tools are not capable of providing such information automatically, especially in the case of unrestricted text.

To the best of our knowledge, there is no computational system for the automatic detection of authorship dealing with real-world text. In this paper, we present an approach to this problem. In particular, our aim is the discrimination between the texts of various authors of a Modern Greek weekly newspaper. We use an already existing text-processing tool able to detect sentence and chunk boundaries in unrestricted text for the extraction of style markers. Instead of trying to minimize the computational analysis of the text, we attempt to take advantage of this procedure. In particular, we use a set of analysis-level style markers, i.e., measures that represent the way in which the text has been processed by the tool. For example, a useful measure is the percentage of the sample text remaining unanalyzed after the automatic processing. In other words, we attempt to adapt the set of style markers to the method used by the sentence and chunk detector in order to analyze the sample text. The statistical technique of multiple regression is then used for extracting a linear combination of the values of the style markers that manages to distinguish the different authors. The experiments we present, for both author identification and author verification tasks, were performed using real-world text downloaded from the World Wide Web. Our approach is easily trainable and fully automated, requiring no manual text preprocessing nor sampling.

A brief description of the extraction of the style markers is given in Section 2. Section 3 describes the composition of the corpus of real-world text used in the experiments. The training procedure is given in Section 4, while Section 5 comprises analytical experimental results. Finally, in Section 6 some conclusions are drawn and future work directions are given.

2 Extraction of Style Markers

As aforementioned, an already existing tool is used for the extraction of the style markers. This tool is a Sentence and Chunk Boundaries Detector (SCBD) able to deal with unrestricted Modern Greek text (Stamatatos et al., forthcoming).
Initially, SCBD segments the input text into sentences using a set of disambiguation rules, and then detects the boundaries of intrasentential phrases (i.e., chunks) such as noun phrases, prepositional phrases, etc. It has to be noted that SCBD makes use of no complicated resources (e.g., large lexicons). Rather, it is based on common word suffixes and a set of keywords in order to detect the chunk boundaries using empirically derived rules. A sample of its output is given below:

[Sample SCBD output: a Greek sentence bracketed into VP, NP, PP and CON chunks]

Based on the output of this tool, the following measures are provided:

• Token-level: sentence count, word count, punctuation mark count, etc.
• Phrase-level: noun phrase count, words included in noun phrases count, prepositional phrase count, words included in prepositional phrases count, etc.

In addition, we use measures relevant to the computational analysis of the input text:

• Analysis-level: unanalyzed word count after each pass, keyword count, non-matching word count, and assigned morphological descriptions for both words and chunks.

The latter measures can be calculated only when this particular computational tool is utilized. In more detail, SCBD performs multiple-pass parsing (i.e., 5 passes). Each parsing pass analyzes a part of the sentence, based on the results of the previous passes, and the remaining part is kept for the subsequent passes. The first passes try to detect the simplest cases of chunk boundaries, which are easily recognizable, while the last ones deal with more complicated cases using the findings of the previous passes. The percentage of the words remaining unanalyzed after each parsing pass, therefore, is an important stylistic factor that represents the syntactic complexity of the text. Additionally, the counts of detected keywords and of detected words that do not match any of the stored suffixes carry crucial stylistic information.

The vast majority of natural language processing tools can provide analysis-level style markers. However, the manner of capturing the stylistic information may differ, since it depends on the method of analysis.

In order to normalize the calculated style markers we make use of ratios of them (e.g., words / sentences, noun phrases / total detected chunks, words remaining unanalyzed after parsing pass 1 / words, etc.). The total set of style markers comprises 22 markers, namely: 3 token-level, 10 phrase-level, and 9 analysis-level ones.

Table 1. The corpus, consisting of texts taken from the weekly newspaper TO BHMA.
Code  Author name     Texts  Total words  Thematic area
A01   D. Maronitis    20     11,771       Culture, society
A02   M. Ploritis     20     22,947       Culture, history
A03   K. Tsoukalas   20     30,316       International affairs
A04   C. Kiosse       20     34,822       Archeology
A05   S. Alachiotis   20     19,162       Biology
A06   G. Babiniotis   20     25,453       Linguistics
A07   T. Tasios       20     20,973       Technology, society
A08   G. Dertilis     20     18,315       History, society
A09   A. Liakos       20     25,826       History, society
A10   G. Vokos        20     20,049       Philosophy
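A sketch of the normalization step just described, in which the raw counts are turned into ratio-valued style markers; the field names are illustrative assumptions, not SCBD's actual output format.

```python
# Turn raw counts from the analysis tool into normalized style markers
# (only a few of the 22 ratios are shown; names are hypothetical).
def style_vector(m):
    return [
        m["words"] / m["sentences"],
        m["noun_phrases"] / m["chunks"],
        m["unanalyzed_after_pass1"] / m["words"],
        m["keywords"] / m["words"],
        # ... remaining token-, phrase- and analysis-level ratios
    ]
```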
3 Corpus

The corpus used for this study consists of texts downloaded from the World Wide Web site of the Modern Greek weekly newspaper TO BHMA (Dolnet, 1998). This newspaper comprises several supplements. We chose to deal with authors of supplement B, entitled ΝΕΕΣ ΕΠΟΧΕΣ (i.e., 'new ages'), which comprises essays on science, culture, history, etc., since in such writings the idiosyncratic style of the author is not likely to be overshadowed by the characteristics of the corresponding text genre. In general, the texts included in supplement B are written by scholars, writers, etc., rather than journalists. Moreover, there is a closed set of authors that regularly publish their writings in the pages of this supplement. The collection of a considerable amount of texts by an author was, therefore, possible.

Initially, we selected 10 authors whose writings are frequently published in this supplement. No special criteria were taken into account. Then, 20 texts of each author were downloaded from the Web site of the newspaper. No manual text preprocessing nor text sampling was performed aside from removing unnecessary headings. All the downloaded texts were taken from issues published during 1998 in order to minimize the potential change of the personal style of an author over time. Some statistics of the downloaded corpus are shown in Table 1. The last column of this table refers to the thematic area of the majority of the writings of each author. Notice that this information was not taken into account during the construction of the corpus.

4 Training

The corpus described in the previous section was divided into a training and a test corpus. As shown by Biber (1990; 1993), it is possible to represent the distributions of many core linguistic features of a stylistic category based on relatively few texts from each category (i.e., as few as ten texts). Thus, for each author 10 texts were used for training and 10 for testing. All the texts were analyzed using SCBD, which provided a vector of 22 style markers for each text. Then, the statistical methodology of multivariate linear multiple regression was applied to the training corpus. Multiple regression provides predicted values of a group of response (dependent) variables from a collection of predictor (independent) variable values. The response is expressed as a linear combination of the predictor variables, namely:

y_i = b_0i + z_1 b_1i + z_2 b_2i + ... + z_r b_ri + e_i

where y_i is the response for the i-th author, z_1, z_2, ..., and z_r are the predictor variables (i.e., in our case r = 22), b_0i, b_1i, b_2i, ..., and b_ri are the unknown coefficients, and e_i is the random error. During the training procedure the unknown coefficients for each author are determined using binary values for the response variable (i.e., 1 for the texts written by the author in question, 0 for the others). Thus, the greater the response variable of a certain author, the more likely that author is to be the author of the text.

Some statistics measuring the degree to which the regression functions fit the training data are presented in Table 2.
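A minimal numpy sketch of this training step, assuming one least-squares fit per author with binary responses; it is an illustration of the described methodology, not the authors' code.

```python
# One linear regression per author: response 1 for that author's training
# texts, 0 otherwise.
import numpy as np

def fit_author_models(Z, author_ids):
    # Z: (n_texts, 22) matrix of style markers; author_ids: length-n labels
    X = np.hstack([np.ones((Z.shape[0], 1)), Z])     # intercept term b_0i
    models = {}
    for a in set(author_ids):
        y = np.array([1.0 if t == a else 0.0 for t in author_ids])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        models[a] = coef
    return models
```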
Notice that R² is the coefficient of determination, defined as follows:

R² = Σ_{j=1..n} (ŷ_j − ȳ)² / Σ_{j=1..n} (y_j − ȳ)²

where n is the total number of training data (texts), ȳ is the mean response, and ŷ_j and y_j are the estimated response and the training response value of the j-th training text, respectively. Additionally, a significant F-value implies that a statistically significant proportion of the total variation in the dependent variable is explained.

Table 2. Statistics of the regression functions.
Code  R²    F value
A01   0.40  2.32
A02   0.72  9.12
A03   0.44  2.80
A04   0.44  2.80
A05   0.32  1.61
A06   0.51  3.57
A07   0.59  5.13
A08   0.35  1.87
A09   0.53  4.00
A10   0.63  5.90

It has to be noted that we use this particular discrimination method due to the facility it offers in the computation of the unknown coefficients as well as the computationally simple calculation of the predictor values. However, we believe that any other methodology for discrimination-classification can be applied (e.g., discriminant analysis, neural networks, etc.).

5 Performance

Before proceeding to the presentation of the analytical results of our disambiguation method, a representation of the test corpus in a dimensional space illustrates the main differences and similarities between the authors. Towards this end, we performed a principal components analysis, and the representation of the 100 texts of the test corpus in the space defined by the first and the second principal components (i.e., accounting for 43% of the total variation) is depicted in Figure 1. As can be seen, the majority of the texts written by the same author tend to cluster. Nevertheless, these clusters cannot be clearly separated.

Figure 1. The test corpus in the space of the first two principal components (one marker per author, A01–A10).

According to our approach, the criterion for identifying the author of a text is the value of the response linear function. Hence, a text is classified to the author whose response value is the greatest. The confusion matrix derived from the application of the disambiguation procedure to the test corpus is presented in Table 3, where each row contains the responses for the ten test texts of the corresponding author. The last column refers to the identification error (i.e., erroneously classified texts / total texts) for each author.

Table 3. Confusion matrix of the author identification experiment.
Actual  A01  A02  A03  A04  A05  A06  A07  A08  A09  A10  Error
A01      3    2    0    0    2    0    0    2    0    1   0.7
A02      0   10    0    0    0    0    0    0    0    0   0.0
A03      0    0    8    0    0    0    0    1    0    1   0.2
A04      0    0    0    9    0    0    0    0    0    1   0.1
A05      0    0    0    3    3    1    0    0    3    0   0.7
A06      2    1    0    0    0    7    0    0    0    0   0.3
A07      0    0    0    0    0    0   10    0    0    0   0.0
A08      1    2    0    1    0    2    0    4    0    0   0.6
A09      0    0    0    0    0    0    0    1    9    0   0.1
A10      0    0    2    1    1    0    0    0    0    6   0.4

Approximately 65% of the average identification error corresponds to three authors, namely A01, A05, and A08. Notice that these are the authors with an average text size smaller than 1,000 words (see Table 1).
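The identification rule just described, sketched in Python on top of the fitting sketch above (again illustrative, not the paper's code): a text is attributed to the author whose fitted response function yields the largest value.

```python
# Attribute a style-marker vector z to the author with the maximal response.
import numpy as np

def identify(models, z):
    x = np.concatenate([[1.0], z])        # match the intercept column
    return max(models, key=lambda a: float(models[a] @ x))
```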
A text sample of relatively short size (i.e., fewer than 1,000 words) therefore appears to be inadequate for representing the stylistic characteristics of an author's style. Notice that similar conclusions are drawn by Biber (1990; 1993).

Instead of trying to identify who the author of a text is, some applications require the verification of the hypothesis that a given person is the author of the text. In such a case, only the response function of the author in question is involved. Towards this end, a threshold value has to be defined for each response function. Thus, if the response value for the given author is greater than the threshold, then the author is accepted.

Additionally, for measuring the accuracy of the author verification method as regards a certain author, we defined False Rejection (FR) and False Acceptance (FA) as follows:

FR = rejected texts of the author / total texts of the author
FA = accepted texts of other authors / total texts of other authors

Similar measures are widely utilized in the area of speaker recognition in speech processing (Fakotakis et al., 1993).

The multiple correlation coefficient R = +√R² of a regression function (see Table 2) equals 1 if the fitted equation passes through all the data points. At the other extreme, it equals 0. The fluctuation of average FR, FA, and mean error (i.e., (FR + FA)/2) for the entire test corpus, using subdivisions of R as the threshold (x-axis), is shown in Figure 2, and the minimum mean error corresponds to R/2. Notice that by choosing the threshold based on the minimal mean error, the majority of applications is covered. On the other hand, some applications require either minimal FR or minimal FA, and this fact has to be taken into account during the selection of the threshold.

Figure 2. FR, FA, and mean error as functions of subdivisions of R.

The results of the author verification experiment using R/2 as the threshold are presented in Table 4. Approximately 70% of the total false rejection corresponds to the authors A01, A05, and A08, as in the case of author identification. On the other hand, false acceptance seems to be highly dependent on the threshold value: the smaller the threshold value, the greater the false acceptance. Thus, the authors A03, A04, A05, and A08 are responsible for 72% of the total false acceptance error.

Table 4. Author verification results (threshold = R/2).
Code     R/2   FR    FA
A01      0.32  0.3   0.022
A02      0.42  0.0   0.044
A03      0.33  0.0   0.155
A04      0.33  0.1   0.089
A05      0.28  0.6   0.144
A06      0.36  0.2   0.011
A07      0.38  0.0   0.022
A08      0.30  0.6   0.100
A09      0.36  0.0   0.055
A10      0.40  0.4   0.033
Average  0.35  0.22  0.068

Finally, the total time cost (i.e., text processing by SCBD, calculation of style markers, computation of response values) for the entire test corpus was 58.64 seconds, or 1,971 words per second, using a Pentium at 350 MHz.

6 Conclusions

We presented an approach to the automatic authorship attribution of real-world texts. A computational tool was used for the automatic extraction of the style markers.
In contrast to other proposed systems, we took advantage of this procedure in order to extract analysis-level style markers that represent the way in which the text has been analyzed. The experiments, based on texts taken from a weekly Modern Greek newspaper, prove that the stylistic differences among a wide range of authors can be easily detected using the proposed set of style markers. Both the author identification and the author verification tasks have given encouraging results.

Moreover, no lexically based measures, such as word frequencies, are involved. This approach can be applied to a wide variety of authors and types of texts, since no domain-dependent, genre-dependent, or author-dependent style markers have been taken into account. Although our method has been tested on Modern Greek, it requires no language-specific information. The only prerequisite for this method to be employed in another language is the availability of a general-purpose text-processing tool and the appropriate selection of the analysis-level measures.

The presented approach is fully automated, since it is not based on specialized text preprocessing requiring manual effort. Nevertheless, we believe that the accuracy results may be significantly improved by employing text-sampling procedures for selecting the parts of text that best illustrate the stylistic features of an author.

Regarding the amount of required training data, we showed that ten texts are adequate for representing the stylistic features of an author. Some experiments we performed using more than ten texts as the training corpus for each author did not significantly improve the accuracy results. It has also been shown that a lower bound on the text size is 1,000 words. Nevertheless, we believe that this limitation mainly affects authors with vague stylistic characteristics.

We are currently working on the application of the presented methodology to text-genre detection as well as to any stylistically homogeneous group of real-world texts. We also aim to explore the usage of a variety of computational tools for the extraction of analysis-level style markers for Modern Greek and other natural languages.

References

Baayen, H., H. Van Halteren, and F. Tweedie 1996, Outside the Cave of Shadows: Using Syntactic Annotation to Enhance Authorship Attribution, Literary and Linguistic Computing, 11(3): 121–131.
Biber, D. 1990, Methodological Issues Regarding Corpus-based Analyses of Linguistic Variations, Literary and Linguistic Computing, 5: 257–269.
Biber, D. 1993, Representativeness in Corpus Design, Literary and Linguistic Computing, 8: 1–15.
Burrows, J. 1987, Word-patterns and Story-shapes: The Statistical Analysis of Narrative Style, Literary and Linguistic Computing, 2(2): 61–70.
Burrows, J. 1992, Not Unless You Ask Nicely: The Interpretative Nexus Between Analysis and Information, Literary and Linguistic Computing, 7(2): 91–109.
Dolnet, 1998, TO BHMA, Lambrakis Publishing Corporation, http://tovima.dolnet.gr/
Fakotakis, N., A. Tsopanoglou, and G. Kokkinakis, 1993, A Text-independent Speaker Recognition System Based on Vowel Spotting, Speech Communication, 12: 57–68.
Holmes, D. 1994, Authorship Attribution, Computers and the Humanities, 28: 87–106.
Holmes, D. and R.
Forsyth 1995, The \nFederalist Revisited: New Directions in \nAuthorship Attribution, Literary and Linguistic \nComputing, 10(2): 111-127. \nHonore, A., 1979, Some Simple Measures of \nRichness of Vocabulary, Association for \nLiterary and Linguistic Computing Bulletin, \n7(2): 172-177. \nMosteller, F. and D. Wallace 1984, Applied \nBayesian and Classical Inference.\" The Case of \nthe Federalist Papers, Addison-Wesley, \nReading, MA. \nStamatatos, E., N. Fakotakis, and G. \nKokkinakis forthcoming, On Detecting Sentence \nand Chunk Boundaries in Unrestricted Text \nBased on Minimal Resources. \nYule, G. 1944, The Statistical Study of \nLiterary Vocabulary, Cambridge University \nPress, Cambridge. \n164",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "IA2FOQpIrM",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=IA2FOQpIrM",
"arxiv_id": null,
"doi": null
}
|
{
"title": null,
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "DI56kL8Lku",
"year": null,
"venue": "EAMT 2020",
"pdf_link": "https://aclanthology.org/2020.eamt-1.43.pdf",
"forum_link": "https://openreview.net/forum?id=DI56kL8Lku",
"arxiv_id": null,
"doi": null
}
|
{
"title": "PosEdiOn: Post-Editing Assessment in PythOn",
"authors": [
"Antoni Oliver",
"Sergi Alvarez",
"Toni Badia"
],
"abstract": "There is currently an extended use of post-editing of machine translation (PEMT) in the translation industry. This is due to the increase in the demand of translation and to the significant improvements in quality achieved by neural machine translation (NMT). PEMT has been included as part of the translation workflow because it increases translators’ productivity and it also reduces costs. Although an effective post-editing requires enough quality of the MT output, usual automatic metrics do not always correlate with post-editing effort. We describe a standalone tool designed both for industry and research that has two main purposes: collect sentence-level information from the post-editing process (e.g. post-editing time and keystrokes) and visually present multiple evaluation scores so they can be easily interpreted by a user.",
"keywords": [],
"raw_extracted_content": "PosEdiOn: Post-Editing Assessment in PythOn\nAntoni Oliver\nUniversitat Oberta de Catalunya\[email protected] Alvarez\nUniversitat Pompeu Fabra\[email protected] Badia\nUniversitat Pompeu Fabra\[email protected]\nAbstract\nThere is currently an extended use of post-\nediting of machine translation (PEMT) in\nthe translation industry. This is due to the\nincrease in the demand of translation and\nto the significant improvements in quality\nachieved in recent years. PEMT has been\nincluded as part of the translation work-\nflow because it increases translators’ pro-\nductivity and it also reduces costs. Al-\nthough effective post-editing requires suf-\nficiently high quality MT output, usual au-\ntomatic metrics do not always correlate\nwith post-editing effort. We describe a\nstandalone tool designed both for indus-\ntry and research that has two main pur-\nposes: to collect sentence-level informa-\ntion from the post-editing process (e.g.\npost-editing time and keystrokes) and to\nvisually present multiple evaluation scores\nso they can be easily interpreted by a user.\n1 Introduction\nPost-editing of machine translation (PEMT) is a\nvery common practice in the translation indus-\ntry. It has been included as part of the translation\nworkflow because it increases productivity when\ncompared with human translation (Aranberri et al.,\n2014) and reduces costs (Guerberof, 2009) with-\nout having a negative impact on quality (Plitt and\nMasselot, 2010). Post-editors “edit, modify and/or\ncorrect pre-translated text that has been processed\nby an MT system from a source language into (a)\ntarget language(s)” (Allen, 2003, p. 296).\nc\r2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.In the last few years, both research and indus-\ntry have become very interested in neural machine\ntranslation (NMT) because it has produced very\nsuccessful results in terms of quality, for exam-\nple in WMT 2017 (Bojar et al., 2017), WMT 2018\n(Bojar et al., 2018) and WMT 2019 (Barrault et al.,\n2019). Given the overall performance of NMT, it\nis necessary to study all the potential this approach\ncan offer to post-editing. One of the main prob-\nlems is that automatic scores give a general idea\nof the MT output quality but do not always corre-\nlate with post-editing effort (Koponen, 2016; Shte-\nrionov et al., 2018). Many professional translators\nstate that if the quality of the MT output is not good\nenough, they delete the remaining segments and\ntranslate everything from scratch (Parra Escart ´ın\nand Arcedillo, 2015).\nOne of the main goals both of industry and re-\nsearch is to establish a correlation between the\nquality measurements of the MT output and trans-\nlators’ performance. Research is especially fo-\ncused on the effort this activity entails, mainly\ntaking into account the temporal, technical, and\ncognitive effort (Krings, 2001). The use of tools\nthat can log these three dimensions becomes a\nparamount challenge for research.\nProfessional translators usually use commercial\nproducts to translate and post-edit. 
In the 2018\nLanguage Industry Survey1conducted by EUATC,\nElia, FIT Europe, GALA and LINDWeb, SDL Tra-\ndos2was the most used product with more than\nhalf of the market quota, followed by MemoQ,3\n1http://fit-europe-rc.org/wp-content/uploads/2019/05/2018-\nLanguage-Industry-Survey-Report.pdf\n2https://www.sdl.com/\n3https://www.memoq.com\nMemsource,4Wordfast,5and Across.6However,\nthese existing post-editing environments have a re-\nstricted availability and flexibility. As proprietary\ntools, they are difficult to modify and do not usu-\nally provide translator activity data that may be\nused to study post-editing effort. However, other\nopen-source computer-assisted translation (CAT)\nenvironments such as OmegaT,7have been mod-\nified and used for data collection (Moran et al.,\n2014).\nInstead of trying to reproduce the working con-\nditions of translators, which vary greatly among\nindividuals, other tools establish controlled con-\nditions in order to obtain non-biased data. For\nthis purpose, translators use a post-editing tool that\nrecords the post-editing information, can be easily\naccessed from any platform and has an easy-to-use\ninterface.\nIn this paper we present PosEdiOn, a simple\nstandalone tool that allows post-editing of MT out-\nput and records information of the post-editing ef-\nfort (time and keystrokes) at sentence-level. It also\nincludes multiple evaluation scores that the user\ncan interpret easily to assess the post-editing pro-\ncess (such as edit distance, HBLEU and HTER).\nAs it does not depend on any specific CAT tool, it\nallows the collection of post-editing data in a con-\ntrolled way. It can be used by professionals to as-\nsess the convenience of post-editing a certain MT\noutput and by researchers to study post-editing ef-\nfort.\nIn Section 2 we analyze some of the previous\ntools developed for this purpose. The tool and its\nmains characteristics are presented in Section 3.\nIn Section 4 we describe the PosEdiOn analyzer,\nwhich is used to perform all the analysis, and Sec-\ntion 5 includes the conclusions and future work.\n2 Previous Work\nIn order to analyze the different components of\npost-editing effort, it becomes paramount to use\ntools that are able to log time, keyboarding, and\nother potential indicators of cognitive effort (e.g.\ngaze data). Currently there is a proliferation of\nthese tools (Vieira, 2013), mainly because each re-\nsearch project has specific requirements.\nSome of the tools developed focus more on pro-\n4https://www.memsource.com\n5https://www.wordfast.net\n6https://www.across.net\n7https://omegat.orgductivity as part of an industry scenario. For exam-\nple, the Qualitivity8plugin can be added to SDL\nTrados to measure post-editing effort. Alterna-\ntively, TAUS developed DQF,9which can be used\nas a standalone benchmark or as an SDL Trados\nplugin. There has also been EU-funded research\nto develop open-source workbenches to help im-\nprove quantitative measurements of effort (CAS-\nMACAT10and Matecat11).\nOther tools collect gaze data, which can be used\nto study post-editing effort. Tobii Pro Lab is the\ncommercial Windows-oriented eye-tracking soft-\nware that accompanies Tobii eye trackers. It can\ncalculate a variety of eye-tracking metrics and cre-\nate visual representations of the data.\nAnother similar product is Translog-II (Carl,\n2012), which is a Windows-oriented program that\nrecords user activity data (UAD), that is, all the\nkeystrokes and gaze movements. 
It is meant\nspecifically for translation process research (TPR)\nand it offers the possibility of further process-\ning the data with the scripts included in the TPR\ndatabase of the Centre for Research and Innova-\ntion in Translation and Technology (CRITT TPR-\nDB). Even though these tools collect extensive in-\nformation, they have specific and demanding set-\ntings which are not suitable for all experiments.\nSome products devised for a specific experi-\nment are not made available to the public after-\nwards (Plitt and Masselot, 2010; Green et al.,\n2013). Other tools focus on obtaining as much in-\nformation as possible with an easy-to-use product.\nFor example, TransCenter (Denkowski and Lavie,\n2012) is an open-source, web-based tool that al-\nlows users to carry out PE tasks and logs time and\nkeyboard/mouse activity at a sentence level.\nAnother tool useful for quantitative investiga-\ntions specifically designed for post-editing is PET\n(Aziz et al., 2012). It can also be accessed from\nany platform, although it is based in Java, which\ncan sometimes be challenging for end-users who\nneed to open the tool from their desktop comput-\ners. In addition to recording time and effort indi-\ncators at a segment level, PET also allows users\nto perform evaluation tasks on different customiz-\nable scales and criteria. The data file with all the\ninformation is saved in xml. However, it does not\noffer graphics or any other visual information with\n8https://appstore.sdl.com/language/app/qualitivity/612/\n9https://www.taus.net/dqf\n10https://www.casmacat.eu\n11https://www.matecat.com\nthe results nor does it include an analyzer which\nproduces multiple automatic metrics.\n3 PosEdiOn\nPosEdiOn is a post-editing tool developed mainly\nto collect information on different implicit and\nexplicit effort indicators. It records time and\nkeystrokes, and it also calculates some of the\nmain indirect effort estimation measures (HTER,\nHBLEU and edit distance). It produces a file with\nthe raw measurements but it also includes a results\nfile with visually structured information that can\nbe easily understood by any user.\nIt was developed completely in Python3 and it\nworks in any platform which has Python installed.\nTranslators tend to work from home with a great\nvariety of platforms and devices, and do not always\nhave the computer skills to solve any compatibility\nerrors they may encounter with the tools they are\nabout to use. A Windows executable file is also\navailable, which allows to run PosEdiOn without\nthe need of installing the Python interpreter.\n3.1 Files and tasks\nPosEdiOn is designed to facilitate the distribution\nof post-editing tasks in an easy and error-free way.\nThe user receives a zip compressed folder with all\nthe needed elements:\n\u000fThe PosEdiOn program itself, usually as a\nPython file. Optionally, a Windows exe-\ncutable can be also used. In this case, send-\ning the zipped file by e-mail can cause prob-\nlems as some mail providers block attach-\nments with executable files. Alternatively, a\nlink to the zipped file can be used to distribute\nthe post-editing tasks.\n\u000fThe configuration file ( config.yaml ) that pro-\nvides all the information necessary for the\npost-editing task. See section 3.3.\n\u000fThe post-editing task itself as a tab delimited\nplain text file. The text file is structured in\nfour fields: source text, machine translated\ntext, post-edited text and segment status.\nFor translation tasks, only the first field is com-\npulsory. 
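As an illustration of the task-file format just described, the following is a minimal sketch (our own, not PosEdiOn's code) of a reader for the tab-delimited file, padding missing trailing fields for translation-only tasks:

import csv

def read_task(path):
    # Each line: source \t machine translation \t post-edited text \t status.
    # Only the source field is guaranteed; pad the rest with empty strings.
    segments = []
    with open(path, encoding="utf-8", newline="") as f:
        for row in csv.reader(f, delimiter="\t"):
            source, mt, post_edited, status = (row + [""] * 4)[:4]
            segments.append({"source": source, "mt": mt,
                             "post_edited": post_edited, "status": status})
    return segments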
Once the compressed file is received, it must be unzipped. After executing the program, the task is directly presented. When the translator begins to work on the new task, a new file (actions.txt, or any other file name stated in the configuration file) is created. All actions, including keystrokes, mouse actions and button clicks, are stored in this file along with the time at which they are performed. An example can be seen in the following figure:

START 1 2020-02-22 22:28:04.979308
F 1 2020-02-22 22:28:04.996692 Focus in
M 1 2020-02-22 22:28:08.840216 Mouse.button1
F 1 2020-02-22 22:28:08.840857 Focus in
K 1 2020-02-22 22:28:09.742533 Key.letter.u 1.6
M 1 2020-02-22 22:28:13.129137 Mouse.button1
OUT 1 2020-02-22 22:28:23.827548
IN 2 2020-02-22 22:28:23.829034
K 2 2020-02-22 22:28:25.018297 Command.CtrlReturn 1.8
OUT 2 2020-02-22 22:28:25.020480
IN 3 2020-02-22 22:28:25.046122
K 3 2020-02-22 22:28:29.602347 Key.navigation 2.5
....
Figure 1: File with the actions recorded

All analysis and measurements can be obtained from this actions file. Each line contains several information fields separated by tabs:
- The first field provides information about the kind of action. The actions are: START (task is started); PAUSE (task is paused); EXIT (user exits the application); RESTART (user restarts the task); IN (user enters a segment); OUT (user exits a segment); K (keyboard action); M (mouse action); C (command action); B (user clicks a button on the application); F (application loses or gains focus); CLEAR (user clears all the content of the translation); RESTORE (user restores the content of the translation).
- The second field indicates the segment number.
- The third field gives the time and date of the event.
- Some actions have a fourth field which provides more detailed information about the event: for example, the key pressed, the text copied or pasted, and so on.
- Key actions have another field indicating the position in the target text where the key is pressed.

The user can pause and even stop the task and close the PosEdiOn program. Once the task is restarted, the new data will be appended to the existing actions file.

When the task is finished, the folder containing the program should be compressed again and sent back to the person who has to carry out the analysis.
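The actions file lends itself to simple post-processing outside the bundled analyzer. The sketch below is a hypothetical parser for the tab-separated format described above (the field handling is an assumption based on the description; PosEdiOn's internal reader may differ):

from datetime import datetime

def parse_actions(path):
    events = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) < 3:
                continue  # skip blank or malformed lines
            events.append({
                "action": parts[0],                        # START, IN, OUT, K, M, ...
                "segment": int(parts[1]),                  # segment number
                "time": datetime.fromisoformat(parts[2]),  # e.g. 2020-02-22 22:28:04.979308
                "detail": parts[3:],                       # key name, position, etc.
            })
    return events

From such a list of events, per-segment editing times can be derived by pairing IN/OUT timestamps, and keystroke counts by counting K events per segment.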
3.2 User Interface
The interface displays the source and target language segments one on top of the other. Figure 2 shows the PosEdiOn interface, where the upper window contains the source segment and the lower window enables the translator to edit the text. [Figure 2: PosEdiOn interface (screenshot).] Translators can see a wider context using the toolbar buttons located on the lower part, which can be used to move along the whole document. Each unit is translated/edited one at a time, and navigation through the different segments of the document can be achieved in four ways:
- Once the translator has finished post-editing a segment, he needs to validate it using the Ctrl+Enter keys. When this is done, the tool moves automatically to the next segment.
- To validate a segment, the user can also use the ACCEPT button. Once pressed, it also moves to the next segment.
- Using the << or >> buttons in the toolbar located at the lower part of the screen.
- Using the GO TO box, where you can write the number of the segment you want to move to.

Once a segment is accepted, its background turns green. The user can mark a segment as validated (green) using Ctrl+g, or change its state to undone (white background) using Ctrl+w. Segments can also be marked as red (Ctrl+r) to indicate a problematic status. Red segments can be reached directly using Ctrl+s.

3.3 Customization
In order to facilitate customization, certain elements can be modified in the config.yaml file without having to access the Python script. As shown in Figure 4, users can customize the following elements:
- The size of the tool's window. Both height and width can be changed.
- Whether the source segment text can be edited or not. The edits introduced in the source segment are not registered by the tool. If the source segments can be edited, users can select and copy fragments of the source text.
- The size and type of font used for the source and target segments.
- Whether or not to show the chronometer.
- The name of the text file containing the task to translate or post-edit.
- The name of the actions file, where all the information containing the user's actions is stored.
- The source and target language codes.
- The set of characters to be considered as symbols or punctuation. It also includes up to three user-defined groups of characters. In the example, a user-defined group called mathematical (containing symbols of mathematical operations) is defined.

Size:
  height: 10
  width: 80
Behaviour:
  allowEditSL: True
Font:
  font: courier 12
Chronometer:
  status: show
  # possible values: show / hide
Text:
  file: test-Google-1.txt
Actions:
  file: actions.txt
Languages:
  source: eng
  target: spa
Definition:
  symbols: "! @ # $ % ^ & ( ) _ { } [ ]"
  punctuation: ", : ; ."
  nameuserdef1: mathematical
  userdef1: "+ - * / ="
  nameuserdef2: None
  userdef2: None
  nameuserdef3: None
  userdef3: None
Figure 4: View of the customizable elements
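Because the configuration is plain YAML, it can also be read or generated programmatically. A minimal sketch using PyYAML (our choice of library, not something the paper prescribes), with keys taken from the example in Figure 4:

import yaml  # PyYAML: pip install pyyaml

def load_config(path="config.yaml"):
    # Read a PosEdiOn configuration and return a few of the settings described above.
    with open(path, encoding="utf-8") as f:
        cfg = yaml.safe_load(f)
    return {
        "task_file": cfg["Text"]["file"],        # e.g. test-Google-1.txt
        "actions_file": cfg["Actions"]["file"],  # e.g. actions.txt
        "source": cfg["Languages"]["source"],    # e.g. eng
        "target": cfg["Languages"]["target"],    # e.g. spa
    }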
4 PosEdiOn analyzer
PosEdiOn has a companion program, PosEdiOn analyzer, that performs different analyses on the PosEdiOn project files and offers a wide range of measurements. More specifically, it can calculate:
- Time spent editing each segment.
- HTER (Snover et al., 2006), the TER value comparing the raw MT output with the post-edited segment. A value of HTER is provided for each segment. The value of TER is calculated using tercom (https://github.com/jhclark/tercom).
- HBLEU, a BLEU (Papineni et al., 2002) value obtained comparing the raw MT output with the post-edited segment.
- HEd, an edit distance (Levenshtein distance) value calculated comparing the raw MT output with the post-edited segment.
- Keystrokes for each segment.

If a reference translation file is provided, the following measurements are also calculated:
- TER comparing the raw MT output with the reference translations. A value of TER is calculated for each segment.
- BLEU comparing the raw MT output with the reference translations. A value of BLEU is calculated for each segment.
- Ed, an edit distance value calculated comparing the raw MT output with the reference translation.

To calculate the normalization of time, HEd (and eventually Ed) and keystroke values, users can choose three different criteria: segment, token or character. All these values are provided both for each segment and for the whole document. On top of that, the mean and standard deviation are also calculated.

Users can choose to prune results. The pruning is based on a maximum value of normalized time and on a maximum value of normalized keystrokes. These maximum values are calculated with the mean value and two times the standard deviation. All segments with a normalized time greater than the maximum, or with a normalized number of keystrokes greater than the maximum, are not taken into account to calculate the pruned values of all scores. The results are provided as numeric values and with a visual presentation of the results, following the ideas of the Vis-Eval Metric Viewer (Steele and Specia, 2018).

segmentID  tokens  time    time norm  HTER    HBLEU   HEd  HEd norm  keys  keys norm
1          5       38.02   7.6        0.1905  0.6703  18   3.6       18    3.6
2          11      48.81   4.44       0.12    0.6775  6    0.55      238   21.6
3          1       21.31   21.31      0.3333  0       8    8.0       10    10.0
4          29      279.69  9.64       0.2785  0.2318  72   2.48      148   5.1
5          15      72.12   4.81       0.0606  0.7242  2    0.13      50    3.3
Figure 5: Detailed information for each segment

4.1 Configuration
The configuration of the tool is performed using a Yaml configuration file (config-analyzer.yaml), as shown in Figure 6:

Filepath:
  path_in: /home/user/directory
  path_out: /home/user/directory
Results_file:
  prefix: results-
  sufix:
  extension: txt
Measures:
  bysegment: True
  normalization: tokens
  # one of segment, token, char
  HTER: True
  HBLEU: True
  HEd: True
  round_time: 2
  round_keys: 2
  round_HTER: 4
  round_HBLEU: 4
  round_HEd: 2
  round_other: 1
  TER: True
  BLEU: True
  ED: True
  round_TER: 4
  round_BLEU: 4
  round_Ed: 2
Figure 6: Yaml configuration file

The file paths, including the location of the project and of the results, can be specified. The name of the results file can also be customized by adding a prefix, a suffix and an extension to the name of the project. If no prefix, suffix or extension is required, any of these fields can be left blank. The measurements can also be customized: users can decide whether or not to show measurements by segment, the normalization criteria, which measurements will be calculated and shown, as well as the number of decimal points. Remember that the values of TER, BLEU and Ed will be calculated and shown only if a reference file is provided, regardless of the values in the configuration file.

4.2 Use of PosEdiOn analyzer
PosEdiOn analyzer can work both in text command and in graphical mode. To start the graphical user interface (shown in Figure 3), the program can be called with no parameters or with the --gui parameter. If no parameters or incomplete parameters are given, the GUI interface starts (see Figure 3). [Figure 3: PosEdiOn analyzer interface (screenshot).] To use it in command line mode, you need to provide a set of parameters that can be checked using the --h option.

Usually we simply set the path for the directory containing the PosEdiOn project to analyze and the name of the output file containing the results:

python3 PosEdiOn-analyzer.py -p ./project -o results.txt

If we want the results to be pruned, the option --prune should be used. Optionally, we can set the name of a reference file containing the reference translation. The reference file is a text file that includes the reference translation aligned line by line with the text in the project.

PosEdiOn analyzer can also work with a set of files instead of a PosEdiOn project. This can be done using the Files tab, where the user can select the raw MT (option --raw), the post-edited files (--ped) and, optionally, the reference files (--refs) to calculate HTER, HBLEU and HEd values. If the reference files are provided, TER, BLEU and Ed values are also calculated. This allows PosEdiOn analyzer to be used independently from the PosEdiOn tool.
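Of the measures above, HEd is the simplest to reproduce independently: a Levenshtein distance between the raw MT segment and its post-edited version, normalized by segment, token or character. A minimal sketch (illustrative only; the analyzer's own implementation is not shown in the paper):

def levenshtein(a, b):
    # Character-level edit distance via the classic two-row dynamic program.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def hed(mt, post_edited, normalization="token"):
    # Normalized HEd: edit distance between raw MT and its post-edited version.
    d = levenshtein(mt, post_edited)
    if normalization == "token":
        return d / max(1, len(mt.split()))
    if normalization == "char":
        return d / max(1, len(mt))
    return d  # per segment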
4.3 Results
The analyzer can provide the following global results: time normalized, keystrokes normalized, HTER, HBLEU and HEd. Remember that the normalization factor can be segment, token or character and can be set by the user. For each measurement, the mean and the standard deviation are provided. Pruned values are calculated rejecting the values lower than the mean minus two times the standard deviation or higher than the mean plus two times the standard deviation.

-----------------------------------------
PRUNING:
-----------------------------------------
time norm. mean: 9.19
time norm. std. dev.: 33.97
keys norm. mean: 6.36
keys norm. std. dev.: 28.25
max. norm. time: 77.14
max. norm. keystrokes: 62.86
-----------------------------------------
IGNORED SEGMENT 9    norm. time: 387.3   norm. keystrokes: 192.0
IGNORED SEGMENT 15   norm. time: 212.24  norm. keystrokes: 301.0
IGNORED SEGMENT 19   norm. time: 215.58  norm. keystrokes: 219.0
IGNORED SEGMENT 120  norm. time: 122.75  norm. keystrokes: 3.5
IGNORED SEGMENT 189  norm. time: 67.42   norm. keystrokes: 75.0
-----------------------------------------
TIME:
TIME TOTAL 19864.11
TIME NORM. MEAN 90.7
TIME NORM. STD 92.74
-----------------------------------------
KEYS:
KEYS TOTAL: 12717
KEYS NORM MEAN 2.9
KEYS NORM STD 4.75
-----------------------------------------
HTER:
HTER MEAN 0.1611
HTER STD 0.1172
-----------------------------------------
HBLEU:
HBLEU MEAN 0.5303
HBLEU STD 0.2714
-----------------------------------------
HEd NORM:
HEd NORM MEAN 1.28
HEd NORM STD 1.19
-----------------------------------------
Figure 7: View of the results file

If the user has selected the detailed results through the config-analyzer.yaml file, the output file includes the following information for each segment (see Figure 5): segment ID, number of tokens or characters, time, time normalized, HTER, HBLEU, HEd, HEd normalized, number of pressed keys, and number of pressed keys normalized.

PosEdiOn is able to generate graphics from the data, such as the one shown in Figure 8, created from the pruned HTER values. The user can choose which data should be used to generate graphics. [Figure 8: Graphic of the pruned HTER distribution.]

The results are stored in a tabulated text file, so they can be easily imported into any spreadsheet to perform further calculations.
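The pruning rule described above is easy to restate in code. A minimal sketch (our illustration), keeping only segments whose normalized time and keystrokes do not exceed mean + 2·std, as in the PRUNING block of Figure 7 (the symmetric lower bound mentioned above could be added analogously):

import statistics

def pruned_mean(times, keys, scores):
    # times, keys, scores: parallel per-segment lists of normalized values.
    max_t = statistics.mean(times) + 2 * statistics.stdev(times)
    max_k = statistics.mean(keys) + 2 * statistics.stdev(keys)
    kept = [s for t, k, s in zip(times, keys, scores)
            if t <= max_t and k <= max_k]
    return statistics.mean(kept) if kept else float("nan")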
5 Conclusions and future work
In this paper we have presented PosEdiOn, a tool to perform evaluations of post-editing tasks, and its companion program PosEdiOn analyzer, which allows the user to easily analyze the data obtained with PosEdiOn. Both programs are released under a free license (GNU GPL v3) and can be freely downloaded from the SourceForge page created for the project (https://sourceforge.net/projects/posedion/).

We plan to use this tool in several studies related to post-editing and to implement new features, such as the evaluation of fluency and adequacy and an error mark-up tool. Both programs are developed in Python3 and they can be easily adapted and improved. As the data are stored as tabbed text files, they can be easily processed or imported into any spreadsheet program to perform further analysis or data visualization.

Acknowledgements: The training of the neural machine translation systems used to develop and test PosEdiOn has been possible thanks to the NVIDIA GPU grant programme.

References
Allen, Jeffrey H. 2003. Post-editing. In Somers, Harold, editor, Computers and Translation: A Translator's Guide. John Benjamins, Amsterdam.
Aranberri, Nora, Gorka Labaka, Arantza Ilarraza, and Kepa Sarasola. 2014. Comparison of Post-Editing Productivity between Professional Translators and Lay Users. In Proceedings of the Third Workshop on Post-Editing Technology and Practice (WPTP-3), Vancouver, Canada.
Aziz, Wilker, Sheila C. M. De Sousa, and Lucia Specia. 2012. PET: a Tool for Post-editing and Assessing Machine Translation. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 3982–3987.
Barrault, Loïc, Ondřej Bojar, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Müller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 Conference on Machine Translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1–61, Florence, Italy, August. Association for Computational Linguistics.
Bojar, Ondřej, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, and Marco Turchi. 2017. Findings of the 2017 Conference on Machine Translation. In Proceedings of the 2017 Conference on Machine Translation (WMT17), volume 2, pages 169–214.
Bojar, Ondřej, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 Conference on Machine Translation. In Proceedings of the 2018 Conference on Machine Translation (WMT18), volume 2, pages 272–303.
Carl, Michael. 2012. Translog-II: A Program for Recording User Activity Data for Empirical Reading and Writing Research. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 4108–4112, Istanbul, Turkey, May. European Language Resources Association (ELRA).
Denkowski, M. and A. Lavie. 2012. TransCenter: Web-Based Translation Research Suite. In AMTA 2012 Workshop on Post-Editing Technology and Practice Demo Session, May.
Green, Spence, Jeffrey Heer, and Christopher D. Manning. 2013. The Efficacy of Human Post-editing for Language Translation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13). ACM Press.
Guerberof, Ana. 2009. Productivity and Quality in MT Post-editing. In Proceedings of Machine Translation Summit XII.
Koponen, Maarit. 2016. Is Machine Translation Post-editing Worth the Effort? A Survey of Research into Post-editing and Effort. The Journal of Specialised Translation, pages 131–148.
Krings, Hans P. 2001. Repairing Texts: Empirical Investigations of Machine Translation Post-editing Processes. The Kent State University Press, Kent, OH.
Moran, J., D. Lewis, and C. Saam. 2014. Analysis of Post-editing Data: A Productivity Field Test Using an Instrumented CAT Tool. In O'Brien, Sharon, Laura Winther Balling, Michael Carl, Michel Simard, and Lucia Specia, editors, Post-editing of Machine Translation: Processes and Applications, chapter 6. Cambridge Scholars Publishing, United Kingdom.
Papineni, Kishore, Salim Roukos, Todd Ward, and W.-J. Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In ACL '02: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318.
Parra Escartín, Carla and Manuel Arcedillo. 2015. A Fuzzier Approach to Machine Translation Evaluation: A Pilot Study on Post-editing Productivity and Automated Metrics in Commercial Settings. In Proceedings of the ACL 2015 Fourth Workshop on Hybrid Approaches to Translation (HyTra), volume 1, pages 40–45.
Plitt, Mirko and François Masselot. 2010. A Productivity Test of Statistical Machine Translation Post-Editing in a Typical Localisation Context. The Prague Bulletin of Mathematical Linguistics, 93:7–16.
Shterionov, Dimitar, Riccardo Superbo, Pat Nagle, Laura Casanellas, Tony O'Dowd, and Andy Way. 2018. Human versus Automatic Quality Evaluation of NMT and PBSMT. Machine Translation, 32(3):217–235, September.
Snover, Matthew, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annotation. In Proceedings of Association for Machine Translation in the Americas, pages 223–231, August.
Steele, David and Lucia Specia. 2018. Vis-Eval Metric Viewer: A Visualisation Tool for Inspecting and Evaluating Metric Scores of Machine Translation Output. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 71–75.
Vieira, Lucas Nunes. 2013. An Evaluation of Tools for Post-editing Research: The Current Picture and Further Needs. In Proceedings of the MT Summit XIV Workshop on Post-editing Technology and Practice (WPTP-2), pages 93–101, Nice.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "GgAVDGHj2A",
"year": null,
"venue": "ECAL 2017",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=GgAVDGHj2A",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A curiosity model for artificial agents",
"authors": [
"Eugénio Ribeiro",
"Ricardo Ribeiro",
"David Martins de Matos"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "v2gZBK2_J70",
"year": null,
"venue": "ECAL 2011",
"pdf_link": "http://mitpress.mit.edu/sites/default/files/titles/alife/0262297140chap115.pdf",
"forum_link": "https://openreview.net/forum?id=v2gZBK2_J70",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Ouroboros avatars: A mathematical exploration of self-reference and metabolic closure",
"authors": [
"Jorge Soto Andrade",
"Sebastián Jaramillo",
"Claudio Gutierrez",
"Juan-Carlos Letelier"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "xiVBy81orU8",
"year": null,
"venue": "ECAL 2015",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=xiVBy81orU8",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Soft-Body Muscles for Evolved Virtual Creatures: The Next Step on a Bio-Mimetic Path to Meaningful Morphological Complexity",
"authors": [
"Dan Lessin",
"Sebastian Risi"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "kDDB4YXos9Z",
"year": null,
"venue": "ECAL 2003",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=kDDB4YXos9Z",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Optimal Communication in a Noisy and Heterogeneous Environment",
"authors": [
"Willem H. Zuidema"
],
"abstract": "Compositionality is a fundamental property of natural language. Explaining its evolution remains a challenging problem because existing explanations require a structured language to be present before compositionality can spread in the population. In this paper, I study whether a communication system can evolve that shows the preservation of topology between meaning-space and signal-space, without assuming that individuals have any prior processing mechanism for compositionality. I present a formalism to describe a communication system where there is noise in signaling and variation in the values of meanings. In contrast to previous models, both the noise and values depend on the topology of the signal- and meaning spaces. I consider a population of agents that each try to optimize their communicative success. The results show that the preservation of topology follows naturally from the assumptions on noise, values and individual-based optimization.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "lRT9w29rjOk",
"year": null,
"venue": "ECAL 2001",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=lRT9w29rjOk",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Emergent Syntax: The Unremitting Value of Computational Modeling for Understanding the Origins of Complex Language",
"authors": [
"Willem H. Zuidema"
],
"abstract": "In this paper we explore the similarities between a mathematical model of language evolution and several A-life simulations. We argue that the mathematical model makes some problematic simplifications, but that a combination with computational models can help to adapt and extend existing language evolution scenario’s.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "oYN1waRao4",
"year": null,
"venue": "ECAL 2005",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=oYN1waRao4",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Coordinating Dual-Mode Biomimetic Robotic Fish in Box-Pushing Task",
"authors": [
"Dandan Zhang",
"Yimin Fang",
"Guangming Xie",
"Junzhi Yu",
"Long Wang"
],
"abstract": "This paper presents a novel method for coordinating multiple biomimetic robotic fish in box-pushing task. Based on our successfully developing a robotic fish prototype of which the swimming modes can be switched flexibly and smoothly, we step further to study coordination problems of multiple robotic fish in unstructured and dynamic environments. To simplify the difficulties of path planning and action decision when the robotic fish is approaching the box, we employ the situated-behavior method, and for each situation a specific behavior is elaborately designed. On dealing with the synchronization and coordinated pushing problems in the particular underwater environment, fuzzy logic method is adopted for motion planning of the fish. Experimental results of box-pushing performed by two robotic fish validate the effectiveness of the proposed method.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "4PyUHC8vQ6r",
"year": null,
"venue": "ECAL 2007",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=4PyUHC8vQ6r",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Guided Self-organisation for Autonomous Robot Development",
"authors": [
"Georg Martius",
"J. Michael Herrmann",
"Ralf Der"
],
"abstract": "The paper presents a method to guide the self-organised development of behaviours of autonomous robots. In earlier publications we demonstrated how to use the homeokinesis principle and dynamical systems theory to obtain self-organised playful but goal-free behaviour. Now we extend this framework by reinforcement signals. We validate the mechanisms with two experiment with a spherical robot. The first experiment aims at fast motion, where the robot reaches on average about twice the speed of a not reinforcement robot. In the second experiment spinning motion is rewarded and we demonstrate that the robot successfully develops pirouettes and curved motion which only rarely occur among the natural behaviours of the robot.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "CzxCXBIMWm",
"year": null,
"venue": "ECAL 2003",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=CzxCXBIMWm",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Universal Framework for Self-Replication",
"authors": [
"Bryant Adams",
"Hod Lipson"
],
"abstract": "Self-replication is a fundamental property of many interesting physical, formal and biological systems, such as crystals, waves, automata, and especially forms of natural and artificial life. Despite its importance to many phenomena, self-replication has not been consistently defined or quantified in a rigorous, universal way. In this paper we propose a universal, continuously valued property of the interaction between a system and its environment. This property represents the effect of the presence of such a system upon the future presence of similar systems. We demonstrate both analytical and computational analysis of self-replicability factors for three distinct systems involving both discrete and continuous behaviors.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "E7sDCQzI6eZV",
"year": null,
"venue": "ECAL 2007",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=E7sDCQzI6eZV",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Investigating the Emergence of Phenotypic Plasticity in Evolving Digital Organisms",
"authors": [
"Jeff Clune",
"Charles Ofria",
"Robert T. Pennock"
],
"abstract": "In the natural world, individual organisms can adapt as their environment changes. In most in silico evolution, however, individual organisms tend to consist of rigid solutions, with all adaptation occurring at the population level. If we are to use artificial evolving systems as a tool in understanding biology or in engineering robust and intelligent systems, however, they should be able to generate solutions with fitness-enhancing phenotypic plasticity. Here we use Avida, an established digital evolution system, to investigate the selective pressures that produce phenotypic plasticity. We witness two different types of fitness-enhancing plasticity evolve: static-execution-flow plasticity, in which the same sequence of actions produces different results depending on the environment, and dynamic-execution-flow plasticity, where organisms choose their actions based on their environment. We demonstrate that the type of plasticity that evolves depends on the environmental challenge the population faces. Finally, we compare our results to similar ones found in vastly different systems, which suggest that this phenomenon is a general feature of evolution.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "S9y8gcGmeHW",
"year": null,
"venue": "ECAL 2007",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=S9y8gcGmeHW",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Simulations of Simulations in Evolutionary Robotics",
"authors": [
"Edgar Bermudez Contreras",
"Anil K. Seth"
],
"abstract": "In recent years simulation tools for agent-environment interactions have included increasingly complex and physically realistic conditions. These simulations pose challenges for researchers interested in evolutionary robotics because the computational expense of running multiple evaluations can be very high. Here, we address this issue by applying evolutionary techniques to a simplified simulation of a simulation itself. We show this approach to be successful when transferring controllers evolved for example visual tasks from a simplified simulation to a comparatively rich visual simulation.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "ltxzTx2T8U",
"year": null,
"venue": "ECAL 2003",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=ltxzTx2T8U",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Learning Biases for the Evolution of Linguistic Structure: An Associative Network Model",
"authors": [
"Kenny Smith"
],
"abstract": "Structural hallmarks of language can be explained in terms of adaptation, by language, to pressures arising during its cultural transmission. Here I present a model which explains the compositional structure of language as an adaptation in response to pressures arising from the poverty of the stimulus available to language learners and the biases of language learners themselves.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "CTG7vi_oKcO",
"year": null,
"venue": "ECAL 2003",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=CTG7vi_oKcO",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Language Evolution in Populations: Extending the Iterated Learning Model",
"authors": [
"Kenny Smith",
"James R. Hurford"
],
"abstract": "Models of the cultural evolution of language typically assume a very simplified population dynamic. In the most common modelling framework (the Iterated Learning Model) populations are modelled as consisting of a series of non-overlapping generations, with each generation consisting of a single agent. However, the literature on language birth and language change suggests that population dynamics play an important role in real-world linguistic evolution. We aim to develop computational models to investigate this interaction between population factors and language evolution. Here we present results of extending a well-known Iterated Learning Model to a population model which involves multiple individuals. This extension reveals problems with the model of grammar induction, but also shows that the fundamental results of Iterated Learning experiments still hold when we consider an extended population model.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "SoYc2t6RAM",
"year": null,
"venue": "ECAL 2001",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=SoYc2t6RAM",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The Importance of Rapid Cultural Convergence in the Evolution of Learned Symbolic Communication",
"authors": [
"Kenny Smith"
],
"abstract": "Oliphant [5,6] contends that language is the only naturally-occurring, learned symbolic communication system, because only humans can accurately observe meaning during the cultural transmission of communication. This paper outlines several objections to Oliphant’s argument. In particular, it is argued that the learning biases necessary to support learned symbolic communication may not be common and that the speed of cultural convergence during cultural evolution of communication may be a key factor in the evolution of such learning biases.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "mq0KtxQIOp",
"year": null,
"venue": "ECAL 2005",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=mq0KtxQIOp",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Ant Clustering Embeded in Cellular Automata",
"authors": [
"Xiao-hua Xu",
"Ling Chen",
"Ping He"
],
"abstract": "Inspired by the emergent behaviors of ant colonies, we present a novel ant algorithm to tackle unsupervised data clustering problem. This algorithm integrates swarm intelligence and cellular automata, making the clustering procedure simple and fast. It also avoid ants’ longtime idle moving, and show good separation of data classes in clustering visualization. We have applied the algorithm on the standard ant clustering benchmark and we get better results compared with the LF algorithm. Moreover, the experimental results on real world applications report that the algorithm is significantly more efficient than the previous approaches.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "m_u47-Hvap",
"year": null,
"venue": "ECAL 2011",
"pdf_link": "http://mitpress.mit.edu/sites/default/files/titles/alife/0262297140chap92.pdf",
"forum_link": "https://openreview.net/forum?id=m_u47-Hvap",
"arxiv_id": null,
"doi": null
}
|
{
"title": "An Experiment in mixing evolving and preprogrammed robots",
"authors": [
"Sancho Oliveira",
"Luís Nunes",
"Anders Lyhne Christensen"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "ZwZv6euocsT",
"year": null,
"venue": "ECAL 2011",
"pdf_link": "http://mitpress.mit.edu/sites/default/files/titles/alife/0262297140chap22.pdf",
"forum_link": "https://openreview.net/forum?id=ZwZv6euocsT",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Emergence of temporal and spatial synchronous behaviors in a foraging swarm",
"authors": [
"Sylvain Chevallier",
"Nicolas Bredèche",
"Hélène Paugam-Moisy",
"Michèle Sebag"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "GI4ZoxkAcs",
"year": null,
"venue": "ECAL 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=GI4ZoxkAcs",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Behavior as broken symmetry in embodied self-organizing robots",
"authors": [
"Ralf Der",
"Georg Martius"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "X7LRrMlNUWM",
"year": null,
"venue": "ECAL 2011",
"pdf_link": "http://mitpress.mit.edu/sites/default/files/titles/alife/0262297140chap78.pdf",
"forum_link": "https://openreview.net/forum?id=X7LRrMlNUWM",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Tipping the scales: guidance and intrinsically motivated behavior",
"authors": [
"Georg Martius",
"J. Michael Herrmann"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "rEu_ESMhkX",
"year": null,
"venue": "ECAL 2015",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=rEu_ESMhkX",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Quantifying Self-Organizing Behavior of Autonomous Robots",
"authors": [
"Georg Martius",
"Eckehard Olbrich"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "-1304wzCD8",
"year": null,
"venue": "ECAL 2011",
"pdf_link": "http://mitpress.mit.edu/sites/default/files/titles/alife/0262297140chap86.pdf",
"forum_link": "https://openreview.net/forum?id=-1304wzCD8",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Learning symbolic forward models for robotic motion planning and control",
"authors": [
"Hirotaka Moriguchi",
"Hod Lipson"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "q-G5c2W0oFC",
"year": null,
"venue": "ECAL 2011",
"pdf_link": "http://mitpress.mit.edu/sites/default/files/titles/alife/0262297140chap134.pdf",
"forum_link": "https://openreview.net/forum?id=q-G5c2W0oFC",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Evolving robot gaits in hardware: the HyperNEAT generative encoding vs. parameter optimization",
"authors": [
"Jason Yosinski",
"Jeff Clune",
"Diana Hidalgo",
"Sarah Nguyen",
"Juan Cristóbal Zagal",
"Hod Lipson"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "ypr9tfACAPt",
"year": null,
"venue": "ECAL 2011",
"pdf_link": "http://mitpress.mit.edu/sites/default/files/titles/alife/0262297140chap24.pdf",
"forum_link": "https://openreview.net/forum?id=ypr9tfACAPt",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Evolving three-dimensional objects with a generative encoding inspired by developmental biology",
"authors": [
"Jeff Clune",
"Hod Lipson"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "PXwqCEvbErW",
"year": null,
"venue": "ECAL 2011",
"pdf_link": "http://mitpress.mit.edu/sites/default/files/titles/alife/0262297140chap25.pdf",
"forum_link": "https://openreview.net/forum?id=PXwqCEvbErW",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Selective pressures for accurate altruism targeting: evidence from digital evolution for difficult-to-test aspects of inclusive fitness theory",
"authors": [
"Jeff Clune",
"Heather J. Goldsby",
"Charles Ofria",
"Robert T. Pennock"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "PJbG5HwPf9P",
"year": null,
"venue": "ECAL 2011",
"pdf_link": "http://mitpress.mit.edu/sites/default/files/titles/alife/0262297140chap48.pdf",
"forum_link": "https://openreview.net/forum?id=PJbG5HwPf9P",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Explaining emergent behavior in a swarm system based on an inversion of the fluctuation theorem",
"authors": [
"Heiko Hamann",
"Thomas Schmickl",
"Karl Crailsheim"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "cv5ZGQCuUQJf",
"year": null,
"venue": "ECAL 2015",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=cv5ZGQCuUQJf",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Evolving Collective Behaviors With Diverse But Predictable Sensor States",
"authors": [
"Payam Zahadat",
"Heiko Hamann",
"Thomas Schmickl"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "SFT8rM48C0",
"year": null,
"venue": "ECAL 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=SFT8rM48C0",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Comparing Reinforcement Learning and Evolutionary Based Adaptation in Population Games",
"authors": [
"Ana L. C. Bazzan"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "GsfkYkW3IX",
"year": null,
"venue": "ECAL 2017",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=GsfkYkW3IX",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Learning aquatic locomotion with animats",
"authors": [
"Dennis G. Wilson",
"Jean Disset",
"Sylvain Cussat-Blanc",
"Yves Duthen",
"Hervé Luga"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "JgzN8IX_NkZ",
"year": null,
"venue": "ECAL 2011",
"pdf_link": "http://mitpress.mit.edu/sites/default/files/titles/alife/0262297140chap72.pdf",
"forum_link": "https://openreview.net/forum?id=JgzN8IX_NkZ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Controlling legged robots with coupled artificial biochemical networks",
"authors": [
"Michael A. Lones",
"Andy M. Tyrrell",
"Susan Stepney",
"Leo S. D. Caves"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "BEXd5gxKguc",
"year": null,
"venue": "ECAL 2007",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=BEXd5gxKguc",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Spatial Embedding and Complexity: The Small-World Is Not Enough",
"authors": [
"Christopher L. Buckley",
"Seth Bullock"
],
"abstract": "The “order for free” exhibited by some classes of system has been exploited by natural selection in order to build systems capable of exhibiting complex behaviour. Here we explore the impact of one ordering constraint, spatial embedding, on the dynamical complexity of networks. We apply a measure of functional complexity derived from information theory to a set of spatially embedded network models in order to make some preliminary characterisations of the contribution of space to the dynamics (rather than mere structure) of complex systems. Although our measure of dynamical complexity hinges on a balance between functional integration and segregation, which seem related to an understanding of the small-world property, we demonstrate that small-world structures alone are not enough to induce complexity. However, purely spatial constraints can produce systems of high intrinsic complexity by introducing multiple scales of organisation within a network.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "PCdhnFkF6U",
"year": null,
"venue": "ECAL 2005",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=PCdhnFkF6U",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Timescale and Stability in Adaptive Behaviour",
"authors": [
"Christopher L. Buckley",
"Seth Bullock",
"Netta Cohen"
],
"abstract": "Recently, in both the neuroscience and adaptive behaviour communities, there has been growing interest in the interplay of multiple timescales within neural systems. In particular, the phenomenon of neuromodulation has received a great deal of interest within neuroscience and a growing amount of attention within adaptive behaviour research. This interest has been driven by hypotheses and evidence that have linked neuromodulatory chemicals to a wide range of important adaptive processes such as regulation, reconfiguration, and plasticity. Here, we first demonstrate that manipulating timescales can qualitatively alter the dynamics of a simple system of coupled model neurons. We go on to explore this effect in larger systems within the framework employed by Gardner, Ashby and May in their seminal studies of stability in complex networks. On the basis of linear stability analysis, we conclude that, despite evidence that timescale is important for stability, the presence of multiple timescales within a single system has, in general, no appreciable effect on the May-Wigner stability/connectance relationship. Finally we address some of the shortcomings of linear stability analysis and conclude that more sophisticated analytical approaches are required in order to explore the impact of multiple timescales on the temporally extended dynamics of adaptive systems.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "ZXZIlw7zwhL",
"year": null,
"venue": "ECAL 2007",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=ZXZIlw7zwhL",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Emergence of Genetic Coding: An Information-Theoretic Model",
"authors": [
"Piraveenan Mahendra",
"Daniel Polani",
"Mikhail Prokopenko"
],
"abstract": "This paper introduces a simple model for evolutionary dynamics approaching the “coding threshold”, where the capacity to symbolically represent nucleic acid sequences emerges in response to a change in environmental conditions. The model evolves a dynamical system, where a conglomerate of primitive cells is coupled with its potential encoding, subjected to specific environmental noise and inaccurate internal processing. The separation between the conglomerate and the encoding is shown to become beneficial in terms of preserving the information within the noisy environment. This selection pressure is captured information-theoretically, as an increase in mutual information shared by the conglomerate across time. The emergence of structure and useful separation inside the coupled system is accompanied by self-organization of internal processing, i.e. an increase in complexity within the evolving system.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "T-_Su6YtK8",
"year": null,
"venue": "ECAL 2005",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=T-_Su6YtK8",
"arxiv_id": null,
"doi": null
}
|
{
"title": "On a Quantitative Measure for Modularity Based on Information Theory",
"authors": [
"Daniel Polani",
"Peter Dauscher",
"Thomas Uthmann"
],
"abstract": "The concept of modularity appears to be crucial for many questions in the field of Artificial Life research. However, there have not been many quantitative measures for modularity that are both general and viable. In this paper we introduce a measure for modularity based on information theory. Due to the generality of the information theory formalism, this measure can be applied to various problems and models; some connections to other formalisms are presented.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "LEYwb2vyR1W",
"year": null,
"venue": "ECAL 2005",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=LEYwb2vyR1W",
"arxiv_id": null,
"doi": null
}
|
{
"title": "All Else Being Equal Be Empowered",
"authors": [
"Alexander S. Klyubin",
"Daniel Polani",
"Chrystopher L. Nehaniv"
],
"abstract": "The classical approach to using utility functions suffers from the drawback of having to design and tweak the functions on a case by case basis. Inspired by examples from the animal kingdom, social sciences and games we propose empowerment, a rather universal function, defined as the information-theoretic capacity of an agent’s actuation channel. The concept applies to any sensorimotoric apparatus. Empowerment as a measure reflects the properties of the apparatus as long as they are observable due to the coupling of sensors and actuators via the environment.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "AJibdFlMsRz",
"year": null,
"venue": "ECAL 2003",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=AJibdFlMsRz",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Preventing Bluff Agent Invasions in Honest Societies",
"authors": [
"Robert Lowe",
"Daniel Polani"
],
"abstract": "Frequently debated issues in the domain of game theory involve the issue of signalling strategies used in order to resolve conflicts between agents over indivisible resources and to reduce the costly outcomes associated with fighting. Signalling behaviour, used by agents of different strengths, to aid resource acquisition was modelled using an artificial life simulation environment. Honest signalling and the bluff strategy based on Enquist/Hurd’s adapted pay-off matrix (1997) were evaluated relative to different proportions of resident strong agents capable of imposing a ‘punishment’ cost on bluffer agents. We found that in order for honest signalling to be immune to invasion by a bluff strategy, the number of punishment enforcers in the society must be high. Additionally, the number of punishment enforcers is more influential in preventing bluff agent invasions than the severity of punishment.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "iqpGfi_EBnO",
"year": null,
"venue": "ECAL 2003",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=iqpGfi_EBnO",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Measuring Self-Organization via Observers",
"authors": [
"Daniel Polani"
],
"abstract": "We introduce organization information, an information-theoretic characterization for the phenomenon of self-organization. This notion, which requires the specification of an observer, is discussed in the paradigmatic context of the Self-Organizing Map and its behaviour is compared to that of other information-theoretic measures. We show that it is sensitive to the presence and absence of “self-organization” (in the intuitive sense) in cases where conventional measures fail.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "swaWhUu1Oqc",
"year": null,
"venue": "ECAL 2001",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=swaWhUu1Oqc",
"arxiv_id": null,
"doi": null
}
|
{
"title": "An Information-Theoretic Approach for the Quantification of Relevance",
"authors": [
"Daniel Polani",
"Thomas Martinetz",
"Jan T. Kim"
],
"abstract": "We propose a concept for a Shannon-type quantification of information relevant to a decision unit or agent. The proposed measure is operational, can - at least in principle - be calculated for a given system and has an immediate interpretation as an information quantity. Its use as a natural framework for the study of sensor evolution is discussed.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "jsaQnULh5Nd",
"year": null,
"venue": "ECAL 2007",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=jsaQnULh5Nd",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Modelling the Effects of Colony Age on the Foraging Behaviour of Harvester Ants",
"authors": [
"Tom Diethe",
"Peter J. Bentley"
],
"abstract": "The colonies of certain species of ants, for example Pogonomyrmex barbatus, exhibit changes in behaviour as the colonies grow older, despite nearly all of the individual ants being replaced each year [1]. The behaviour of older colonies is more stable, and they are more likely to avoid intraspecific conflict [2]. Gordon hypothesised that the reason for this is that a 3-4 year old colony is in the steepest part of its growth curve, i.e. the 4000 workers of the 3 year-old colony are feeding 6000 larvae, and that the aggression of individual ants is based on colony level food requirements. This study aims to model this phenomenon using an individual-based simulation. The results from model are compared with field experiments taken over a period of years at the study site in New Mexico [3,4]. The model provides support to the biological hypothesis by showing that both colony age and aggression of individual ants have significant effects on foraging ranges.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "iLzBZO-O45",
"year": null,
"venue": "ECAL 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=iLzBZO-O45",
"arxiv_id": null,
"doi": null
}
|
{
"title": "On the evolution of self-organised role-allocation and role-switching behaviour in swarm robotics: a case study",
"authors": [
"Elio Tuci",
"Boris Mitavskiy",
"Gianpiero Francesca"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "cS9UKG4LHhU",
"year": null,
"venue": "ECAL 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=cS9UKG4LHhU",
"arxiv_id": null,
"doi": null
}
|
{
"title": "On the evolution of self-organised role-allocation and role-switching behaviour in swarm robotics: a case study",
"authors": [
"Elio Tuci",
"Boris Mitavskiy",
"Gianpiero Francesca"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "XE5_dMwSFu4",
"year": null,
"venue": "ECAL 2005",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=XE5_dMwSFu4",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Simulating Artificial Organisms with Qualitative Physiology",
"authors": [
"Simon Hartley",
"Marc Cavazza",
"Louis Bec",
"Jean-Luc Lugrin",
"Sean Crooks"
],
"abstract": "In this paper, we describe an approach to artificial life, which uses Qualitative Reasoning for the simulation of life within a 3D virtual environment. This system uses qualitative formalisms to describe both the physiology of a virtual creature and its environment. This approach has two main advantages: the possibility of representing integrated physiological functions at various levels of abstraction and the use of a common formalism for the simulation of internal (physiological) and external (environmental) processes. We illustrate this framework by revisiting early work in Artificial Life and providing these virtual life forms with a corresponding physiology, to obtain a complete living organism in virtual worlds.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "0fD1jaK7ntC",
"year": null,
"venue": "ECAL 2005",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=0fD1jaK7ntC",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Ant-Based Computing",
"authors": [
"Loizos Michael"
],
"abstract": "We propose a biologically and physically plausible model for ants and pheromones, and show this model to be sufficiently powerful to simulate the computation of arbitrary logic circuits. We thus establish that coherent deterministic and centralized computation can emerge from the collective behavior of simple distributed markovian processes as those followed by ants.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "SxWb1DSFogc",
"year": null,
"venue": "ECAL 1999",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=SxWb1DSFogc",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Emotional Disorders in Autonomous Agents?",
"authors": [
"Aapo Hyvärinen",
"Timo Honkela"
],
"abstract": "It has been recently suggested by a number of authors that modelling of emotions and related motivational systems in agents might have great practical value, apart from the interest of providing possible explanations for the emotional mechanisms of human agents. Emotions, or needs, may be used as signalling mechanisms between different subsystems (subagents) inside an agent, as well as between different agents. In this paper, we investigate some problems that may arise with emotional agents. Since needs and emotions are largely global, stable reaction tendencies, they may exhibit rigidities that lead to different forms of maladaptive behavior, i.e. behavior that is not well suited to the present environment of the agent. We investigate emotional learning in agents by an utterly simplified decision-theoretical model. We show that even in this very simple model agents may develop maladaptive patterns of behavior that closely resemble patterns found in emotional disorders in humans. The maladaptive behavior patterns are due to non-optimal values for the two decision parameters, which are functions of the prior beliefs of the agent.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "BTwECs93dgq",
"year": null,
"venue": "ECAL 2003",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=BTwECs93dgq",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Are There Representations in Embodied Evolved Agents? Taking Measures",
"authors": [
"Hezi Avraham",
"Gal Chechik",
"Eytan Ruppin"
],
"abstract": "The question of conceptual representation has received considerable attention in philosophy, neuroscience and embodied evolved agents. Numerous theories on the interpretation of the term ‘representation’ exist, and many arguments have been made for and against the existence of representations in animate and animat agents. Our work studies this question in evolved artificial embodied agents in a quantitatively rigorous manner, for the first time. We develop two measures, based on information theory, to account for representations. These measures are studied by applying them to evolved agents performing a visual categorization, generalized XOR task. Our results show that having quantitative measures still leaves one with arbitrary “threshold values” decisions which permit wide freedom in determining the existence of representations. However, and more importantly, our results show that information-theoretic measures can still be used efficiently to identify discriminative neural patterns and internal structures that characterize a representation, if the latter is formed.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "PGWT5DVpCf",
"year": null,
"venue": "ECAL 2015",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=PGWT5DVpCf",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Exploring the organisation of complex systems through the dynamical interactions among their relevant subsets",
"authors": [
"Alessandro Filisetti",
"Marco Villani",
"Andrea Roli",
"Marco Fiorucci",
"Roberto Serra"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "KIdPRJ3tHWI",
"year": null,
"venue": "ECAL 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=KIdPRJ3tHWI",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Enhancing the learning capacity of immunological algorithms: a comprehensive study of learning operators",
"authors": [
"Shangce Gao",
"Tao Gong",
"Weiya Zhong",
"Fang Wang",
"Beibei Chen"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "r7g141VffMc",
"year": null,
"venue": "ECAL 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=r7g141VffMc",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Using Reproductive Altruism to Evolve Multicellularity in Digital Organisms",
"authors": [
"Jack Hessel",
"Sherri Goings"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "jQ3UFD7Cut",
"year": null,
"venue": "ECAL 2017",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=jQ3UFD7Cut",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Time as it could be measured in artificial living systems",
"authors": [
"Andrei D. Robu",
"Christoph Salge",
"Chrystopher L. Nehaniv",
"Daniel Polani"
],
"abstract": "Being able to measure time, whether directly or indirectly, is a significant advantage for an organism. It permits it to predict regular events, and prepare for them on time. Thus, clocks are ubiquitous in biology. In the present paper, we consider the most minimal abstract pure clocks and investigate their characteristics with respect to their ability to measure time. Amongst other, we find fundamentally diametral clock characteristics, such as oscillatory behaviour for local time measurement or decay-based clocks measuring time periods in scales global to the problem. We include also cascades of independent clocks (“clock bags”) and composite clocks with controlled dependency; the latter show various regimes of markedly different dynamics.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "KyirICcjj9i",
"year": null,
"venue": "ECAL 2007",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=KyirICcjj9i",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Grounding Action-Selection in Event-Based Anticipation",
"authors": [
"Philippe Capdepuy",
"Daniel Polani",
"Chrystopher L. Nehaniv"
],
"abstract": "Anticipation is one of the key aspects involved in flexible and adaptive behavior. The ability for an autonomous agent to extract a relevant model of its coupling with the environment and of the environment itself can provide it with a strong advantage for survival. In this work we develop an event-based anticipation framework for performing latent learning and we provide two mathematical tools to identify relevant relationships between events. These tools allow us to build a predictive model which is then embedded in an action-selection architecture to generate adaptive behavior. We first analyze some of the properties of the model in simple learning tasks. Its efficiency is evaluated in a more complex task where the agent has to adapt to a changing environment. In the last section we discuss extensions of the model presented.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "dWs2VkcoiX",
"year": null,
"venue": "ECAL 2007",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=dWs2VkcoiX",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Constructing the Basic Umwelt of Artificial Agents: An Information-Theoretic Approach",
"authors": [
"Philippe Capdepuy",
"Daniel Polani",
"Chrystopher L. Nehaniv"
],
"abstract": "In the context of situated and embodied cognition, we evaluate an information-theoretic approach to the construction of the basic Umwelt of an artificial agent. We make the assumption that the construction of such a basic Umwelt is an emergent property of the coupling between the agent and its environment where the goal of the agent is to maximize its control abilities. An information-theoretic approach of the perception-action loop allows us to evaluate the capacity of the agent to inject information into its environment and to later recapture this information in its own sensors. We define a construction mechanism based on an automaton that generates internal states relevant to the agent in terms of perception-action loop. Optimizing this automaton leads to internal representations that can be a basis for the construction of the basic Umwelt of the agent. We illustrate the properties of the proposed mechanism in a simple example where an agent is acting in a box world. Simulation results show that this construction mechanism leads to a representation that captures important properties of the environment.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "LxqeP-_5n2-",
"year": null,
"venue": "ECAL 2017",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=LxqeP-_5n2-",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Mood modelling within reinforcement learning",
"authors": [
"Joe Collenette",
"Katie Atkinson",
"Daan Bloembergen",
"Karl Tuyls"
],
"abstract": "Simulating mood within a decision making process has been shown to allow cooperation to occur within the Prisoner’s Dilemma. In this paper we propose how to integrate a mood model into the classical reinforcement learning algorithm Sarsa, and show how this addition can allow self-interested agents to be successful within a multi agent environment. The human-inspired moody agent will learn to cooperate in social dilemmas without the use of punishments or other external incentives. We use both the Prisoner’s Dilemma and the Stag Hunt as our dilemmas. We show that the model provides improvements in both individual payoffs and levels of cooperation within the system when compared to the standard Sarsa model. We also show that the agents’ interaction model and their ability to differentiate between opponents influences how the reinforcement learning process converges.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "wWI4hgpU6nn",
"year": null,
"venue": "ECAL 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=wWI4hgpU6nn",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Stackelberg-based Coverage Approach in Nonconvex Environments",
"authors": [
"Bijan Ranjbar Sahraei",
"Katerina Stankova",
"Karl Tuyls",
"Gerhard Weiss"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "UwjprHFiCc1",
"year": null,
"venue": "ECAL 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=UwjprHFiCc1",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Evaluation of an Experimental Framework for Exploiting Vision in Swarm Robotics",
"authors": [
"Sjriek Alers",
"Bijan Ranjbar Sahraei",
"Stefan May",
"Karl Tuyls",
"Gerhard Weiss"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "-fCAQu-mtcB",
"year": null,
"venue": "ECAL 2015",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=-fCAQu-mtcB",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Towards a methodology for describing the relationship between simulation and reality",
"authors": [
"Eric Schneider",
"Elizabeth I. Sklar",
"M. Q. Azhar",
"Simon Parsons",
"Karl Tuyls"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "DpiSGEcVkXJ",
"year": null,
"venue": "ECAL 2011",
"pdf_link": "http://mitpress.mit.edu/sites/default/files/titles/alife/0262297140chap116.pdf",
"forum_link": "https://openreview.net/forum?id=DpiSGEcVkXJ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Recruitment, selection and alignment of spatial language strategies",
"authors": [
"Michael Spranger"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "VYIhcgy0Le",
"year": null,
"venue": "ECAL 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=VYIhcgy0Le",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Evolutionary Explanations for Spatial Language - A Case Study on Landmarks",
"authors": [
"Michael Spranger"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "zYqD-S8Ljrj",
"year": null,
"venue": "ECAL 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=zYqD-S8Ljrj",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Enhancing the learning capacity of immunological algorithms: a comprehensive study of learning operators",
"authors": [
"Shangce Gao",
"Tao Gong",
"Weiya Zhong",
"Fang Wang",
"Beibei Chen"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "sdOgZeOnJZW",
"year": null,
"venue": "ECAL 2011",
"pdf_link": "http://mitpress.mit.edu/sites/default/files/titles/alife/0262297140chap104.pdf",
"forum_link": "https://openreview.net/forum?id=sdOgZeOnJZW",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Local information maximisation creates emergent flocking behavior",
"authors": [
"Christoph Salge",
"Daniel Polani"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "WyKjiZX-Bpk",
"year": null,
"venue": "ECAL 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=WyKjiZX-Bpk",
"arxiv_id": null,
"doi": null
}
|
{
"title": "One way to see two in one",
"authors": [
"Martin Biehl",
"Daniel Polani"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "b_WkyFJ351T",
"year": null,
"venue": "ECAL 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=b_WkyFJ351T",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Empowerment and State-dependent Noise - An Intrinsic Motivation for Avoiding Unpredictable Agents",
"authors": [
"Christoph Salge",
"Cornelius Glackin",
"Daniel Polani"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "2c6zEPN8fuH",
"year": null,
"venue": "ECAL 2015",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=2c6zEPN8fuH",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Informational parasites in code evolution",
"authors": [
"Andrés C. Burgos",
"Daniel Polani"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "os34boFWPYC",
"year": null,
"venue": "ECAL 2015",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=os34boFWPYC",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Apparent actions and apparent goal-directedness",
"authors": [
"Martin Biehl",
"Daniel Polani"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "YLx338Jsa-",
"year": null,
"venue": "ECAL 2017",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=YLx338Jsa-",
"arxiv_id": null,
"doi": null
}
|
{
"title": "An active inference implementation of phototaxis",
"authors": [
"Manuel Baltieri",
"Christopher L. Buckley"
],
"abstract": "Active inference is emerging as a possible unifying theory of perception and action in cognitive and computational neuroscience. On this theory, perception is a process of inferring the causes of sensory data by minimising the error between actual sensations and those predicted by an inner generative (probabilistic) model. Action on the other hand is drawn as a process that modifies the world such that the consequent sensory input meets expectations encoded in the same internal model. These two processes, inferring properties of the world and inferring actions needed to meet expectations, close the sensory/motor loop and suggest a deep symmetry between action and perception. In this work we present a simple agent-based model inspired by this new theory that offers insights on some of its central ideas. Previous implementations of active inference have typically examined a “perceptionoriented” view of this theory, assuming that agents are endowed with a detailed generative model of their surrounding environment. In contrast, we present an “action-oriented” solution showing how adaptive behaviour can emerge even when agents operate with a simple model which bears little resemblance to their environment. We examine how various parameters of this formulation allow phototaxis and present an example of a different, “pathological” behaviour. Active inference is emerging as a possible unifying theory of perception and action in cognitive and computational neuroscience. On this theory, perception is a process of inferring the causes of sensory data by minimising the error between actual sensations and those predicted by an inner generative (probabilistic) model. Action on the other hand is drawn as a process that modifies the world such that the consequent sensory input meets expectations encoded in the same internal model. These two processes, inferring properties of the world and inferring actions needed to meet expectations, close the sensory/motor loop and suggest a deep symmetry between action and perception. In this work we present a simple agent-based model inspired by this new theory that offers insights on some of its central ideas. Previous implementations of active inference have typically examined a “perceptionoriented” view of this theory, assuming that agents are endowed with a detailed generative model of their surrounding environment. In contrast, we present an “action-oriented” solution showing how adaptive behaviour can emerge even when agents operate with a simple model which bears little resemblance to their environment. We examine how various parameters of this formulation allow phototaxis and present an example of a different, “pathological” behaviour. Active inference is emerging as a possible unifying theory of perception and action in cognitive and computational neuroscience. On this theory, perception is a process of inferring the causes of sensory data by minimising the error between actual sensations and those predicted by an inner generative (probabilistic) model. Action on the other hand is drawn as a process that modifies the world such that the consequent sensory input meets expectations encoded in the same internal model. These two processes, inferring properties of the world and inferring actions needed to meet expectations, close the sensory/motor loop and suggest a deep symmetry between action and perception. In this work we present a simple agent-based model inspired by this new theory that offers insights on some of its central ideas. 
Previous implementations of active inference have typically examined a “perceptionoriented” view of this theory, assuming that agents are endowed with a detailed generative model of their surrounding environment. In contrast, we present an “action-oriented” solution showing how adaptive behaviour can emerge even when agents operate with a simple model which bears little resemblance to their environment. We examine how various parameters of this formulation allow phototaxis and present an example of a different, “pathological” behaviour.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "ROGZnjrdkA3",
"year": null,
"venue": "ECAL 2015",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=ROGZnjrdkA3",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Quantifying Morphological Computation based on an Information Decomposition of the Sensorimotor Loop",
"authors": [
"Keyan Ghazi-Zahedi",
"Johannes Rauh"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "kWhK7yznUWR",
"year": null,
"venue": "ECAL 2011",
"pdf_link": "http://mitpress.mit.edu/sites/default/files/titles/alife/0262297140chap128.pdf",
"forum_link": "https://openreview.net/forum?id=kWhK7yznUWR",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Transformations and multi-scale optimisation in biological adaptive networks",
"authors": [
"Richard A. Watson",
"Rob Mills",
"Christopher L. Buckley"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "shkuw4fUMdP",
"year": null,
"venue": "Guide to e-Science 2011",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=shkuw4fUMdP",
"arxiv_id": null,
"doi": null
}
|
{
"title": "e-Science, the Way Leading to Modernization of Sciences and Technologies: e-Science Practice and Thought in Chinese Academy of Sciences",
"authors": [
"Baoping Yan",
"Wenzhuang Gui",
"Ze Luo",
"Gang Qin",
"Jian Li",
"Kai Nan",
"Zhonghua Lu",
"Yuanchun Zhou"
],
"abstract": "This chapter mainly introduces our understanding and practice of e-Science in the Chinese Academy of Sciences. We present the current situation of the information infrastructure from five aspects including digital network and communication infrastructure, high performance computing environment, scientific data environment, digital library, and virtual laboratory. In terms of e-Science applications, we focus on an e-Science application conducted in Qinghai Lake region to show how various information and communication technologies can be employed to facilitate the scientific research, providing an infrastructure for protecting wildlife and ecological environment and decision-making. We have realized that e-Science is the way leading to the next-generation scientific research, and we have been promoting e-Science practice and application systematically. By e-Science, to easy Science.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "VeiI6aK022",
"year": null,
"venue": "EDUCON 2018",
"pdf_link": "https://ieeexplore.ieee.org/iel7/8360187/8363090/08363325.pdf",
"forum_link": "https://openreview.net/forum?id=VeiI6aK022",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Best practices in e-assessments with a special focus on cheating prevention",
"authors": [
"Dirk Von Gruenigen",
"Fernando Benites de Azevedo e Souza",
"Beatrice Pradarelli",
"Amani Magid",
"Mark Cieliebak"
],
"abstract": "In this digital age of the computer, Internet, and social media and Internet of Things, e-assessments have become an accepted method to determine if students have learned materials presented in a course. With acceptance of this electronic means of assessing students, many questions arise about this method. What should be the format of e-assessment? What amount of time? What kinds of questions should be asked (multiple choice, short answer, etc.)? These are only a few of the many different questions. In addition, educators have always had to contend with the possibility that some students might cheat on an examination. It is widely known that students are often times more technologically savvy than their professors. So how does one prevent students from cheating on an e-assessment? Understandably, given the amount of information available on e-assessments and the variety of formats to choose from, choosing to administer e-assessments over paper-based assessments can lead to confusion on the part of the professor. This paper presents helpful guidance for lecturers who want to introduce e-assessments in their class, and it provides recommendations about the technical infrastructure to implement to avoid students cheating. It is based on literature review, on an international survey that gathers insights and experiences from lecturers who are using e-assessment in their class, and on technological evaluation of e-assessment infrastructure.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "KYezPzdjDoR",
"year": null,
"venue": "ECIR (2) 2022",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=KYezPzdjDoR",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Leveraging Customer Reviews for E-commerce Query Generation",
"authors": [
"Yen-Chieh Lien",
"Rongting Zhang",
"F. Maxwell Harper",
"Vanessa Murdock",
"Chia-Jung Lee"
],
"abstract": "Customer reviews are an effective source of information about what people deem important in products (e.g. “strong zipper” for tents). These crowd-created descriptors not only highlight key product attributes, but can also complement seller-provided product descriptions. Motivated by this, we propose to leverage customer reviews to generate queries pertinent to target products in an e-commerce setting. While there has been work on automatic query generation, it often relied on proprietary user search data to generate query-document training pairs for learning supervised models. We take a different view and focus on leveraging reviews without training on search logs, making reproduction more viable by the public. Our method adopts an ensemble of the statistical properties of review terms and a zero-shot neural model trained on adapted external corpus to synthesize queries. Compared to competitive baselines, we show that the generated queries based on our method both better align with actual customer queries and can benefit retrieval effectiveness.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "lcQPOlM_4Ow",
"year": null,
"venue": "EACL 1993",
"pdf_link": "https://aclanthology.org/E93-1057.pdf",
"forum_link": "https://openreview.net/forum?id=lcQPOlM_4Ow",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Morphological Analysis Based Method for Spelling Correction",
"authors": [
"Itziar Aduriz",
"Eneko Agirre",
"Iñaki Alegria",
"Xabier Arregi",
"Jose Maria Arriola",
"Xabier Artola",
"Arantza Díaz de Ilarraza Sánchez",
"Nerea Ezeiza",
"Montse Maritxalar",
"Kepa Sarasola",
"Miriam Urkia"
],
"abstract": "I. Aduriz, E. Agirre, I. Alegria, X. Arregi, J.M Arriola, X. Artola, A. Diaz de Ilarraza, N. Ezeiza, M. Maritxalar, K. Sarasola, M. Urkia. Sixth Conference of the European Chapter of the Association for Computational Linguistics. 1993.",
"keywords": [],
"raw_extracted_content": "A Morphological Analysis Based Method for Spelling Correction \nAduriz I., Agirre E., Alegria I., Arregi X., Arriola J.M, Artola X., Diaz de Ilarraza A., \nEzeiza N., Maritxalar M., Sarasola K., Urkia M.(*) \nInformatika Fakultatea, Basque Country University. P.K. 649. 20080 DONOSTIA (Basque Country) \n(*) U.Z.E.I. Aldapeta, 20. 20009 DONOSTIA (Basque Country) \n1 Introduction \nXuxen is a spelling checker/corrector for Basque which \nis going to be comercialized next year. The checker \nrecognizes a word-form if a correct morphological \nbreakdown is allowed. The morphological analysis is \nbased on two-level morphology. \nThe correction method distinguishes between ortho- \ngraphic errors and typographical errors. \n• Typographical errors (or misstypings) are uncogni- \ntive errors which do not follow linguistic criteria. \n• Orthographic errors are cognitive errors which occur \nwhen the writer does not know or has forgotten the \ncorrect spelling for a word. They are more persistent \nbecause of their cognitive nature, they leave worse \nimpression and, finally, its treatment is an interest- \ning application for language standardization purposes. \n2 Correction Method in Xuxen \nThe main problems found in designing the \nchecking/correction strategy were: \n• Due to the high level of inflection of Basque, it is \nimpossible to store every word-form in a dictionary; \ntherefore, the mainstream checking/correction \nmethods were not suitable. \n• Because of the recent standardization and widespread \ndialectal use of Basque, orthographic errors are more \nlikely and therefore their treatment becomes critical. \n• The word-forms which are generated without \nlinguistic knowledge must be fed into the spelling \nchecker to check whether they are correct or not. \nIn order to face these issues the strategy used is \nbasically the following (see also Figure 1). \nHandling orthographic errors \nThe treatment of orthographic errors is based on the \nparallel use of a two-level subsystem designed to detect \nmisspellings previously typified. This subsystem has \ntwo main components: \n• Additional two-level rules describing the most likely \nchanges that are produced in the orthographic errors. \nTwenty five new rules have been defined to cover the \nmost common orthographic errors. For instance, the \nrule h: 0 => V:V V:V describes that between \nvowels the h of the lex-:cal level may dissapear in the \nsurface. In this way bear, typical misspelling of \nbehar (to need), will be detected and corrected. \n• Additional morphemes linked to the corresponding \ncorrect ones. They describe particular errors, mainly \ndialectal forms. Thus, using the new entry tikan, \ndialectal form of the ablative singular, the system is \nable to detect and correct word-forms as etxe- tikan, kaletikan .... (vm4ants of etxetik \n(from me home), kaletik (from me s~eeO .... ) \n~ I~ L --,,~'~', J '=='= \nFigure 1 - Correcting strategy in Xuxen \nWhen a word-form is not accepted by the checker the \northographic error subsystem is added and the system \nretries the morphological checking. If the incorrect form \ncan be recognized now (1) the correct lexical level form \nis directly obtained and, (2) as the two-level system is \nbidirectional, the corrected surface form will be \ngenerated from the lexical form. 
\nFor example, the complete correction process of the \nword-form beartzetikan (from the need), would be \nthe following: \nbeart zet ikan \n$ (t) \nbehar tze tikan(tik) \n~L (2) \nbehartzetik \nHandling tyPographical errors \nThe treatment of typographical errors is quite \nconventional and performs the following steps: \n• Generating proposals to typographical errors using \nDamerau's classification. \n• Trigram analysis. Proposals with trigrams below a \ncertain probability treshold are discarded, while the \nrest are classified in order of trigramic probability. \n• Spelling checking of proposals. \nTo speed up this treatment the following techniques \nhave been used: \n• If during the original morphological checking of the \nmisspelled word a correct morpheme has been found, \nthe criteria of Damerau are applied only to the unre- \ncognized part. Moreover, on entering the proposals \ninto the checker, the analysis starts from the state it \nwas at the end of the last recognized morpheme. \n• The number of proposals is also limited by filtering \nthe words containing very low frequency u'igrams. \n463",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
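The proposal pipeline for typographical errors described in the paper above (Damerau-style candidate generation followed by trigram filtering and ranking) can be illustrated with a minimal Python sketch. This is not the Xuxen implementation: the trigram probability table, threshold, and alphabet are hypothetical stand-ins, and in the real system the surviving proposals are still fed back into the morphological checker.

```python
# Hypothetical trigram probabilities; the real system would estimate
# these from corpus frequencies over correct Basque word-forms.
TRIGRAM_PROB = {"beh": 0.012, "eha": 0.009, "har": 0.015}
THRESHOLD = 0.001  # proposals containing rarer trigrams are discarded

def trigrams(word):
    """All consecutive three-letter substrings of a word."""
    return [word[i:i + 3] for i in range(len(word) - 2)]

def damerau_proposals(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Candidates one edit away, following Damerau's four error classes:
    omission, insertion, substitution and transposition of a letter."""
    cands = set()
    for i in range(len(word) + 1):
        for c in alphabet:
            cands.add(word[:i] + c + word[i:])                # insertion
    for i in range(len(word)):
        cands.add(word[:i] + word[i + 1:])                    # omission
        for c in alphabet:
            cands.add(word[:i] + c + word[i + 1:])            # substitution
        if i + 1 < len(word):
            cands.add(word[:i] + word[i + 1] + word[i] + word[i + 2:])  # transposition
    cands.discard(word)
    return cands

def rank_proposals(word):
    """Drop candidates whose rarest trigram falls below the threshold,
    then rank survivors by the product of their trigram probabilities."""
    scored = []
    for cand in damerau_proposals(word):
        probs = [TRIGRAM_PROB.get(t, 0.0) for t in trigrams(cand)]
        if probs and min(probs) >= THRESHOLD:
            score = 1.0
            for p in probs:
                score *= p
            scored.append((score, cand))
    return [cand for _, cand in sorted(scored, reverse=True)]

print(rank_proposals("bear"))  # keeps e.g. "behar"; improbable sequences are filtered
```

Each surviving candidate would then be passed through the two-level morphological checker, resuming, as the paper notes, from the state reached at the end of the last recognized morpheme.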
{
"id": "jyz24_9B5-",
"year": null,
"venue": "CoRR 2023",
"pdf_link": "http://arxiv.org/pdf/2307.13770v1",
"forum_link": "https://openreview.net/forum?id=jyz24_9B5-",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E^2VPT: An Effective and Efficient Approach for Visual Prompt Tuning",
"authors": [
"Cheng Han",
"Qifan Wang",
"Yiming Cui",
"Zhiwen Cao",
"Wenguan Wang",
"Siyuan Qi",
"Dongfang Liu"
],
"abstract": "As the size of transformer-based models continues to grow, fine-tuning these large-scale pretrained vision models for new tasks has become increasingly parameter-intensive. Parameter-efficient learning has been developed to reduce the number of tunable parameters during fine-tuning. Although these methods show promising results, there is still a significant performance gap compared to full fine-tuning. To address this challenge, we propose an Effective and Efficient Visual Prompt Tuning (E^2VPT) approach for large-scale transformer-based model adaptation. Specifically, we introduce a set of learnable key-value prompts and visual prompts into self-attention and input layers, respectively, to improve the effectiveness of model fine-tuning. Moreover, we design a prompt pruning procedure to systematically prune low importance prompts while preserving model performance, which largely enhances the model's efficiency. Empirical results demonstrate that our approach outperforms several state-of-the-art baselines on two benchmarks, with considerably low parameter usage (e.g., 0.32% of model parameters on VTAB-1k). Our code is available at https://github.com/ChengHan111/E2VPT.",
"keywords": [],
"raw_extracted_content": "E2VPT: An Effective and Efficient Approach for Visual Prompt Tuning\nCheng Han1, Qifan Wang2, Yiming Cui3, Zhiwen Cao4, Wenguan Wang5, Siyuan Qi6, Dongfang Liu1*\nRochester Institute of Technology1, Meta AI2, University of Florida3, Purdue University4\nZhejiang University5, BIGAI6†\n{ch7858, dongfang.liu }@rit.edu, [email protected], [email protected], [email protected]\[email protected], [email protected]\nAbstract\nAs the size of transformer-based models continues to\ngrow, fine-tuning these large-scale pretrained vision models\nfor new tasks has become increasingly parameter-intensive.\nParameter-efficient learning has been developed to reduce\nthe number of tunable parameters during fine-tuning. Al-\nthough these methods show promising results, there is still\na significant performance gap compared to full fine-tuning.\nTo address this challenge, we propose an Effective and Ef-\nficient Visual Prompt Tuning (E2VPT) approach for large-\nscale transformer-based model adaptation. Specifically, we\nintroduce a set of learnable key-value prompts and visual\nprompts into self-attention and input layers, respectively, to\nimprove the effectiveness of model fine-tuning. Moreover,\nwe design a prompt pruning procedure to systematically\nprune low importance prompts while preserving model per-\nformance, which largely enhances the model’s efficiency.\nEmpirical results demonstrate that our approach outper-\nforms several state-of-the-art baselines on two benchmarks,\nwith considerably low parameter usage ( e.g., 0.32% of\nmodel parameters on VTAB-1k). Our code is available at\nhttps://github.com/ChengHan111/E2VPT.\n1. Introduction\nThe development of artificial intelligence (AI) should\nnot only prioritize performance advances, but also empha-\nsize sustainable deployment [64, 78, 80, 87]. Despite the\ncaptivating pursuit of performance improvements in visual-\nrelated tasks, the size of present models has been rapidly\nincreasing, resulting in energy-intensive and computation-\nally expensive training [31, 73, 92]. Transformer-based ar-\nchitectures currently dominate visual-related models, such\nas ViT-Huge [12] (632M) and Swin-Large [54] (197M),\nwith significantly more parameters than the Convolutional\n*Corresponding author\n†National Key Laboratory of General Artificial Intelligence, Beijing\nInstitute for General Artificial Intelligence\n...\n......\n...\n......\n(d) Ours\n (c) Prompt tuning\n...\n(a) Partial tuning\n...\n(b) Extra module\nAccuracy (%)\nPrompt T .\n OursFull F. T.\nPartial T.\nExtra M.\nTunable Parameters (%)100 101 102 4050607080\n10-1 VTAB -1k Natural\nVTAB -1k StructuredVTAB -1k SpecializedLN\nL2\nL1LN\nL2\nL1LN\nL2\nL1LN\nL2\nL1Figure 1. E2VPT (ours) vsconcurrent arts (i.e.,\n partial tun-\ning [91],\n extra module [6], and\n prompt tuning [34] meth-\nods) under pretrain-then-finetune paradigm. Our method yields\nsolid performance gains over state-of-the-art fine-tuning methods\nand competitive to full fine-tuning on a wide range of classifica-\ntion tasks adapting the pretrained ViT-Base/16 [12] as backbone\nwith considerable lower parameter usage (see Table 1).\ncolors represent results on VTAB-1k [96] Specialized ,Natural and\nStructure , respectively.\nNeural Networks (CNN) like ResNet [26] (25M). Training\nsuch large models from scratch presents challenges such as\nlimited data [5, 20, 75] and slow convergence at low ac-\ncuracy [37, 47]. 
A common paradigm to overcome these challenges is pretrain-then-finetune, which reduces the need for vast amounts of training data and speeds up processing of various visual tasks. However, the traditional full fine-tuning involves storing and deploying a complete copy of the backbone parameters for every single task [34], which remains computationally expensive and not suitable for fast model deployment.
To address this issue, various approaches have been developed, which can be divided into three main categories (see Fig. 1): partial tuning, extra module, and prompt tuning methods. Partial tuning methods [10, 35, 58] only fine-tune part of the backbone, such as the classifier head or last few layers, while freezing the others. Extra module methods insert a learnable bias term [6] or additional adapters [70, 98] into the network for adaptation. Prompt tuning methods add prompt tokens [34, 36, 94] to the input layer of the transformer without changing or fine-tuning the backbone itself. All of these methods operate within the pretrain-then-finetune paradigm, which reduces the number of learnable parameters compared to full fine-tuning [10, 35, 58, 70, 98]. However, despite achieving promising results, there are two main limitations in existing parameter-efficient methods. Firstly, they do not scrutinize the core architecture of the transformer's self-attention mechanism, resulting in a large performance gap with full fine-tuning. Secondly, they usually need to fine-tune a relatively large number of parameters to achieve reasonable performance and fail to explore the extremes of parameter efficiency.
The perspective outlined above leads to two fundamental questions: ❶ How can we establish the effectiveness of prompt tuning for large-scale transformer-based vision models? ❷ How can we explore the extremes of parameter efficiency to reduce the number of tunable parameters? These two questions are the foundation of our work. The intuition is that instead of solely focusing on modifying inputs, as in previous prompt tuning methods, we should explicitly investigate the potential of improving the self-attention mechanism during fine-tuning, and explore the extremes of parameter efficiency.
In response to question ❶, we discuss and analyze the self-attention mechanism of the transformer, which is crucial in capturing long-range token dependencies within a global context [21, 38, 49]. In addition to the input visual prompts, we introduce learnable key-value prompts and integrate them into the Key and Value matrices in the self-attention layers. The key-value prompts are jointly learned with the input visual prompts during fine-tuning. This approach effectively leverages the well-designed prompt architecture of the transformer, resulting in significant performance improvements. Moreover, it provides a generic plug-and-play prompt module for current transformer architectures, and its fine-tuning solution is conceptually different from all aforementioned arts in the vision domain.
Motivated by ❷, we propose a pruning strategy to further reduce the number of parameters while maintaining the model performance. Our approach draws inspiration from
Our approach draws inspiration from\nthe lottery ticket hypothesis (LTH) [16, 102], which posits\nthat for a given task, there exists a sub-network that can\nmatch the test accuracy of the original over-parameterized\nnetwork without the unnecessary weights [22, 23, 41, 43,\n44]. Building on this paradigm, we revisit the core designof prompt tuning methods and further reduce the number\nof learnable parameters. Specifically, we aim to retain the\nprompt tokens that contribute significantly to the perfor-\nmance, while pruning the prompt tokens that are redundant\nor unnecessary during fine-tuning. By pruning these unnec-\nessary prompts, we can significantly improve the prompt\ntuning efficiency while maintaining the performance.\nTo answer question ❶-❷, we propose E2VPT , namely\nEffective and Efficient Visual Prompt Tuning. E2VPT is\na novel prompt tuning framework that is both architecture-\naware and pruning-anchored (see Fig. 1). In §2, we con-\nduct a literature review and discuss relevant works. Our\nproposed approach is presented in §3, where we describe\nin detail how we design visual and key-value prompts\nto achieve superior performance with fewer parameters.\nIn §4, we present compelling experimental results on various\nbenchmarks, backbones, and different pretraining objectives.\nSpecifically, our approach achieves an average improvement\nof5.85% in accuracy on VTAB-1k compared to full fine-\ntuning, and 1.99% compared to VPT [34]. Moreover, our\napproach uses considerably fewer learnable parameters than\nexisting methods, accounting for an average of only 0.32%\nof the backbone parameters on VTAB-1k, whereas VPT on\naverage requires 0.68% (see Fig. 1). We further demonstrate\nand explain the superiority of our approach over VPT with\nhyperbolic visualization. Finally, we demonstrate the strong\nalgorithmic generalization of our approach to the language\ndomain in the Appendix. We trust that this work provides\nvaluable insights into related fields.\n2. Related Work\n2.1. Vision Transformers\nInspired by the remarkable success of transformers in\nnatural language processing (NLP) [5, 11, 52, 69, 79, 83],\nresearchers have extended the transformer architecture to\nvarious supervised vision tasks, including image classifi-\ncation [12, 53, 54, 56], image segmentation [46, 51, 74,\n82, 84, 86, 100], object detection [4, 7, 50, 66, 93, 101]\nand pose estimation [29, 30, 48, 90]). Self-supervised pre-\ntraining paradigms [3, 10, 24] has also been explored, lead-\ning to state-of-the-art results. transformers dominate in\nvisual-related disciplines due to their superior performance\nand scalability compared to convolutional neural networks\n(CNNs) [27, 34]. However, the significant computational\nand parameter overhead required to adapt transformers to\nvarious vision tasks cannot be ignored [15, 33, 97]. For in-\nstance, recent transformer-based models such as MViTv2-\nLarge [45] (218M), ViT-G [95] (1.8B), SwinV2-G [53]\n(3.0B), and V-MoE [72] (14.7B) incur substantial compu-\ntational costs. Therefore, we propose E2VPT , which is\ndesigned to reduce the computational cost of transformer-\nbased architectures while maintaining high performance in\nthepretrain-then-finetune paradigm.\n2.2. Parameter-efficient Fine-tuning\nEfficient model training has drawn much attention in\nthe vision community, particularly with the rise of Vision\nTransformers [1, 8, 12, 54, 85]. 
However, despite their effectiveness and widespread use, these models are often too large for practical deployment and adaptation. As a result, the pretrain-then-finetune paradigm is commonly employed. While full fine-tuning ensures strong performance, it is an expensive approach that involves updating all network parameters [27, 75]. To overcome this challenge, researchers are exploring alternatives that balance parameter-efficiency and robust performance, which can be broadly categorized into three groups: partial tuning, extra module and prompt tuning methods.
Partial tuning methods are widely used for parameter-efficient fine-tuning. These methods involve freezing most of the backbone and only fine-tuning a small portion of the parameters, such as linear [32] or MLP heads [9], or a few blocks/layers of the backbone [24, 65, 91, 99]. While these methods are straightforward and simple to implement [10, 35, 58], they often have a large performance gap compared to full fine-tuning. Extra module methods design additional learnable plug-in architecture for fine-tuning. For example, the work in [98] introduces a side structure alternatively while freezing the original network. The works in [6, 70] insert additional residual units into the backbone. However, one drawback of these methods is that the inserted modules are often customized for specific architectures and might not be generalized to others. Additionally, these modules usually consume even more parameters compared to partial tuning methods. Prompt tuning or prompting [28, 42, 57, 89] has been originally proposed for fast model adaptation in the language domain. These methods prepend a set of learnable vectors to the input of the backbone and only update these task-specific prompts during fine-tuning. Recently, visual-related prompting [18, 34, 88] has been introduced in the vision domain, which designs visual prompts in the input sequence and shows competitive performance with full fine-tuning. However, current methods do not consider the inner design of transformer-based architectures, resulting in less effective prompting solutions. In contrast, our approach is mindful of architecture and anchored on pruning, which conceptually sets it apart from the methods discussed above.
3. Our E2VPT Approach
In this section, we introduce E2VPT, a novel visual prompt tuning approach for effective and efficient large-scale transformer-based model fine-tuning. We first define the problem and notations in §3.1. The effective prompt tuning with the designing of visual and key-value prompts is presented in §3.2, followed by the efficient prompt pruning in §3.3. The overall framework is shown in Fig. 2.
3.1. Problem Definition
In this section, we define the problem of E2VPT and provide the notations. Assuming we have a backbone vision transformer model T, which is pretrained on a large set of data and tasks. The input to the vision transformer is a sequence of image patches I = {I_1, I_2, ..., I_m}, where m is the total number of image patches. Each patch is then projected into a d-dimensional embedding with positional encoding, i.e., E = {E_j | 1 ≤ j ≤ m} with E_j = Emb(I_j). The vision transformer T consists of N identical transformer layers, represented as:
Z_1 = L_1(E)
Z_i = L_i(Z_{i-1}),  i = 2, 3, ..., N    (1)
, N(1)\nhere each transformer layer is a stack of multi-head self-\nattention (MSA) and feed-forward network (FFN):\nL(·) =FFN (MSA (·) ) (2)\nGiven a new vision task, the objective is to fine-tune a model\nˆTthat can deliver good performance on the task, while\nonly tuning a small amount of parameters. In the context\nof visual prompt tuning, ˆT={T,P}which includes a frozen\nbackbone T, and trainable prompts Pwith very few tunable\nparameters.\n3.2. Effective Prompting\nMost existing prompt tuning approaches focus on tun-\ning a set of visual prompts by prepending them to the in-\nput sequence in transformer layers, without considering the\ninternal design of transformer architectures. However, to\nenhance the effectiveness of prompt tuning and achieve op-\ntimal fine-tuning performance, we propose a new approach\nthat incorporates a set of key-value prompts ( PKandPV)\nin addition to input visual prompts ( PI) within our vi-\nsual prompt tuning framework. Intuitively, the input visual\nprompts are inserted to the input sequence of each encoder\nlayer, which learn to represent of the new task. The key-\nvalue prompts are concatenated with the key and value pa-\nrameter matrices in the self-attention module, which learn\nto capture the new attention pattern from the data.\nVisual Prompts. Visual prompts are a set of d-dimensional\nembedding vectors that have the same dimensionality with\nthe input visual tokens. They are prepended to the input se-\nquence of each transformer encoder layer and interact with\nall the input tokens. Visual prompts play a similar role\nto those prompt tokens in traditional prompt tuning meth-\nods [34, 42], which learn task-specific embeddings to guide\nthe model performing on the new task.\nFormally, these visual prompts are defined as P I=\n{P1\nI, P2\nI, . . . , PN\nI}, where Pi\nIdenotes the learnable visual\nSelf-attention layer Self-attention layerMatMul\nSoftmaxLinear\nSelf-attention LayerConcatMLP\nLayerNorm\nMSA\nLayerNormTransformer Encoder Layer\nTransformer Encoder Layer\nTransformer Encoder LayerHead\nScale\nMatmul\nCLS\n(a) Self -attention Layer(b) Multi -head Attention \n(MSA)(c) Transformer Encoder \nLayer(e) Effective and Efficient Visual \nPrompt Tuning\nLinear Linear Linear\nTuned\nFrozen\n... ...\n......\n(d) PruningToken -wise PruningSegment -wise Pruning\n... ...... ...... ...\n... ... ...... ... ...... ... ...\n... ......\nLN\nL2\nL1ρl1ρl1ρl1\nρlnρlnρln\nρlNρlNρlNσ1σ2σM\nQ K V\nFigure 2. Overview of our E2VPT framework. Under the pretrain-then-finetune paradigm, only the prompts in the transformer’s input\nand backbone (§3.2), are updated during the fine-tuning process, while all other components remain frozen. We further introduce pruning\n(§3.3) at two levels of granularity ( i.e., token-wise and segment-wise) in (d) to eliminate unfavorable input prompts during rewinding.\nprompts in the ithencoder layer, and Nis the total number\nof layers. Then the encoder layers are represented as:\nZ1=L1(P1\nI, E)\nZi=Li(Pi\nI, Zi−1)i= 2,3, . . . , N(3)\nwhere Zirepresents the contextual embeddings computed\nby the ithencoder layer. The different colors indicate train-\nable and frozen parameters, respectively. For the embed-\ndings of the input image patches E, they are initialized with\nfrozen Emb projection from the backbone.\nKey-Value Prompts. Visual prompts are useful in learning\nknowledge about new tasks. However, they are insufficient\nin guiding information interaction within transformer en-\ncoder layers. 
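To make Eq. (3) concrete, below is a minimal PyTorch sketch of layer-wise visual prompting. The class name, shapes, the encoder-layer interface, and the choice to discard prompt outputs between layers are illustrative assumptions rather than the authors' released implementation:

```python
import torch
import torch.nn as nn

class VisualPromptedEncoder(nn.Module):
    """Sketch of Eq. (3): fresh learnable prompts P_I^i are prepended to
    the token sequence of every frozen encoder layer L_i."""

    def __init__(self, backbone_layers, num_prompts=10, dim=768):
        super().__init__()
        self.layers = nn.ModuleList(backbone_layers)
        for p in self.layers.parameters():
            p.requires_grad = False                   # frozen backbone T
        # one prompt set per layer: (N, num_prompts, d)
        self.prompts = nn.Parameter(
            torch.empty(len(self.layers), num_prompts, dim))
        nn.init.kaiming_normal_(self.prompts)         # He init, cf. Table 5(b)

    def forward(self, E):                             # E: (B, m, d) patch embeddings
        Z, n_p = E, self.prompts.size(1)
        for i, layer in enumerate(self.layers):
            P = self.prompts[i].expand(Z.size(0), -1, -1)
            Z = layer(torch.cat([P, Z], dim=1))       # Z_i = L_i(P_I^i, Z_{i-1})
            Z = Z[:, n_p:]                            # drop prompt outputs (one common convention)
        return Z
```

Only `self.prompts` receives gradients, so the tunable parameter count is N × num_prompts × d, a small fraction of the frozen backbone.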
Key-Value Prompts. Visual prompts are useful for learning knowledge about new tasks, but they are insufficient for guiding information interaction within the transformer encoder layers. The reason is that when fine-tuning on new data, the image distribution may differ significantly from that of the image examples used for pretraining the backbone model. As a result, it is crucial to enhance the model's capability to capture new information from the fine-tuning data and to conduct more effective attention among input tokens to learn new patterns.

To this end, we introduce a novel set of key-value prompts, P_K and P_V, which are incorporated into the attention module within each encoder layer (as shown in Fig. 2(a)). These key-value prompts are small matrices that have only a few columns but share the same number of rows as the key and value matrices in the original attention module. To perform the new attention computations, the key and value matrices are concatenated with their corresponding P_K and P_V prompts, respectively. This process is defined as follows:

L(·) = FFN(MSA(·)),   MSA(·) = concat_h( softmax( Q_h K'_h^T / √d ) V'_h )   (4)

where FFN is the feed-forward network, MSA is the multi-head attention inside the encoder layer, and h indexes the h-th head. K' and V' are the new key and value embedding matrices, defined as:

K' = concat(K, P_K),   V' = concat(V, P_V)   (5)

where K and V represent the original key and value matrices in the backbone. In this way, the key-value prompts help guide the model's adaptation to the new data. In our implementation, we go a step further and share the parameters of the P_K and P_V prompts within each transformer layer instead of tuning separate learnable vectors. Our motivation is twofold: first, our experimental results show that with shared prompts, the fine-tuning performance consistently improves across instances; second, using shared prompt vectors halves the parameter usage of the learnable transformer part, making it more parameter-efficient. We discuss the choice of prompt locations (i.e., before or after K and V) in §4.3.

It is worth noting that the query matrix Q is another critical element of the self-attention mechanism. However, additional prompting on Q is not desired for two reasons. First, prompting on Q is similar to prepending to K when computing attention scores between each pair of Q and K, so prompting on both Q and K is unnecessary. Second, changes in Q affect the output shape of the attention map, necessitating an additional linear projection for the unmatched dimensions in the following layer, which is not affordable under the parameter-efficient design. More experiments and discussions are provided in the Appendix.
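The attention computation of Eqs. (4)-(5) can be sketched in a few lines. The head-split shapes and the broadcasting of one shared prompt pair across heads and batch are assumptions for illustration; the prompts are placed in front of K and V, matching the "Before" variant ablated in Table 5(a):

```python
import torch
import torch.nn.functional as F

def prompted_attention(Q, K, V, P_K, P_V):
    """Sketch of Eqs. (4)-(5): concatenate learnable key-value prompts
    onto the frozen key/value projections before attention.

    Q, K, V: (B, h, n, d_h) head-split projections from the frozen backbone.
    P_K, P_V: (p, d_h) prompts, shared across heads and batch (and, per the
    paper, within each layer).
    """
    B, h, n, d_h = K.shape
    K_new = torch.cat([P_K.expand(B, h, -1, -1), K], dim=2)  # K' (Eq. 5)
    V_new = torch.cat([P_V.expand(B, h, -1, -1), V], dim=2)  # V' (Eq. 5)
    attn = F.softmax(Q @ K_new.transpose(-2, -1) / d_h ** 0.5, dim=-1)
    return attn @ V_new                                      # (B, h, n, d_h)
```

Because only K and V gain p extra rows, the attention output keeps its original shape; this is exactly why prompting Q is avoided, as it would change the attention map's shape and require an extra projection.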
3.3. Efficient Prompting

Our approach to effective prompting aims to enhance the performance of the fine-tuned model. However, a natural question arises: can we reduce the number of tunable prompts without sacrificing model performance? The lottery ticket hypothesis (LTH) [16, 102] states that there exists a sub-network that can achieve the same test performance as the original over-parameterized network for a given task, without the need for unnecessary weights. Motivated by this hypothesis, we conducted an experiment in which we masked different visual prompts and found that different prompts have varying effects on model performance, with some even having a negative impact. This observation is consistent with previous research [43, 57].

Based on our findings, we propose a prompt pruning method for visual prompts. The primary objective of this method is to retain the most influential prompts while eliminating redundant or unnecessary ones. By removing less important prompts, we can significantly improve the efficiency of prompt tuning while maintaining performance.

To achieve this goal, we design a cascade pruning strategy that operates at two levels of granularity, namely token-wise pruning and segment-wise pruning, as illustrated in Fig. 2(d). Token-wise pruning first identifies and removes the least important visual prompts. After this step, segment-wise pruning divides each remaining prompt into multiple segments and filters out negative segments. By jointly reducing the parameter usage of the learnable visual prompts, our two-level pruning approach creates soft filtered prompts that can be re-trained in the rewinding stage.

Token-wise Pruning. We introduce a learnable mask variable ρ = {ρ_1, ρ_2, ..., ρ_M} (M is the length of the visual prompts) and associate it with the input visual prompts in each transformer layer. Here ρ_k ∈ {0, 1}, where 0 means the corresponding learnable input prompt is pruned. The masked version of the visual prompts then becomes P̃_k = ρ_k · P_k. To determine the pruning positions, we calculate the importance score [16, 57] of each prompt token and eliminate the positions with the lowest scores. The importance score is defined as the expected sensitivity of the model to the mask variable ρ_k [60]:

S_{P_k} = E_{x ∼ D_x} | ∂L(x) / ∂ρ_k |   (6)

where L is the loss function and D_x is the training data distribution [60]. The importance score assigned to each visual prompt reflects its contribution to the fine-tuning performance. A low importance score indicates that the prompt makes a minor or even negative contribution to the fine-tuning process; conversely, a high importance score suggests that the prompt is meaningful and contributes significantly.

Segment-wise Pruning. We further investigate segment-wise pruning to remove negative prompt segments within each prompt. The embedding of each prompt token is first divided equally into R parts. Each part is treated as an isolated unit, and these units can be optimized jointly. Similar to token-wise pruning, we then assign a mask variable to each segment inside the prompt token and filter out the segments with low importance scores.

Rewinding. After the two-level cascade pruning, the weight rewinding stage re-trains the soft filtered prompt tokens. This process involves ranking the importance scores within each layer during the pruning stage and setting the corresponding mask variables to 0 when their importance scores are relatively low. The soft filtered input prompts are then re-trained together with the other learnable parameters, using the original combination of learning rate and weight decay during fine-tuning.
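A hedged sketch of Eq. (6) and the subsequent token-wise pruning step follows. How the mask `rho` is multiplied onto the prompt tokens inside the model is omitted, and `model`, `train_loader`, `criterion`, `keep_ratio` and `M` are placeholders:

```python
import torch

def prompt_importance(model, loader, rho, criterion):
    """Eq. (6): score each prompt token by the expected |dL/d rho_k| over
    the training data; rho is a float mask of ones that the model
    multiplies onto its prompt tokens in the forward pass."""
    rho.requires_grad_(True)
    scores = torch.zeros_like(rho)
    for x, y in loader:
        loss = criterion(model(x), y)
        (grad,) = torch.autograd.grad(loss, rho)
        scores += grad.abs()
    return scores / len(loader)

rho = torch.ones(M)                        # M prompt tokens in one layer
scores = prompt_importance(model, train_loader, rho, criterion)
keep = int(keep_ratio * M)                 # keep ratio searched between 10% and 90% (§4.1)
mask = torch.zeros(M)
mask[scores.topk(keep).indices] = 1.0      # soft filtered prompts, then rewind
```

Segment-wise pruning repeats the same scoring with one mask entry per segment of each surviving token's embedding rather than per token.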
4. Experiment

4.1. Experimental Setup

Datasets. Our experiments are carried out on two image classification benchmarks. VTAB-1k [96] collects 19 benchmarked Visual Task Adaptation tasks, categorized into three groups: (1) Natural contains natural images captured by standard cameras; (2) Specialized includes images taken by specialized equipment; and (3) Structured covers tasks requiring geometric comprehension (i.e., counting, distance). Each task of VTAB-1k contains 1000 training examples. Following [34, 96], we apply the 800-200 split of the training set for hyperparameter tuning. The final evaluation is run on the full training data. FGVC contains 5 benchmarked Fine-Grained Visual Classification tasks, including CUB-200-2011 [81], NABirds [76], Oxford Flowers [63], Stanford Dogs [39] and Stanford Cars [19]. Following [34], the training set is randomly split into 90% train and 10% val, and we use val for hyperparameter tuning.

Baselines. For fair comparison, we follow [34] and compare E2VPT with other widely applied parameter-efficient fine-tuning methods. Results for two vision transformer architectures, Vision Transformer [12] (ViT) and Swin Transformer [54] (Swin), on image classification are discussed in §4.2. We also apply E2VPT to two self-supervised objectives: MAE [24] and MoCo v3 [10].

Training. Following [34, 58], we conduct a grid search over the tuning hyperparameters, learning rate (i.e., [50, 25, 10, 5, 2.5, 1, 0.5, 0.25, 0.1, 0.05]) and weight decay (i.e., [0.01, 0.001, 0.0001, 0.0]), on the val set of each task. Notably, E2VPT does not require the specially designed large learning rate of [34]. For all models, the learning rate is scheduled following a cosine decay policy, and models are trained for 100 epochs (including 10 warm-up epochs). We follow the same batch size settings as [34]: 64/128 for ViT-Base/16 and 80 for Swin-Base, respectively. The number of segments per token (§3.3) is set to 8. The pruning percentage is searched linearly between 10% and 90% with 10% intervals. The rewinding stage is applied once to re-train the pruned input prompts.
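The tuning protocol above amounts to a plain grid search; a sketch is given below, where `train_and_eval` stands in for one 100-epoch fine-tuning run (10 warm-up epochs, cosine decay) that returns val accuracy, and is not part of any released code:

```python
import itertools

lrs = [50, 25, 10, 5, 2.5, 1, 0.5, 0.25, 0.1, 0.05]
wds = [0.01, 0.001, 0.0001, 0.0]
results = {}
for lr, wd in itertools.product(lrs, wds):
    results[(lr, wd)] = train_and_eval(lr=lr, weight_decay=wd,
                                       epochs=100, warmup_epochs=10,
                                       schedule="cosine")
best_lr, best_wd = max(results, key=results.get)   # pick by val accuracy
```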
Reproducibility. E2VPT is implemented in PyTorch [67]. Experiments are conducted on NVIDIA A100-40GB GPUs. To guarantee reproducibility, our full implementation will be publicly released.

Table 1. Image classification accuracy for ViT-Base/16 [12] pretrained on supervised ImageNet-21k. Following [34], we report the average test accuracy (three runs) on the FGVC [34] and VTAB-1k [96] benchmarks, and the "Number of Wins" in [·] compared to full fine-tuning (Full) [32]. "Tuned/Total" is the average percentage of tuned parameters required over the 24 tasks. "Scope" indicates the tuning scope of each method; "Extra params" indicates the existence of parameters in addition to the pretrained backbone and linear head. The highest accuracy among all approaches except Full is shown in bold. E2VPT outperforms full fine-tuning in 19 of 24 instances with far fewer trainable parameters. More impressively, we further report the "Number of Wins to VPT" in {·}: our method beats VPT in 21 of 24 cases with considerably fewer parameters. Per-task results are available in the Appendix; the same holds for Tables 2 and 3.

| ViT-Base/16 [12] (85.8M) | Tuned/Total | Scope: Input | Scope: Backbone | Extra params | FGVC [34] [5] | VTAB-1k Natural [7] | VTAB-1k Specialized [4] | VTAB-1k Structured [8] |
|---|---|---|---|---|---|---|---|---|
| Full [CVPR22] [32] | 100.00% | | ✓ | | 88.54% | 75.88% | 83.36% | 47.64% |
| Linear [CVPR22] [32] | 0.08% | | | | 79.32% [0] | 68.93% [1] | 77.16% [1] | 26.84% [0] |
| Partial-1 [NeurIPS14] [91] | 8.34% | | | | 82.63% [0] | 69.44% [2] | 78.53% [0] | 34.17% [0] |
| MLP-3 [CVPR20] [9] | 1.44% | | | ✓ | 79.80% [0] | 67.80% [2] | 72.83% [0] | 30.62% [0] |
| Sidetune [ECCV20] [98] | 10.08% | | ✓ | ✓ | 78.35% [0] | 58.21% [0] | 68.12% [0] | 23.41% [0] |
| Bias [NeurIPS17] [70] | 0.80% | | ✓ | | 88.41% [3] | 73.30% [3] | 78.25% [0] | 44.09% [2] |
| Adapter [NeurIPS20] [6] | 1.02% | | ✓ | ✓ | 85.66% [2] | 70.39% [4] | 77.11% [0] | 33.43% [0] |
| VPT [ECCV22] [34] | 0.73% | ✓ | | ✓ | 89.11% [4] | 78.48% [6] | 82.43% [2] | 54.98% [8] |
| Ours | 0.39% | ✓ | ✓ | ✓ | 89.22% [4]{4} | 80.01% [6]{5} | 84.43% [3]{4} | 57.39% [8]{7} |

Table 2. Image classification accuracy for Swin-Base [54] pretrained on supervised ImageNet-21k.

| Swin-Base [54] (86.7M) | Tuned/Total | VTAB-1k Natural [7] | VTAB-1k Specialized [4] | VTAB-1k Structured [8] |
|---|---|---|---|---|
| Full [ICLR23] [71] | 100.00% | 79.10% | 86.21% | 59.65% |
| Linear [ICLR23] [71] | 0.06% | 73.52% [5] | 80.77% [0] | 33.52% [0] |
| Partial-1 [NeurIPS14] [91] | 14.58% | 73.11% [4] | 81.70% [0] | 34.96% [0] |
| MLP-3 [CVPR20] [9] | 2.42% | 73.56% [5] | 75.21% [0] | 35.69% [0] |
| Bias [NeurIPS17] [70] | 0.29% | 74.19% [2] | 80.14% [0] | 42.42% [0] |
| VPT [ECCV22] [34] | 0.25% | 76.78% [6] | 83.33% [0] | 51.85% [0] |
| Ours | 0.21% | 83.31% [6]{6} | 84.95% [2]{3} | 57.35% [3]{7} |

4.2. Comparison with State-of-the-Arts

We examine the performance and robustness of E2VPT on ViT [12], Swin [54], and two self-supervised objectives, MAE [24] and MoCo v3 [10]. For reference, we provide the individual per-task results for Tables 1, 2 and 3 in the Appendix.

E2VPT on ViT. In Table 1 we report the average accuracy on the VTAB-1k and FGVC benchmarks across four diverse task groups over three runs, comparing E2VPT with eight other tuning protocols under the pretrain-then-finetune paradigm. Specifically, Full [32] updates both the backbone and the classification head; Linear [32], Partial-1 [91] (top layer) and MLP-3 [9] (3 MLP layers) are partial tuning methods that only update part of the parameters; Sidetune [98], Bias [70] and Adapter [6] are extra module methods that add new trainable parameters to the backbone for adaptation; VPT [34] is the most recent visual prompt tuning method. There are several key observations from these results. First, E2VPT outperforms the full fine-tuning method in most cases (21 out of 24 tasks). For example, our model achieves a 0.68% improvement on FGVC and a 9.75% improvement on VTAB-1k Structured. This demonstrates the effectiveness of our approach for fast large-scale vision model adaptation. At the same time, our model only trains 0.39% of the backbone parameters, which is far more parameter-efficient than the fully fine-tuned model. Second, it is not surprising that the prompt tuning based approaches generally outperform the other parameter-efficient methods, such as partial fine-tuning (Partial-1) and extra modules (Adapter), indicating the superior adaptability of prompt tuning methods on large-scale vision models; the number of tunable parameters in prompt tuning methods is also smaller than in the other methods. Third, our approach consistently outperforms the strong VPT model with fewer tunable prompts, demonstrating the effective design of the key-value prompting and the efficient prompt pruning. The reason is that VPT only focuses on designing input visual prompts, which fail to capture the accurate interactions between image patches in the new data; the key-value prompts in E2VPT effectively bridge this gap.

E2VPT on Hierarchical Transformers. To prove the effectiveness and generalization of our architectural design, we further extend E2VPT to a hierarchical transformer, Swin [54], where the MSA layer is employed within local shifted windows and patch embeddings are merged at deeper layers.
For generality, we follow the same settings as in the ViT [12] architecture to prepend the learnable key-value pairs, and [34] for altering the input vectors (i.e., these learnable vectors are attended within the local windows and ignored during patch merging). For pruning, we noticed a performance drop when incorporating it within the deeper local windows; we therefore apply the pruning stage only to the first stage. As Swin does not use [CLS] and applies global pooling as the input to the classification head [34, 54], we follow this design when adapting our method. The experiments are deployed on ImageNet-21k supervised pretrained Swin-Base [54]. E2VPT consistently outperforms all the other parameter-efficient methods on all three VTAB-1k problem classes (Table 2) and, for the first time, surpasses full fine-tuning on VTAB-1k Specialized and Structured while using significantly fewer parameters (i.e., 0.21%).

Different Pretraining Methods. We conducted experiments with two self-supervised objectives, MAE [24] and MoCo v3 [10], on backbones pretrained without labeled data, following the approach of VPT [34]. While VPT yields inconclusive results under these objectives, our proposed method, E2VPT, outperforms the other methods and achieves performance competitive with full fine-tuning (8 of 19 instances under MAE, and 12 of 19 instances under MoCo v3) using significantly fewer model parameters (0.07% on MAE and 0.13% on MoCo v3). Our method also outperforms VPT by a large margin (59.52% vs. 36.02% under MAE on VTAB-1k Natural). We leverage the gap discussed in VPT, which indicates that self-supervised ViTs are fundamentally different from supervised ones, and demonstrate the generality of our method for both pretraining objectives.

Table 3. Image classification accuracy for different pretraining objectives, MAE [24] and MoCo v3 [10], with ViT-Base [12] as the backbone. Our method enjoys significant performance gains over VPT [34] while having lower parameter usage.

| Methods | MAE: Tuned/Total | MAE: Natural [7] | MAE: Specialized [4] | MAE: Structured [8] | MoCo v3: Tuned/Total | MoCo v3: Natural [7] | MoCo v3: Specialized [4] | MoCo v3: Structured [8] |
|---|---|---|---|---|---|---|---|---|
| Full [CVPR22] [32] | 100.00% | 59.31% | 79.68% | 53.82% | 100.00% | 71.95% | 84.72% | 51.98% |
| Linear [CVPR22] [32] | 0.04% | 18.87% [0] | 53.72% [0] | 23.70% [0] | 0.04% | 67.46% [4] | 81.08% [0] | 30.33% [0] |
| Partial-1 [NeurIPS14] [91] | 8.30% | 58.44% [5] | 78.28% [1] | 47.64% [1] | 8.30% | 72.31% [5] | 84.58% [2] | 47.89% [1] |
| Bias [NeurIPS17] [70] | 0.16% | 54.55% [1] | 75.68% [1] | 47.70% [0] | 0.16% | 72.89% [3] | 81.14% [0] | 53.43% [4] |
| Adapter [NeurIPS20] [6] | 0.87% | 54.90% [3] | 75.19% [1] | 38.98% [0] | 1.12% | 74.19% [4] | 82.66% [1] | 47.69% [2] |
| VPT [ECCV22] [34] | 0.10% | 36.02% [0] | 60.61% [1] | 26.57% [0] | 0.06% | 70.27% [4] | 83.04% [0] | 42.38% [0] |
| Ours | 0.07% | 59.52% [4]{6} | 77.80% [1]{2} | 44.65% [3]{8} | 0.13% | 76.47% [4]{7} | 87.28% [2]{4} | 54.91% [6]{8} |

4.3. Diagnostic Experiments
Impact of Different Components. To investigate the impact of the different components in E2VPT, including visual prompts, key-value prompts, and pruning with rewinding, we conducted experiments on two tasks from the benchmarks. The results are summarized in Table 4. For SVHN [62], we found that the model with visual prompts alone achieves an accuracy of 78.1%. Adding key-value prompts and applying pruning and rewinding individually lead to additional gains (5.7% and 0.9%, respectively), demonstrating the effectiveness of our key-value prompt tuning technique in the self-attention module as well as of the pruning mechanism. Finally, combining all components yields the best performance, with an accuracy of 85.3%. We observe similar trends on FGVC NABirds [77].

Table 4. Impact of different components in E2VPT on two instances: VTAB-1k Natural SVHN [62] and FGVC NABirds [77].

| Visual Prompts | Key-Value Prompts | Pruning & Rewinding | SVHN Pruning | SVHN Tuned/Total | SVHN Accuracy | NABirds Pruning | NABirds Tuned/Total | NABirds Accuracy |
|---|---|---|---|---|---|---|---|---|
| ✓ | | | 0.0% | 0.54% | 78.1% | 0.0% | 1.02% | 84.2% |
| ✓ | ✓ | | 0.0% | 0.55% | 83.8% | 0.0% | 1.05% | 84.5% |
| ✓ | | ✓ | 56.3% | 0.42% | 79.0% | 34.4% | 0.63% | 84.2% |
| ✓ | ✓ | ✓ | 62.5% | 0.43% | 85.3% | 40.0% | 0.65% | 84.6% |

Table 5. Prompt location and initialization results on VTAB-1k [96] over three runs. Per-task results are available in the Appendix.

| ViT-Base/16 [12] (85.8M) | | Natural [7] | Specialized [4] | Structured [8] |
|---|---|---|---|---|
| (a) | After | 80.67% [6] | 84.30% [3] | 56.76% [8] |
| (a) | Before | 80.01% [6] | 84.43% [3] | 57.39% [8] |
| (b) | Trunc. Norm. [67] | 79.77% [6] | 84.30% [3] | 56.36% [8] |
| (b) | He [25] | 80.01% [6] | 84.43% [3] | 57.39% [8] |

Prompt Location. A fundamental distinction between E2VPT and other methods is the learnable key-value prompts introduced in self-attention. In our implementation, we prepend the key-value prompts to the sequence of key and value matrices; determining the appropriate placement of the learnable prompts requires further investigation. We provide exhaustive ablation results on VTAB-1k in Table 5(a). Prepending the learnable prompts before, or appending them after, the key and value matrices both show competitive results, validating the robustness of our approach with respect to prompt locations. We choose "Before" as our baseline in all experiments since it achieves slightly better results on average (i.e., 73.94% vs. 73.91%).

Initialization. Table 5(b) reports the performance of our approach with two widely adopted initialization methods, truncated normal [61, 67] and He initialization [25], on the VTAB-1k benchmark. The results show that He initialization generally provides more stable and preferable performance on average, although truncated normal obtains slightly better results on some specific tasks (e.g., it is 1.1% higher in accuracy than He on VTAB-1k Specialized Diabetic Retinopathy Detection [13]). In conclusion, E2VPT is robust to different initialization methods and achieves performance consistent with full fine-tuning.
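The two initializations compared in Table 5(b) map onto standard PyTorch initializers; in this minimal sketch the std of the truncated normal is an assumption, not a value reported by the paper:

```python
import torch.nn as nn

def init_prompt(prompt, mode="he"):
    if mode == "he":
        nn.init.kaiming_normal_(prompt)          # He initialization [25]
    else:
        nn.init.trunc_normal_(prompt, std=0.02)  # truncated normal [61, 67]
```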
We vary the number of prompts for different combina-\ntions, and show their results on VTAB-1k Natural SVHN [62].\nconducted a comprehensive study on the lengths of visual\nprompts and key-value prompts for a better understanding\nof their characteristics on VTAB-1k Natural SVHN [62].\nThe length of visual prompts is typically limited to [5, 10,\n20, 30, 50], while the length of key-value prompts is re-\nstricted to [1, 5, 10, 50], which is a standard configuration\nfor most datasets. The model performance results on dif-\nferent prompt length combinations are reported in Fig. 4. It\ncan be seen that, when using 50 visual prompts, a relative\nshorter key-value prompt can benefit performance notably\n(i.e., 84.7% when introducing one key-value prompt vs\n78.1% without key-value prompts), while further increas-\ning the length of the key-value prompt yields a small perfor-\nmance gain ( i.e., 85.3% when using 5 key-value prompts).\nWe also notice that using a large number of key-value\nprompts lead to subpar results ( i.e., 80.2% with 20 key-\nvalue prompts). Similar patterns are observed with other\nvisual prompt lengths. We argue that a heavy parameter en-\ngineering in self-attention layer might distort the original\nattention map and does harm to adaptation.\n4.4. Visualization\nFollowing [2, 14, 17, 40, 68], we show hyperbolic visu-\nalizations results on training set for VPT and ours on three\ntasks in FGVC ( i.e., CUB-200-2011 [81], Oxford Flow-\ners [63], and Stanford Dogs [39]). Hyperbolic space, to bespecific, is a Riemannian manifold of constant negative cur-\nvature. While there are several isometric models of hyper-\nbolic space, we follow previous work [14, 17] and stick to\nthe Poincar ´e ball model. Similar to [14], we use UMAP [59]\nwith the “hyperboloid” distance metric to reduce the di-\nmensionality to 2D. ViT-Base plays as an encoder with two\ntypes of pretraining ( i.e., tuned models under VPT, and ours\nafter rewinding, respectively). We freeze the models during\nfine-tuning and output embeddings are mapped to hyper-\nbolic space. Adam optimizer [55] with a learning rate of\n3×10−5is applied to all settings. The weight decay is 0.01\nwith batch size equals to 900. All models are trained for 50\nsteps for fair comparison, with a gradient clip by norm 3.\nFig. 3 illustrates how learned embeddings are arranged\non the Poincar ´e disk. We can see that in E2VPT , samples\nare clustered according to labels, and each cluster is pushed\ncloser to the border of the disk, indicating that the encoder\nseparates class well. On the other hand, we observe in VPT\nthat some of the samples move towards the center and inter-\nmix [14], indicating possible confusion during projection.\nWe also follow [14, 68, 40] and present the Recall@K met-\nric in Appendix for reference. These visualization results\nfurther validate the effectiveness of the proposed E2VPT\napproach in generating separatable embeddings from the in-\nput images in the new tasks.\n5. Conclusion and Discussion\nThe vast majority of current efforts under the pretrain-\nthen-finetune paradigm seek to reduce parameter usage\nwhile overlooking the inner design of transformer-based ar-\nchitecture. In light of this view, we present E2VPT , a\nnew parameter-efficient visual prompt tuning approach to\nmodel the transformer architecture during adaptation. 
5. Conclusion and Discussion

The vast majority of current efforts under the pretrain-then-finetune paradigm seek to reduce parameter usage while overlooking the inner design of transformer-based architectures. In light of this view, we present E2VPT, a new parameter-efficient visual prompt tuning approach that models the transformer architecture during adaptation. It enjoys several advantages: i) it considers the self-attention mechanism during tuning, yielding performance superior to current parameter-efficient fine-tuning; and ii) it applies pruning and rewinding stages to reduce parameter usage in the input visual prompts. These systemic merits enable an effective yet efficient algorithm. As a whole, we conclude that the outcomes elucidated in this paper impart essential understandings and warrant further exploration in this realm.

Acknowledgements. This research was supported by the National Science Foundation under Grant No. 2242243.

References
[1] Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, and Cordelia Schmid. Vivit: A video vision transformer. In ICCV, 2021. 3
[2] Mina Ghadimi Atigh, Julian Schoep, Erman Acar, Nanne Van Noord, and Pascal Mettes. Hyperbolic image segmentation. In CVPR, 2022. 8
[3] Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. Beit: Bert pre-training of image transformers. In ICLR, 2022. 2
[4] Josh Beal, Eric Kim, Eric Tzeng, Dong Huk Park, Andrew Zhai, and Dmitry Kislyuk. Toward transformer-based object detection. arXiv preprint arXiv:2012.09958, 2020. 2
[5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In NeurIPS, 2020. 1, 2
[6] Han Cai, Chuang Gan, Ligeng Zhu, and Song Han. Tinytl: Reduce memory, not parameters for efficient on-device learning. In NeurIPS, 2020. 1, 2, 3, 6, 7
[7] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, 2020. 2
[8] Chun-Fu Richard Chen, Quanfu Fan, and Rameswar Panda. Crossvit: Cross-attention multi-scale vision transformer for image classification. In ICCV, 2021. 3
[9] Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. In CVPR, 2020. 3, 6
[10] Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. In ICCV, 2021. 2, 3, 5, 6, 7
[11] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, 2018. 2
[12] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021. 1, 2, 3, 5, 6, 7
[13] Emma Dugas, Jared Jorge, Will Cukierski. Diabetic retinopathy detection, 2015. 7
[14] Aleksandr Ermolov, Leyla Mirvakhabova, Valentin Khrulkov, Nicu Sebe, and Ivan Oseledets. Hyperbolic vision transformers: Combining improvements in metric learning. In CVPR, 2022. 8
[15] Quentin Fournier, Gaétan Marceau Caron, and Daniel Aloise. A practical survey on faster and lighter transformers. arXiv preprint arXiv:2103.14636, 2021. 2
[16] Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In ICLR, 2019. 2, 5
[17] Octavian Ganea, Gary Bécigneul, and Thomas Hofmann. Hyperbolic neural networks. In NeurIPS, 2018. 8
[18] Yunhe Gao, Xingjian Shi, Yi Zhu, Hao Wang, Zhiqiang Tang, Xiong Zhou, Mu Li, and Dimitris N Metaxas.
Vi-\nsual prompt tuning for test-time domain adaptation. arXiv\npreprint arXiv:2210.04831 , 2022. 3\n[19] Timnit Gebru, Jonathan Krause, Yilun Wang, Duyun Chen,\nJia Deng, and Li Fei-Fei. Fine-grained car detection for\nvisual census estimation. In AAAI , 2017. 5\n[20] Demi Guo, Alexander M Rush, and Yoon Kim. Parameter-\nefficient transfer learning with diff pruning. In ICML , 2021.\n1\n[21] Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen,\nJianyuan Guo, Zhenhua Liu, Yehui Tang, An Xiao, Chun-\njing Xu, Yixing Xu, et al. A survey on vision transformer.\nIEEE TPAMI , 2022. 2\n[22] Song Han, Jeff Pool, John Tran, and William Dally. Learn-\ning both weights and connections for efficient neural net-\nwork. NeurIPS , 2015. 2\n[23] Babak Hassibi and David Stork. Second order derivatives\nfor network pruning: Optimal brain surgeon. In NeurIPS ,\n1992. 2\n[24] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr\nDoll´ar, and Ross Girshick. Masked autoencoders are scal-\nable vision learners. In CVPR , 2022. 2, 3, 5, 6, 7\n[25] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.\nDelving deep into rectifiers: Surpassing human-level per-\nformance on imagenet classification. In ICCV , 2015. 7\n[26] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.\nDeep residual learning for image recognition. In CVPR ,\n2016. 1\n[27] Xuehai He, Chunyuan Li, Pengchuan Zhang, Jianwei Yang,\nand Xin Eric Wang. Parameter-efficient fine-tuning for vi-\nsion transformers. arXiv preprint arXiv:2203.16329 , 2022.\n2, 3\n[28] Yun He, Steven Zheng, Yi Tay, Jai Gupta, Yu Du, Vamsi\nAribandi, Zhe Zhao, YaGuang Li, Zhao Chen, Donald Met-\nzler, et al. Hyperprompt: Prompt-based task-conditioning\nof transformers. In ICML , 2022. 3\n[29] Lin Huang, Jianchao Tan, Ji Liu, and Junsong Yuan. Hand-\ntransformer: non-autoregressive structured modeling for 3d\nhand pose estimation. In ECCV , 2020. 2\n[30] Lin Huang, Jianchao Tan, Jingjing Meng, Ji Liu, and Jun-\nsong Yuan. Hot-net: Non-autoregressive transformer for 3d\nhand-object pose estimation. In ACMMM , 2020. 2\n[31] Mike Innes, Alan Edelman, Keno Fischer, Chris Rack-\nauckas, Elliot Saba, Viral B Shah, and Will Teb-\nbutt. A differentiable programming system to bridge ma-\nchine learning and scientific computing. arXiv preprint\narXiv:1907.07587 , 2019. 1\n[32] Eugenia Iofinova, Alexandra Peste, Mark Kurtz, and Dan\nAlistarh. How well do sparse imagenet models transfer? In\nCVPR , 2022. 3, 6, 7\n[33] Khawar Islam. Recent advances in vision transformer:\nA survey and outlook of recent work. arXiv preprint\narXiv:2203.01536 , 2022. 2\n[34] Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie,\nSerge Belongie, Bharath Hariharan, and Ser-Nam Lim. Vi-\nsual prompt tuning. In ECCV , 2022. 1, 2, 3, 5, 6, 7, 8\n[35] Menglin Jia, Zuxuan Wu, Austin Reiter, Claire Cardie,\nSerge Belongie, and Ser-Nam Lim. Exploring visual en-\ngagement signals for representation learning. In ICCV ,\n2021. 2, 3\n[36] Chen Ju, Tengda Han, Kunhao Zheng, Ya Zhang, and Weidi\nXie. Prompting visual-language models for efficient video\nunderstanding. In ECCV , 2022. 2\n[37] Lukasz Kaiser, Aidan N Gomez, Noam Shazeer, Ashish\nVaswani, Niki Parmar, Llion Jones, and Jakob Uszkoreit.\nOne model to learn them all. In ICML , 2017. 1\n[38] Salman Khan, Muzammal Naseer, Munawar Hayat,\nSyed Waqas Zamir, Fahad Shahbaz Khan, and Mubarak\nShah. Transformers in vision: A survey. ACM Comput-\ning Surveys , 54(10s):1–41, 2022. 2\n[39] Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng\nYao, and Fei-Fei Li. 
Novel dataset for fine-grained image\ncategorization: Stanford dogs. In CVPR Workshop , 2011.\n5, 8\n[40] Valentin Khrulkov, Leyla Mirvakhabova, Evgeniya Usti-\nnova, Ivan Oseledets, and Victor Lempitsky. Hyperbolic\nimage embeddings. In CVPR , 2020. 8\n[41] Yann LeCun, John Denker, and Sara Solla. Optimal brain\ndamage. In NeurIPS , 1989. 2\n[42] Brian Lester, Rami Al-Rfou, and Noah Constant. The\npower of scale for parameter-efficient prompt tuning. In\nEMNLP , 2021. 3\n[43] Changlin Li, Bohan Zhuang, Guangrun Wang, Xiaodan\nLiang, Xiaojun Chang, and Yi Yang. Automated progres-\nsive learning for efficient training of vision transformers. In\nCVPR , 2022. 2, 5\n[44] Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and\nHans Peter Graf. Pruning filters for efficient convnets.\narXiv preprint arXiv:1608.08710 , 2016. 2\n[45] Yanghao Li, Chao-Yuan Wu, Haoqi Fan, Karttikeya Man-\ngalam, Bo Xiong, Jitendra Malik, and Christoph Feichten-\nhofer. Mvitv2: Improved multiscale vision transformers for\nclassification and detection. In CVPR , 2022. 2\n[46] James Liang, Tianfei Zhou, Dongfang Liu, and Wenguan\nWang. Clustseg: Clustering for universal segmentation.\narXiv preprint arXiv:2305.02187 , 2023. 2\n[47] Jinfeng Lin, Yalin Liu, Qingkai Zeng, Meng Jiang, and\nJane Cleland-Huang. Traceability transformed: Generating\nmore accurate links with pre-trained bert models. In ICSE ,\n2021. 1\n[48] Kevin Lin, Lijuan Wang, and Zicheng Liu. End-to-end hu-\nman pose and mesh reconstruction with transformers. In\nCVPR , 2021. 2\n[49] Tianyang Lin, Yuxin Wang, Xiangyang Liu, and Xipeng\nQiu. A survey of transformers. AI Open , 2022. 2\n[50] Dongfang Liu, Yiming Cui, Yingjie Chen, Jiyong Zhang,\nand Bin Fan. Video object detection for autonomous\ndriving: Motion-aid feature calibration. Neurocomputing ,\n409:1–11, 2020. 2\n[51] Dongfang Liu, Yiming Cui, Wenbo Tan, and Yingjie Chen.\nSg-net: Spatial granularity network for one-stage video in-\nstance segmentation. In CVPR , 2021. 2[52] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar\nJoshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettle-\nmoyer, and Veselin Stoyanov. Roberta: A robustly opti-\nmized bert pretraining approach. In ICLR , 2020. 2\n[53] Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie,\nYixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong,\net al. Swin transformer v2: Scaling up capacity and resolu-\ntion. In CVPR , 2022. 2\n[54] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng\nZhang, Stephen Lin, and Baining Guo. Swin transformer:\nHierarchical vision transformer using shifted windows. In\nICCV , 2021. 1, 2, 3, 5, 6\n[55] Ilya Loshchilov and Frank Hutter. Decoupled weight decay\nregularization. In ICML , 2017. 8\n[56] Yawen Lu, Qifan Wang, Siqi Ma, Tong Geng, Yingjie Vic-\ntor Chen, Huaijin Chen, and Dongfang Liu. Transflow:\nTransformer as flow learner. In CVPR , 2023. 2\n[57] Fang Ma, Chen Zhang, Lei Ren, Jingang Wang, Qifan\nWang, Wei Wu, Xiaojun Quan, and Dawei Song. Xprompt:\nExploring the extreme of prompt tuning. In EMNLP , 2022.\n3, 5\n[58] Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan,\nKaiming He, Manohar Paluri, Yixuan Li, Ashwin\nBharambe, and Laurens Van Der Maaten. Exploring the\nlimits of weakly supervised pretraining. In ECCV , 2018. 2,\n3, 5\n[59] Leland McInnes, John Healy, and James Melville. Umap:\nUniform manifold approximation and projection for dimen-\nsion reduction. arXiv preprint arXiv:1802.03426 , 2018. 8\n[60] Paul Michel, Omer Levy, and Graham Neubig. Are sixteen\nheads really better than one? 
In NeurIPS , 2019. 5\n[61] Meenal V Narkhede, Prashant P Bartakke, and Mukul S\nSutaone. A review on weight initialization strategies for\nneural networks. Artificial Intelligence Review , 2022. 7\n[62] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bis-\nsacco, Bo Wu, and Andrew Y Ng. Reading digits in natural\nimages with unsupervised feature learning. 2011. 7, 8\n[63] Maria-Elena Nilsback and Andrew Zisserman. Automated\nflower classification over a large number of classes. In In-\ndian Conference on Computer Vision, Graphics & Image\nProcessing , 2008. 5, 8\n[64] Rohit Nishant, Mike Kennedy, and Jacqueline Corbett. Ar-\ntificial intelligence for sustainability: Challenges, opportu-\nnities, and a research agenda. International Journal of In-\nformation Management , 2020. 1\n[65] Mehdi Noroozi and Paolo Favaro. Unsupervised learning of\nvisual representations by solving jigsaw puzzles. In ECCV ,\n2016. 3\n[66] Xuran Pan, Zhuofan Xia, Shiji Song, Li Erran Li, and Gao\nHuang. 3d object detection with pointformer. In CVPR ,\n2021. 2\n[67] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer,\nJames Bradbury, Gregory Chanan, Trevor Killeen, Zeming\nLin, Natalia Gimelshein, Luca Antiga, Alban Desmaison,\nAndreas Kopf, Edward Yang, Zachary DeVito, Martin Rai-\nson, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner,\nLu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An\nimperative style, high-performance deep learning library. In\nNeurIPS , 2019. 5, 7\n[68] Wei Peng, Tuomas Varanka, Abdelrahman Mostafa,\nHenglin Shi, and Guoying Zhao. Hyperbolic deep neural\nnetworks: A survey. IEEE TPAMI , 2021. 8\n[69] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine\nLee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li,\nand Peter J Liu. Exploring the limits of transfer learning\nwith a unified text-to-text transformer. The Journal of Ma-\nchine Learning Research , 2020. 2\n[70] Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea\nVedaldi. Learning multiple visual domains with residual\nadapters. In NeurIPS , 2017. 2, 3, 6, 7\n[71] Yi Ren, Shangmin Guo, Wonho Bae, and Danica J Suther-\nland. How to prepare your task head for finetuning. In\nICLR , 2023. 6\n[72] Carlos Riquelme, Joan Puigcerver, Basil Mustafa, Maxim\nNeumann, Rodolphe Jenatton, Andr ´e Susano Pinto, Daniel\nKeysers, and Neil Houlsby. Scaling vision with sparse mix-\nture of experts. In NeurIPS , 2021. 2\n[73] Victor Sanh, Lysandre Debut, Julien Chaumond, and\nThomas Wolf. Distilbert, a distilled version of bert:\nsmaller, faster, cheaper and lighter. arXiv preprint\narXiv:1910.01108 , 2019. 1\n[74] Robin Strudel, Ricardo Garcia, Ivan Laptev, and Cordelia\nSchmid. Segmenter: Transformer for semantic segmenta-\ntion. In ICCV , 2021. 2\n[75] Nima Tajbakhsh, Jae Y Shin, Suryakanth R Gurudu, R Todd\nHurst, Christopher B Kendall, Michael B Gotway, and Jian-\nming Liang. Convolutional neural networks for medical\nimage analysis: Full training or fine tuning? IEEE Trans-\nactions on Medical Imaging , 2016. 1, 3\n[76] Grant Van Horn, Steve Branson, Ryan Farrell, Scott Haber,\nJessie Barry, Panos Ipeirotis, Pietro Perona, and Serge Be-\nlongie. Building a bird recognition app and large scale\ndataset with citizen scientists: The fine print in fine-grained\ndataset collection. In CVPR , 2015. 5\n[77] Grant Van Horn, Steve Branson, Ryan Farrell, Scott Haber,\nJessie Barry, Panos Ipeirotis, Pietro Perona, and Serge Be-\nlongie. 
Building a bird recognition app and large scale\ndataset with citizen scientists: The fine print in fine-grained\ndataset collection. In CVPR , 2015. 7\n[78] Aimee Van Wynsberghe. Sustainable ai: Ai for sustainabil-\nity and the sustainability of ai. AI and Ethics , 2021. 1\n[79] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser,\nand Illia Polosukhin. Attention is all you need. In NeurIPS ,\n2017. 2\n[80] Ricardo Vinuesa, Hossein Azizpour, Iolanda Leite, Made-\nline Balaam, Virginia Dignum, Sami Domisch, Anna\nFell¨ander, Simone Daniela Langhans, Max Tegmark, and\nFrancesco Fuso Nerini. The role of artificial intelligence in\nachieving the sustainable development goals. Nature Com-\nmunications , 2020. 1\n[81] Catherine Wah, Steve Branson, Peter Welinder, Pietro Per-\nona, and Serge Belongie. The caltech-ucsd birds-200-2011\ndataset. 2011. 5, 8[82] Huiyu Wang, Yukun Zhu, Hartwig Adam, Alan Yuille, and\nLiang-Chieh Chen. Max-deeplab: End-to-end panoptic\nsegmentation with mask transformers. In CVPR , 2021. 2\n[83] Wenguan Wang, Cheng Han, Tianfei Zhou, and Dongfang\nLiu. Visual recognition with deep nearest centroids. In\nICLR , 2022. 2\n[84] Wenguan Wang, James Liang, and Dongfang Liu. Learning\nequivariant segmentation with instance-unique querying. In\nNeurIPS , 2022. 2\n[85] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao\nSong, Ding Liang, Tong Lu, Ping Luo, and Ling Shao.\nPyramid vision transformer: A versatile backbone for dense\nprediction without convolutions. In ICCV , 2021. 3\n[86] Yuqing Wang, Zhaoliang Xu, Xinlong Wang, Chunhua\nShen, Baoshan Cheng, Hao Shen, and Huaxia Xia. End-\nto-end video instance segmentation with transformers. In\nCVPR , 2021. 2\n[87] Carole-Jean Wu, Ramya Raghavendra, Udit Gupta, Bilge\nAcun, Newsha Ardalani, Kiwan Maeng, Gloria Chang,\nFiona Aga, Jinshi Huang, Charles Bai, et al. Sustainable\nai: Environmental implications, challenges and opportuni-\nties. Proceedings of Machine Learning and Systems , 2022.\n1\n[88] Yinghui Xing, Qirui Wu, De Cheng, Shizhou Zhang, Guo-\nqiang Liang, and Yanning Zhang. Class-aware visual\nprompt tuning for vision-language pre-trained model. arXiv\npreprint arXiv:2208.08340 , 2022. 3\n[89] Li Yang, Qifan Wang, Jingang Wang, Xiaojun Quan, Fuli\nFeng, Yu Chen, Madian Khabsa, Sinong Wang, Zenglin Xu,\nand Dongfang Liu. Mixpave: Mix-prompt tuning for few-\nshot product attribute value extraction. In ACL, 2023. 3\n[90] Sen Yang, Zhibin Quan, Mu Nie, and Wankou Yang. Trans-\npose: Keypoint localization via transformer. In ICCV , 2021.\n2\n[91] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lip-\nson. How transferable are features in deep neural networks?\nInNeurIPS , 2014. 1, 3, 6, 7\n[92] Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv\nKumar, Srinadh Bhojanapalli, Xiaodan Song, James Dem-\nmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch opti-\nmization for deep learning: Training bert in 76 minutes. In\nICLR , 2020. 1\n[93] Zhenxun Yuan, Xiao Song, Lei Bai, Zhe Wang, and Wanli\nOuyang. Temporal-channel transformer for 3d lidar-based\nvideo object detection for autonomous driving. IEEE\nTransactions on Circuits and Systems for Video Technology ,\n2021. 2\n[94] Yuhang Zang, Wei Li, Kaiyang Zhou, Chen Huang, and\nChen Change Loy. Unified vision and language prompt\nlearning. arXiv preprint arXiv:2210.07225 , 2022. 2\n[95] Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and\nLucas Beyer. Scaling vision transformers. 
In CVPR , 2022.\n2\n[96] Xiaohua Zhai, Joan Puigcerver, Alexander Kolesnikov,\nPierre Ruyssen, Carlos Riquelme, Mario Lucic, Josip Djo-\nlonga, Andre Susano Pinto, Maxim Neumann, Alexey\nDosovitskiy, et al. A large-scale study of representation\nlearning with the visual task adaptation benchmark. arXiv\npreprint arXiv:1910.04867 , 2019. 1, 5, 6, 7\n[97] Cheng Zhang, Haocheng Wan, Xinyi Shen, and Zizhao Wu.\nPatchformer: An efficient point transformer with patch at-\ntention. In CVPR , 2022. 2\n[98] Jeffrey O Zhang, Alexander Sax, Amir Zamir, Leonidas\nGuibas, and Jitendra Malik. Side-tuning: a baseline for\nnetwork adaptation via additive side networks. In ECCV ,\n2020. 2, 3, 6\n[99] Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful\nimage colorization. In ECCV , 2016. 3\n[100] Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu,\nZekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao\nXiang, Philip HS Torr, et al. Rethinking semantic segmen-\ntation from a sequence-to-sequence perspective with trans-\nformers. In CVPR , 2021. 2\n[101] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang,\nand Jifeng Dai. Deformable detr: Deformable transformers\nfor end-to-end object detection. In ICLR , 2021. 2\n[102] Bohan Zhuang, Jing Liu, Zizheng Pan, Haoyu He, Yuetian\nWeng, and Chunhua Shen. A survey on efficient training of\ntransformers. arXiv preprint arXiv:2302.01107 , 2023. 2, 5",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "KsizCjtrSx",
"year": null,
"venue": "CoRR 2023",
"pdf_link": "http://arxiv.org/pdf/2307.13770v1",
"forum_link": "https://openreview.net/forum?id=KsizCjtrSx",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E^2VPT: An Effective and Efficient Approach for Visual Prompt Tuning",
"authors": [
"Cheng Han",
"Qifan Wang",
"Yiming Cui",
"Zhiwen Cao",
"Wenguan Wang",
"Siyuan Qi",
"Dongfang Liu"
],
"abstract": "As the size of transformer-based models continues to grow, fine-tuning these large-scale pretrained vision models for new tasks has become increasingly parameter-intensive. Parameter-efficient learning has been developed to reduce the number of tunable parameters during fine-tuning. Although these methods show promising results, there is still a significant performance gap compared to full fine-tuning. To address this challenge, we propose an Effective and Efficient Visual Prompt Tuning (E^2VPT) approach for large-scale transformer-based model adaptation. Specifically, we introduce a set of learnable key-value prompts and visual prompts into self-attention and input layers, respectively, to improve the effectiveness of model fine-tuning. Moreover, we design a prompt pruning procedure to systematically prune low importance prompts while preserving model performance, which largely enhances the model's efficiency. Empirical results demonstrate that our approach outperforms several state-of-the-art baselines on two benchmarks, with considerably low parameter usage (e.g., 0.32% of model parameters on VTAB-1k). Our code is available at https://github.com/ChengHan111/E2VPT.",
"keywords": [],
"raw_extracted_content": "E2VPT: An Effective and Efficient Approach for Visual Prompt Tuning\nCheng Han1, Qifan Wang2, Yiming Cui3, Zhiwen Cao4, Wenguan Wang5, Siyuan Qi6, Dongfang Liu1*\nRochester Institute of Technology1, Meta AI2, University of Florida3, Purdue University4\nZhejiang University5, BIGAI6†\n{ch7858, dongfang.liu }@rit.edu, [email protected], [email protected], [email protected]\[email protected], [email protected]\nAbstract\nAs the size of transformer-based models continues to\ngrow, fine-tuning these large-scale pretrained vision models\nfor new tasks has become increasingly parameter-intensive.\nParameter-efficient learning has been developed to reduce\nthe number of tunable parameters during fine-tuning. Al-\nthough these methods show promising results, there is still\na significant performance gap compared to full fine-tuning.\nTo address this challenge, we propose an Effective and Ef-\nficient Visual Prompt Tuning (E2VPT) approach for large-\nscale transformer-based model adaptation. Specifically, we\nintroduce a set of learnable key-value prompts and visual\nprompts into self-attention and input layers, respectively, to\nimprove the effectiveness of model fine-tuning. Moreover,\nwe design a prompt pruning procedure to systematically\nprune low importance prompts while preserving model per-\nformance, which largely enhances the model’s efficiency.\nEmpirical results demonstrate that our approach outper-\nforms several state-of-the-art baselines on two benchmarks,\nwith considerably low parameter usage ( e.g., 0.32% of\nmodel parameters on VTAB-1k). Our code is available at\nhttps://github.com/ChengHan111/E2VPT.\n1. Introduction\nThe development of artificial intelligence (AI) should\nnot only prioritize performance advances, but also empha-\nsize sustainable deployment [64, 78, 80, 87]. Despite the\ncaptivating pursuit of performance improvements in visual-\nrelated tasks, the size of present models has been rapidly\nincreasing, resulting in energy-intensive and computation-\nally expensive training [31, 73, 92]. Transformer-based ar-\nchitectures currently dominate visual-related models, such\nas ViT-Huge [12] (632M) and Swin-Large [54] (197M),\nwith significantly more parameters than the Convolutional\n*Corresponding author\n†National Key Laboratory of General Artificial Intelligence, Beijing\nInstitute for General Artificial Intelligence\n...\n......\n...\n......\n(d) Ours\n (c) Prompt tuning\n...\n(a) Partial tuning\n...\n(b) Extra module\nAccuracy (%)\nPrompt T .\n OursFull F. T.\nPartial T.\nExtra M.\nTunable Parameters (%)100 101 102 4050607080\n10-1 VTAB -1k Natural\nVTAB -1k StructuredVTAB -1k SpecializedLN\nL2\nL1LN\nL2\nL1LN\nL2\nL1LN\nL2\nL1Figure 1. E2VPT (ours) vsconcurrent arts (i.e.,\n partial tun-\ning [91],\n extra module [6], and\n prompt tuning [34] meth-\nods) under pretrain-then-finetune paradigm. Our method yields\nsolid performance gains over state-of-the-art fine-tuning methods\nand competitive to full fine-tuning on a wide range of classifica-\ntion tasks adapting the pretrained ViT-Base/16 [12] as backbone\nwith considerable lower parameter usage (see Table 1).\ncolors represent results on VTAB-1k [96] Specialized ,Natural and\nStructure , respectively.\nNeural Networks (CNN) like ResNet [26] (25M). Training\nsuch large models from scratch presents challenges such as\nlimited data [5, 20, 75] and slow convergence at low ac-\ncuracy [37, 47]. 
A common paradigm to overcome these challenges is pretrain-then-finetune, which reduces the need for vast amounts of training data and speeds up the processing of various visual tasks. However, traditional full fine-tuning involves storing and deploying a complete copy of the backbone parameters for every single task [34], which remains computationally expensive and not suitable for fast model deployment.

To address this issue, various approaches have been developed, which can be divided into three main categories (see Fig. 1): partial tuning, extra module, and prompt tuning methods. Partial tuning methods [10, 35, 58] only fine-tune part of the backbone, such as the classifier head or the last few layers, while freezing the others. Extra module methods insert learnable bias terms [6] or additional adapters [70, 98] into the network for adaptation. Prompt tuning methods add prompt tokens [34, 36, 94] to the input layer of the transformer without changing or fine-tuning the backbone itself. All of these methods operate within the pretrain-then-finetune paradigm, which reduces the number of learnable parameters compared to full fine-tuning [10, 35, 58, 70, 98]. However, despite achieving promising results, there are two main limitations in existing parameter-efficient methods. Firstly, they do not scrutinize the core architecture of the transformer's self-attention mechanism, resulting in a large performance gap with full fine-tuning. Secondly, they usually need to fine-tune a relatively large number of parameters to achieve reasonable performance and fail to explore the extremes of parameter efficiency.

The perspective outlined above leads to two fundamental questions: ❶ How can we establish the effectiveness of prompt tuning for large-scale transformer-based vision models? ❷ How can we explore the extremes of parameter efficiency to reduce the number of tunable parameters? These two questions are the foundation of our work. The intuition is that instead of solely focusing on modifying inputs, as in previous prompt tuning methods, we should explicitly investigate the potential of improving the self-attention mechanism during fine-tuning, and explore the extremes of parameter efficiency.

In response to question ❶, we discuss and analyze the self-attention mechanism of the transformer, which is crucial in capturing long-range token dependencies within a global context [21, 38, 49]. In addition to the input visual prompts, we introduce learnable key-value prompts and integrate them into the Key and Value matrices in the self-attention layers. The key-value prompts are jointly learned with the input visual prompts during fine-tuning. This approach effectively leverages the well-designed prompt architecture of the transformer, resulting in significant performance improvements. Moreover, it provides a generic plug-and-play prompt module for current transformer architectures, and its fine-tuning solution is conceptually different from all the aforementioned arts in the vision domain.

Motivated by ❷, we propose a pruning strategy to further reduce the number of parameters while maintaining model performance.
Our approach draws inspiration from\nthe lottery ticket hypothesis (LTH) [16, 102], which posits\nthat for a given task, there exists a sub-network that can\nmatch the test accuracy of the original over-parameterized\nnetwork without the unnecessary weights [22, 23, 41, 43,\n44]. Building on this paradigm, we revisit the core designof prompt tuning methods and further reduce the number\nof learnable parameters. Specifically, we aim to retain the\nprompt tokens that contribute significantly to the perfor-\nmance, while pruning the prompt tokens that are redundant\nor unnecessary during fine-tuning. By pruning these unnec-\nessary prompts, we can significantly improve the prompt\ntuning efficiency while maintaining the performance.\nTo answer question ❶-❷, we propose E2VPT , namely\nEffective and Efficient Visual Prompt Tuning. E2VPT is\na novel prompt tuning framework that is both architecture-\naware and pruning-anchored (see Fig. 1). In §2, we con-\nduct a literature review and discuss relevant works. Our\nproposed approach is presented in §3, where we describe\nin detail how we design visual and key-value prompts\nto achieve superior performance with fewer parameters.\nIn §4, we present compelling experimental results on various\nbenchmarks, backbones, and different pretraining objectives.\nSpecifically, our approach achieves an average improvement\nof5.85% in accuracy on VTAB-1k compared to full fine-\ntuning, and 1.99% compared to VPT [34]. Moreover, our\napproach uses considerably fewer learnable parameters than\nexisting methods, accounting for an average of only 0.32%\nof the backbone parameters on VTAB-1k, whereas VPT on\naverage requires 0.68% (see Fig. 1). We further demonstrate\nand explain the superiority of our approach over VPT with\nhyperbolic visualization. Finally, we demonstrate the strong\nalgorithmic generalization of our approach to the language\ndomain in the Appendix. We trust that this work provides\nvaluable insights into related fields.\n2. Related Work\n2.1. Vision Transformers\nInspired by the remarkable success of transformers in\nnatural language processing (NLP) [5, 11, 52, 69, 79, 83],\nresearchers have extended the transformer architecture to\nvarious supervised vision tasks, including image classifi-\ncation [12, 53, 54, 56], image segmentation [46, 51, 74,\n82, 84, 86, 100], object detection [4, 7, 50, 66, 93, 101]\nand pose estimation [29, 30, 48, 90]). Self-supervised pre-\ntraining paradigms [3, 10, 24] has also been explored, lead-\ning to state-of-the-art results. transformers dominate in\nvisual-related disciplines due to their superior performance\nand scalability compared to convolutional neural networks\n(CNNs) [27, 34]. However, the significant computational\nand parameter overhead required to adapt transformers to\nvarious vision tasks cannot be ignored [15, 33, 97]. For in-\nstance, recent transformer-based models such as MViTv2-\nLarge [45] (218M), ViT-G [95] (1.8B), SwinV2-G [53]\n(3.0B), and V-MoE [72] (14.7B) incur substantial compu-\ntational costs. Therefore, we propose E2VPT , which is\ndesigned to reduce the computational cost of transformer-\nbased architectures while maintaining high performance in\nthepretrain-then-finetune paradigm.\n2.2. Parameter-efficient Fine-tuning\nEfficient model training has drawn much attention in\nthe vision community, particularly with the rise of Vision\nTransformers [1, 8, 12, 54, 85]. 
However, despite their effectiveness and widespread use, these models are often too large for practical deployment and adaptation. As a result, the pretrain-then-finetune paradigm is commonly employed. While full fine-tuning ensures strong performance, it is an expensive approach that involves updating all network parameters [27, 75]. To overcome this challenge, researchers are exploring alternatives that balance parameter efficiency and robust performance, which can be broadly categorized into three groups: partial tuning, extra module and prompt tuning methods.

Partial tuning methods are widely used for parameter-efficient fine-tuning. These methods freeze most of the backbone and fine-tune only a small portion of the parameters, such as linear [32] or MLP heads [9], or a few blocks/layers of the backbone [24, 65, 91, 99]. While these methods are straightforward and simple to implement [10, 35, 58], they often exhibit a large performance gap compared to full fine-tuning. Extra module methods design additional learnable plug-in architectures for fine-tuning. For example, the work in [98] introduces a side structure while freezing the original network, and the works in [6, 70] insert additional residual units into the backbone. However, one drawback of these methods is that the inserted modules are often customized for specific architectures and may not generalize to others. Additionally, these modules usually consume even more parameters than partial tuning methods. Prompt tuning, or prompting [28, 42, 57, 89], was originally proposed for fast model adaptation in the language domain. These methods prepend a set of learnable vectors to the input of the backbone and only update these task-specific prompts during fine-tuning. Recently, visual-related prompting [18, 34, 88] has been introduced in the vision domain; it designs visual prompts in the input sequence and shows competitive performance with full fine-tuning. However, current methods do not consider the inner design of transformer-based architectures, resulting in less effective prompting solutions. In contrast, our approach is mindful of the architecture and anchored on pruning, which conceptually sets it apart from the methods discussed above.

3. Our E2VPT Approach

In this section, we introduce E2VPT, a novel visual prompt tuning approach for effective and efficient large-scale transformer-based model fine-tuning. We first define the problem and notations in §3.1. Effective prompt tuning with the design of visual and key-value prompts is presented in §3.2, followed by efficient prompt pruning in §3.3. The overall framework is shown in Fig. 2.

[Figure 2. Overview of our E2VPT framework. Under the pretrain-then-finetune paradigm, only the prompts in the transformer's input and backbone (§3.2) are updated during the fine-tuning process, while all other components remain frozen. We further introduce pruning (§3.3) at two levels of granularity (i.e., token-wise and segment-wise) in (d) to eliminate unfavorable input prompts during rewinding. Panels: (a) self-attention layer, (b) multi-head attention (MSA), (c) transformer encoder layer, (d) token-wise and segment-wise pruning, (e) effective and efficient visual prompt tuning.]

3.1. Problem Definition

In this section, we define the problem of E2VPT and provide the notations. Assume we have a backbone vision transformer model $T$, pretrained on a large set of data and tasks. The input to the vision transformer is a sequence of image patches $I = \{I_1, I_2, \ldots, I_m\}$, where $m$ is the total number of image patches. Each patch is then projected into a $d$-dimensional embedding with positional encoding, i.e., $E = \{E_j \mid 1 \le j \le m\}$ with $E_j = \mathrm{Emb}(I_j)$. The vision transformer $T$ consists of $N$ identical transformer layers, represented as:

$$Z_1 = L_1(E), \qquad Z_i = L_i(Z_{i-1}), \quad i = 2, 3, \ldots, N \tag{1}$$
where each transformer layer is a stack of a multi-head self-attention (MSA) module and a feed-forward network (FFN):

$$L(\cdot) = \mathrm{FFN}(\mathrm{MSA}(\cdot)) \tag{2}$$

Given a new vision task, the objective is to fine-tune a model $\hat{T}$ that delivers good performance on the task while tuning only a small number of parameters. In the context of visual prompt tuning, $\hat{T} = \{T, P\}$, which includes a frozen backbone $T$ and trainable prompts $P$ with very few tunable parameters.

3.2. Effective Prompting

Most existing prompt tuning approaches focus on tuning a set of visual prompts by prepending them to the input sequence in transformer layers, without considering the internal design of transformer architectures. However, to enhance the effectiveness of prompt tuning and achieve optimal fine-tuning performance, we propose a new approach that incorporates a set of key-value prompts ($P_K$ and $P_V$) in addition to the input visual prompts ($P_I$) within our visual prompt tuning framework. Intuitively, the input visual prompts are inserted into the input sequence of each encoder layer and learn a representation of the new task, while the key-value prompts are concatenated with the key and value parameter matrices in the self-attention module and learn to capture the new attention pattern from the data.

Visual Prompts. Visual prompts are a set of $d$-dimensional embedding vectors with the same dimensionality as the input visual tokens. They are prepended to the input sequence of each transformer encoder layer and interact with all the input tokens. Visual prompts play a similar role to the prompt tokens in traditional prompt tuning methods [34, 42], learning task-specific embeddings that guide the model on the new task.

Formally, these visual prompts are defined as $P_I = \{P_I^1, P_I^2, \ldots, P_I^N\}$, where $P_I^i$ denotes the learnable visual prompts in the $i$th encoder layer and $N$ is the total number of layers. The encoder layers are then represented as:

$$Z_1 = L_1(P_I^1, E), \qquad Z_i = L_i(P_I^i, Z_{i-1}), \quad i = 2, 3, \ldots, N \tag{3}$$

where $Z_i$ represents the contextual embeddings computed by the $i$th encoder layer. In Eq. 3, the prompts $P_I^i$ are trainable while the encoder layers $L_i$ remain frozen. The embeddings of the input image patches $E$ are initialized with the frozen $\mathrm{Emb}$ projection from the backbone.
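To make Eq. 3 concrete, the following is a minimal PyTorch sketch of prompt prepending. The paper reports a PyTorch implementation, but the class and variable names below are our own illustration rather than the authors' released code, and `frozen_layer` is assumed to be any module mapping (batch, seq_len, dim) to the same shape.

```python
import torch
import torch.nn as nn

class PromptedEncoderLayer(nn.Module):
    """Illustrative wrapper for Eq. 3: learnable visual prompts are
    prepended to the input sequence of a frozen encoder layer."""

    def __init__(self, frozen_layer: nn.Module, num_prompts: int, dim: int):
        super().__init__()
        self.layer = frozen_layer
        for p in self.layer.parameters():          # backbone stays frozen
            p.requires_grad = False
        # P_I^i: the only trainable parameters in this layer
        self.prompts = nn.Parameter(torch.empty(num_prompts, dim))
        nn.init.kaiming_normal_(self.prompts)      # He init, cf. Table 5(b)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, seq_len, dim) embeddings from the previous layer
        p = self.prompts.unsqueeze(0).expand(z.shape[0], -1, -1)
        out = self.layer(torch.cat([p, z], dim=1))
        # drop the prompt positions so each layer inserts fresh prompts
        return out[:, self.prompts.shape[0]:, :]
```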
Key-Value Prompts. Visual prompts are useful for learning knowledge about new tasks. However, they are insufficient for guiding information interaction within transformer encoder layers. The reason is that when fine-tuning on new data, the image distribution may differ significantly from that of the images used for pretraining the backbone. As a result, it is crucial to enhance the model's capability to capture new information from the fine-tuning data and to conduct more effective attention among input tokens to learn new patterns.

To this end, we introduce a novel set of key-value prompts, $P_K$ and $P_V$, which are incorporated into the attention module within each encoder layer (see Fig. 2(a)). These key-value prompts are small matrices that have only a few columns but share the same number of rows as the key and value matrices in the original attention module. To perform the new attention computations, the key and value matrices are concatenated with their corresponding $P_K$ and $P_V$ prompts, respectively. This process is defined as follows:

$$L(\cdot) = \mathrm{FFN}(\mathrm{MSA}(\cdot)), \qquad \mathrm{MSA}(\cdot) = \mathrm{concat}\Big(\mathrm{softmax}\Big(\frac{Q_h {K'_h}^{\top}}{\sqrt{d}}\Big) V'_h\Big) \tag{4}$$

where FFN is the feed-forward network, MSA is the multi-head attention inside the encoder layer, and $h$ indexes the $h$th head. $K'$ and $V'$ are the new key and value embedding matrices, defined as:

$$K' = \mathrm{concat}(K, P_K), \qquad V' = \mathrm{concat}(V, P_V) \tag{5}$$

where $K$ and $V$ represent the original key and value matrices in the backbone. In this way, the key-value prompts help guide the model's adaptation to the new data. In our implementation, we go a step further and share the $P_K$ and $P_V$ prompts within each transformer layer instead of tuning separate learnable vectors. Our motivation is twofold: first, our experimental results show that with shared prompts, the fine-tuning performance consistently improves across instances; second, using shared prompt vectors halves the parameter usage in the learnable transformer part, making the method more parameter-efficient. We discuss the prompt locations (i.e., before or after $K$ and $V$) in §4.3.

It is worth noting that the query matrix $Q$ is another critical element in the self-attention mechanism. However, additional prompting on $Q$ is not desirable for two reasons. First, prompting on $Q$ is similar to prepending to $K$ when computing attention scores between each pair of $Q$ and $K$, so prompting on both $Q$ and $K$ is unnecessary. Second, changes in $Q$ affect the output shape of the attention map, necessitating an additional linear projection for the unmatched dimensions in the following layer, which is not affordable under the parameter-efficient design. More experiments and discussions are provided in the Appendix.
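A single-head sketch of Eqs. 4–5 follows; the shapes and the shared-prompt choice mirror the text above, but the function itself is our illustration, not the authors' code.

```python
import torch
import torch.nn.functional as F

def kv_prompted_attention(q, k, v, p_kv):
    """Concatenate a shared learnable prompt p_kv to the key and value
    matrices of one attention head (Eqs. 4-5).

    q, k, v: (batch, seq_len, d)   frozen projections from the backbone
    p_kv:    (num_kv_prompts, d)   the only trainable tensor here
    """
    b = q.shape[0]
    p = p_kv.unsqueeze(0).expand(b, -1, -1)      # broadcast over batch
    k_prime = torch.cat([k, p], dim=1)           # K' = concat(K, P_K)
    v_prime = torch.cat([v, p], dim=1)           # V' = concat(V, P_V), shared
    d = q.shape[-1]
    attn = F.softmax(q @ k_prime.transpose(-2, -1) / d ** 0.5, dim=-1)
    return attn @ v_prime                        # output keeps q's shape
```

Note that, unlike prompting on $Q$, concatenating prompts to $K$ and $V$ leaves the output shape identical to the query's, which is exactly why no extra projection layer is needed.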
3.3. Efficient Prompting

Our effective prompting aims to enhance the performance of the fine-tuned model. However, a natural question arises: can we reduce the number of tunable prompts without sacrificing model performance? The lottery ticket hypothesis (LTH) [16, 102] states that for a given task there exists a sub-network that can achieve the same test performance as the original over-parameterized network, without the need for unnecessary weights. Motivated by this hypothesis, we conducted an experiment in which we masked different visual prompts and found that different prompts have varying effects on model performance, with some even having a negative impact. This observation is consistent with previous research [43, 57].

Based on these findings, we propose a prompt pruning method for visual prompts. Its primary objective is to retain the most influential prompts while eliminating redundant or unnecessary ones. By removing less important prompts, we can significantly improve the efficiency of prompt tuning while maintaining performance.

To achieve this goal, we design a cascade pruning strategy that operates at two levels of granularity, namely token-wise pruning and segment-wise pruning, as illustrated in Fig. 2(d). Token-wise pruning first identifies and removes the least important visual prompts. After this step, segment-wise pruning divides each remaining prompt into multiple segments and filters out negative segments. By jointly reducing the parameter usage of the learnable visual prompts, our two-level pruning approach creates soft filtered prompts that can be re-trained in the rewinding stage.

Token-wise Pruning. We introduce a learnable mask variable $\rho = \{\rho_1, \rho_2, \ldots, \rho_M\}$ ($M$ is the length of the visual prompts) and associate it with the input visual prompts in each transformer layer. Here $\rho_k \in \{0, 1\}$, where 0 means the corresponding learnable input prompt is pruned. The masked version of the visual prompts then becomes $\tilde{P}_k = \rho_k \cdot P_k$. To determine the pruning positions, we calculate the importance score [16, 57] of each prompt token and eliminate the positions with the lowest scores. The importance score is defined as the expected sensitivity of the model to the mask variables $\rho_k$ [60]:

$$S_{P_k} = \mathbb{E}_{x \sim D_x} \left| \frac{\partial \mathcal{L}(x)}{\partial \rho_k} \right| \tag{6}$$

where $\mathcal{L}$ is the loss function and $D_x$ is the training data distribution [60]. The importance score assigned to each visual prompt reflects its contribution to the fine-tuning performance. A low importance score indicates that the prompt makes a minor or even negative contribution to fine-tuning; conversely, a high importance score suggests that the prompt is meaningful and contributes significantly.

Segment-wise Pruning. We further apply segment-wise pruning to filter out negative prompt segments within each prompt. The embedding of each prompt token is first divided equally into $R$ parts, and each part is treated as an isolated unit that can be optimized jointly. Similar to token-wise pruning, we then assign a mask variable to each segment inside the prompt token and filter out the segments with low importance scores.

Rewinding. After the two-level cascade pruning, the weight rewinding stage re-trains the soft filtered prompt tokens. This involves ranking the importance scores within each layer during the pruning stage and setting the corresponding mask variables to 0 when their importance scores are relatively low. The soft filtered input prompts are then re-trained along with the other learnable parameters, using the original combination of learning rate and weight decay during fine-tuning.
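A sketch of the importance-score estimation in Eq. 6 is given below. It assumes mask tensors `rho` (one per layer) that require gradients and are multiplied into the prompts during the model's forward pass; all names are illustrative, not the authors' actual API.

```python
import torch

def prompt_importance_scores(model, prompt_masks, data_loader, loss_fn):
    """Estimate each prompt token's importance as the expected absolute
    gradient of the loss w.r.t. its mask variable (Eq. 6)."""
    scores = [torch.zeros_like(m) for m in prompt_masks]
    n_batches = 0
    for x, y in data_loader:
        loss = loss_fn(model(x), y)
        grads = torch.autograd.grad(loss, prompt_masks)
        for s, g in zip(scores, grads):
            s += g.abs()                       # |dL/d rho_k|
        n_batches += 1
    return [s / n_batches for s in scores]     # expectation over the data

# Pruning then zeroes out the masks with the lowest scores, and rewinding
# re-trains the surviving prompts with the original hyperparameters.
```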
4. Experiment

4.1. Experimental Setup

Datasets. Our experiments are carried out on two image classification benchmarks. VTAB-1k [96] collects 19 benchmarked visual task adaptation tasks, categorized into three groups: (1) Natural contains natural images captured by standard cameras, (2) Specialized includes images taken by specialized equipment, and (3) Structured covers tasks requiring geometric comprehension (i.e., counting, distance). Each VTAB-1k task contains 1000 training examples. Following [34, 96], we apply the 800-200 split of the training set for hyperparameter tuning, and the final run is trained on the full training data. FGVC contains 5 benchmarked fine-grained visual classification tasks: CUB-200-2011 [81], NABirds [76], Oxford Flowers [63], Stanford Dogs [39] and Stanford Cars [19]. Following [34], the training set is randomly split into 90% train and 10% val, and we use val for hyperparameter tuning.

Baselines. For fair comparison, we follow [34] and compare E2VPT with other widely applied parameter-efficient fine-tuning methods. Results of two vision transformer architectures, Vision Transformer [12] (ViT) and Swin Transformer [54] (Swin), on image classification are discussed in §4.2. We also apply E2VPT to two self-supervised objectives: MAE [24] and MoCo v3 [10].

Training. Following [34, 58], we conduct a grid search to find the best tuning hyperparameters — learning rate (i.e., [50, 25, 10, 5, 2.5, 1, 0.5, 0.25, 0.1, 0.05]) and weight decay (i.e., [0.01, 0.001, 0.0001, 0.0]) — on the val set of each task. Notably, E2VPT does not require the specially designed large learning rate of [34]. For all models, the learning rate follows a cosine decay schedule and training runs for 100 epochs (including 10 warm-up epochs). We follow the same batch size settings as [34]: 64/128 for ViT-Base/16 and 80 for Swin-Base, respectively. The number of segments per token (§3.3) is set to 8, and the pruning percentages are searched linearly between 10% and 90% with 10% intervals. The rewinding stage is applied once to re-train the pruned input prompts.

Reproducibility. E2VPT is implemented in PyTorch [67]. Experiments are conducted on NVIDIA A100-40GB GPUs. To guarantee reproducibility, our full implementation will be publicly released.

Table 1. Image classification accuracy for ViT-Base/16 [12] pretrained on supervised ImageNet-21k. Following [34], we report the average test accuracy (three runs) on the FGVC [34] (5 tasks) and VTAB-1k [96] (19 tasks) benchmarks, and the "Number of Wins" in [·] compared to full fine-tuning (Full) [32]. "Tuned/Total" is the average percentage of tuned parameters required over the 24 tasks. The "Scope & extra params" column marks each method's tuning scope (input and/or backbone) and whether it adds parameters beyond the pretrained backbone and linear head. The highest accuracy among all approaches except Full is shown in bold. E2VPT outperforms full fine-tuning in 19 of 24 instances with far fewer trainable parameters. We further report the "Number of Wins" against VPT in {·}: our method beats VPT in 21 of 24 cases with considerably fewer parameters. Per-task results are available in the Appendix; the same applies to Tables 2 and 3.

| ViT-Base/16 [12] (85.8M) | Tuned/Total | Scope & extra params | FGVC [34] (5) | Natural (7) | Specialized (4) | Structured (8) |
|---|---|---|---|---|---|---|
| Full [CVPR22] [32] | 100.00% | ✓ | 88.54% | 75.88% | 83.36% | 47.64% |
| Linear [CVPR22] [32] | 0.08% | | 79.32% [0] | 68.93% [1] | 77.16% [1] | 26.84% [0] |
| Partial-1 [NeurIPS14] [91] | 8.34% | | 82.63% [0] | 69.44% [2] | 78.53% [0] | 34.17% [0] |
| MLP-3 [CVPR20] [9] | 1.44% | ✓ | 79.80% [0] | 67.80% [2] | 72.83% [0] | 30.62% [0] |
| Sidetune [ECCV20] [98] | 10.08% | ✓ ✓ | 78.35% [0] | 58.21% [0] | 68.12% [0] | 23.41% [0] |
| Bias [NeurIPS17] [70] | 0.80% | ✓ | 88.41% [3] | 73.30% [3] | 78.25% [0] | 44.09% [2] |
| Adapter [NeurIPS20] [6] | 1.02% | ✓ ✓ | 85.66% [2] | 70.39% [4] | 77.11% [0] | 33.43% [0] |
| VPT [ECCV22] [34] | 0.73% | ✓ ✓ | 89.11% [4] | 78.48% [6] | 82.43% [2] | 54.98% [8] |
| Ours | 0.39% | ✓ ✓ ✓ | **89.22%** [4] {4} | **80.01%** [6] {5} | **84.43%** [3] {4} | **57.39%** [8] {7} |
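Before turning to the results, the hyperparameter search described in §4.1 can be summarized in a few lines. This is a hypothetical sketch: `train_and_evaluate` stands in for a full fine-tuning run returning validation accuracy and is not part of the authors' released interface.

```python
# Grid values from the Training paragraph above.
LEARNING_RATES = [50, 25, 10, 5, 2.5, 1, 0.5, 0.25, 0.1, 0.05]
WEIGHT_DECAYS = [0.01, 0.001, 0.0001, 0.0]

def grid_search(train_and_evaluate):
    """Exhaustively try all (lr, wd) pairs and keep the best val accuracy."""
    best = {"acc": 0.0}
    for lr in LEARNING_RATES:
        for wd in WEIGHT_DECAYS:
            acc = train_and_evaluate(lr=lr, weight_decay=wd,
                                     epochs=100, warmup_epochs=10)
            if acc > best["acc"]:
                best = {"acc": acc, "lr": lr, "weight_decay": wd}
    return best
```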
Table 2. Image classification accuracy for Swin-Base [54] pretrained on supervised ImageNet-21k.

| Swin-Base [54] (86.7M) | Tuned/Total | Natural (7) | Specialized (4) | Structured (8) |
|---|---|---|---|---|
| Full [ICLR23] [71] | 100.00% | 79.10% | 86.21% | 59.65% |
| Linear [ICLR23] [71] | 0.06% | 73.52% [5] | 80.77% [0] | 33.52% [0] |
| Partial-1 [NeurIPS14] [91] | 14.58% | 73.11% [4] | 81.70% [0] | 34.96% [0] |
| MLP-3 [CVPR20] [9] | 2.42% | 73.56% [5] | 75.21% [0] | 35.69% [0] |
| Bias [NeurIPS17] [70] | 0.29% | 74.19% [2] | 80.14% [0] | 42.42% [0] |
| VPT [ECCV22] [34] | 0.25% | 76.78% [6] | 83.33% [0] | 51.85% [0] |
| Ours | 0.21% | **83.31%** [6] {6} | **84.95%** [2] {3} | **57.35%** [3] {7} |

4.2. Comparison with State-of-the-Arts

We examine the performance and robustness of E2VPT on ViT [12], Swin [54], and two self-supervised objectives — MAE [24] and MoCo v3 [10]. For reference, we provide the individual per-task results for Tables 1, 2 and 3 in the Appendix.

E2VPT on ViT. In Table 1 we report the average accuracy on the VTAB-1k and FGVC benchmarks across four diverse task groups over three runs, comparing E2VPT with eight other tuning protocols under the pretrain-then-finetune paradigm. Specifically, Full [32] updates both the backbone and the classification head; Linear [32], Partial-1 [91] (top layer) and MLP-3 [9] (3 MLP layers) are partial tuning methods that only update a subset of the parameters; Sidetune [98], Bias [70] and Adapter [6] are extra module methods that add new trainable parameters to the backbone for adaptation; VPT [34] is the most recent visual prompt tuning method. There are several key observations. First, E2VPT outperforms the full fine-tuning method in most cases (21 out of 24 tasks), e.g., with a 0.68% improvement on FGVC and a 9.75% improvement on VTAB-1k Structured. This demonstrates the effectiveness of our approach for fast large-scale vision model adaptation; moreover, our model tunes only 0.39% of the backbone parameters, which is far more parameter-efficient than full fine-tuning. Second, it is not surprising that the prompt tuning based approaches generally outperform the other parameter-efficient methods, such as partial fine-tuning (Partial-1) and extra modules (Adapter), indicating the superior adaptability of prompt tuning on large-scale vision models; the number of tunable parameters in prompt tuning methods is also smaller. Third, our approach consistently outperforms the strong VPT model with fewer tunable prompts, demonstrating the effectiveness of the key-value prompting design and the efficiency of prompt pruning. The reason is that VPT only designs input visual prompts, which fail to capture the accurate interactions between image patches in the new data; the key-value prompts in E2VPT effectively bridge this gap.

E2VPT on Hierarchical Transformer. To demonstrate the effectiveness and generalization of our architectural design, we further extend E2VPT to a hierarchical transformer — Swin [54], where the MSA layer is employed in local shifted windows and patch embeddings are merged at deeper layers.
For generality, we follow the same settings as in the ViT [12] architecture to prepend the learnable K-V pairs, and [34] for altering the input vectors (i.e., these learnable vectors are attended to within the local windows and ignored during patch merging). For pruning, we notice a performance drop when pruning within the deeper local windows, and therefore apply the pruning stage only to the first stage. As Swin does not use [CLS] and applies global pooling as the input to the classification head [34, 54], we follow this design when adapting our method. These experiments use Swin-Base [54] pretrained on supervised ImageNet-21k. As Table 2 shows, E2VPT consistently outperforms all the other parameter-efficient methods on all three VTAB-1k problem classes and, for the first time, surpasses full fine-tuning on VTAB-1k Specialized and Structured while using significantly fewer parameters (i.e., 0.21%).

Table 3. Image classification accuracy for different pretraining objectives — MAE [24] and MoCo v3 [10] — with ViT-Base [12] as the backbone. Our method enjoys significant performance gains over VPT [34] while having lower parameter usage.

| Methods | MAE: Tuned/Total | MAE: Natural (7) | MAE: Specialized (4) | MAE: Structured (8) | MoCo v3: Tuned/Total | MoCo v3: Natural (7) | MoCo v3: Specialized (4) | MoCo v3: Structured (8) |
|---|---|---|---|---|---|---|---|---|
| Full [CVPR22] [32] | 100.00% | 59.31% | 79.68% | 53.82% | 100.00% | 71.95% | 84.72% | 51.98% |
| Linear [CVPR22] [32] | 0.04% | 18.87% [0] | 53.72% [0] | 23.70% [0] | 0.04% | 67.46% [4] | 81.08% [0] | 30.33% [0] |
| Partial-1 [NeurIPS14] [91] | 8.30% | 58.44% [5] | 78.28% [1] | 47.64% [1] | 8.30% | 72.31% [5] | 84.58% [2] | 47.89% [1] |
| Bias [NeurIPS17] [70] | 0.16% | 54.55% [1] | 75.68% [1] | 47.70% [0] | 0.16% | 72.89% [3] | 81.14% [0] | 53.43% [4] |
| Adapter [NeurIPS20] [6] | 0.87% | 54.90% [3] | 75.19% [1] | 38.98% [0] | 1.12% | 74.19% [4] | 82.66% [1] | 47.69% [2] |
| VPT [ECCV22] [34] | 0.10% | 36.02% [0] | 60.61% [1] | 26.57% [0] | 0.06% | 70.27% [4] | 83.04% [0] | 42.38% [0] |
| Ours | 0.07% | 59.52% [4] {6} | 77.80% [1] {2} | 44.65% [3] {8} | 0.13% | 76.47% [4] {7} | 87.28% [2] {4} | 54.91% [6] {8} |

Different Pretraining Methods. We conducted experiments with two self-supervised objectives, MAE [24] and MoCo v3 [10], on backbones pretrained without labeled data, following the approach of VPT [34]. While VPT yielded inconclusive results on these objectives, E2VPT outperforms the other methods and achieves performance competitive with full fine-tuning (8 of 19 instances under MAE, and 12 of 19 instances under MoCo v3), using significantly fewer model parameters (0.07% on MAE and 0.13% on MoCo v3). Our method also outperforms VPT by a large margin (59.52% vs. 36.02% under MAE on VTAB-1k Natural). VPT observed that self-supervised ViTs behave fundamentally differently from supervised ones; our results narrow this gap and demonstrate the generality of our method across both pretraining objectives.

Table 4. Impact of the different components of E2VPT on two instances: VTAB-1k Natural SVHN [62] and FGVC NABirds [77].

| Visual Prompts | Key-Value Prompts | Pruning & Rewinding | SVHN: Pruning | SVHN: Tuned/Total | SVHN: Accuracy | NABirds: Pruning | NABirds: Tuned/Total | NABirds: Accuracy |
|---|---|---|---|---|---|---|---|---|
| ✓ | | | 0.0% | 0.54% | 78.1% | 0.0% | 1.02% | 84.2% |
| ✓ | ✓ | | 0.0% | 0.55% | 83.8% | 0.0% | 1.05% | 84.5% |
| ✓ | | ✓ | 56.3% | 0.42% | 79.0% | 34.4% | 0.63% | 84.2% |
| ✓ | ✓ | ✓ | 62.5% | 0.43% | 85.3% | 40.0% | 0.65% | 84.6% |

4.3. Diagnostic Experiments
Impact of Different Components. To investigate the impact of the different components of E2VPT — visual prompts, key-value prompts, and pruning with rewinding — we conducted experiments on two benchmark tasks; the results are summarized in Table 4. On SVHN [62], the model with visual prompts alone achieves an accuracy of 78.1%. Adding key-value prompts and applying pruning and rewinding individually bring additional gains (5.7% and 0.9%, respectively), demonstrating the effectiveness of key-value prompt tuning in the self-attention module as well as the pruning mechanism. Combining all components yields the best performance, with an accuracy of 85.3%. We observe similar trends on FGVC NABirds [77].

Table 5. Prompt location (a) and initialization (b) results on VTAB-1k [96] over three runs. Per-task results are available in the Appendix.

| ViT-Base/16 [12] (85.8M) | Natural (7) | Specialized (4) | Structured (8) |
|---|---|---|---|
| (a) After | 80.67% [6] | 84.30% [3] | 56.76% [8] |
| (a) Before | 80.01% [6] | 84.43% [3] | 57.39% [8] |
| (b) Trunc. Norm. [67] | 79.77% [6] | 84.30% [3] | 56.36% [8] |
| (b) He [25] | 80.01% [6] | 84.43% [3] | 57.39% [8] |

Prompt Location. A fundamental distinction between E2VPT and other methods is the learnable key-value prompts introduced into self-attention. In our implementation, we prepend the key-value prompts to the sequences of the Key and Value matrices, but the appropriate placement of the learnable prompts deserves investigation. Table 5(a) reports ablation results on VTAB-1k: placing the learnable prompts either before or after the Key and Value matrices yields competitive results, validating the robustness of our approach with respect to prompt location. We choose "Before" as the baseline in all our experiments since it achieves slightly better results on average (i.e., 73.94% vs. 73.91%).

Initialization. Table 5(b) reports the performance of our approach with two widely adopted initialization methods on the VTAB-1k benchmark: truncated normal [61, 67] and He initialization [25]. He initialization generally provides more stable and preferable performance on average, although on some specific tasks truncated normal obtains slightly better results (e.g., 1.1% higher accuracy on VTAB-1k Specialized Diabetic Retinopathy Detection [13]). In conclusion, E2VPT is robust to the choice of initialization method and achieves performance consistent with full fine-tuning under both.
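The two initializers compared in Table 5(b) correspond to standard PyTorch calls. A minimal sketch follows; the prompt shape and the standard-deviation value are illustrative assumptions, not specified in the text.

```python
import torch
import torch.nn as nn

# A prompt tensor of shape (num_prompts, dim); wrap in nn.Parameter to train.
prompts = torch.empty(10, 768)

nn.init.trunc_normal_(prompts, std=0.02)                  # truncated normal [61, 67]
# or, He (Kaiming) initialization [25]:
nn.init.kaiming_normal_(prompts, nonlinearity='relu')
```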
We vary the number of prompts for different combina-\ntions, and show their results on VTAB-1k Natural SVHN [62].\nconducted a comprehensive study on the lengths of visual\nprompts and key-value prompts for a better understanding\nof their characteristics on VTAB-1k Natural SVHN [62].\nThe length of visual prompts is typically limited to [5, 10,\n20, 30, 50], while the length of key-value prompts is re-\nstricted to [1, 5, 10, 50], which is a standard configuration\nfor most datasets. The model performance results on dif-\nferent prompt length combinations are reported in Fig. 4. It\ncan be seen that, when using 50 visual prompts, a relative\nshorter key-value prompt can benefit performance notably\n(i.e., 84.7% when introducing one key-value prompt vs\n78.1% without key-value prompts), while further increas-\ning the length of the key-value prompt yields a small perfor-\nmance gain ( i.e., 85.3% when using 5 key-value prompts).\nWe also notice that using a large number of key-value\nprompts lead to subpar results ( i.e., 80.2% with 20 key-\nvalue prompts). Similar patterns are observed with other\nvisual prompt lengths. We argue that a heavy parameter en-\ngineering in self-attention layer might distort the original\nattention map and does harm to adaptation.\n4.4. Visualization\nFollowing [2, 14, 17, 40, 68], we show hyperbolic visu-\nalizations results on training set for VPT and ours on three\ntasks in FGVC ( i.e., CUB-200-2011 [81], Oxford Flow-\ners [63], and Stanford Dogs [39]). Hyperbolic space, to bespecific, is a Riemannian manifold of constant negative cur-\nvature. While there are several isometric models of hyper-\nbolic space, we follow previous work [14, 17] and stick to\nthe Poincar ´e ball model. Similar to [14], we use UMAP [59]\nwith the “hyperboloid” distance metric to reduce the di-\nmensionality to 2D. ViT-Base plays as an encoder with two\ntypes of pretraining ( i.e., tuned models under VPT, and ours\nafter rewinding, respectively). We freeze the models during\nfine-tuning and output embeddings are mapped to hyper-\nbolic space. Adam optimizer [55] with a learning rate of\n3×10−5is applied to all settings. The weight decay is 0.01\nwith batch size equals to 900. All models are trained for 50\nsteps for fair comparison, with a gradient clip by norm 3.\nFig. 3 illustrates how learned embeddings are arranged\non the Poincar ´e disk. We can see that in E2VPT , samples\nare clustered according to labels, and each cluster is pushed\ncloser to the border of the disk, indicating that the encoder\nseparates class well. On the other hand, we observe in VPT\nthat some of the samples move towards the center and inter-\nmix [14], indicating possible confusion during projection.\nWe also follow [14, 68, 40] and present the Recall@K met-\nric in Appendix for reference. These visualization results\nfurther validate the effectiveness of the proposed E2VPT\napproach in generating separatable embeddings from the in-\nput images in the new tasks.\n5. Conclusion and Discussion\nThe vast majority of current efforts under the pretrain-\nthen-finetune paradigm seek to reduce parameter usage\nwhile overlooking the inner design of transformer-based ar-\nchitecture. In light of this view, we present E2VPT , a\nnew parameter-efficient visual prompt tuning approach to\nmodel the transformer architecture during adaptation. 
It\nenjoys several advantages: i)consider self-attention mech-\nanism during tuning for superior performance to current\nparameter-efficient fine-tuning; and ii)apply pruning and\nrewinding stages to reduce parameter usage in input visual\nprompts. The systemic merits enable an effective yet effi-\ncient algorithm. As a whole, we conclude that the outcomes\nelucidated in this paper impart essential understandings and\nnecessitate further exploration within this realm.\nAcknowledgements. This research was supported by the\nNational Science Foundation under Grant No. 2242243.\nReferences\n[1] Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen\nSun, Mario Lu ˇci´c, and Cordelia Schmid. Vivit: A video\nvision transformer. In ICCV , 2021. 3\n[2] Mina Ghadimi Atigh, Julian Schoep, Erman Acar, Nanne\nVan Noord, and Pascal Mettes. Hyperbolic image segmen-\ntation. In CVPR , 2022. 8\n[3] Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. Beit:\nBert pre-training of image transformers. In ICLR , 2022. 2\n[4] Josh Beal, Eric Kim, Eric Tzeng, Dong Huk Park, Andrew\nZhai, and Dmitry Kislyuk. Toward transformer-based ob-\nject detection. arXiv preprint arXiv:2012.09958 , 2020. 2\n[5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Sub-\nbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakan-\ntan, Pranav Shyam, Girish Sastry, Amanda Askell, et al.\nLanguage models are few-shot learners. In NeurIPS , 2020.\n1, 2\n[6] Han Cai, Chuang Gan, Ligeng Zhu, and Song Han. Tinytl:\nReduce memory, not parameters for efficient on-device\nlearning. In NeurIPS , 2020. 1, 2, 3, 6, 7\n[7] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nico-\nlas Usunier, Alexander Kirillov, and Sergey Zagoruyko.\nEnd-to-end object detection with transformers. In ECCV ,\n2020. 2\n[8] Chun-Fu Richard Chen, Quanfu Fan, and Rameswar Panda.\nCrossvit: Cross-attention multi-scale vision transformer for\nimage classification. In ICCV , 2021. 3\n[9] Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He.\nImproved baselines with momentum contrastive learning.\nInCVPR , 2020. 3, 6\n[10] Xinlei Chen, Saining Xie, and Kaiming He. An empiri-\ncal study of training self-supervised vision transformers. In\nICCV , 2021. 2, 3, 5, 6, 7\n[11] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina\nToutanova. Bert: Pre-training of deep bidirectional trans-\nformers for language understanding. In NAACL-HLT , 2018.\n2\n[12] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov,\nDirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner,\nMostafa Dehghani, Matthias Minderer, Georg Heigold,\nSylvain Gelly, et al. An image is worth 16x16 words: Trans-\nformers for image recognition at scale. In ICLR , 2021. 1,\n2, 3, 5, 6, 7\n[13] Will Cukierski Emma Dugas, Jared Jorge. Diabetic\nretinopathy detection, 2015. 7\n[14] Aleksandr Ermolov, Leyla Mirvakhabova, Valentin\nKhrulkov, Nicu Sebe, and Ivan Oseledets. Hyperbolic\nvision transformers: Combining improvements in metric\nlearning. In CVPR , 2022. 8\n[15] Quentin Fournier, Ga ´etan Marceau Caron, and Daniel\nAloise. A practical survey on faster and lighter transform-\ners.arXiv preprint arXiv:2103.14636 , 2021. 2\n[16] Jonathan Frankle and Michael Carbin. The lottery ticket\nhypothesis: Finding sparse, trainable neural networks. In\nICLR , 2019. 2, 5\n[17] Octavian Ganea, Gary B ´ecigneul, and Thomas Hofmann.\nHyperbolic neural networks. In NeurIPS , 2018. 8[18] Yunhe Gao, Xingjian Shi, Yi Zhu, Hao Wang, Zhiqiang\nTang, Xiong Zhou, Mu Li, and Dimitris N Metaxas. 
Vi-\nsual prompt tuning for test-time domain adaptation. arXiv\npreprint arXiv:2210.04831 , 2022. 3\n[19] Timnit Gebru, Jonathan Krause, Yilun Wang, Duyun Chen,\nJia Deng, and Li Fei-Fei. Fine-grained car detection for\nvisual census estimation. In AAAI , 2017. 5\n[20] Demi Guo, Alexander M Rush, and Yoon Kim. Parameter-\nefficient transfer learning with diff pruning. In ICML , 2021.\n1\n[21] Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen,\nJianyuan Guo, Zhenhua Liu, Yehui Tang, An Xiao, Chun-\njing Xu, Yixing Xu, et al. A survey on vision transformer.\nIEEE TPAMI , 2022. 2\n[22] Song Han, Jeff Pool, John Tran, and William Dally. Learn-\ning both weights and connections for efficient neural net-\nwork. NeurIPS , 2015. 2\n[23] Babak Hassibi and David Stork. Second order derivatives\nfor network pruning: Optimal brain surgeon. In NeurIPS ,\n1992. 2\n[24] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr\nDoll´ar, and Ross Girshick. Masked autoencoders are scal-\nable vision learners. In CVPR , 2022. 2, 3, 5, 6, 7\n[25] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.\nDelving deep into rectifiers: Surpassing human-level per-\nformance on imagenet classification. In ICCV , 2015. 7\n[26] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.\nDeep residual learning for image recognition. In CVPR ,\n2016. 1\n[27] Xuehai He, Chunyuan Li, Pengchuan Zhang, Jianwei Yang,\nand Xin Eric Wang. Parameter-efficient fine-tuning for vi-\nsion transformers. arXiv preprint arXiv:2203.16329 , 2022.\n2, 3\n[28] Yun He, Steven Zheng, Yi Tay, Jai Gupta, Yu Du, Vamsi\nAribandi, Zhe Zhao, YaGuang Li, Zhao Chen, Donald Met-\nzler, et al. Hyperprompt: Prompt-based task-conditioning\nof transformers. In ICML , 2022. 3\n[29] Lin Huang, Jianchao Tan, Ji Liu, and Junsong Yuan. Hand-\ntransformer: non-autoregressive structured modeling for 3d\nhand pose estimation. In ECCV , 2020. 2\n[30] Lin Huang, Jianchao Tan, Jingjing Meng, Ji Liu, and Jun-\nsong Yuan. Hot-net: Non-autoregressive transformer for 3d\nhand-object pose estimation. In ACMMM , 2020. 2\n[31] Mike Innes, Alan Edelman, Keno Fischer, Chris Rack-\nauckas, Elliot Saba, Viral B Shah, and Will Teb-\nbutt. A differentiable programming system to bridge ma-\nchine learning and scientific computing. arXiv preprint\narXiv:1907.07587 , 2019. 1\n[32] Eugenia Iofinova, Alexandra Peste, Mark Kurtz, and Dan\nAlistarh. How well do sparse imagenet models transfer? In\nCVPR , 2022. 3, 6, 7\n[33] Khawar Islam. Recent advances in vision transformer:\nA survey and outlook of recent work. arXiv preprint\narXiv:2203.01536 , 2022. 2\n[34] Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie,\nSerge Belongie, Bharath Hariharan, and Ser-Nam Lim. Vi-\nsual prompt tuning. In ECCV , 2022. 1, 2, 3, 5, 6, 7, 8\n[35] Menglin Jia, Zuxuan Wu, Austin Reiter, Claire Cardie,\nSerge Belongie, and Ser-Nam Lim. Exploring visual en-\ngagement signals for representation learning. In ICCV ,\n2021. 2, 3\n[36] Chen Ju, Tengda Han, Kunhao Zheng, Ya Zhang, and Weidi\nXie. Prompting visual-language models for efficient video\nunderstanding. In ECCV , 2022. 2\n[37] Lukasz Kaiser, Aidan N Gomez, Noam Shazeer, Ashish\nVaswani, Niki Parmar, Llion Jones, and Jakob Uszkoreit.\nOne model to learn them all. In ICML , 2017. 1\n[38] Salman Khan, Muzammal Naseer, Munawar Hayat,\nSyed Waqas Zamir, Fahad Shahbaz Khan, and Mubarak\nShah. Transformers in vision: A survey. ACM Comput-\ning Surveys , 54(10s):1–41, 2022. 2\n[39] Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng\nYao, and Fei-Fei Li. 
Novel dataset for fine-grained image\ncategorization: Stanford dogs. In CVPR Workshop , 2011.\n5, 8\n[40] Valentin Khrulkov, Leyla Mirvakhabova, Evgeniya Usti-\nnova, Ivan Oseledets, and Victor Lempitsky. Hyperbolic\nimage embeddings. In CVPR , 2020. 8\n[41] Yann LeCun, John Denker, and Sara Solla. Optimal brain\ndamage. In NeurIPS , 1989. 2\n[42] Brian Lester, Rami Al-Rfou, and Noah Constant. The\npower of scale for parameter-efficient prompt tuning. In\nEMNLP , 2021. 3\n[43] Changlin Li, Bohan Zhuang, Guangrun Wang, Xiaodan\nLiang, Xiaojun Chang, and Yi Yang. Automated progres-\nsive learning for efficient training of vision transformers. In\nCVPR , 2022. 2, 5\n[44] Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and\nHans Peter Graf. Pruning filters for efficient convnets.\narXiv preprint arXiv:1608.08710 , 2016. 2\n[45] Yanghao Li, Chao-Yuan Wu, Haoqi Fan, Karttikeya Man-\ngalam, Bo Xiong, Jitendra Malik, and Christoph Feichten-\nhofer. Mvitv2: Improved multiscale vision transformers for\nclassification and detection. In CVPR , 2022. 2\n[46] James Liang, Tianfei Zhou, Dongfang Liu, and Wenguan\nWang. Clustseg: Clustering for universal segmentation.\narXiv preprint arXiv:2305.02187 , 2023. 2\n[47] Jinfeng Lin, Yalin Liu, Qingkai Zeng, Meng Jiang, and\nJane Cleland-Huang. Traceability transformed: Generating\nmore accurate links with pre-trained bert models. In ICSE ,\n2021. 1\n[48] Kevin Lin, Lijuan Wang, and Zicheng Liu. End-to-end hu-\nman pose and mesh reconstruction with transformers. In\nCVPR , 2021. 2\n[49] Tianyang Lin, Yuxin Wang, Xiangyang Liu, and Xipeng\nQiu. A survey of transformers. AI Open , 2022. 2\n[50] Dongfang Liu, Yiming Cui, Yingjie Chen, Jiyong Zhang,\nand Bin Fan. Video object detection for autonomous\ndriving: Motion-aid feature calibration. Neurocomputing ,\n409:1–11, 2020. 2\n[51] Dongfang Liu, Yiming Cui, Wenbo Tan, and Yingjie Chen.\nSg-net: Spatial granularity network for one-stage video in-\nstance segmentation. In CVPR , 2021. 2[52] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar\nJoshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettle-\nmoyer, and Veselin Stoyanov. Roberta: A robustly opti-\nmized bert pretraining approach. In ICLR , 2020. 2\n[53] Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie,\nYixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong,\net al. Swin transformer v2: Scaling up capacity and resolu-\ntion. In CVPR , 2022. 2\n[54] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng\nZhang, Stephen Lin, and Baining Guo. Swin transformer:\nHierarchical vision transformer using shifted windows. In\nICCV , 2021. 1, 2, 3, 5, 6\n[55] Ilya Loshchilov and Frank Hutter. Decoupled weight decay\nregularization. In ICML , 2017. 8\n[56] Yawen Lu, Qifan Wang, Siqi Ma, Tong Geng, Yingjie Vic-\ntor Chen, Huaijin Chen, and Dongfang Liu. Transflow:\nTransformer as flow learner. In CVPR , 2023. 2\n[57] Fang Ma, Chen Zhang, Lei Ren, Jingang Wang, Qifan\nWang, Wei Wu, Xiaojun Quan, and Dawei Song. Xprompt:\nExploring the extreme of prompt tuning. In EMNLP , 2022.\n3, 5\n[58] Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan,\nKaiming He, Manohar Paluri, Yixuan Li, Ashwin\nBharambe, and Laurens Van Der Maaten. Exploring the\nlimits of weakly supervised pretraining. In ECCV , 2018. 2,\n3, 5\n[59] Leland McInnes, John Healy, and James Melville. Umap:\nUniform manifold approximation and projection for dimen-\nsion reduction. arXiv preprint arXiv:1802.03426 , 2018. 8\n[60] Paul Michel, Omer Levy, and Graham Neubig. Are sixteen\nheads really better than one? 
In NeurIPS , 2019. 5\n[61] Meenal V Narkhede, Prashant P Bartakke, and Mukul S\nSutaone. A review on weight initialization strategies for\nneural networks. Artificial Intelligence Review , 2022. 7\n[62] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bis-\nsacco, Bo Wu, and Andrew Y Ng. Reading digits in natural\nimages with unsupervised feature learning. 2011. 7, 8\n[63] Maria-Elena Nilsback and Andrew Zisserman. Automated\nflower classification over a large number of classes. In In-\ndian Conference on Computer Vision, Graphics & Image\nProcessing , 2008. 5, 8\n[64] Rohit Nishant, Mike Kennedy, and Jacqueline Corbett. Ar-\ntificial intelligence for sustainability: Challenges, opportu-\nnities, and a research agenda. International Journal of In-\nformation Management , 2020. 1\n[65] Mehdi Noroozi and Paolo Favaro. Unsupervised learning of\nvisual representations by solving jigsaw puzzles. In ECCV ,\n2016. 3\n[66] Xuran Pan, Zhuofan Xia, Shiji Song, Li Erran Li, and Gao\nHuang. 3d object detection with pointformer. In CVPR ,\n2021. 2\n[67] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer,\nJames Bradbury, Gregory Chanan, Trevor Killeen, Zeming\nLin, Natalia Gimelshein, Luca Antiga, Alban Desmaison,\nAndreas Kopf, Edward Yang, Zachary DeVito, Martin Rai-\nson, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner,\nLu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An\nimperative style, high-performance deep learning library. In\nNeurIPS , 2019. 5, 7\n[68] Wei Peng, Tuomas Varanka, Abdelrahman Mostafa,\nHenglin Shi, and Guoying Zhao. Hyperbolic deep neural\nnetworks: A survey. IEEE TPAMI , 2021. 8\n[69] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine\nLee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li,\nand Peter J Liu. Exploring the limits of transfer learning\nwith a unified text-to-text transformer. The Journal of Ma-\nchine Learning Research , 2020. 2\n[70] Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea\nVedaldi. Learning multiple visual domains with residual\nadapters. In NeurIPS , 2017. 2, 3, 6, 7\n[71] Yi Ren, Shangmin Guo, Wonho Bae, and Danica J Suther-\nland. How to prepare your task head for finetuning. In\nICLR , 2023. 6\n[72] Carlos Riquelme, Joan Puigcerver, Basil Mustafa, Maxim\nNeumann, Rodolphe Jenatton, Andr ´e Susano Pinto, Daniel\nKeysers, and Neil Houlsby. Scaling vision with sparse mix-\nture of experts. In NeurIPS , 2021. 2\n[73] Victor Sanh, Lysandre Debut, Julien Chaumond, and\nThomas Wolf. Distilbert, a distilled version of bert:\nsmaller, faster, cheaper and lighter. arXiv preprint\narXiv:1910.01108 , 2019. 1\n[74] Robin Strudel, Ricardo Garcia, Ivan Laptev, and Cordelia\nSchmid. Segmenter: Transformer for semantic segmenta-\ntion. In ICCV , 2021. 2\n[75] Nima Tajbakhsh, Jae Y Shin, Suryakanth R Gurudu, R Todd\nHurst, Christopher B Kendall, Michael B Gotway, and Jian-\nming Liang. Convolutional neural networks for medical\nimage analysis: Full training or fine tuning? IEEE Trans-\nactions on Medical Imaging , 2016. 1, 3\n[76] Grant Van Horn, Steve Branson, Ryan Farrell, Scott Haber,\nJessie Barry, Panos Ipeirotis, Pietro Perona, and Serge Be-\nlongie. Building a bird recognition app and large scale\ndataset with citizen scientists: The fine print in fine-grained\ndataset collection. In CVPR , 2015. 5\n[77] Grant Van Horn, Steve Branson, Ryan Farrell, Scott Haber,\nJessie Barry, Panos Ipeirotis, Pietro Perona, and Serge Be-\nlongie. 
Building a bird recognition app and large scale\ndataset with citizen scientists: The fine print in fine-grained\ndataset collection. In CVPR , 2015. 7\n[78] Aimee Van Wynsberghe. Sustainable ai: Ai for sustainabil-\nity and the sustainability of ai. AI and Ethics , 2021. 1\n[79] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser,\nand Illia Polosukhin. Attention is all you need. In NeurIPS ,\n2017. 2\n[80] Ricardo Vinuesa, Hossein Azizpour, Iolanda Leite, Made-\nline Balaam, Virginia Dignum, Sami Domisch, Anna\nFell¨ander, Simone Daniela Langhans, Max Tegmark, and\nFrancesco Fuso Nerini. The role of artificial intelligence in\nachieving the sustainable development goals. Nature Com-\nmunications , 2020. 1\n[81] Catherine Wah, Steve Branson, Peter Welinder, Pietro Per-\nona, and Serge Belongie. The caltech-ucsd birds-200-2011\ndataset. 2011. 5, 8[82] Huiyu Wang, Yukun Zhu, Hartwig Adam, Alan Yuille, and\nLiang-Chieh Chen. Max-deeplab: End-to-end panoptic\nsegmentation with mask transformers. In CVPR , 2021. 2\n[83] Wenguan Wang, Cheng Han, Tianfei Zhou, and Dongfang\nLiu. Visual recognition with deep nearest centroids. In\nICLR , 2022. 2\n[84] Wenguan Wang, James Liang, and Dongfang Liu. Learning\nequivariant segmentation with instance-unique querying. In\nNeurIPS , 2022. 2\n[85] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao\nSong, Ding Liang, Tong Lu, Ping Luo, and Ling Shao.\nPyramid vision transformer: A versatile backbone for dense\nprediction without convolutions. In ICCV , 2021. 3\n[86] Yuqing Wang, Zhaoliang Xu, Xinlong Wang, Chunhua\nShen, Baoshan Cheng, Hao Shen, and Huaxia Xia. End-\nto-end video instance segmentation with transformers. In\nCVPR , 2021. 2\n[87] Carole-Jean Wu, Ramya Raghavendra, Udit Gupta, Bilge\nAcun, Newsha Ardalani, Kiwan Maeng, Gloria Chang,\nFiona Aga, Jinshi Huang, Charles Bai, et al. Sustainable\nai: Environmental implications, challenges and opportuni-\nties. Proceedings of Machine Learning and Systems , 2022.\n1\n[88] Yinghui Xing, Qirui Wu, De Cheng, Shizhou Zhang, Guo-\nqiang Liang, and Yanning Zhang. Class-aware visual\nprompt tuning for vision-language pre-trained model. arXiv\npreprint arXiv:2208.08340 , 2022. 3\n[89] Li Yang, Qifan Wang, Jingang Wang, Xiaojun Quan, Fuli\nFeng, Yu Chen, Madian Khabsa, Sinong Wang, Zenglin Xu,\nand Dongfang Liu. Mixpave: Mix-prompt tuning for few-\nshot product attribute value extraction. In ACL, 2023. 3\n[90] Sen Yang, Zhibin Quan, Mu Nie, and Wankou Yang. Trans-\npose: Keypoint localization via transformer. In ICCV , 2021.\n2\n[91] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lip-\nson. How transferable are features in deep neural networks?\nInNeurIPS , 2014. 1, 3, 6, 7\n[92] Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv\nKumar, Srinadh Bhojanapalli, Xiaodan Song, James Dem-\nmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch opti-\nmization for deep learning: Training bert in 76 minutes. In\nICLR , 2020. 1\n[93] Zhenxun Yuan, Xiao Song, Lei Bai, Zhe Wang, and Wanli\nOuyang. Temporal-channel transformer for 3d lidar-based\nvideo object detection for autonomous driving. IEEE\nTransactions on Circuits and Systems for Video Technology ,\n2021. 2\n[94] Yuhang Zang, Wei Li, Kaiyang Zhou, Chen Huang, and\nChen Change Loy. Unified vision and language prompt\nlearning. arXiv preprint arXiv:2210.07225 , 2022. 2\n[95] Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and\nLucas Beyer. Scaling vision transformers. 
In CVPR , 2022.\n2\n[96] Xiaohua Zhai, Joan Puigcerver, Alexander Kolesnikov,\nPierre Ruyssen, Carlos Riquelme, Mario Lucic, Josip Djo-\nlonga, Andre Susano Pinto, Maxim Neumann, Alexey\nDosovitskiy, et al. A large-scale study of representation\nlearning with the visual task adaptation benchmark. arXiv\npreprint arXiv:1910.04867 , 2019. 1, 5, 6, 7\n[97] Cheng Zhang, Haocheng Wan, Xinyi Shen, and Zizhao Wu.\nPatchformer: An efficient point transformer with patch at-\ntention. In CVPR , 2022. 2\n[98] Jeffrey O Zhang, Alexander Sax, Amir Zamir, Leonidas\nGuibas, and Jitendra Malik. Side-tuning: a baseline for\nnetwork adaptation via additive side networks. In ECCV ,\n2020. 2, 3, 6\n[99] Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful\nimage colorization. In ECCV , 2016. 3\n[100] Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu,\nZekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao\nXiang, Philip HS Torr, et al. Rethinking semantic segmen-\ntation from a sequence-to-sequence perspective with trans-\nformers. In CVPR , 2021. 2\n[101] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang,\nand Jifeng Dai. Deformable detr: Deformable transformers\nfor end-to-end object detection. In ICLR , 2021. 2\n[102] Bohan Zhuang, Jing Liu, Zizheng Pan, Haoyu He, Yuetian\nWeng, and Chunhua Shen. A survey on efficient training of\ntransformers. arXiv preprint arXiv:2302.01107 , 2023. 2, 5",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |