title: string
paper_id: int64
abstract: string
authors: list
year: float64
arxiv_id: string
acl_id: string
pmc_id: string
pubmed_id: string
doi: string
venue: string
journal: string
mag_id: string
outbound_citations: sequence
inbound_citations: sequence
has_outbound_citations: bool
has_inbound_citations: bool
has_pdf_parse: bool
s2_url: string
has_pdf_body_text: float64
has_pdf_parsed_abstract: float64
has_pdf_parsed_body_text: float64
has_pdf_parsed_bib_entries: float64
has_pdf_parsed_ref_entries: float64
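The field list above describes the per-record structure of the dump. As a minimal sketch only (the dump does not name its source file format or a loading API), the records could be modelled in Python roughly as follows; the Author and PaperRecord class names and the example construction are illustrative, with values taken from the first record below.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Author:
        # One entry of the "authors" list field.
        first: str
        middle: List[str]
        last: str
        suffix: str

    @dataclass
    class PaperRecord:
        # Core bibliographic fields.
        title: str
        paper_id: int
        abstract: str
        authors: List[Author]
        year: Optional[float]
        # Identifier fields; many records leave these empty.
        arxiv_id: Optional[str] = None
        acl_id: Optional[str] = None
        pmc_id: Optional[str] = None
        pubmed_id: Optional[str] = None
        doi: Optional[str] = None
        venue: Optional[str] = None
        journal: Optional[str] = None
        mag_id: Optional[str] = None
        # Citation graph: lists of paper ids stored as strings.
        outbound_citations: List[str] = field(default_factory=list)
        inbound_citations: List[str] = field(default_factory=list)
        has_outbound_citations: bool = False
        has_inbound_citations: bool = False
        has_pdf_parse: bool = False
        s2_url: str = ""
        # PDF-parse flags appear as 0/1 floats or null in the dump.
        has_pdf_body_text: Optional[float] = None
        has_pdf_parsed_abstract: Optional[float] = None
        has_pdf_parsed_body_text: Optional[float] = None
        has_pdf_parsed_bib_entries: Optional[float] = None
        has_pdf_parsed_ref_entries: Optional[float] = None

    # Illustrative construction from the first record below (abstract shortened).
    example = PaperRecord(
        title="Sensation Seeking, Self Forgetfulness, and Computer Game Enjoyment",
        paper_id=43638216,
        abstract="This paper investigates the relationship between enjoyment of "
                 "computer game play and two personality traits ...",
        authors=[Author("Xiaowen", [], "Fang", ""), Author("Fan", [], "Zhao", "")],
        year=2009.0,
        doi="10.1007/978-3-642-02559-4_69",
        venue="HCI",
        mag_id="144022762",
        inbound_citations=["22478240", "52121940"],
        has_inbound_citations=True,
        s2_url="https://api.semanticscholar.org/CorpusID:43638216",
    )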
Sensation Seeking, Self Forgetfulness, and Computer Game Enjoyment
43,638,216
This paper investigates the relationship between enjoyment of computer game play and two personality traits (sensation seeking and self-forgetfulness). Hypotheses were proposed based on a review of computer game enjoyment, game characteristics, personality theories, and effects of computer game play. A survey was conducted at two US universities. Results and implications are discussed.
[ { "first": "Xiaowen", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Fan", "middle": [], "last": "Zhao", "suffix": "" } ]
2,009
10.1007/978-3-642-02559-4_69
HCI
144022762
[]
[ "22478240", "52121940" ]
false
true
false
https://api.semanticscholar.org/CorpusID:43638216
null
null
null
null
null
Significant New Researcher Award
40,507,588
ACM SIGGRAPH is delighted to present the 2010 Significant New Researcher award to Alexei "Alyosha" Efros, in recognition of his pioneering contributions at the intersection of computer graphics and computer vision, particularly his work in texture synthesis and in leveraging huge image databases.
[ { "first": "Alexei", "middle": [ "\"Alyosha\"" ], "last": "Efros", "suffix": "" } ]
2,010
10.1145/1836809.1836811
SIGGRAPH 2010
1996490679
[]
[]
false
false
true
https://api.semanticscholar.org/CorpusID:40507588
0
0
0
1
0
Proceedings of the Eurographics/IEEE VGTC Workshop on Volume Graphics 2006, Boston, Massachusetts, USA, July 30-31, 2006
45,881,463
[]
2,006
10.2312/471
Volume Graphics
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:45881463
null
null
null
null
null
A method to resolve time delay in telepresence system based on virtual reality
61,819,528
A new method is proposed in this paper to resolve the time delay in a telepresence system based on virtual reality. This method reduces the time spent by transporting the object's visual factors rather than the whole image. The telepresence system based on virtual reality is introduced and the key technology is presented in the paper.
[ { "first": "Hai-yan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Dongmu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Daxin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ke-ou", "middle": [], "last": "Song", "suffix": "" } ]
2,004
10.1117/12.561242
International Conference On Virtual Reality and Its Applications in Industry
2145019169
[ "124089042" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:61819528
0
0
0
1
0
The Impact of Different Visual Feedback Presentation Methods in a Wearable Computing Scenario
9,071,095
Interfaces for wearable computing applications have to be tailored to task and usability demands. Critical information has to be presented in a way that allows fast absorption by the user while not distracting from the primary task. In this work we evaluated the impact of different information presentation methods on the performance of users in a wearable computing scenario. The presented information was critical to fulfill the given task and was displayed on two different types of head mounted displays (HMD). Further, the representations were divided into two groups. The first group consisted of qualitative representations while the second group focused on quantitative information. Only a weak significance could be determined for the effect the different methods have on performance, but there is evidence that familiarity has an effect. A significant effect was found for the type of HMD.
[ { "first": "Hendrik", "middle": [], "last": "Iben", "suffix": "" }, { "first": "Hendrik", "middle": [], "last": "Witt", "suffix": "" }, { "first": "Ernesto", "middle": [ "Morales" ], "last": "Kluge", "suffix": "" } ]
2,009
10.1007/978-3-642-02580-8_82
HCI
1517735736
[]
[ "18358246" ]
false
true
false
https://api.semanticscholar.org/CorpusID:9071095
null
null
null
null
null
Mastering the Art of Visual Storytelling
61,764,180
As a visual journalist who also really enjoys writing, I have often said that information graphics reporting provides the best of both worlds. At once a “word” person and a “design” junkie, I have always been fascinated by the notion that the combination of words and visuals within a story package has an extreme impact on catching a reader's attention, keeping it and even ensuring that he or she retains the information much longer than when a story is provided in the form of words alone. Information graphics generally stimulate more brainpower because they appeal to both the literal and visual regions of the brain. Information graphics can tell stories with a degree of detail that is often otherwise impossible. Information graphics provide consumers with an incredibly rich “reading” experience. And, information graphics provide journalists with a powerful tool for telling a variety of different kinds of stories.
[ { "first": "Jennifer", "middle": [], "last": "George-Palilonis", "suffix": "" } ]
2,006
10.1016/B978-0-240-80707-2.50004-X
A Practical Guide to Graphics Reporting
A Practical Guide to Graphics Reporting
2307054198
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:61764180
null
null
null
null
null
Volume rendering of pool fire data
61,766,620
We describe how techniques from computer graphics are used to visualize pool fire data and compute radiative effects from pool fires. The basic tools are ray casting and accurate line integration using the RADCAL program. Example images in the visible and infrared band are shown. Examples are given of irradiation calculations and novel methods to visualize the results of irradiation calculations.
[ { "first": "H.E.", "middle": [], "last": "Rushmeier", "suffix": "" }, { "first": "A.", "middle": [], "last": "Hamins", "suffix": "" }, { "first": "M.-Y.", "middle": [], "last": "Choi", "suffix": "" } ]
1,994
10.1109/VISUAL.1994.346291
Proceedings Visualization '94
Proceedings Visualization '94
2152948234
[ "6919758", "9226468", "9713881" ]
[ "7047303", "5413420", "8048185", "206806257", "16493684", "119735295" ]
true
true
true
https://api.semanticscholar.org/CorpusID:61766620
0
0
0
1
0
Scene Independent Real-Time Indirect Illumination
9,090,129
A novel method for real-time simulation of indirect illumination is presented in this paper. The method, which we call direct radiance mapping (DRM), is based on basal radiance calculations and does not impose any restrictions on scene geometry or dynamics. This makes the method tractable for real-time rendering of arbitrary dynamic environments and for interactive preview of feature animations. Through DRM we simulate two diffuse reflections of light, but can also, in combination with traditional real-time methods for specular reflections, simulate more complex light paths. DRM is a GPU-based method, which can draw further advantages from upcoming GPU functionalities. The method has been tested for moderately sized scenes with close to real-time frame rates and it scales with interactive frame rates for more complex scenes.
[ { "first": "Jeppe", "middle": [ "Revall" ], "last": "Frisvad", "suffix": "" }, { "first": "R.", "middle": [ "R." ], "last": "Frisvad", "suffix": "" }, { "first": "Niels", "middle": [ "Jørgen" ], "last": "Christensen", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Falster", "suffix": "" } ]
2,005
10.1109/CGI.2005.1500412
International 2005 Computer Graphics
International 2005 Computer Graphics
2172625127
[ "408793", "9112114", "564368", "59189035", "13208309", "6804547", "18645782", "10376535", "15136127", "14218438", "760185", "787768", "16607390", "5784854", "324277", "6541329", "2629883" ]
[ "29969332" ]
true
true
true
https://api.semanticscholar.org/CorpusID:9090129
1
1
1
1
1
SOM-empowered graph segmentation for fast automatic clustering of large and complex data
31,323,336
Many clustering methods, including modern graph segmentation algorithms, run into limitations when encountering “Big Data”, data with high feature dimensions, large volume, and complex structure. SOM-based clustering has been demonstrated to accurately capture many clusters of widely varying statistical properties in such data. While a number of automated SOM segmentations have been put forward, the best identifications of complex cluster structures to date are those performed interactively from informative visualizations of the learned SOM's knowledge. This does not scale for Big Data, large archives or near-real time analyses for fast decision-making. We present a new automated approach to SOM-segmentation which closely approximates the precision of the interactive method for complicated data, and at the same time is very fast and memory-efficient. We achieve this by infusing SOM knowledge into leading graph segmentation algorithms which, by themselves, produce extremely poor results segmenting the SOM prototypes. We use the SOM prototypes as input vectors and CONN similarity measure, derived from the SOM's knowledge of the data connectivity, as edge weighting to the graph segmentation algorithms. We demonstrate the effectiveness on synthetic data and on real spectral imagery.
[ { "first": "Erzsébet", "middle": [], "last": "Merényi", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Taylor", "suffix": "" } ]
2,017
10.1109/WSOM.2017.8020004
2017 12th International Workshop on Self-Organizing Maps and Learning Vector Quantization, Clustering and Data Visualization (WSOM)
2017 12th International Workshop on Self-Organizing Maps and Learning Vector Quantization, Clustering and Data Visualization (WSOM)
2750926789
[ "198168767", "13132697", "9360834", "5542349", "17226166", "41664420", "9605560", "197764", "282206", "1914181", "46028616", "41003807", "18837694", "18898859", "12360337", "19454725", "13277680", "1526800", "10211629", "8977721", "138996", "5809831", "32863022", "15478415", "334423", "16923281", "18915162", "28585256" ]
[ "3386933", "195240205" ]
true
true
true
https://api.semanticscholar.org/CorpusID:31323336
0
0
0
1
0
VIRTUAL REALITY THERAPY: AN EFFECTIVE TREATMENT FOR THE FEAR OF PUBLIC SPEAKING
53,885,965
The major goal of this research was to investigate the efficacy of virtual reality therapy (VRT) in the treatment of the fear of public speaking. After an extensive two-stage screening process, sixteen subjects were selected from the pool. They were assigned to two treatment conditions: VRT (N=8) and comparison group (N=8). Fourteen subjects completed the study. The VRT group was exposed to the virtual reality public speaking scene while the comparison group was exposed to a trivial virtual reality scene and guided by the experimenters to manage their phobia either by using visualization techniques or self-exposure to the situation they feared. The VRT and comparison group sessions were conducted on an individual basis over a five-week period. Two assessment measures were used in this study. The first measure used was the Attitude Towards Public Speaking (ATPS) Questionnaire. The second measure used was the eleven-point Subjective Units of Disturbance (SUD) scale. These measurements assessed the anxiety, avoidance, attitudes and disturbance associated with their fear of public speaking before and after treatments. In addition, objective measures such as heart rate were collected in each stage of the treatment. Significant differences between the six subjects who completed the VRT sessions and the comparison group were found on all measures. The VRT group showed significant improvement after five weeks of treatment. The comparison group did not show any meaningful changes. The authors concluded that VRT was successful in reducing the fear of public speaking.
[ { "first": "Max", "middle": [ "M." ], "last": "North", "suffix": "" }, { "first": "Sarah", "middle": [ "M." ], "last": "North", "suffix": "" }, { "first": "Joseph", "middle": [ "R." ], "last": "Coble", "suffix": "" } ]
1,998
10.20870/ijvr.1998.3.3.2625
International Journal of Virtual Reality
1209638735
[]
[ "201832827", "13169472", "204082711", "16996459", "14912010", "10421064", "9183562", "14803284", "3184651", "14363820", "3283790", "10564036", "18711182", "41386410", "7654196", "3148729", "145726631", "149521782", "436354", "12040", "28903700", "10046699", "210931185", "23440335", "214666102", "8288099", "69389354", "7027480", "69638873", "8639350", "30779792", "20063003", "207225591", "39387548", "53389776", "22945176", "3390102", "17118256", "207959855", "21432871", "151855213", "26559885", "14098033", "209498205", "17702568", "18658547" ]
false
true
false
https://api.semanticscholar.org/CorpusID:53885965
null
null
null
null
null
Three-Dimensional Graphical Primitives
183,896,424
[ { "first": "Tom", "middle": [], "last": "Wickham-Jones", "suffix": "" } ]
1,994
10.1007/978-1-4612-2586-7_9
Mathematica Graphics
Mathematica Graphics
2493811302
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:183896424
null
null
null
null
null
The conversion of diagrams to knowledge bases
9,931,429
If future electronic documents are to be truly useful, one must devise ways to automatically turn them into knowledge bases. In particular, one must be able to do this for diagrams. This paper discusses biological diagrams. The author describes the three major aspects of diagrams: visual salience, domain conventions and pragmatics. He next describes the organization of diagrams into informational and substrate components. The latter are typically collections of objects related by generalized equivalence relations. To analyze diagrams, the author defines graphics constraint grammars (GCGs) that can be used for both syntactic and semantic analysis. Each grammar rule describes a rule object and consists of the production, describing the constituents of the object, constraints that must hold between the constituents and propagators that build properties of the rule object from the constituents. The author discusses how a mix of parsing and constraint satisfaction techniques is used to parse diagrams with GCGs.
[ { "first": "R.P.", "middle": [], "last": "Futrelle", "suffix": "" } ]
1,992
10.1109/WVL.1992.275754
Proceedings IEEE Workshop on Visual Languages
Proceedings IEEE Workshop on Visual Languages
1521565430
[ "59913682", "12413533", "5993378" ]
[ "13434759" ]
true
true
true
https://api.semanticscholar.org/CorpusID:9931429
0
0
0
1
0
Real world video avatar: transmission and presentation of human figure
26,509,355
Video avatar (Ogi et al., 2001) is one methodology of interaction with people at a remote location. By using such video-based real-time human figures, participants can interact using nonverbal information such as gestures and eye contact. In traditional video avatar interaction, however, participants can interact only in "virtual" space. We have proposed the concept of a "real-world video avatar", that is, the concept of video avatar presentation in "real" space. One requirement of such a system is that the presented figure must be viewable from various directions, similarly to a real human. In this paper such a view is called "multiview". By presenting a real-time human figure with "multiview", many participants can interact with the figure from all directions, similarly to interaction in the real world. A system that supports "multiview" was proposed by Endo et al. (2000); however, this system cannot show real-time images. We have developed a display system which supports "multiview" (Maeda et al., 2002). In this paper, we discuss the evaluation of real-time presentation using the display system.
[ { "first": "H.", "middle": [], "last": "Maeda", "suffix": "" }, { "first": "T.", "middle": [], "last": "Tanikawa", "suffix": "" }, { "first": "J.", "middle": [], "last": "Yamashita", "suffix": "" }, { "first": "K.", "middle": [], "last": "Hirota", "suffix": "" }, { "first": "M.", "middle": [], "last": "Hirose", "suffix": "" } ]
2,004
10.1109/VR.2004.64
IEEE Virtual Reality 2004
IEEE Virtual Reality 2004
2168047577
[ "6299296" ]
[ "40540920", "29869923", "8587575", "1854088", "19180941" ]
true
true
true
https://api.semanticscholar.org/CorpusID:26509355
0
0
0
1
0
Image Edge Sharpening Based on the Peculiarity of Human Vision System
64,446,434
The human vision system (HVS) is more sensitive to abrupt and irregular changes in image regions than to smooth and regular fluctuations. This peculiarity of the HVS, together with the statistical characteristics of the image background, is used to set up a dynamic threshold for edge extraction. Experiments show that the Roberts' gradient-based approach to edge sharpening introduces noise from the background, while the HVS approach can effectively suppress the noise.
[ { "first": "Chen", "middle": [], "last": "Zhong", "suffix": "" } ]
2,001
Journal of Computer-aided Design & Computer Graphics
2382982373
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:64446434
null
null
null
null
null
Towards an Adaptive Communication Aid with Text Input from Ambiguous Keyboards
10,202,039
Ambiguous keyboards provide efficient typing with low motor demands. In our project concerning the development of a communication aid, we emphasize adaptation with respect to the sensory input. At the same time, we wish to impose individualized language models on the text determination process. UKO-II is an open architecture based on the Emacs text editor with a server/client interface for adaptive language models. Not only motor-impaired people but also users of watch-sized devices can profit from this ambiguous typing.
[ { "first": "Michael", "middle": [], "last": "Kühn", "suffix": "" }, { "first": "Jorn", "middle": [], "last": "Garbe", "suffix": "" } ]
2,001
E03-2006
10.3115/1067737.1067786
HCI
2112585214
[ "62177431", "8989479", "58069857", "142692851", "17260130", "63118375", "18256048", "15685517", "56503130", "6477825", "60722378", "59981179", "15001882", "59915284", "2820058", "62550214" ]
[ "43697329", "14256751", "6419250", "53104067", "22408696", "12175687", "18256048", "20332397", "63147060", "2616978", "16452441", "134006" ]
true
true
true
https://api.semanticscholar.org/CorpusID:10202039
1
1
1
1
1
Anthropometric Measurement of the Hands of Chinese Children
10,207,006
This paper presents the results of a nationwide anthropometric survey conducted on children in China. Eight hand anthropometric dimensions were measured from 20,000 children aged 4 to 17 years. Mean values, standard deviations, and the 5th and 95th percentiles for each dimension were estimated. Differences between ages and genders, and between Chinese and Japanese children, were analyzed. It was found that the mean values of the dimensions showed a gradual increase with age. The dimensions showed no significant difference between genders for children aged 4 to 12, but the difference became significant for children aged 13 to 17. Comparison between Chinese and Japanese children showed that Chinese children tended to have relatively longer and broader hands than Japanese children. These data, previously lacking in China, can benefit the design of children's products.
[ { "first": "Linghua", "middle": [], "last": "Ran", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chuzhi", "middle": [], "last": "Chao", "suffix": "" }, { "first": "Taijie", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Tingting", "middle": [], "last": "Dong", "suffix": "" } ]
2,009
10.1007/978-3-642-02809-0_6
HCI
1574088522
[]
[ "59035594", "58705618", "124708019" ]
false
true
false
https://api.semanticscholar.org/CorpusID:10207006
null
null
null
null
null
Haptic vibrations for hands and bodies
38,374,435
The tactile sense directly appeals to deep feeling without language or words. The tactile sense has a language; however, the way of using that language is obscure. If we could have full command of the language of the tactile sense, and express and communicate feeling by using it, cold media would change into intimate, organic ones.
[ { "first": "Yasuhiro", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "Rieko", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "Junji", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "Ayano", "middle": [], "last": "Yoshida", "suffix": "" }, { "first": "Sakurazawa", "middle": [], "last": "Shigeru", "suffix": "" } ]
2,015
10.1145/2818384.2818389
SIGGRAPH Asia Haptic Media And Contents Design
2296631788
[ "1438530", "11624275", "15762651" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:38374435
0
0
0
1
0
Yarn: Generating Storyline Visualizations Using HTN Planning
44,063,988
[ { "first": "Kalpesh", "middle": [], "last": "Padia", "suffix": "" }, { "first": "Kaveen", "middle": [ "Herath" ], "last": "Bandara", "suffix": "" }, { "first": "Christopher", "middle": [ "G." ], "last": "Healey", "suffix": "" } ]
2,018
Graphics Interface
2998114654
[ "170518", "14970263", "89377", "684591", "17350043", "192216100", "2119961", "10127876", "8623866", "118898500", "195346574", "170456426", "10436985", "11443269", "17377497", "6983780", "2372981", "2209400", "26980457", "5883024", "191523823", "2224112", "1603489", "18189304", "15870536", "5410847", "142819652" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:44063988
0
0
0
1
0
Color Interpolation for Non-Euclidean Color Spaces
115,151,146
Color interpolation is critical to many applications across a variety of domains, like color mapping or image processing. Due to the characteristics of the human visual system, color spaces whose distance measure is designed to mimic perceptual color differences tend to be non-Euclidean. In this setting, a generalization of established interpolation schemes is not trivial. This paper presents an approach to generalize linear interpolation to colors for color spaces equipped with an arbitrary non-Euclidean distance measure. It makes use of the fact that in Euclidean spaces, a straight line coincides with the shortest path between two points. Additionally, we provide an interactive implementation of our method for the CIELAB color space using the CIEDE2000 distance measure integrated into VTK and ParaView.
[ { "first": "Max", "middle": [], "last": "Zeyen", "suffix": "" }, { "first": "Tobias", "middle": [], "last": "Post", "suffix": "" }, { "first": "Hans", "middle": [], "last": "Hagen", "suffix": "" }, { "first": "James", "middle": [], "last": "Ahrens", "suffix": "" }, { "first": "David", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Roxana", "middle": [], "last": "Bujack", "suffix": "" } ]
2,018
10.1109/SciVis.2018.8823597
2018 IEEE Scientific Visualization Conference (SciVis)
2018 IEEE Scientific Visualization Conference (SciVis)
2971587657
[ "61257019", "29008904", "39497315", "121989290", "20023327", "15630200", "62577768", "125407264", "122917960", "59744184", "120048540", "122953120", "120221231", "118303141", "42513268", "119458758", "17711899", "59044042", "120429150", "46777578", "62665066", "579822", "120654719", "120423664", "7626855", "16053636", "12024605", "206749265" ]
[ "202558776", "199405597" ]
true
true
true
https://api.semanticscholar.org/CorpusID:115151146
0
0
0
1
0
The Study of Abstraction of the Shape Feature of the Machine Part
63,495,321
This paper provides a method performed in a three-dimensional modeling environment. Based on Boolean operations and parametric design, each elementary body can be picked up from the machine part. The method works on the basis of a knowledge repository and fuzzy comprehensive evaluation. It simulates the thinking of the human brain and filters out the main body of the part. The main feature of the part can be found by using the quadric factor of the optimum direction of the structure. The main view of the body matching engineering requirements is automatically created from the three-dimensional model without human intervention. This lays a good foundation for automatically creating two-dimensional drawings in three-dimensional design.
[ { "first": "Yao", "middle": [], "last": "Hui-xue", "suffix": "" } ]
2,002
Journal of Engineering Graphics
2372867041
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:63495321
null
null
null
null
null
Pen-and-Paper User Interfaces
43,322,023
Even at the beginning of the 21st century, we are far from becoming paperless. Pen and paper is still the only truly ubiquitous information processing technology. Pen-and-paper user interfaces bridge the gap between paper and the digital world. Rather than replacing paper with electronic media, they seamlessly integrate both worlds in a hybrid user interface. Classical paper documents become interactive. This opens up a huge field of novel computer applications at our workplaces and in our homes. This book provides readers with a broad and extensive overview of the field, so as to provide a full and up-to-date picture of pen-and-paper computing. It covers the underlying technologies, reviews the variety of modern interface concepts and discusses future directions of pen-and-paper computing. Based on the author’s award-winning dissertation, the book also provides the first theoretical interaction model of pen-and-paper user interfaces and an integrated set of interaction techniques for knowledge workers. The model proposes a ‘construction set’ of core interactions that are helpful in designing solutions that address the diversity of pen-and-paper environments. The interaction techniques, concrete instantiations of the model, provide innovative support for working with printed and digital documents. They integrate well-established paper-based practices with concepts derived from hypertext and social media. Researchers, practitioners who are considering deploying pen-and-paper user interfaces in real-world projects, and interested readers from other research disciplines will find the book an invaluable reference source. Also, it provides an introduction to pen-and-paper computing for the academic curriculum. The present book was overdue: a thorough, concise, and well-organized compendium of marriages between paper-based and electronic documents.
[ { "first": "Jürgen", "middle": [], "last": "Steimle", "suffix": "" } ]
2,012
10.1007/978-3-642-20276-6
Human-Computer Interaction Series
183795213,2483615756
[]
[ "215328905", "17977761", "53998020", "5218480" ]
false
true
false
https://api.semanticscholar.org/CorpusID:43322023
null
null
null
null
null
First Passage Percolation
59,829,488
[ { "first": "Vladas", "middle": [], "last": "Sidoravicius", "suffix": "" } ]
2,009
Computer Graphics Forum
62788130
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:59829488
null
null
null
null
null
Immersive Auralization Using Headphones
58,462,533
[ { "first": "Michele", "middle": [], "last": "Geronazzo", "suffix": "" } ]
2,019
10.1007/978-3-319-08234-9_257-1
Encyclopedia of Computer Graphics and Games
2795027797
[ "117099185", "67823799", "56322719", "206479322", "1361428", "122184101", "61633201", "116618800", "15949108", "106997834", "16125450" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:58462533
0
0
0
1
0
Information Classification and Organization Using Neuro-Fuzzy Model for Event Pattern Retrieval
69,946,611
Classifying the sentences that describe Events is an important task for many applications. In this chapter, Event patterns are identified and extracted at sentence level using term features. The terms that trigger Events along with the sentences are extracted from Web documents. The sentence structures are analysed using POS tags. A hierarchical sentence classification model is presented by considering specific term features of the sentence, and the rules are derived. The rules fail to define a clear boundary between the patterns and create ambiguity and impreciseness. To overcome this, suitable fuzzy rules are derived which give importance to all term features of the sentence. The fuzzy rules are constructed with more variables and generate sixteen patterns. An adaptive neuro-fuzzy inference system (ANFIS) model is presented for training and classifying the sentence patterns to capture the knowledge present in sentences. The obtained patterns are assigned linguistic grades based on previous classification knowledge. These grades represent the type and quality of information in the patterns. The membership function is used to evaluate the fuzzy rules. The patterns take membership values in [0, 1], which determine the weights for each pattern. Later, higher-weighted patterns are considered to build the Event Corpus, which helps in retrieving useful and interesting information about Event Instances. The classification performance of the presented approach is evaluated for the ‘Crime’ Event by crawling documents from the WWW and also evaluated on a benchmark dataset for the ‘Die’ Event. It is found that the performance of the presented approach is encouraging when compared with recently proposed similar approaches.
[ { "first": "S.", "middle": [ "G." ], "last": "Shaila", "suffix": "" }, { "first": "A.", "middle": [], "last": "Vadivel", "suffix": "" } ]
2,018
10.1007/978-981-13-2559-5_2
Textual and Visual Information Retrieval using Query Refinement and Pattern Analysis
Textual and Visual Information Retrieval using Query Refinement and Pattern Analysis
2893717409
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:69946611
null
null
null
null
null
Towards a Framework for Adaptive Web Applications
13,859,722
We have developed a framework to support adaptive elements in Web pages. In particular we focus on adaptive menus. Developers are able to define rules for menu adaptation according to the features of the device and browser in use. This paper briefly describes the selected adaptation patterns and their implementation.
[ { "first": "Ana Isabel", "middle": [], "last": "Sampaio", "suffix": "" }, { "first": "José Creissac", "middle": [], "last": "Campos", "suffix": "" } ]
2,014
10.1007/978-3-319-07857-1_43
HCI
2340078811
[ "13215955", "2487546", "18862357" ]
[ "1054943", "49385352" ]
true
true
true
https://api.semanticscholar.org/CorpusID:13859722
1
1
1
1
1
Using 3D anthropometric data for the modelling of customised head immobilisation masks
133,044,861
Head immobilization thermoplastic masks for radiotherapy purposes involve a distressful modelling procedure for the patient. To assess the possibility of using different acquisition and rec...
[ { "first": "Maria", "middle": [ "Amélia", "Ramos" ], "last": "Loja", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Sousa", "suffix": "" }, { "first": "Lina", "middle": [], "last": "Vieira", "suffix": "" }, { "first": "D.", "middle": [ "M.", "S." ], "last": "Costa", "suffix": "" }, { "first": "D.", "middle": [ "S." ], "last": "Craveiro", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Parafita", "suffix": "" }, { "first": "Durval", "middle": [ "C." ], "last": "Costa", "suffix": "" } ]
2,019
10.1080/21681163.2018.1507840
CMBBE: Imaging & Visualization
CMBBE: Imaging & Visualization
2931942349
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:133044861
null
null
null
null
null
Muntermacher - think and move interface and interaction design of a motion-based serious game for the generation plus
28,458,834
This paper presents a holistic approach to designing a media system based on a new user interface and interaction device aimed at motivating seniors of the generation plus to enhance their daily physical activity. In the newly designed game, the senior finds himself within a colorful game world in which he interacts with small lively figures using a newly designed interaction device that accounts for physical activity. The combination of both design elements leads to a gameplay that provides adequate mechanisms for cognitive and physical activity, challenging representatives of the generation plus to exercise more.
[ { "first": "Holger", "middle": [], "last": "Graf", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Tamanini", "suffix": "" }, { "first": "Lukas", "middle": [], "last": "Geissler", "suffix": "" } ]
2,011
10.1007/978-3-642-21663-3_16
HCI
118439190
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:28458834
null
null
null
null
null
Madagascar: bringing a new visual style to the screen
20,276,441
[ { "first": "Philippe", "middle": [], "last": "Gluckman", "suffix": "" }, { "first": "Denise", "middle": [], "last": "Minter", "suffix": "" }, { "first": "Kendal", "middle": [], "last": "Chronkhite", "suffix": "" }, { "first": "Cassidy", "middle": [], "last": "Curtis", "suffix": "" }, { "first": "Milana", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Vogt", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Singer", "suffix": "" } ]
2,005
10.1145/1198555.1198569
SIGGRAPH '05
2752022528,1998376160
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:20276441
null
null
null
null
null
Combining statistical independence testing, visual attribute selection and automated analysis to find relevant attributes for classification
33,571,854
We present an iterative strategy for finding a relevant subset of attributes for the purpose of classification in high-dimensional, heterogeneous data sets. The attribute subset is used for the construction of a classifier function. In order to cope with the challenge of scalability, the analysis is split into an overview of all attributes and a detailed analysis of small groups of attributes. The overview provides generic information on statistical dependencies between attributes. With this information the user can select groups of attributes and an analytical method for their detailed analysis. The detailed analysis involves the identification of redundant attributes (via classification or regression) and the creation of summarizing attributes (via clustering or dimension reduction). Our strategy does not prescribe specific analytical methods. Instead, we recursively combine the results of different methods to find or generate a subset of attributes to use for classification.
[ { "first": "Thorsten", "middle": [], "last": "May", "suffix": "" }, { "first": "James", "middle": [], "last": "Davey", "suffix": "" }, { "first": "Jorn", "middle": [], "last": "Kohlhammer", "suffix": "" } ]
2,010
10.1109/VAST.2010.5654445
2010 IEEE Symposium on Visual Analytics Science and Technology
2010 IEEE Symposium on Visual Analytics Science and Technology
1985213615
[ "379259", "6765950", "13258936", "122792460", "4516164" ]
[ "26243143" ]
true
true
true
https://api.semanticscholar.org/CorpusID:33571854
0
0
0
1
0
Clamping: A method of antialiasing textured surfaces by bandwidth limiting in object space
2,933,014
An object space method is given for interpolating between sampled and locally averaged signals, resulting in an antialiasing filter which provides a continuous transition from a sampled signal to its selectively dampened local averages. This method is applied to the three standard Euclidean dimensions and time, resulting in spatial and frame to frame coherence. The theory allows filtering of a variety of functions, including continuous and discrete representations of planar texture.
[ { "first": "Alan", "middle": [], "last": "Norton", "suffix": "" }, { "first": "Alyn", "middle": [ "P." ], "last": "Rockwood", "suffix": "" }, { "first": "Philip", "middle": [ "T." ], "last": "Skolmoski", "suffix": "" } ]
1,982
10.1145/800064.801252
SIGGRAPH '82
167041699,2000534534
[]
[ "14054835", "10718810", "6641429", "7059303", "17336541", "15856919", "16126756", "9829272", "11869713", "18948489", "17989731", "17755788", "9923317", "10683989", "1659545", "59794619", "41743042", "17858254", "9583738", "762012", "198489822", "5722686", "18364005", "14651892", "26330286", "14860000", "6982805", "10521521", "2210332", "8547089", "18446891", "7989624", "8908938", "15510431", "62665901", "733005", "5488898", "1778124", "18618144", "2069627", "18515474", "15828046", "8551941", "31581311", "6398235", "14343262", "14272435", "30298060", "3183229" ]
false
true
true
https://api.semanticscholar.org/CorpusID:2933014
0
0
0
1
0
Midair Ultrasound Fragrance Rendering
3,935,961
We propose a system that controls the spatial distribution of odors in an environment by generating electronically steerable ultrasound-driven narrow air flows. The proposed system is designed not only to remotely present a preset fragrance to a user, but also to provide applications that would be conventionally inconceivable, such as: 1) fetching the odor of a generic object placed at a location remote from the user and guiding it to his or her nostrils, or 2) nullifying the odor of an object near a user by carrying it away before it reaches his or her nostrils (Fig. 1). These are all accomplished with an ultrasound-driven air stream serving as an airborne carrier of fragrant substances. The flow originates from a point in midair located away from the ultrasound source and travels while accelerating and maintaining its narrow cross-sectional area. These properties differentiate the flow from conventional jet- or fan-driven flows and contribute to achieving a midair flow. In our system, we employed a phased array of ultrasound transducers so that the traveling direction of the flow could be electronically and instantaneously controlled. In this paper, we describe the physical principle of odor control, the system construction, and experiments conducted to evaluate remote fragrance presentation and fragrance tracking.
[ { "first": "Keisuke", "middle": [], "last": "Hasegawa", "suffix": "" }, { "first": "Liwei", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Hiroyuki", "middle": [], "last": "Shinoda", "suffix": "" } ]
2,018
10.1109/TVCG.2018.2794118
IEEE Transactions on Visualization and Computer Graphics
IEEE Transactions on Visualization and Computer Graphics
2789958075
[ "2556491", "16862983", "78968", "18940591", "24431303", "11827733", "25132256", "9445165", "9839553", "9123127", "7380440", "122264492", "25608769", "7569587", "109646153", "3467880", "2078273", "30693159", "15182302" ]
[ "204229229", "85554614", "195357279", "51699252", "211040835", "54489179", "207947432", "52902491", "210178486" ]
true
true
true
https://api.semanticscholar.org/CorpusID:3935961
0
0
0
1
0
Streamline Selection for Comparative Visualization of 3D Fluid Simulation Result
30,553,934
Fluid dynamics simulation is often repeated while changing conditions, and therefore we need to compare a large number of results. In order to compare results under different conditions, it is effective to overlap the streamlines generated from each condition in a single 3D space. A streamline is a curved line that represents a wind flow. This paper presents a technique to automatically select and visualize important streamlines suitable for comparison of the simulation results. In addition, we present an implementation to observe the flow fields in virtual reality spaces.
[ { "first": "Shoko", "middle": [], "last": "Sawada", "suffix": "" }, { "first": "Takayuki", "middle": [], "last": "Itoh", "suffix": "" }, { "first": "Takashi", "middle": [], "last": "Misaka", "suffix": "" }, { "first": "Shigeru", "middle": [], "last": "Obayashi", "suffix": "" }, { "first": "Tobias", "middle": [], "last": "Czauderna", "suffix": "" }, { "first": "Kingsley", "middle": [], "last": "Stephens", "suffix": "" } ]
2,017
10.1109/iV.2017.60
2017 21st International Conference Information Visualisation (IV)
2017 21st International Conference Information Visualisation (IV)
2769455389,2626092488
[ "61107808", "710147", "681081", "2749488" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:30553934
0
0
0
1
0
Cross-cultural understanding of content and interface in the context of E-learning systems
6,056,252
This paper describes a comparative study in understanding content and interface in the context of e-learning systems by using anthropologists' and designers' cultural dimensions. The purpose was to determine the differences between Belgian and Palestinian audiences, and to find the most important cultural dimensions to use for localizing / internationalizing e-learning systems. Results indicate differences in culture between the two groups, but not as much as expected. The outcome shows that some preferences are similar, whilst others differ.
[ { "first": "Abdalghani", "middle": [], "last": "Mushtaha", "suffix": "" }, { "first": "Olga", "middle": [ "De" ], "last": "Troyer", "suffix": "" } ]
2,007
10.1007/978-3-540-73287-7_21
HCI
1528164757
[]
[ "201038155", "2811558", "12096059", "42761610", "11752898", "58910287", "2319674", "28130114", "159014353", "37247454", "195705525", "67794263", "16845986", "7610579", "6441298", "15008719", "35796520", "30322709", "16978303" ]
false
true
false
https://api.semanticscholar.org/CorpusID:6056252
null
null
null
null
null
Cascading classifier with discriminative multi-features for a specific 3D object real-time detection
3,272,192
Real-time specific 3D object detection plays an important role in intelligent service robots and intelligent surveillance fields. Compared to most existing approaches, which use simple template-matching methods, we present a novel discriminative learning-based method referred to as B-CST (BING - Colour + Shape + Texture) to detect a specific 3D object from a video in real time. Instead of the sliding-window technique, an original candidate extraction strategy is proposed, and a new cascade classifier for recognition is also developed. In the candidate extraction stage, the rapid and high-quality objectness measure, binarised normed gradients, is modified to highlight the target candidate regions as well as to suppress undesirable background regions. In the recognition stage, each candidate region is then verified and further classified into different categories, which are denoted as positive, including multi-view images of the target, or negative. The designed cascade classifiers conduct the recognition with discriminative multiple features, i.e. the novel dominant colour histogram, the histogram of oriented gradients and the original Gabor-CS-LTP feature, which is the centre-symmetric local ternary pattern of a special Gabor magnitude mapping. We evaluate our proposed method on our challenging new dataset consisting of 5 objects and two well-known public datasets and then compare it with other detection techniques for a single 3D object. A comparative study shows that our B-CST method is efficient in both high-quality detection results and detection speed, and can achieve the real-time processing requirements of video sequences (approximately 23 fps).
[ { "first": "Rui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Jing", "middle": [ "Wen" ], "last": "Xu", "suffix": "" }, { "first": "Zhi", "middle": [ "Hai" ], "last": "He", "suffix": "" } ]
2,018
10.1007/s00371-018-1472-3
The Visual Computer
The Visual Computer
2788912316
[ "12702037", "6979858", "3498589", "10351012", "16577175", "31266318", "61519247", "5832277", "207252185", "16657205", "8063966", "4649936", "206590483", "14712672", "44254715", "206787478", "3237692", "123534075", "53115578", "1782131", "189900", "4943234", "42281583", "116769286" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:3272192
0
0
0
1
0
Designing a graphical user interface to deliver a graphic design course online: a user-centered approach
26,893,388
[ { "first": "Uttam", "middle": [], "last": "Kokil", "suffix": "" } ]
2,006
10.1145/1179622.1179670
SIGGRAPH '06
1972795498
[]
[]
false
false
true
https://api.semanticscholar.org/CorpusID:26893388
0
0
0
0
0
Real-time marker prediction and CoR estimation in optical motion capture
15,102,027
Optical motion capture systems suffer from marker occlusions resulting in loss of useful information. This paper addresses the problem of real-time joint localisation of legged skeletons in the presence of such missing data. The data is assumed to be labelled 3d marker positions from a motion capture system. An integrated framework is presented which predicts the occluded marker positions using a Variable Turn Model within an Unscented Kalman filter. Inferred information from neighbouring markers is used as observation states; these constraints are efficient, simple, and real-time implementable. This work also takes advantage of the common case that missing markers are still visible to a single camera, by combining predictions with under-determined positions, resulting in more accurate predictions. An Inverse Kinematics technique is then applied ensuring that the bone lengths remain constant over time; the system can thereby maintain a continuous data-flow. The marker and Centre of Rotation (CoR) positions can be calculated with high accuracy even in cases where markers are occluded for a long period of time. Our methodology is tested against some of the most popular methods for marker prediction and the results confirm that our approach outperforms these methods in estimating both marker and CoR positions.
[ { "first": "Andreas", "middle": [], "last": "Aristidou", "suffix": "" }, { "first": "Joan", "middle": [], "last": "Lasenby", "suffix": "" } ]
2,011
10.1007/s00371-011-0671-y
The Visual Computer
The Visual Computer
2160194549
[ "39104715", "62478453", "27766429", "61260005", "1399408", "879716", "6860165", "1356255", "15462145", "32226762", "117898706", "11636306", "30176573", "3074310", "123487779", "1119340", "33826261", "24144460", "11904557", "517476", "1373495", "2137682", "11038004", "9675997", "7133465", "1395917", "122340800", "86766251", "13154909", "1200140", "122794686", "17166326", "18909287", "53247582", "13933853", "60622206", "13959160", "121204411", "1781652", "20713659", "15396988", "5897316", "7256978", "4551301", "51658067", "51653076", "12442866", "125256517", "2292769", "6450218", "20363920", "14189508", "17384355", "45895391", "10875229" ]
[ "206714053", "3328591", "14351079", "39659626", "215269549", "30804578", "15377737", "7367339", "3900057", "198186012", "64535067", "19216794", "68152234", "15379444", "10454312", "15233880", "2229011", "17148393", "1459596", "204917491", "71149943", "207291610", "202623445", "29621527", "5641284", "181531109", "195117", "17531000", "4900962", "214727772", "55592432" ]
true
true
true
https://api.semanticscholar.org/CorpusID:15102027
0
0
0
1
0
Regression modeling of reader's emotions induced by font based text signals
18,115,255
In this work we present a mathematical model of readers' emotional state responses triggered by font style, type and color. It is based on multiple regression analysis of repeated measures from 45 students for 35 textual stimuli using the Self-Assessment Manikin test. Based on the dimensional theory of emotions, we propose a model of how the emotional dimensions Pleasure, Arousal, and Dominance vary according to the typographic text signals: font style, font type and font/background color combinations. We observe that the "Pleasure" dimension is affected negatively by font type ("Arial" and "Times New Roman") and positively by the color brightness difference of font/background color combinations. "Arousal" and "Dominance" are affected only by color brightness difference (negative correlation). According to the proposed model, the font type "Arial" elicits a more pleasant emotional state than "Times New Roman". The results can be applied to augment the user interface experience or to add expressivity in Text-to-Speech systems and provide accessibility of typography-induced text signals.
[ { "first": "Dimitrios", "middle": [], "last": "Tsonos", "suffix": "" }, { "first": "Georgios", "middle": [], "last": "Kouroupetroglou", "suffix": "" }, { "first": "Despina", "middle": [], "last": "Deligiorgi", "suffix": "" } ]
2,013
10.1007/978-3-642-39191-0_48
HCI
2140342749
[ "60866127", "58860926", "15792594", "192523241", "144172165", "40856983", "15442401", "144411065", "121247505", "117044005", "34178409" ]
[ "28400749", "33781653", "26963495" ]
true
true
true
https://api.semanticscholar.org/CorpusID:18115255
0
0
0
1
0
Visual Exploration of Complex Time-Varying Graphs
195,904,394
Quasi-trees, namely graphs with tree-like structure, appear in many application domains, including bioinformatics and computer networks. Our new SPF approach exploits the structure of these graphs with a two-level approach to drawing, where the graph is decomposed into a tree of biconnected components. The low-level biconnected components are drawn with a force-directed approach that uses a spanning tree skeleton as a starting point for the layout. The higher-level structure of the graph is a true tree with meta-nodes of variable size that contain each biconnected component. That tree is drawn with a new area-aware variant of a tree drawing algorithm that handles high-degree nodes gracefully, at the cost of allowing edge-node overlaps. SPF performs an order of magnitude faster than the best previous approaches, while producing drawings of commensurate or improved quality
[ { "first": "Daniel", "middle": [], "last": "Archambault", "suffix": "" }, { "first": "Tamara", "middle": [], "last": "Munzner", "suffix": "" }, { "first": "David", "middle": [], "last": "Auber", "suffix": "" } ]
2,006
10.1109/TVCG.2006.193
IEEE Transactions on Visualization and Computer Graphics
2146279766
[]
[ "32711398", "3205365", "16712004", "44496899", "6613540", "3129559", "18172991", "201058688", "64304330", "11534228", "27976645", "9428201", "56384705", "393273", "47932739", "5957904", "6231851", "19032191", "10515668", "9001272", "11543025", "43227371", "9055654", "18564515", "44759982", "16196522", "14413901", "14610275", "14880675", "16186837" ]
false
true
false
https://api.semanticscholar.org/CorpusID:195904394
null
null
null
null
null
Real-Time Simulation of Granular Materials Using Graphics Hardware
2,703,426
We present a method, together with its data structure, to compute friction in a particle-based simulation of granular materials on GPUs. We use the Distinct Element Method (DEM) to compute the forces between particles. A previous method accelerates the Distinct Element Method on GPUs, but it does not compute friction. We implemented friction in the DEM simulation on GPUs, and this leads to real-time simulation of granular materials.
[ { "first": "R.", "middle": [], "last": "Yasuda", "suffix": "" }, { "first": "T.", "middle": [], "last": "Harada", "suffix": "" }, { "first": "Y.", "middle": [], "last": "Kawaguchi", "suffix": "" } ]
2,008
10.1109/CGIV.2008.45
2008 Fifth International Conference on Computer Graphics, Imaging and Visualisation
2008 Fifth International Conference on Computer Graphics, Imaging and Visualisation
2160654868
[ "8752170", "128484753", "62756490", "760185" ]
[ "3230039", "2987185", "31953596" ]
true
true
true
https://api.semanticscholar.org/CorpusID:2703426
0
0
0
1
0
A generic tool for interactive complex image editing
9,921,383
Plenty of complex image editing techniques require certain per-pixel property or magnitude to be known, e.g., simulating depth of field effects requires a depth map. This work presents an efficient interaction paradigm that approximates any per-pixel magnitude from a few user strokes by propagating the sparse user input to each pixel of the image. The propagation scheme is based on a linear least-squares system of equations which represents local and neighboring restrictions over superpixels. After each user input, the system responds immediately, propagating the values and applying the corresponding filter. Our interaction paradigm is generic, enabling image editing applications to run at interactive rates by changing just the image processing algorithm, but keeping our proposed propagation scheme. We illustrate this through three interactive applications: depth of field simulation, dehazing and tone mapping.
[ { "first": "Ana", "middle": [ "B." ], "last": "Cambra", "suffix": "" }, { "first": "Ana", "middle": [ "C." ], "last": "Murillo", "suffix": "" }, { "first": "Adolfo", "middle": [], "last": "Muñoz", "suffix": "" } ]
2,017
10.1007/s00371-017-1422-5
The Visual Computer
The Visual Computer
2745188842
[ "1806278", "10941677", "13589806", "45964351", "18774783", "15128952", "2595484", "2430892", "6981517", "960858", "990065", "4125913", "11792230", "207217221", "26617945", "489789", "18981337", "18362618", "8616813", "10005489", "11713946", "12568445", "6340621", "11098912", "14749316", "294150", "41797944", "5928873", "2234474", "15174950", "207745496", "9766024", "7118095", "4986779", "1210309", "10387823", "6797138", "15843992", "17733631", "276625", "14057273", "1529316", "14824889" ]
[ "57573745", "59409862" ]
true
true
true
https://api.semanticscholar.org/CorpusID:9921383
1
1
1
1
1
New Baiyun International Airport, Guangzhou City, China
9,923,548
The Chinese government needed a public relations tool to promote this completely new major airport construction project to airline customers worldwide. The architectural staff created the 3D exterior shell in FormZ. This was passed to the animation staff to detail the interior using 3D Studio Max. A seven-minute video shows how a passenger would travel from curbside through the terminal to aircraft boarding, and then arriving passengers going through to the baggage area. The 3D model was created and rendered using five dual-933 PCs; the animation is 10,800 frames long and required six weeks to build and render. Copyright held by creator.
[ { "first": "Jeff", "middle": [], "last": "Coleman", "suffix": "" } ]
2,001
10.1145/945191.945232
SIGGRAPH 2001
2064337473
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:9923548
null
null
null
null
null
Modeling Amorphous Natural Features
58,223,093
[ { "first": "Geoffrey", "middle": [ "Y." ], "last": "Gardner", "suffix": "" } ]
1,994
SIGGRAPH 1994
90803681
[]
[ "1819689", "16420507" ]
false
true
false
https://api.semanticscholar.org/CorpusID:58223093
null
null
null
null
null
Vocational training with combined real/virtual environments
547,220
[ { "first": "Eva", "middle": [], "last": "Hornecker", "suffix": "" }, { "first": "Bernd", "middle": [], "last": "Robben", "suffix": "" } ]
1,999
HCI (2)
1648132227
[ "18893272", "29855447" ]
[ "2194323", "53644293", "14799098", "17040092" ]
true
true
true
https://api.semanticscholar.org/CorpusID:547220
0
0
0
1
0
An integrated environment to visually construct 3D animations
547,687
In this paper, we present an expressive 3D animation environment that enables users to rapidly and visually prototype animated worlds with a fully 3D user-interface. A 3D device allows the specification of complex 3D motion, while virtual tools are visible mediators that live in the same 3D space as application objects and supply the interaction metaphors to control them. In our environment, there is no intrinsic difference between user interface and application objects. Multi-way constraints provide the necessary tight coupling among components that makes it possible to seamlessly compose animated and interactive behaviors. By recording the effects of manipulations, all the expressive power of the 3D user interface is exploited to define animations. Effective editing of recorded manipulations is made possible by compacting all continuous parameter evolutions with an incremental data-reduction algorithm, designed to preserve both geometry and timing. The automatic generation of editable representations of interactive performances overcomes one of the major limitations of current performance animation systems. Novel interactive solutions to animation problems are made possible by the tight integration of all system components. In particular, animations can be synchronized by using constrained manipulation during playback. The accompanying video-tape illustrates our approach with interactive sequences showing the visual construction of 3D animated worlds. All the demonstrations in the video were recorded live and were not edited.
[ { "first": "Enrico", "middle": [], "last": "Gobbetti", "suffix": "" }, { "first": "Jean-Francis", "middle": [], "last": "Balaguer", "suffix": "" } ]
1,995
10.1145/218380.218494
SIGGRAPH '95
2089102779
[ "747563", "17759403", "18398773", "18672743", "10572105", "9197179", "14383080", "15997476", "1738577", "59673253", "16864525", "8760698", "2379273", "13261800", "5099686", "11938104", "10020673", "59627130", "2726539", "17455309", "12693109", "59660301", "13904811", "15063944", "29812963" ]
[ "14567018", "8020998", "16748364", "9598772", "46067310", "14309313", "12295592", "1761612", "16709997", "59805844", "8112626", "14996527", "14939281", "413656", "5532133", "18539357", "8012225", "15973610", "11078552", "17658559", "6868957", "817482", "837150", "204917605", "6123231", "5967586" ]
true
true
true
https://api.semanticscholar.org/CorpusID:547687
1
1
1
1
1
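The Gobbetti and Balaguer abstract above compacts recorded parameter evolutions with an incremental data-reduction algorithm that preserves geometry and timing. That specific algorithm is not reproduced here; the sketch below is a simpler, offline Ramer-Douglas-Peucker-style reduction over (time, value) samples in the same spirit, with all function and variable names assumed.

```python
import numpy as np

def reduce_curve(times, values, tol=1e-2):
    """Simplify a sampled parameter curve of (time, value) pairs while keeping
    every removed sample within `tol` of the retained polyline.
    Offline Ramer-Douglas-Peucker; the paper's version is incremental."""
    pts = np.column_stack([times, values]).astype(float)

    def rdp(lo, hi):
        if hi - lo < 2:
            return [lo, hi]
        chord = pts[hi] - pts[lo]
        norm = np.linalg.norm(chord)
        rel = pts[lo + 1:hi] - pts[lo]
        if norm == 0.0:
            dist = np.linalg.norm(rel, axis=1)
        else:
            # 2D cross-product magnitude / chord length = point-to-line distance
            dist = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0]) / norm
        k = int(np.argmax(dist))
        if dist[k] <= tol:
            return [lo, hi]
        mid = lo + 1 + k
        return rdp(lo, mid)[:-1] + rdp(mid, hi)  # avoid duplicating the split point

    keep = rdp(0, len(pts) - 1)
    return pts[keep, 0], pts[keep, 1]

# Example: a noisy recorded manipulation of one animation parameter.
t = np.linspace(0.0, 2.0, 200)
v = np.sin(3.0 * t) + 0.01 * np.random.randn(200)
t2, v2 = reduce_curve(t, v, tol=0.05)
print(len(t), "samples reduced to", len(t2))
```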
OpenGL 4.3, Shaders and the Programmable Pipeline: Liftoff
131,948,491
[ { "first": "Sumanta", "middle": [], "last": "Guha", "suffix": "" } ]
2,018
10.1201/9780429464171-15
Computer Graphics Through OpenGL
Computer Graphics Through OpenGL
2927537385
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:131948491
null
null
null
null
null
A Pilot Study: VR and Binaural Sounds for Mood Management
54,453,638
Virtual Reality is defined as the implementation of a virtual world that the user perceives as the real one. This can lead to the physical feeling of teleportation into another environment, forgetting the real world and even the physical body. This sensation of immersion affects the stimuli (visual, acoustic and haptic) perceived by the user and can modify brainwave power. We think that this can be profitable for pain relief, as the patient receives many synchronized stimuli and needs to concentrate to process all the information, attenuating the pain sensation or changing the initial mood. For that reason, this work proposes a pilot study of a VR environment combined with binaural beats, colors and movements to evaluate the user's perception. It is believed that the use of different binaural beats over a long period of time can help patients induce a relaxed state (mood) and consequently change their perception of pain. The results of this work can be helpful for developing a pain management system with several configurable situations (VR scene, colour and sound combination, etc.). In this pilot study we apply 8 types of binaural sounds in a standard VR scenario and ask the end users to report the feeling they experienced in each case.
[ { "first": "Francisco", "middle": [ "J." ], "last": "Perales", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Sanchez", "suffix": "" }, { "first": "Laia", "middle": [], "last": "Riera", "suffix": "" }, { "first": "Silvia", "middle": [], "last": "Ramis", "suffix": "" } ]
2,018
10.1109/iV.2018.00083
2018 22nd International Conference Information Visualisation (IV)
2018 22nd International Conference Information Visualisation (IV)
2905519436
[ "22316153", "3648200", "15182749", "15344033", "17002960", "41523456", "34320590", "15724829", "8781932", "76589469", "8026613", "2866799", "1792120", "4821832", "27675387", "5708535", "4826345", "26574158", "4834981", "45085642", "25085536", "17015298", "241197", "927473", "38988215", "20023595", "4844295", "8211292", "41690853", "25606768", "145166133", "10660328", "51904205", "42421688", "107196534", "60227331" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:54453638
0
0
0
1
0
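The pilot study above pairs a VR scene with binaural beats. Purely as an illustration of the stimulus itself (not the study's audio pipeline), the sketch below synthesizes a stereo signal whose left and right carriers differ by the desired beat frequency; the function name and parameter values are assumptions.

```python
import numpy as np

def binaural_beat(carrier_hz=220.0, beat_hz=8.0, seconds=10.0, rate=44100):
    """Return a stereo float array (n, 2); the perceived beat equals the
    difference between the left and right carrier frequencies."""
    t = np.arange(int(seconds * rate)) / rate
    left = np.sin(2.0 * np.pi * carrier_hz * t)
    right = np.sin(2.0 * np.pi * (carrier_hz + beat_hz) * t)
    stereo = np.stack([left, right], axis=1).astype(np.float32)
    # Short fade in/out to avoid clicks when the clip starts and stops.
    fade = int(0.05 * rate)
    env = np.ones(len(t), dtype=np.float32)
    env[:fade] = np.linspace(0.0, 1.0, fade)
    env[-fade:] = np.linspace(1.0, 0.0, fade)
    return stereo * env[:, None]

signal = binaural_beat(carrier_hz=220.0, beat_hz=8.0)   # 8 Hz beat as an example
print(signal.shape)
```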
The evolution of three dimensional visualization for commanding the Mars rovers
31,741,152
NASA's Jet Propulsion Laboratory has built and operated four rovers on the surface of Mars. Two and three dimensional visualization has been extensively employed to command both the mobility and robotic arm operations of these rovers. Stereo visualization has been an important component in this set of visualization techniques. This paper discusses the progression of the implementation and use of visualization techniques for in-situ operations of these robotic missions. Illustrative examples will be drawn from the results of using these techniques over more than ten years of surface operations on Mars.
[ { "first": "Frank", "middle": [ "R." ], "last": "Hartman", "suffix": "" }, { "first": "John", "middle": [], "last": "Wright", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Cooper", "suffix": "" } ]
2,014
10.1109/3DVis.2014.7160100
2014 IEEE VIS International Workshop on 3DVis (3DVis)
2014 IEEE VIS International Workshop on 3DVis (3DVis)
1480727309
[ "55413957", "140141262", "128759291", "13203637", "1469206" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:31741152
0
0
0
1
0
HCI Architecting for System Reliability
60,337,530
The requirement for humans to be in control of machines i.e. complex systems gives rise to a need for human-computer interface (HCI) architecting to become part of the system design process. Current evidence tends to indicate that human error may be an increasing contributor to system failures. The degree to which human error appears to be on the increase may be related to the increased degree of system complexity. More specifically, complexity is a function of increased software intensiveness and computer hardware architecture dominance. HCI system architectures focus on the need for complex human-computer interface designs to be revisited to mitigate source conditions that contribute to human error.
[ { "first": "Raymond", "middle": [ "J." ], "last": "Martel", "suffix": "" } ]
1,996
10.1007/978-1-4613-1447-9_2
Human Interaction with Complex Systems
Human Interaction with Complex Systems
202910325
[]
[ "109024688" ]
false
true
false
https://api.semanticscholar.org/CorpusID:60337530
null
null
null
null
null
Predictive features for early cancer detection in Barrett's esophagus using volumetric laser endomicroscopy
19,087,061
The incidence of Barrett cancer is increasing rapidly and current screening protocols often miss the disease at an early, treatable stage. Volumetric Laser Endomicroscopy (VLE) is a promising new tool for finding this type of cancer early, capturing a full circumferential scan of Barrett's Esophagus (BE), up to 3-mm depth. However, the interpretation of these VLE scans can be complicated, due to the large amount of cross-sectional images and the subtle grayscale variations. Therefore, algorithms for automated analysis of VLE data can offer a valuable contribution to its overall interpretation. In this study, we broadly investigate the potential of Computer-Aided Detection (CADe) for the identification of early Barrett's cancer using VLE. We employ a histopathologically validated set of ex-vivo VLE images for evaluating and comparing a considerable set of widely-used image features and machine learning algorithms. In addition, we show that incorporating clinical knowledge in feature design, leads to a superior classification performance and additional benefits, such as low complexity and fast computation time. Furthermore, we identify an optimal tissue depth for classification of 0.5-1.0 mm, and propose an extension to the evaluated features that exploits this phenomenon, improving their predictive properties for cancer detection in VLE data. Finally, we compare the performance of the CADe methods with the classification accuracy of two VLE experts. With a maximum Area Under the Curve (AUC) in the range of 0.90-0.93 for the evaluated features and machine learning methods versus an AUC of 0.81 for the medical experts, our experiments show that computer-aided methods can achieve a considerably better performance than trained human observers in the analysis of VLE data.
[ { "first": "Fons", "middle": [], "last": "van der Sommen", "suffix": "" }, { "first": "Sander R.", "middle": [], "last": "Klomp", "suffix": "" }, { "first": "Anne-Fré", "middle": [], "last": "Swager", "suffix": "" }, { "first": "Svitlana", "middle": [], "last": "Zinger", "suffix": "" }, { "first": "Wouter L.", "middle": [], "last": "Curvers", "suffix": "" }, { "first": "Jacques J.G.H.M.", "middle": [], "last": "Bergman", "suffix": "" }, { "first": "Erik J.", "middle": [], "last": "Schoon", "suffix": "" }, { "first": "Peter H.N.", "middle": [], "last": "de With", "suffix": "" } ]
2,018
10.1016/j.compmedimag.2018.02.007
Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society
Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society
2797554225
[]
[ "57765673", "197434437", "203620587", "119301480", "5017616", "54581979", "201692900" ]
false
true
false
https://api.semanticscholar.org/CorpusID:19087061
null
null
null
null
null
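The VLE study above evaluates image features and machine-learning classifiers by cross-validated AUC. Its actual features, data and models are not reproduced here; the fragment below only sketches the generic shape of such a computer-aided detection pipeline on synthetic feature vectors, with hypothetical names.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for per-region feature vectors and histology labels
# (1 = early neoplasia, 0 = non-dysplastic); purely illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("mean AUC over folds:", auc.mean())
```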
Design of computer integrated safety and health management system
27,808,130
Diverse safety and health operation data collected and stored in different departments have not been fully integrated and utilized by managers, due to the poor design of safety and health information systems. The safety and health management system can be depicted, conceptually, as an organic system with a circulating information flow that carries the required data and information to specified workers and initiates the appropriate responses. This study aims to solve the problems of current safety and health management systems through the integration of human information processing theory, certified safety and health management assessment guidelines and regulations, and IT techniques. The objective of this study is to propose a framework for a computer-integrated safety and health management system.
[ { "first": "Hunszu", "middle": [], "last": "Liu", "suffix": "" } ]
2,007
10.1007/978-3-540-73283-9_45
HCI
1546141442
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:27808130
null
null
null
null
null
Spatial bounding of self-affine iterated function system attractor sets
52,858,420
An algorithm is presented which, given the parameters of an Iterated Function System (IFS) which uses affine maps, constructs a closed ball which completely contains the attractor set of the IFS. These bounding balls are almost always smaller than those computed by existing methods, and are sometimes much smaller. The algorithm is numerical in form, involving the optimisation of centre-point and radius relationships between the overall bounding ball and a set of smaller, contained balls which are derived by analysis of the contractive maps of the IFS. The algorithm is well-behaved, in that although it converges toward an optimal ball which it only achieves in the limit, the process may still be stopped after any finite number of steps, with a guarantee that the sub-optimal ball which is returned will still bound the attractor.
[ { "first": "Jonathan", "middle": [], "last": "Rice", "suffix": "" } ]
1,996
Graphics Interface
96688453
[]
[ "41687042", "13492809", "17022675", "16318489", "11087563", "708328", "12860931", "2389983", "12241382" ]
false
true
false
https://api.semanticscholar.org/CorpusID:52858420
null
null
null
null
null
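Rice's algorithm above optimizes the relationship between an overall bounding ball and per-map balls. The sketch below is not that optimization; it is a simpler conservative bound in the same spirit: choose a centre (here the mean of the maps' fixed points) and take the smallest radius for which every affine contraction maps the ball into itself. All names are illustrative.

```python
import numpy as np

def ifs_bounding_ball(maps):
    """maps: list of (A, b) pairs, each an affine contraction x -> A @ x + b.
    Returns (centre, radius) of a ball guaranteed to contain the attractor."""
    # Centre heuristic: average of each map's fixed point (I - A)^-1 b.
    dim = maps[0][0].shape[0]
    fixed = [np.linalg.solve(np.eye(dim) - A, b) for A, b in maps]
    c = np.mean(fixed, axis=0)
    # Smallest r with ||A c + b - c|| + s*r <= r for every map,
    # where s is the spectral norm of A (assumed < 1).
    r = 0.0
    for A, b in maps:
        s = np.linalg.norm(A, 2)
        r = max(r, np.linalg.norm(A @ c + b - c) / (1.0 - s))
    return c, r

# Sierpinski-triangle IFS as a quick check.
half = 0.5 * np.eye(2)
tri = [(half, np.array([0.0, 0.0])),
       (half, np.array([0.5, 0.0])),
       (half, np.array([0.25, 0.5]))]
centre, radius = ifs_bounding_ball(tri)
print(centre, radius)
```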
Lower Order Krawtchouk Moment-Based Feature-Set for Hand Gesture Recognition
29,258,734
The capability of lower order Krawtchouk moment-based shape features has been analyzed. The behaviour of 1D and 2D Krawtchouk polynomials at lower orders is observed by varying the Region of Interest (ROI). The paper measures the effectiveness of the shape recognition capability of 2D Krawtchouk features at lower orders on the basis of Jochen-Triesch's database and a hand gesture database of 10 Indian Sign Language (ISL) alphabets. A comparison of the original and reduced feature-sets is also done. Experimental results demonstrate that the reduced feature dimensionality gives competitive accuracy compared to the original feature-set for all the proposed classifiers. Thus, the Krawtchouk moment-based features prove to be effective in terms of shape recognition capability at lower orders.
[ { "first": "Bineet", "middle": [], "last": "Kaur", "suffix": "" }, { "first": "Garima", "middle": [], "last": "Joshi", "suffix": "" } ]
2,016
10.1155/2016/6727806
Adv. Human-Computer Interaction
Adv. Human-Computer Interaction
2299687342
[ "17359392", "16490295", "45832403", "6350297", "12544446", "207322316", "6431165", "123536979", "2176918", "7056275", "15145543", "122411997", "5895441", "18304587", "12145561", "57019115", "2478952", "1659109", "36623695", "12406156", "2930761", "15037168", "17794081", "7226530", "6103139", "43100365", "206651700", "22177610" ]
[ "211127766", "214899852", "51728673", "209380784" ]
true
true
true
https://api.semanticscholar.org/CorpusID:29258734
1
1
1
1
1
Isosurface Extraction and View-Dependent Filtering from Time-Varying Fields Using Persistent Time-Octree (PTOT)
14,354,326
We develop a new algorithm for isosurface extraction and view-dependent filtering from large time-varying fields, by using a novel persistent time-octree (PTOT) indexing structure. Previously, the persistent octree (POT) was proposed to perform isosurface extraction and view-dependent filtering, which combines the advantages of the interval tree (for optimal searches of active cells) and of the branch-on-need octree (BONO, for view-dependent filtering), but it only works for steady-state (i.e., single time step) data. For time-varying fields, a 4D version of POT, 4D-POT, was proposed for 4D isocontour slicing, where slicing on the time domain gives all active cells in the queried timestep and isovalue. However, such slicing is not output sensitive and thus the searching is sub-optimal. Moreover, it was not known how to support view-dependent filtering in addition to time-domain slicing. In this paper, we develop a novel persistent time-octree (PTOT) indexing structure, which has the advantages of POT and performs 4D isocontour slicing on the time domain with an output-sensitive and optimal searching. In addition, when we query the same isovalue q over m consecutive time steps, there is no additional searching overhead (except for reporting the additional active cells) compared to querying just the first time step. Such searching performance for finding active cells is asymptotically optimal, with asymptotically optimal space and preprocessing time as well. Moreover, our PTOT supports view-dependent filtering in addition to time-domain slicing. We propose a simple and effective out-of-core scheme, where we integrate our PTOT with implicit occluders, batched occlusion queries and batched CUDA computing tasks, so that we can greatly reduce the I/O cost as well as increase the amount of data being concurrently computed in the GPU. This results in an efficient algorithm for isosurface extraction with view-dependent filtering utilizing a state-of-the-art programmable GPU for time-varying fields larger than main memory. Our experiments on datasets as large as 192 GB (with 4 GB per time step) having no more than 870 MB of memory footprint in both preprocessing and run-time phases demonstrate the efficacy of our new technique.
[ { "first": "Cong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yi-Jen", "middle": [], "last": "Chiang", "suffix": "" } ]
2,009
10.1109/TVCG.2009.160
IEEE Transactions on Visualization and Computer Graphics
IEEE Transactions on Visualization and Computer Graphics
2098866624
[ "59827852", "8991573", "4482228", "185078", "16880440", "1747585", "6399916", "14732408", "16473296", "364871", "513350", "14480848", "42562369", "14070116", "17496404", "15545924", "15535453", "61977045", "1654364", "62959868", "14923051", "7402616", "15855255", "16416720", "8900296", "4096641", "17184925", "61948084" ]
[ "18684043", "14886295", "221592", "38349883", "18552344", "15455309", "53629472" ]
true
true
true
https://api.semanticscholar.org/CorpusID:14354326
1
1
1
1
1
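The PTOT structure above answers which cells are active for a queried isovalue over a range of time steps. The index itself is too involved for a short fragment; the sketch below only states the underlying predicate (a cell is active at time t when its scalar range spans the isovalue), evaluated brute force so that an optimized index could be checked against it. Names are assumed.

```python
import numpy as np

def active_cells(cell_min, cell_max, iso, t_start, t_end):
    """cell_min, cell_max: arrays of shape (T, n_cells) holding per-time-step
    scalar ranges of each cell. Returns, per queried time step, the indices of
    cells whose range spans the isovalue. Brute force, O(T * n_cells)."""
    out = {}
    for t in range(t_start, t_end + 1):
        mask = (cell_min[t] <= iso) & (iso <= cell_max[t])
        out[t] = np.nonzero(mask)[0]
    return out

# Tiny synthetic example: 4 time steps, 6 cells.
rng = np.random.default_rng(1)
lo = rng.random((4, 6))
hi = lo + rng.random((4, 6))
print(active_cells(lo, hi, iso=1.0, t_start=1, t_end=3))
```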
Symmetry Fields of Palladian Villas
118,816,727
[ { "first": "Matthew", "middle": [], "last": "Swarts", "suffix": "" } ]
2,013
10.5151/despro-sigradi2013-0016
Proceedings of the XVII Conference of the Iberoamerican Society of Digital Graphics - SIGraDi: Knowledge-based Design
Proceedings of the XVII Conference of the Iberoamerican Society of Digital Graphics - SIGraDi: Knowledge-based Design
1122427038
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:118816727
null
null
null
null
null
Research of navigation path planning algorithm in virtual scene
62,729,850
Navigation path planning in virtual scenes is a complicated issue in the current field of path planning. This paper studies a navigation path planning algorithm for complex large-scale virtual scenes, and achieves virtual environment navigation with path optimization algorithms from the mature robotics field. The experimental results show the effectiveness of the algorithm, in particular in complex virtual scenes.
[ { "first": "Xiaoyue", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xueying", "middle": [], "last": "Liu", "suffix": "" } ]
2,011
10.1117/12.906319
2011 International Conference on Photonics, 3D-Imaging, and Visualization
2011 International Conference on Photonics, 3D-Imaging, and Visualization
2067880307
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:62729850
null
null
null
null
null
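The navigation abstract above borrows path optimization algorithms from robotics; its specific algorithm is not given here. As a hedged illustration of the kind of planner such a system might start from, the sketch below runs a standard A* search on a small 2D occupancy grid, with assumed names.

```python
import heapq

def astar(grid, start, goal):
    """grid: 2D list where 0 = free and 1 = blocked; start/goal: (row, col).
    Returns a list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(a):  # Manhattan-distance heuristic (admissible for 4-connectivity)
        return abs(a[0] - goal[0]) + abs(a[1] - goal[1])

    open_heap = [(h(start), 0, start)]
    came_from, g = {}, {start: 0}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cost > g.get(cur, float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = cost + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None

scene = [[0, 0, 0, 0],
         [1, 1, 0, 1],
         [0, 0, 0, 0]]
print(astar(scene, (0, 0), (2, 0)))
```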
Interactive visual editing of grammars for procedural architecture
32,965,136
We introduce a real-time interactive visual editing paradigm for shape grammars, allowing the creation of rulebases from scratch without text file editing. In previous work, shape-grammar based procedural techniques were successfully applied to the creation of architectural models. However, those methods are text based, and may therefore be difficult to use for artists with little computer science background. Therefore the goal was to enable a visual work-flow combining the power of shape grammars with traditional modeling techniques. We extend previous shape grammar approaches by providing direct and persistent local control over the generated instances, avoiding the combinatorial explosion of grammar rules for modifications that should not affect all instances. The resulting visual editor is flexible: All elements of a complex state-of-the-art grammar can be created and modified visually.
[ { "first": "Markus", "middle": [], "last": "Lipp", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Wonka", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Wimmer", "suffix": "" } ]
2,008
10.1145/1360612.1360701
SIGGRAPH 2008
2011263905
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:32965136
null
null
null
null
null
Design and Evaluation of a Mixed-Reality Playground for Child‒Robot Games
57,784,901
In this article we present the Phygital Game project, a mixed-reality game platform in which children can play with or against a robot. The project was developed by adopting a human-centered design approach, characterized by the engagement of both children and parents in the design process, and situating the game platform in a real context—an educational center for children. We report the results of both the preliminary studies and the final testing session, which focused on the evaluation of usability factors. By providing a detailed description of the process and the results, this work aims at sharing the findings and the lessons learned about both the implications of adopting a human-centered approach across the whole design process and the specific challenges of developing a mixed-reality playground.
[ { "first": "Maria Luce", "middle": [], "last": "Lupetti", "suffix": "" }, { "first": "Giovanni", "middle": [], "last": "Piumatti", "suffix": "" }, { "first": "Claudio", "middle": [], "last": "Germak", "suffix": "" }, { "first": "and Fabrizio", "middle": [], "last": "Lamberti", "suffix": "" } ]
2,018
10.3390/mti2040069
Multimodal Technologies and Interaction
Multimodal Technologies and Interaction
2895003094
[ "12476939", "113544618", "67783451", "16329916", "4567714", "1460441", "18503588", "6375202", "1562116", "6847434", "14135817", "9160694", "114517594", "29481816", "52868393", "9092237", "62744419", "13623666", "5187418", "9057252", "3048854", "14623353", "59807153", "7902075", "977476", "15471080", "17710415", "36772224", "146757895", "28912378" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:57784901
1
1
1
1
1
Musictagger: exploiting user generated game data for music recommendation
9,369,789
The system "MusicTagger" is a game in which two players hear 30 seconds of a song, describe it independently and get points if they succeed in making the same descriptions. Additionally, it is a music recommendation system which compares songs with the help of the descriptions given in the game. MusicTagger is based on the principle of "human computation", meaning that problems (in this case, music recommendation) are solved by computers via eliciting human knowledge and making intelligent use of the aggregated information. This paper presents the design and implementation of the "MusicTagger" system together with results of an empirical lab study which demonstrates the potential of the recommendation engine.
[ { "first": "Hannes", "middle": [], "last": "Olivier", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Waselewsky", "suffix": "" }, { "first": "Niels", "middle": [], "last": "Pinkwart", "suffix": "" } ]
2,011
10.1007/978-3-642-21619-0_80
HCI
96066156
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:9369789
null
null
null
null
null
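MusicTagger above turns the descriptions that players agree on into a recommendation signal. Its implementation is not described in detail here; one minimal sketch of the recommendation side, assuming aggregated tag counts per song, compares songs by the cosine similarity of those counts:

```python
from collections import Counter
from math import sqrt

# Hypothetical aggregated tags collected from matching game rounds.
song_tags = {
    "song_a": Counter({"guitar": 5, "calm": 3, "acoustic": 4}),
    "song_b": Counter({"guitar": 4, "rock": 6, "loud": 2}),
    "song_c": Counter({"calm": 5, "piano": 4, "acoustic": 2}),
}

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(seed, k=2):
    scores = {s: cosine(song_tags[seed], t)
              for s, t in song_tags.items() if s != seed}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("song_a"))
```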
A Qualitative and Quantitative Characterisation of Style in Sign Language Gestures
12,419,592
[ { "first": "Alexis", "middle": [], "last": "Heloir", "suffix": "" }, { "first": "Sylvie", "middle": [], "last": "Gibet", "suffix": "" } ]
2,008
in "Advances in Gesture-Based Human-Computer Interaction and Simulation, GW 2007, Revised Selected Papers, Lecture Notes in Artificial Intelligence, LNAI 5085
2181154133
[ "61011455", "147096464", "199412908", "12383296", "53372689", "36327501", "26162826", "191641895", "10320835", "26271318", "17316622", "14235769", "1160180", "61430723", "64374543", "59859414", "557060", "5476378" ]
[ "14724943", "1675032", "17541074", "17541074", "6713916", "3818537", "21944237" ]
true
true
true
https://api.semanticscholar.org/CorpusID:12419592
0
0
0
1
0
Restoration of Partial Color Artifact and Blotches using histogram matching and sparse technique
14,183,975
In this paper we have proposed methods for restoration of artifacts called Partial Color Artifact(PCA) and Blotches which appear frequently in old video films. The PCA occurs due to partial loss of information in the upper color layers of the video film. As the inner most color layer is unaffected, the information present in this inner most color layer of the film aids in the reconstruction of damaged pixels from previously reconstructed frames. In Blotch artifact the pixel information is completely lost. The proposed Blotch reconstruction method is based on sparse recovery of signals from small number of measurements. Our blotch reconstruction process is computationally efficient because the image is segmented into non overlapping blocks and reconstruction is done block wise.
[ { "first": "V.", "middle": [], "last": "Narendra", "suffix": "" }, { "first": "Sumana", "middle": [], "last": "Gupta", "suffix": "" } ]
2,013
10.1109/NCVPRIPG.2013.6776171
2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)
2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)
1967740638
[ "16022652", "15777908", "15856203", "13919023", "9176374", "206737254", "12605120", "15777908" ]
[ "3144544" ]
true
true
true
https://api.semanticscholar.org/CorpusID:14183975
0
0
0
1
0
Augmented paper system: A framework for User's Personalized Workspace
11,996,532
In this paper, we present a framework for a “User's Personalized Workspace” by augmenting the physical paper and the digital document. Paper-based interactions are seamlessly integrated with digital-document-based interactions for reading as an activity. For instance, when the user is involved in a reading activity, writing becomes complementary. In an academic system, the paper-based presentation mode has facilitated such exercises. Besides rendering the annotation on the digital document and storing it in the database, the content of the paper that is encircled or underlined is used to hyperlink the document. Synchronizing a physical paper and its digital version in a seamless fashion from the user's perspective is the main objective of this work. We also compare our proposed system with existing systems, which focus on one activity or the other.
[ { "first": "Kavita", "middle": [], "last": "Bhardwaj", "suffix": "" }, { "first": "Santanu", "middle": [], "last": "Chaudhury", "suffix": "" }, { "first": "Sumantra", "middle": [ "Dutta" ], "last": "Roy", "suffix": "" } ]
2,013
10.1109/NCVPRIPG.2013.6776182
2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)
2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)
2170735775
[ "5200530", "5322959", "5767884", "28328779", "2350727", "1744458" ]
[ "54439029" ]
true
true
true
https://api.semanticscholar.org/CorpusID:11996532
0
0
0
1
0
Experimental investigation of the effect of control rod guide tubes on the breakup of a molten metal jet in the lower plenum of a boiling water reactor under isothermal conditions
581,377
It is important to clarify the molten material jet breakup process to estimate corium behavior in the lower plenum of a boiling water reactor (BWR). To identify the effect of control rod guide tubes (CRGTs) on the jet breakup behavior, a molten material (U-alloy) breakup experiment considering CRGTs in a BWR lower plenum was conducted under isothermal conditions. The experimental results show that the jet breakup fraction for the case with CRGTs (pitch/diameter ratio of 1.37) was only approximately 20 % of that for the case without CRGTs. A coarser pitch/diameter ratio of 2.47 was also tested, but this configuration only slightly reduced the amount of jet breakup. The experiments also indicate that the CRGTs had no significant effect on the fragmented droplet diameter. Furthermore, the velocity distribution around the jet was measured with particle image velocimetry. The velocities of the water surrounding the jet for the cases with CRGTs were relatively larger than those in the case without CRGTs.
[ { "first": "Hongyang", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Nejdet", "middle": [], "last": "Erkan", "suffix": "" }, { "first": "Koji", "middle": [], "last": "Okamoto", "suffix": "" } ]
2,017
10.1007/s12650-016-0390-6
Journal of Visualization
2515691100
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:581377
null
null
null
null
null
Interactive semi-automatic categorization for spinel group minerals
10,492,964
Spinel group minerals are excellent indicators of geological environments (tectonic settings). In 2001, Barnes and Roeder defined a set of contours corresponding to compositional fields for spinel group minerals. Geologists typically use these contours to estimate the tectonic environment where a particular spinel composition could have been formed. This task is prone to errors and requires tedious manual comparison of overlapping diagrams. We introduce a semi-automatic, interactive detection of tectonic settings for an arbitrary dataset based on the Barnes and Roeder contours. The new approach integrates the mentioned contours and includes a novel interaction called the contour brush. The new methodology is integrated in the Spinel Explorer system and it improves the scientist's workflow significantly.
[ { "first": "Maria", "middle": [], "last": "Lujan Ganuza", "suffix": "" }, { "first": "Florencia", "middle": [], "last": "Gargiulo", "suffix": "" }, { "first": "Gabriela", "middle": [], "last": "Ferracutti", "suffix": "" }, { "first": "Silvia", "middle": [], "last": "Castro", "suffix": "" }, { "first": "Ernesto", "middle": [], "last": "Bjerg", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Groller", "suffix": "" }, { "first": "Kresimir", "middle": [], "last": "Matkovic", "suffix": "" } ]
2,015
10.1109/VAST.2015.7347676
2015 IEEE Conference on Visual Analytics Science and Technology (VAST)
2015 IEEE Conference on Visual Analytics Science and Technology (VAST)
2182594610
[ "684636", "12363364", "12226487", "132090796" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:10492964
0
0
0
1
0
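The system above matches spinel compositions against the Barnes and Roeder compositional fields. The published contours are not reproduced here; a minimal sketch of the matching step is a point-in-polygon test of each analysis against each field outline, e.g. with matplotlib's Path. The field names and vertices below are made up.

```python
import numpy as np
from matplotlib.path import Path

# Hypothetical field outlines in some 2D compositional projection
# (the real Barnes & Roeder contours would be loaded from published data).
fields = {
    "mid-ocean ridge": Path([(0.1, 0.1), (0.5, 0.1), (0.5, 0.5), (0.1, 0.5)]),
    "arc":             Path([(0.4, 0.4), (0.9, 0.4), (0.9, 0.9), (0.4, 0.9)]),
}

samples = np.array([[0.2, 0.3],   # spinel analyses projected onto the same axes
                    [0.7, 0.8],
                    [0.45, 0.45]])

for name, poly in fields.items():
    inside = poly.contains_points(samples)
    print(name, "->", np.nonzero(inside)[0].tolist())
```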
Human motion retrieval using topic model
45,491,298
Content-based human motion retrieval has become important for animators with the development of motion editing and synthesis, which require searching for similar motions in large databases. Obtaining a text-based representation from the quantization of mocap data has turned out to be efficient, and it has become a fundamental step in much research on human motion analysis. Geometric features are one of these techniques; they incorporate substantial prior knowledge and reduce the redundancy of the numerical data. We use geometric features as the basic units that define human motions (also called mo-words) and view a human motion as a generative process. We then obtain topic motions, which carry more semantic information, by applying latent Dirichlet allocation to massive training examples in order to understand motions better. We combine this probabilistic model with human motion retrieval and arrive at a new representation of human motions and a new retrieval framework. Our experiments demonstrate its advantages, both for understanding motions and for retrieval.
[ { "first": "Mingyang", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Huaijiang", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Rongyi", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Li", "suffix": "" } ]
2,012
10.1002/cav.432
Journal of Visualization and Computer Animation
Journal of Visualization and Computer Animation
1479989763
[]
[ "146112455", "17579377", "25061617" ]
false
true
false
https://api.semanticscholar.org/CorpusID:45491298
null
null
null
null
null
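The retrieval method above treats quantized geometric features (mo-words) as words and learns topic motions with latent Dirichlet allocation. A minimal sketch of that pipeline on synthetic mo-word counts, using scikit-learn rather than the paper's own implementation, could look like:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

# Synthetic bag-of-mo-words matrix: one row per motion clip,
# one column per quantized geometric feature (mo-word).
rng = np.random.default_rng(0)
counts = rng.poisson(lam=1.0, size=(50, 120))

lda = LatentDirichletAllocation(n_components=8, random_state=0)
topics = lda.fit_transform(counts)          # per-clip topic distributions

def retrieve(query_idx, k=5):
    """Rank other clips by similarity of their topic distributions."""
    sims = cosine_similarity(topics[query_idx:query_idx + 1], topics)[0]
    order = np.argsort(-sims)
    return [i for i in order if i != query_idx][:k]

print(retrieve(0))
```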
An automatic 3D registration method for rock mass point clouds based on plane detection and polygon matching
132,493,323
Point cloud registration is an essential step in the process of 3D reconstruction. Considering that the surface of rock mass is complex and mainly composed of planes, in this paper, we introduce a novel and automatic 3D registration method for rock mass point clouds based on plane detection and polygon matching. Firstly, planes are detected from the rock mass point clouds by an efficient triple-region growing method, and then the corresponding polygons are calculated by a concave hull method. Secondly, a PCA-based polygon matching procedure is used for coarse registration. Finally, the ICP method is applied for fine registration. The performance of this method was tested on different rock mass point clouds. Compared with the existing methods, the proposed method provides a reliable and stable solution for accurately registering rock mass scenes.
[ { "first": "Liang", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Wang", "suffix": "" } ]
2,019
10.1007/s00371-019-01648-z
The Visual Computer
The Visual Computer
2935599427
[ "129508814", "128876331", "8075713", "8063966", "23587594", "206908931", "10255588", "111038279", "140579905", "1482463", "126540390", "120335822", "15022990", "10797632", "14670920", "15002864", "31893904", "119635941", "46359433", "1194129", "122917928", "7421244", "17366636", "18709319", "12349413", "21188199", "14658635", "7132597", "1559928", "16256691", "21188199", "129965390", "16227406", "2621807" ]
[ "203929437" ]
true
true
true
https://api.semanticscholar.org/CorpusID:132493323
0
0
0
1
0
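The registration pipeline above uses PCA-based polygon matching for coarse alignment followed by ICP. The plane-detection and polygon-matching stages are beyond a short fragment; the sketch below only illustrates the PCA idea, producing candidate rigid transforms by aligning principal axes (PCA alignment is ambiguous up to axis flips, which the later matching/ICP stage would resolve). All names are assumptions.

```python
import numpy as np
from itertools import product

def pca_candidate_alignments(src, dst):
    """Candidate rigid transforms aligning point set `src` (n,3) to `dst` (m,3)
    by matching centroids and principal axes. All proper sign combinations are
    returned because the axis directions are ambiguous."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    _, v_src = np.linalg.eigh(np.cov((src - c_src).T))
    _, v_dst = np.linalg.eigh(np.cov((dst - c_dst).T))
    cands = []
    for signs in product([1.0, -1.0], repeat=3):
        R = (v_dst * signs) @ v_src.T
        if np.linalg.det(R) > 0:              # keep rotations, drop reflections
            cands.append((R, c_dst - R @ c_src))
    return cands

# Synthetic check: recover a known rigid motion of an anisotropic cloud.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3)) * np.array([3.0, 1.5, 0.5])
a = 0.4
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
moved = cloud @ Rz.T + np.array([2.0, -1.0, 0.5])
best = min(pca_candidate_alignments(cloud, moved),
           key=lambda rt: np.abs(cloud @ rt[0].T + rt[1] - moved).mean())
print(np.abs(cloud @ best[0].T + best[1] - moved).mean())  # near zero
```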
Knowledge Management for Rapidly Extensible Collaborative Robots.
195,825,308
It is difficult – but increasingly important – to build collaborative systems that can do a range of jobs in a range of ways in a range of conditions. Much of this difficulty stems from the broad scope of contextual and collaborative knowledge required to interact productively with humans amidst uncertainty and change. With respect to physical collaboration, human-robot interaction (HRI) studies have made progress on related problems by constraining uncertainty and dynamism, thus limiting the knowledge required. Our participation in an extreme context prevented us from adopting these approaches: The Defense Advanced Research Projects Agency (DARPA) Aircrew Labor In-Cockpit Automation System (ALIAS) program has a requirement that the prototype robotic copilot system it seeks to develop must be extensible in 30 days to function in a different, unspecified aircraft in a range of flight conditions and missions. To accommodate, we developed a knowledge management-inspired approach and system that allowed a variety of stakeholders to curate and rely on a dynamic body of flight-related knowledge. This had significant positive implications across design, test and use of the system, crucially enabling pilots to compose a robot- and human-legible plan for their interaction with the system using familiar conventions. Our contributions promise to accelerate quality development of extensible collaborative robotic systems for settings such as general medical assistance, disaster response and construction that require collaborative problem solving in highly uncertain, dynamic and variable conditions.
[ { "first": "Matthew", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Beane", "suffix": "" }, { "first": "David", "middle": [ "A." ], "last": "Mindell", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Ryan", "suffix": "" } ]
2,019
10.1007/978-3-030-22660-2_37
HCI
2953444187
[]
[ "212549833" ]
false
true
false
https://api.semanticscholar.org/CorpusID:195825308
null
null
null
null
null
A Method to Automatic Measuring Riding Comfort of Autonomous Vehicles: Based on Passenger Subjective Rating and Vehicle Parameters
196,610,746
As a milestone product of the AI era, the autonomous vehicle has attracted tremendous attention from the whole of society. When autonomous vehicles (AV) provide transportation services as passenger vehicles in the future, a comfortable riding experience will be a fundamental element of usability. In such a case, it is necessary to establish an objective and sound evaluation system to evaluate the comfort level of autonomous vehicles. We hereby develop the comfort level model of autonomous vehicles with the following three steps: (a) Explore subjective evaluation indicators: invite passengers to test autonomous vehicles and collect their ratings of the comfort level; (b) Establish the subjective comfort evaluation model: classify the evaluation indicators, continuously collect the comfort-level evaluation data from the passengers during the testing process, and then use the structural modelling method to form a subjective evaluation model of the comfort level; (c) Develop the automatic scoring tool: collect subjective and objective data through data collection apps, form a calculation function with a machine learning algorithm that fits the subjective and objective data, and develop an automatic scoring tool based on it. This precisely developed evaluation system and the empirical data-based scoring tool can be used to guide technological development, optimize algorithms, and improve strategies within AV companies. On the other hand, it can help to unify the evaluation standard for the AV industry, improving the experience of autonomous vehicle rides.
[ { "first": "Ya", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Qiuyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Lizhi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yunyan", "middle": [], "last": "Hu", "suffix": "" } ]
2,019
10.1007/978-3-030-23538-3_10
HCI
2960989186
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:196610746
null
null
null
null
null
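The scoring tool above fits a function from logged vehicle parameters to passengers' subjective comfort ratings with a machine-learning algorithm. The paper's features and model are not specified here; a minimal sketch of that fitting step on synthetic data, with assumed names, might be:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic stand-ins for logged vehicle parameters per ride segment
# (e.g. accelerations, jerk, braking events) and 1-5 comfort ratings.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))
y = np.clip(3.0 - 0.8 * np.abs(X[:, 0]) - 0.5 * np.abs(X[:, 1])
            + rng.normal(scale=0.3, size=400), 1.0, 5.0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("MAE on held-out rides:", mean_absolute_error(y_te, model.predict(X_te)))
```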
Three-Dimensional Representation in Visual Communication of Science
196,610,842
Technology has been used to communicate complex information and brought opportunities for science development and science communication. Three-dimensional representation systems allow design to use image and video in innovative ways and enable organization of more suitable content for effective understanding. This paper addresses the use of three-dimensional representation systems to disseminate complex concepts related to science. We reviewed its course from scientific illustration of natural species, to the use of detailed infographics as direct communication in newspapers. We also undertook visual experiments to demonstrate the use of such three-dimensional representation systems in science communication, and proceeded with an evaluation from experts. This research allowed us to find connections between visual communication and complex concepts coming from science. Three-dimensional representation systems have features which generate benefits for science communication and understanding.
[ { "first": "Marco", "middle": [], "last": "Neves", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Gonçalves", "suffix": "" } ]
2,019
10.1007/978-3-030-23541-3_5
HCI
2959622297
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:196610842
null
null
null
null
null
Usability Test Based on Co-creation in Service Design
196,611,011
Service design often solves complex system problems. The diversity of users' requirements and the uncertainty of “interaction” in the service delivery process lead to uncertainty in service usability. Focusing on usability in service design and testing can avoid or minimize the uncertainty of the service experience to the maximum extent. The concept and mechanism of “Service Co-creation” play an important role in this process, including co-design in service planning activities and value co-creation in the process of service delivery. After comprehensively considering the definition of service design, the specificity of services and the usability factors of interactive products, service usability is summarized in this paper as 8 elements: adaptability, standardization, flexibility, learnability, memorability, fault tolerance, efficiency and satisfaction. Taking the “Hotel Family Services Design” project as an example, the paper then carries out usability design and testing involving the people (stakeholders), events (processes) and objects (touchpoints) in the service system. On the one hand, “multi-role stakeholders” participating in the “co-design workshop” can identify the precise needs of users and develop useful and usable services with more pertinence in the service planning stage. On the other hand, different forms and approaches of “prototype test” (discussion prototype, simulation prototype) verify the usability and experience of service systems and processes in the service development and delivery phase, helping service designers and providers ultimately achieve their goal of improving service usefulness, usability and attractiveness through service iteration.
[ { "first": "Xiong", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Shan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jiajia", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Manhai", "middle": [], "last": "Li", "suffix": "" } ]
2,019
10.1007/978-3-030-23535-2_7
HCI
2961828279
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:196611011
null
null
null
null
null
Interactive Storytelling in V.R.: Coming Soon?
196,613,874
There have been many smart people throughout history who have misidentified the potential, or lack thereof, of new technologies. Thomas Edison, for all his genius, failed to anticipate the market for cinematic entertainment. His company’s early films lacked storytelling, and its film display technology, the Kinetoscope, permitted only one person to watch at a time. Perhaps there are lessons here for Virtual Reality (V.R.). Some have assumed that as entertainment becomes increasingly immersive, movies will somehow be absorbed into V.R. Even as many of the technical preconditions for this vision have fallen into place, there remain logistical and practical problems. Translating conventional forms of story authorship into the immersive, interactive context may not be sought-after. What is an interactive movie, after all? Even if strategies can be found to write and produce interactive V.R. movies, the results may be different from what people have been expecting.
[ { "first": "Andy", "middle": [], "last": "Deck", "suffix": "" } ]
2,019
10.1007/978-3-030-23541-3_31
HCI
2958881223
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:196613874
null
null
null
null
null
Discussion on Methods of Embedding Virtual 3D Models into PowerPoint Courseware
64,327,094
With the maturation of 3D design technology and the wide use of multimedia teaching, introducing virtual 3D models into CAI courseware has become urgent for the development of multimedia-assisted teaching software. This article discusses virtual reality technology and the techniques for embedding eDrawings viewer documents in PowerPoint courseware development, which make it possible to browse virtual 3D models in PowerPoint courseware conveniently and in real time, and help create the conditions for the wide use of virtual 3D models in multimedia teaching.
[ { "first": "Zheng", "middle": [], "last": "Fang", "suffix": "" } ]
2,010
Journal of Engineering Graphics
2389154738
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:64327094
null
null
null
null
null
Extended spatial keyframing for complex character animation
3,095,863
[ { "first": "Byungkuk", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Mi", "middle": [], "last": "You", "suffix": "" }, { "first": "Junyong", "middle": [], "last": "Noh", "suffix": "" } ]
2,008
10.1002/cav.247
Journal of Visualization and Computer Animation
Journal of Visualization and Computer Animation
[ "53224527", "53222540", "13936317", "207647967", "53235288", "16748364", "14567018", "11853908", "422509", "5234604", "6776923", "117698796", "14238478", "118992555", "58318514", "196002781", "31651449" ]
[ "11020046" ]
true
true
true
https://api.semanticscholar.org/CorpusID:3095863
0
0
0
1
0
3D High Dynamic Range dense visual SLAM and its application to real-time object re-lighting
3,098,256
Acquiring High Dynamic Range (HDR) light-fields from several images with different exposures (sensor integration periods) has been widely considered for static camera positions. In this paper a new approach is proposed that enables 3D HDR environment maps to be acquired directly from a dynamic set of images in real-time. In particular a method will be proposed to use an RGB-D camera as a dynamic light-field sensor, based on a dense real-time 3D tracking and mapping approach, that avoids the need for a light-probe or the observation of reflective surfaces. The 6dof pose and dense scene structure will be estimated simultaneously with the observed dynamic range so as to compute the radiance map of the scene and fuse a stream of low dynamic range images (LDR) into an HDR image. This will then be used to create an arbitrary number of virtual omni-directional light-probes that will be placed at the positions where virtual augmented objects will be rendered. In addition, a solution is provided for the problem of automatic shutter variations in visual SLAM. Augmented reality results are provided which demonstrate real-time 3D HDR mapping, virtual light-probe synthesis and light source detection for rendering reflective objects with shadows seamlessly with the real video stream in real-time.
[ { "first": "Maxime", "middle": [], "last": "Meilland", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Barat", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Comport", "suffix": "" } ]
2,013
10.1109/ISMAR.2013.6671774
2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
2056478652
[ "1213673", "6278891", "14089219", "18810561", "12358833", "62690959", "62778715", "52852816", "538139", "1803005", "2845053", "206986922", "2084651", "2647907", "9975722", "10055749", "1363510", "974249", "27731568", "8559320", "15657774", "1762492", "1336659", "2815719", "7491882", "6496578", "10812759", "16718621", "10613267", "12290133", "2369288", "17173421", "17784061", "2715202", "16568724", "15016942" ]
[ "31368627", "199540866", "46767599", "6433493", "14084983", "9280868", "53236876", "52338013", "54092468", "17296228", "39034700", "507939", "19437719", "36134336", "207758654", "9874132", "25921098", "5031690", "204941144", "20481544", "11784121", "1595666", "196201321", "20669544", "8932165", "4618306", "52830335", "631737", "32292103", "664662", "7597683" ]
true
true
true
https://api.semanticscholar.org/CorpusID:3098256
0
0
0
1
0
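The system above fuses a stream of LDR frames with different integration periods into an HDR radiance map while tracking the camera. Its real-time, pose-tracked fusion is out of scope here; the sketch below shows only the basic static-exposure idea, a hat-weighted average of each pixel's value divided by its exposure time under an assumed linear sensor response, with all names made up.

```python
import numpy as np

def fuse_ldr_to_hdr(images, exposure_times):
    """images: list of float arrays in [0, 1] (same shape), one per exposure.
    exposure_times: matching list of integration periods in seconds.
    Assumes a linear radiometric response; returns a radiance map."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, dt in zip(images, exposure_times):
        w = np.minimum(img, 1.0 - img)        # hat weight: trust mid-range pixels
        num += w * (img / dt)                 # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)

# Tiny synthetic example: one scene radiance imaged at three exposures.
radiance = np.linspace(0.05, 20.0, 64).reshape(8, 8)
times = [1 / 200, 1 / 30, 1 / 4]
ldr = [np.clip(radiance * t, 0.0, 1.0) for t in times]
hdr = fuse_ldr_to_hdr(ldr, times)
print(float(np.abs(hdr - radiance).max()))    # small reconstruction error
```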
Sparse lumigraph relighting by illumination and reflectance estimation from multi-view images
412,943
We present a novel image-based modeling technique which allows simultaneous estimation of illumination, diffuse, and specular albedo maps using only object geometry and multi-view images captured under a single, unknown illumination setting.
[ { "first": "Tianli", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Hongcheng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Narendra", "middle": [], "last": "Ahuja", "suffix": "" }, { "first": "Wei-Chao", "middle": [], "last": "Chen", "suffix": "" } ]
2,006
10.1145/1179849.1180068
SIGGRAPH '06
2172041827
[ "18956162", "17083829", "15083415" ]
[ "1866881", "208291011", "20258121", "12268617", "9366144", "277864", "211210660", "13891930" ]
true
true
true
https://api.semanticscholar.org/CorpusID:412943
0
0
0
1
0
Sheep Jet Head
27,976,497
[ { "first": "Brit", "middle": [], "last": "Bunkley", "suffix": "" } ]
2,006
10.1145/1178977.1179000
SIGGRAPH '06
2022166548
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:27976497
null
null
null
null
null
A Provenance Model for Quantified Self Data
27,979,906
Quantified Self has become popular in recent years. People track themselves with wearables, smartphone apps, or desktop applications. They collect, process and store huge amounts of personal data for medical and other reasons. Due to the complexity of different data sources, apps, and cloud services, it is hard to follow the data flow and to have trust in data integrity and safety. We present a solution that helps to gain insight into Quantified Self data flows and to answer questions related to data security. We provide a provenance model for Quantified Self data based on the W3C standard PROV. Using that model, developers and users can record provenance of Quantified Self apps and services with a standardized notation. We show the feasibility of the presented provenance model with a small workflow using steps data from a Fitbit fitness tracker.
[ { "first": "Andreas", "middle": [], "last": "Schreiber", "suffix": "" } ]
2,016
10.1007/978-3-319-40250-5_37
HCI
2492530792
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:27979906
null
null
null
null
null
Substructure Topology Preserving Simplification of Tetrahedral Meshes
340,706
Interdisciplinary efforts in modeling and simulating phenomena have led to complex multi-physics models involving different physical properties and materials in the same system. Within a 3d domain, substructures of lower dimensions appear at the interface between different materials. Correspondingly, an unstructured tetrahedral mesh used for such a simulation includes 2d and 1d substructures embedded in the vertices, edges and faces of the mesh. The simplification of such tetrahedral meshes must preserve (1) the geometry and the topology of the 3d domain, (2) the simulated data and (3) the geometry and topology of the embedded substructures. Although intensive research has been conducted on the first two goals, the third objective has received little attention. This paper focuses on the preservation of the topology of 1d and 2d substructures embedded in an unstructured tetrahedral mesh, during edge collapse simplification. We define these substructures as simplicial sub-complexes of the mesh, which is modeled as an extended simplicial complex. We derive a robust algorithm, based on combinatorial topology results, in order to determine if an edge can be collapsed without changing the topology of both the mesh and all embedded substructures. Based on this algorithm we have developed a system for simplifying scientific datasets defined on irregular tetrahedral meshes with substructures. The implementation of our system is discussed in detail. We demonstrate the power of our system with real world scientific datasets from electromagnetism simulations.
[ { "first": "Fabien", "middle": [], "last": "Vivodtzev", "suffix": "" }, { "first": "Georges-Pierre", "middle": [], "last": "Bonneau", "suffix": "" }, { "first": "Stefanie", "middle": [], "last": "Hahmann", "suffix": "" }, { "first": "Hans", "middle": [], "last": "Hagen", "suffix": "" } ]
2,010
10.1007/978-3-642-15014-2_5
Topological Methods in Data Analysis and Visualization
Topological Methods in Data Analysis and Visualization
144015144
[ "58560095", "63977110", "10085168", "50333417", "9687519", "11843", "14961552", "957275", "5620339", "3337564", "1182795" ]
[ "7709094" ]
true
true
true
https://api.semanticscholar.org/CorpusID:340706
1
1
1
1
1
An improved nonlinear mapping algorithm and its application to picture prototype selection
62,163,269
The problem of classifying an unknown picture, given a set of previously identified pictures, is considered. Two classification schemes that use prototype pictures are discussed, along with a method for obtaining and evaluating these prototypes.
[ { "first": "Bruce", "middle": [], "last": "Schachter", "suffix": "" } ]
1,976
10.1016/0146-664X(76)90035-6
Computer Graphics and Image Processing
2022200082
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:62163269
null
null
null
null
null
Dynamic algorithm binding for interactive walkthroughs
42,869,800
This paper presents a novel approach to the problem of real-time rendering of complex scenes. Up to the present date, a huge number of acceleration techniques have been proposed, although most are geared towards a specific kind of scene. Instead of using a single, or a fixed set of, rendering acceleration techniques, we propose the use of several, and to select the best one based on the current viewpoint. Thus, by dynamically adapting the rendering process to the contents of the scene, it is possible to take advantage of all these techniques when they are better suited, rendering scenes that would otherwise be too complex to display at interactive frame rates. We describe a framework capable of achieving this purpose, consisting of a pre-processor and an interactive rendering engine. The framework is geared towards interactive applications where a complex and large scene has to be rendered at interactive frame rates. Finally, results taken from our test implementation are given.
[ { "first": "P.", "middle": [], "last": "Pires", "suffix": "" }, { "first": "J.", "middle": [], "last": "Pereira", "suffix": "" } ]
2,001
10.1109/SIBGRAPI.2001.963050
Proceedings XIV Brazilian Symposium on Computer Graphics and Image Processing
Proceedings XIV Brazilian Symposium on Computer Graphics and Image Processing
2171798630
[ "60309165", "9356898", "11776878", "1372645", "7915210", "1502691", "6595992", "8660216", "2255022", "1231713", "760636", "686393", "9835947", "483703", "10224615", "273393", "1306751", "2238751", "8500761", "5687076", "496617" ]
[ "41499660" ]
true
true
true
https://api.semanticscholar.org/CorpusID:42869800
0
0
0
1
0
Improving the Effectiveness of Mobile Application Design: User-Pairs Testing by Non-professionals
45,058,875
The nature of mobile applications requires a fast and inexpensive design process. The development phase is short because the life cycle of an application is limited, mobile technology is developing rapidly, and the competition is heavy. Existing design methods are time-consuming and require expertise (e.g. Contextual Design). We suggest a design approach where focus groups are followed by usability tests in pairs carried out by non-professional moderators. With this approach CHI departments can benefit from market research resources, and improve collaboration with marketing people. We evaluated this approach with a case called News Client. The findings show that in paired-user tests near half of the usability problems were found compared to individual usability testing. The results are not too profound but enough for industry needs. Another interesting point is that our findings do not support the earlier reported results according to which the interaction between two participants can bring out more input than a single participant thinking aloud.
[ { "first": "Titti", "middle": [], "last": "Kallio", "suffix": "" }, { "first": "Aki", "middle": [], "last": "Kekäläinen", "suffix": "" } ]
2,004
10.1007/978-3-540-28637-0_29
Mobile HCI
1516349913
[]
[ "19935097", "3074865", "10294150", "51097913", "15842968", "8534531" ]
false
true
false
https://api.semanticscholar.org/CorpusID:45058875
null
null
null
null
null
Laser Pointers as Collaborative Pointing Devices
12,397,912
Single Display Groupware (SDG) is a research area that focuses on providing collaborative computing environments. Traditionally, most hardware platforms for SDG support only one person interacting at any given time, which limits collaboration. In this paper, we present laser pointers as input devices that can provide concurrent input streams ideally required to the SDG environment. First, we discuss several issues related to utilization of laser pointers and present the new concept of computer controlled laser pointers. Then we briefly present a performance evaluation of laser pointers as input devices and a baseline comparison with the mouse according to the ISO 9241-9 standard. Finally, we describe a new system that uses multiple computer controlled laser pointers as interaction devices for one or more displays. Several alternatives for distinguishing between different laser pointers are presented, and an implementation of one of them is demonstrated with SDG applications.
[ { "first": "Ji-Young", "middle": [], "last": "Oh", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Stürzlinger", "suffix": "" } ]
2,002
Graphics Interface
173668522
[ "1154913", "52529", "1468783", "57674726", "10152336", "752454" ]
[ "195995125", "14516372", "12215869", "1916118", "21106204", "62725351", "13991108", "35360685", "39901174", "13972569", "9934511", "10860228", "16008075", "116335965", "7298060", "15345473", "16384614", "175424", "3249689", "208016189", "16083975", "6593211", "791444", "7216030", "6169149", "84186703", "8391759", "53658172", "14607769", "5041528", "5041528", "2922140", "8450444", "18789479", "13483658", "4965043", "12210375", "16722515", "1036334", "14437281", "7354304", "15095374", "13383435", "12035407", "15487238", "202236174", "11820623", "12400005", "3489542", "8547591", "2566687", "12590592", "11470262", "11275834", "3254924", "5217211", "165497", "63321319", "513765", "16729569", "6529164", "14904318", "22117212", "1298549", "135119", "13441941", "12224119", "3394812", "849141", "18198882", "9657957", "15372534", "18164695", "11266785", "3161198", "9793200", "10205661", "195259305", "1351378", "16499094", "7763228" ]
true
true
true
https://api.semanticscholar.org/CorpusID:12397912
0
0
0
1
0
G-Force Basketball
208,015,187
Somewhere in deep space, two astronauts play a game of zero-gravity basketball. When the game gets too close, one opponent resorts to manipulating the gravity to beat the other, but that soon turns against him.
[ { "first": "Bong", "middle": [ "Ho" ], "last": "Kim", "suffix": "" } ]
2,010
10.1145/1836623.1836644
SIGGRAPH 2010
2231948678
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:208015187
null
null
null
null
null
Nonlinear ray tracing: visualizing strange worlds
63,186,367
[ { "first": "Eduard", "middle": [ "Gr" ], "last": "ller", "suffix": "" } ]
1,995
The Visual Computer
2393776149
[]
[ "1299243", "6872981", "14449935", "14038172", "202724744", "16888574", "14756583", "3901758", "61494040", "15610705", "9868697", "2567357", "13816900", "1883250", "18052614", "7877893", "20455320", "16584395" ]
false
true
false
https://api.semanticscholar.org/CorpusID:63186367
null
null
null
null
null
Parallel performance measures for volume ray casting
6,292,431
Describes a technique for achieving fast volume ray-casting on parallel machines, using a load-balancing scheme and an efficient pipelined approach to compositing. We propose a new model for measuring the amount of work one needs to perform in order to render a given volume, and we use this model to obtain a better load-balancing scheme for distributed memory machines. We also discuss in detail the design trade-offs of our technique. In order to validate our model, we have implemented it on the Intel iPSC/860 and the Intel Paragon, and conducted a detailed performance analysis.
[ { "first": "C.T.", "middle": [], "last": "Silva", "suffix": "" }, { "first": "A.E.", "middle": [], "last": "Kaufman", "suffix": "" } ]
1,994
10.1109/VISUAL.1994.346319
Proceedings Visualization '94
Proceedings Visualization '94
2112251789
[ "22157424", "195864785", "7269839", "26696332", "195866343", "1143743", "9248401", "195866483", "195865423", "15575048", "34591583" ]
[ "186201668", "17277052", "7771162", "16556482", "195865760", "21047059", "17641093", "16998470", "47860520", "1114607", "15489310", "15320409", "30423297", "11336154" ]
true
true
true
https://api.semanticscholar.org/CorpusID:6292431
0
0
0
1
0
Simulation and Performance Evaluation of a Distributed Haptic Virtual Environment Supported by the CyberMed Framework
15,782,923
Performance evaluation of Distributed Haptic Virtual Environments (DHVEs) has become important for understanding the new Internet requirements for supporting multisensorial and real-time collaborative applications. This paper presents the results of a simulation and performance analysis of the CyberMed framework. The main goal of this experiment is to evaluate the real conditions of CyberMed when executed over a non-dedicated hybrid network, such as the Internet, comparing its results with other similar works found in the literature.
[ { "first": "Paulo", "middle": [ "Vinícius", "F." ], "last": "Paiva", "suffix": "" }, { "first": "Liliane", "middle": [ "S." ], "last": "Machado", "suffix": "" }, { "first": "Jauvane", "middle": [ "C.", "de" ], "last": "Oliveira", "suffix": "" } ]
2,011
10.1109/SVR.2011.16
2011 XIII Symposium on Virtual Reality
2011 XIII Symposium on Virtual Reality
2156957422
[ "20870046", "5192737", "16061782", "15933947", "15306365", "1135551", "202594889", "10876010", "195947371" ]
[ "6893380", "55031165", "1522859", "210865273" ]
true
true
true
https://api.semanticscholar.org/CorpusID:15782923
0
0
0
1
0
An efficient control over human running animation with extension of planar hopper model
22,943,369
The most important goal of character animation is to efficiently control the motions of a character. Until now, many techniques have been proposed for human gait animation, and some techniques have been created to control the emotions in gaits such as "tired walking" and "brisk walking" by using parameter interpolation or motion data mapping. Since it is very difficult to automate the control over the emotion of a motion, the emotions of a character model have been generated by creative animators. The paper proposes a human running model based on a one-leg-planar hopper with a self-balancing mechanism. The proposed technique exploits genetic programming to find an optimal movement. We extend the energy minimization technique to generate various motions in accordance with emotional specifications, for instance, "brisk running".
[ { "first": "Young-Min", "middle": [], "last": "Kang", "suffix": "" }, { "first": "Sun-Jin", "middle": [], "last": "Park", "suffix": "" }, { "first": "Ee-Taek", "middle": [], "last": "Lee", "suffix": "" } ]
1,998
10.1109/PCCGA.1998.732101
Proceedings Pacific Graphics '98. Sixth Pacific Conference on Computer Graphics and Applications (Cat. No.98EX208)
Proceedings Pacific Graphics '98. Sixth Pacific Conference on Computer Graphics and Applications (Cat. No.98EX208)
2113459428
[ "12773211", "24737723", "525464", "12333479", "280799", "15829216", "5695035", "28435", "12993588", "26173763", "1720108", "30814901" ]
[ "3441185", "1515829", "14777652", "2596537", "11805931", "13863413" ]
true
true
true
https://api.semanticscholar.org/CorpusID:22943369
0
0
0
1
0
Classifying interaction methods to support intuitive interaction devices for creating user-centered-systems
39,744,598
Nowadays a wide range of input devices is available to users of technical systems. Modern alternative interaction devices in particular, familiar from game consoles and similar products, provide a more natural way of interacting. However, supporting them in computer programs is currently a major challenge, because considerable effort must be invested to develop an application that supports such alternative input devices. We therefore designed a concept for an interaction system that supports the use of alternative interaction devices. Its central element is a server that provides applications with a simple access interface to such devices. It is also possible to address an abstract device by its properties, with the interaction system handling the conversion from a concrete device. To realize this idea, we also defined a taxonomy that classifies interaction devices by their interaction method and by the required interaction results, such as recognized gestures. By integrating this interaction system, it becomes generally possible to develop a user-centered system, because an adequate integration of alternative interaction devices provides a more natural and easier-to-understand form of interaction.
[ { "first": "Dirk", "middle": [], "last": "Burkhardt", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Breyer", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Glaser", "suffix": "" }, { "first": "Kawa", "middle": [], "last": "Nazemi", "suffix": "" }, { "first": "Arjan", "middle": [], "last": "Kuijper", "suffix": "" } ]
2,011
10.1007/978-3-642-21672-5_3
HCI
65773451
[]
[ "2950018" ]
false
true
true
https://api.semanticscholar.org/CorpusID:39744598
0
0
0
0
0
Mobile visual computing in C++ on Android
16,842,132
Based on a tutorial developed by NVIDIA's Mobile Visual Computing team, this course will teach the basics to jump-start a visual computing project on Android using native C/C++ code. After explaining how to set up the programming environment and write a simple native application, we will dive into more advanced topics related to Computer Vision (using OpenCV optimized for Android), and high-performance Image Processing (using OpenGL ES2).
[ { "first": "Yun-Ta", "middle": [], "last": "Tsai", "suffix": "" }, { "first": "Orazio", "middle": [], "last": "Gallo", "suffix": "" }, { "first": "David", "middle": [], "last": "Pajak", "suffix": "" }, { "first": "Kari", "middle": [], "last": "Pulli", "suffix": "" } ]
2,013
10.1145/2503673.2503696
SIGGRAPH '13
[]
[ "17076847" ]
false
true
true
https://api.semanticscholar.org/CorpusID:16842132
0
0
0
0
0
A Standards Solution to Your Graphics Problems
60,373,003
Imagine the following scenario. Your department has had a new desktop computer for several weeks now, and you have mastered the word processing software so that you are able to generate a perfect letter in less time and with less effort than it used to take to type it. Having revised an important proposal that absolutely has to get to the airport counter by 6:00 p.m., you feel that it just doesn’t create the impact you want it to. What your proposal needs is graphics! But how do you add illustrations at 3:00 p.m. on a Friday afternoon?
[ { "first": "Mark", "middle": [ "G." ], "last": "Rawlins", "suffix": "" } ]
1,985
10.1007/978-4-431-68025-3_28
Frontiers in Computer Graphics
Frontiers in Computer Graphics
316523016
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:60373003
null
null
null
null
null
A vector-based representation for image warping
275,224
A method for image analysis, representation and re-synthesis is introduced. Unlike other schemes it is not pixel based but rather represents a picture as vector data, from which an altered version of the original image can be rendered. Representing an image as vector data allows performing operations such as zooming, retouching or colourising, avoiding common problems associated with pixel image manipulation. This paper brings together methods from the areas of computer vision, image compositing and image based rendering to prove that this type of image representation is a step towards accurate and efficient image manipulation.
[ { "first": "Max", "middle": [], "last": "Froumentin", "suffix": "" }, { "first": "Frédéric", "middle": [], "last": "Labrosse", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Willis", "suffix": "" } ]
2,000
10.1111/1467-8659.00434
Computer Graphics Forum - Conference Issue 19, 3 (Sept.), C385–C394 and C428
2162805264
[ "14001890", "86351869", "62615235", "7680709", "16841553", "5097328", "62141757", "6914801", "12849354", "13971036", "4374741", "18663039", "18603445", "59628217", "1240104", "14356403", "3332151", "9192133" ]
[ "2174106", "443507", "143523", "18574964", "1806516", "17047658" ]
true
true
true
https://api.semanticscholar.org/CorpusID:275224
1
1
1
1
1
CosMovis: Analyzing semantic network of sentiment words in movie reviews
11,457,235
In this paper, we present a new method for easily recognizing intricate networks and clusters by connecting a Multidimensional Scaling (MDS) map and a social network graph, and for comprehending the features of each node using heatmap visualization. For this method, we used netizens' movie review data. The process is as follows: 1) We calculated the frequency of sentiment words in each movie review. 2) We designed a heatmap visualization that makes it easy to apprehend the main emotions of netizens appearing in each movie review. 3) We made the location of each node reflect the frequency of sentiment words by designing a Sentiment-Movie Network that combines the MDS map and the social network graph, and we built the network graph so that identical nodes form clusters. 4) By assigning meaning according to the characteristics of each cluster, we applied asterism graphics to facilitate cognitive interpretation. Our demonstration is available at http://idlab.ajou.ac.kr/cosmovis/.
[ { "first": "Hyoji", "middle": [], "last": "Ha", "suffix": "" }, { "first": "Gi-nam", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Wonjoo", "middle": [], "last": "Hwang", "suffix": "" }, { "first": "Hanmin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Kyungwon", "middle": [], "last": "Lee", "suffix": "" } ]
2,014
10.1109/LDAV.2014.7013215
2014 IEEE 4th Symposium on Large Data Analysis and Visualization (LDAV)
2014 IEEE 4th Symposium on Large Data Analysis and Visualization (LDAV)
2033936768
[ "15669884" ]
[ "52216384", "195889865" ]
true
true
true
https://api.semanticscholar.org/CorpusID:11457235
0
0
0
1
0
The Parallel Coordinates Matrix
15,419,440
We introduce the parallel coordinates matrix (PCM) as the counterpart to the scatterplot matrix (SPLOM). Using a graph-theoretic approach, we determine a list of axis orderings such that all pairwise relations can be displayed without redundancy while each parallel-coordinates plot can be used independently to visualize all variables of the dataset. Therefore, existing axis-ordering algorithms, rendering techniques, and interaction methods can easily be applied to the individual parallel-coordinates plots. We demonstrate the value of the PCM in two case studies and show how it can serve as an overview visualization for parallel coordinates. Finally, we apply existing focus-and-context techniques in an interactive setup to support a detailed analysis of multivariate data.
[ { "first": "Julian", "middle": [], "last": "Heinrich", "suffix": "" }, { "first": "John", "middle": [ "T." ], "last": "Stasko", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Weiskopf", "suffix": "" } ]
2,012
10.2312/PE/EuroVisShort/EuroVisShort2012/037-041
EuroVis
2291534001
[ "12170274", "2991726", "9848560", "11702189", "541489", "122046548", "54719876", "122465502", "62096073", "2875215", "16370188", "195994", "623664", "2281975", "7788190", "119378488" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:15419440
0
0
0
1
0
Segmenting deformable soft-body meshes based on statistical variation information for piecewise Active Shape Model
15,948,575
This paper proposes an algorithm for segmenting deforming soft-body meshes based on statistical variation information extracted from the deforming meshes. The variation information is extracted by performing a global principal component analysis (PCA) on the set of meshes. Eigen-variation Similarity (EVS) and Eigen-variation Magnitude (EVM) are then defined for the vertices and triangle faces of the meshes based on the extracted variation information. A multiple-source region growing algorithm is presented for segmenting a mesh that favors grouping faces with similar variations into a same component. We apply the proposed mesh segmentation algorithm to the construction of piecewise Active Shape Model (ASM) and use such piecewise ASM to reconstruct unseen meshes. Experimental results show that our algorithm outperforms several state-of-the-art methods in terms of reconstruction accuracy.
[ { "first": "Peng", "middle": [], "last": "Du", "suffix": "" }, { "first": "H.S.", "middle": [ "Ip" ], "last": "Horace", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Bei", "middle": [], "last": "Hua", "suffix": "" } ]
2,009
10.1109/CADCG.2009.5246901
2009 11th IEEE International Conference on Computer-Aided Design and Computer Graphics
2009 11th IEEE International Conference on Computer-Aided Design and Computer Graphics
2088151815
[ "207162159", "30965411", "9013287", "1701896", "1875598", "27050465", "9908045", "22125366", "15242659", "2921826", "12039660", "8938498" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:15948575
0
0
0
1
0
Keyboard Input as Action
213,233,520
In Chapter 4 we laid the groundwork for better understanding the Unity Event System and its integration with VRTK. For me, it’s helpful to think of events as messages sent between classes and objects to notify different components of a program that a change has occurred. Because user input into our VR applications alters the state of the program, it, too, is an event. As an event, user input fits nicely into the list of responsibilities handled by VRTK as an interface for the Unity Event System.
[ { "first": "Rakesh", "middle": [], "last": "Baruah", "suffix": "" } ]
2,019
10.1007/978-1-4842-5488-2_5
Virtual Reality with VRTK4
Virtual Reality with VRTK4
2993941272
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:213233520
null
null
null
null
null
The Study of Modern Emergency Products under the Direction of New Ergonomics
45,649,780
The rapid development of computer and internet technology has carried human society into the information era and has influenced modern and innovative design. Product design now emphasizes design ideologies such as humanism, culture, and people-oriented design: products serve their users, their shape, function, color, material, and structure express respect and care for those users, and they strive to provide an experience of use encompassing humanity, history, emotion, joy, and security. Recently, natural disasters such as earthquakes, snowstorms, fires, typhoons, floods, and droughts have repeatedly caused enormous casualties and economic losses. As governments popularize and promote disaster prevention and awareness of safety rises, emergency products have entered our daily lives. However, emergency indications can instill a sense of fear, which may lower people's quality of life. New ergonomics is based mainly on research methods from physiology, psychology, anthropometry, and related fields, gives prominence to interdisciplinary subjects such as social psychology and economics, and focuses on service systems and ergonomic issues. On the topic of disaster emergency design, this paper suggests three research methods: the PDCA model, quantitative analysis, and the semantic differential. How, then, can emergency products contribute to a high quality of life? The paper records the research theorization used previously, based on a people-oriented ideology and proceeding through four phases: establishing the design goals (design target selection and judgment), preliminary design (understanding and analysis of the design goals, design concept engineering), deep design (product detail design), and design confirmation (completion of the design goals), applied to practice and research.
[ { "first": "Jianxin", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Meiyu", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Junnan", "middle": [], "last": "Ye", "suffix": "" } ]
2,013
10.1007/978-3-642-39143-9_4
HCI
2157009415
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:45649780
null
null
null
null
null
'Subjunctive Interface' Support for Combining Context-Dependent Semi-Structured Resources
6,039,681
Many new kinds of tool are being developed to take advantage of the increasing use of XML for expressing semi-structured information. One category of support is the provision of interactive tools that let people request and control the merging of disparate resources. However, these tools typically assume that the users have a clear idea of exactly how they wish to combine the resources, and that the resources in question can be joined unambiguously as requested. This paper introduces an approach to supporting integration in cases in which these conditions might not hold, for example because the resources contain alternative values intended to suit various different contexts of use.
[ { "first": "Aran", "middle": [], "last": "Lunzer", "suffix": "" } ]
2,001
Proceedings of IFIP TC.13 International Conference on Human-Computer Interaction (INTERACT '01)
177050930
[ "17965228", "160821154" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:6039681
0
0
0
1
0
Task Specific Paper Controller that Can Be Created by Users for a Specific Computer Operation
32,803,370
We describe Paper Controller, a paper based controller that allows users to design and create their own task specific controllers with touch-sensing capability for controlling a desktop computer. Casual users of computers can design and create a task specific Paper Controller by printing and/or drawing buttons freely with conductive ink and by drawing annotations including text and figures with regular ink. We implemented a prototype system for the Paper Controller. The system consists of the Paper Controller, a Clipboard for the Paper Controller, and a parameterization software running on a computer. We conducted an experiment to examine whether users can create a Paper Controller. The results show that the users can create and use their own Paper Controllers.
[ { "first": "Daisuke", "middle": [], "last": "Komoriya", "suffix": "" }, { "first": "Buntarou", "middle": [], "last": "Shizuki", "suffix": "" }, { "first": "Jiro", "middle": [], "last": "Tanaka", "suffix": "" } ]
2,015
10.1007/978-3-319-20804-6_38
HCI
1441187720
[]
[ "12605531" ]
false
true
false
https://api.semanticscholar.org/CorpusID:32803370
null
null
null
null
null
Surface Information Extraction from The Sketch Image
61,799,884
The principal purpose of this paper is to extract facial changes among faces in different situations, for psychological applications. Because of the difficulty of fixing the face location at a particular coordinate, extracting the facial change from a comfortable to an uncomfortable state is a relatively difficult task for a computerized approach. To overcome this difficulty, we propose a method that combines Fourier and wavelet approaches. After a global facial change is extracted by the Fourier transform, applying the multi-resolution analysis of wavelets makes it possible to extract the local facial changes.
[ { "first": "Harumi", "middle": [], "last": "Iwasaki", "suffix": "" }, { "first": "Yoshifuru", "middle": [], "last": "Saito", "suffix": "" }, { "first": "Chieko", "middle": [], "last": "Kato", "suffix": "" }, { "first": "Susumu", "middle": [], "last": "Hanta", "suffix": "" }, { "first": "Kiyoshi", "middle": [], "last": "Horii", "suffix": "" } ]
1,999
10.3154/jvs.19.Supplement1_209
JOURNAL OF THE FLOW VISUALIZATION SOCIETY OF JAPAN
2313889455
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:61799884
null
null
null
null
null
Heuristic Evaluation: Comparing Generic and Specific Usability Heuristics for Identification of Usability Problems in a Living Museum Mobile Guide App
53,106,691
This paper reports on an empirical study that compares two sets of heuristics, Nielsen’s heuristics and the SMART heuristics, in the identification of usability problems in a mobile guide smartphone app for a living museum. Five experts used severity rating scales to identify and determine the severity of the usability issues based on the two sets of usability heuristics. The study found that Nielsen’s heuristics are too general to detect usability problems in a mobile application compared to the SMART heuristics, which focus on smartphone applications in the product development lifecycle, whereas the generic Nielsen’s heuristics address a wide range of interactive systems. The study highlights the importance of utilizing domain-specific usability heuristics in the evaluation process. This ensures that relevant usability issues are successfully identified and can then be given immediate attention to ensure an optimal user experience.
[ { "first": "Mohd Kamal", "middle": [], "last": "Othman", "suffix": "" }, { "first": "Muhd Nur Shaful", "middle": [], "last": "Sulaiman", "suffix": "" }, { "first": "Shaziti", "middle": [], "last": "Aman", "suffix": "" } ]
2,018
10.1155/2018/1518682
Adv. Human-Computer Interaction
Adv. Human-Computer Interaction
2888984282
[ "10758722", "9075119", "3618406", "2579507", "16067723", "20787769", "16683085", "2823891", "4011176", "6466270", "146856614", "147662692", "58891545", "144261365", "141927366", "190088344", "126767280", "3235980", "14892229", "116686799", "5158388", "10465690", "18535052", "17146837", "12197063", "28392864", "17842414", "7928383", "7341741", "18710370", "16067723", "20198339", "2656725", "35753238", "1784922", "5412064", "13798775", "46053333", "14982850", "11655232", "4682389", "31553472", "106997027", "113212855", "19147398", "651910", "15422061", "49674422", "18086506", "2614028", "15616016" ]
[ "191523499" ]
true
true
true
https://api.semanticscholar.org/CorpusID:53106691
1
1
1
1
1
Topic Modeling for Text Classification
199,005,119
Topic models refer to statistical algorithms for discovering the latent semantic structures of an extensive body of text. In today's world, the amount of textual data and information we come across in our day-to-day lives is essentially beyond our capacity to handle. Topic models can provide a way for us to understand and manage vast collections of unstructured textual data and information. Having initially emerged as a text-mining instrument, topic models have found applications in various other fields. This paper makes a thorough comparative study of LSA against the commonly used TF-IDF approach for text classification and shows that LSA yields better accuracy in classifying texts. The novelty of the paper lies in the fact that we use a much sparser representation than the usual TF-IDF, and also that LSA can recover the topic even when synonymous words are present. This paper proposes a method, using the concept of entropy, which further increases the accuracy of text classification.
[ { "first": "Pinaki", "middle": [ "Prasad", "Guha" ], "last": "Neogi", "suffix": "" }, { "first": "Amit", "middle": [ "Kumar" ], "last": "Das", "suffix": "" }, { "first": "Saptarsi", "middle": [], "last": "Goswami", "suffix": "" }, { "first": "Joy", "middle": [], "last": "Mustafi", "suffix": "" } ]
2,019
10.1007/978-981-13-7403-6_36
Emerging Technology in Modelling and Graphics
Emerging Technology in Modelling and Graphics
2960449550
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:199005119
null
null
null
null
null