Commit 31c3f47 · Parent(s): 69127db · Update app.py
app.py
CHANGED
@@ -33,14 +33,29 @@ def test_input(words):
 # )
 
 
-title = "BERT on a PLANE"
+title = "**BERT on a PLANE**"
 
 description = """
 Did you know that, logically speaking, a small cat is not a small animal, and that a fake smile is not a smile?
 
 
-Learn more testing our BERT model
+Learn more by testing our BERT model, tuned to perform phrase-level adjective-noun entailment via the [PLANE](https://aclanthology.org/2022.coling-1.359/) dataset.
 
+**Intended uses & limitations**:
+
+The scope of the model is not to run lexical entailment or hypernym detection (e.g., *"A dog is an animal"*), but to perform a very specific subset of phrase-level entailment based on adjective-noun phrases. The questions you can ask the model are limited and should take one of three forms:
+
+- An adjective+Noun is a Noun (e.g. A red car is a car)
+
+- An adjective+Noun is a noun-hypernym (e.g. A red car is a vehicle)
+
+- An adjective+Noun is an adjective+noun-hypernym (e.g. A red car is a red vehicle)
+
+Linguistically speaking, adjectives belong to three macro classes (intersective, subsective, and intensional). From a linguistic and logical standpoint, these classes shape the truth values of the three forms above. For instance, since *red* is an intersective adjective, all three forms are true. A subsective adjective like *small* licenses the first two but not the last: logically speaking, a small car is not a small vehicle.
+
+In other words, the model was built to study out-of-distribution compositional generalisation with respect to a very specific set of compositional phenomena.
+
+This poses clear limitations on the questions you can ask the model. For instance, if you query the model with a basic (false) hypernym-detection statement (e.g., *A dog is a cat*), the model will consider it true.
 
 The current model achieves an accuracy of 90% on out-of-distribution evaluation
 Coming soon: check if words were in training data!"""
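The three query forms described in the new description can be sketched as simple template strings, a minimal illustration of the inputs the Space expects. The helper name and output format below are hypothetical, not part of the Space's actual code:

```python
def plane_queries(adjective: str, noun: str, hypernym: str) -> list[str]:
    """Build the three PLANE-style adjective-noun entailment statements."""
    return [
        f"A {adjective} {noun} is a {noun}",                   # form 1: adj+Noun is Noun
        f"A {adjective} {noun} is a {hypernym}",               # form 2: adj+Noun is noun-hypernym
        f"A {adjective} {noun} is a {adjective} {hypernym}",   # form 3: adj+Noun is adj+noun-hypernym
    ]

print(plane_queries("red", "car", "vehicle"))
# → ['A red car is a car', 'A red car is a vehicle', 'A red car is a red vehicle']
```

For an intersective adjective like *red*, all three statements are true; for a subsective adjective like *small*, only the first two hold (`plane_queries("small", "cat", "animal")[2]` yields the false statement "A small cat is a small animal").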