---
license: gpl-3.0
task_categories:
- token-classification
language:
- tl
size_categories:
- 1K<n<10K
pretty_name: TLUnified-NER
tags:
- low-resource
- named-entity-recognition
---
<!-- SPACY PROJECT: AUTO-GENERATED DOCS START (do not remove) -->

# 🪐 spaCy Project: Dataset builder to HuggingFace Hub
## Dataset Description | |
This dataset contains the annotated TLUnified corpora from Cruz and Cheng
(2021). It consists of a curated sample of around 7,000 documents for the
named entity recognition (NER) task. The majority of the corpus consists of
news reports in Tagalog, resembling the domain of the original CoNLL 2003
dataset. There are three entity types: Person (PER), Organization (ORG), and
Location (LOC).
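
For quick exploration, the dataset can also be loaded with the `datasets`
library. The snippet below is a minimal sketch: the Hub ID and the
`tokens`/`ner_tags` column names are assumptions based on the usual
token-classification layout, so adjust them to match this repository.

```python
from datasets import load_dataset

# Illustrative Hub ID; replace it with this dataset's actual repository ID.
ds = load_dataset("ljvmiranda921/tlunified-ner")
print(ds)  # expected to show the train/validation/test splits

example = ds["train"][0]
print(example["tokens"])    # assumed column: list of word tokens
print(example["ner_tags"])  # assumed column: integer-encoded IOB tags (PER, ORG, LOC)
```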
## About this repository | |
This repository is a [spaCy project](https://spacy.io/usage/projects) for
converting the annotated spaCy files into IOB. The process goes like this: we | |
download the raw corpus from Google Cloud Storage (GCS), convert the spaCy | |
files into a readable IOB format, and parse that using our loading script | |
(i.e., `tlunified-ner.py`). We're also shipping the IOB file so that it's
easier to access. | |
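
As a rough illustration of the conversion step, the sketch below reads a
`.spacy` file and writes one token and its IOB tag per line. The file paths
and the blank `tl` pipeline are placeholders for illustration; the actual
paths and commands are defined in [`project.yml`](project.yml).

```python
import spacy
from spacy.tokens import DocBin

# Placeholder paths; the real locations are configured in project.yml.
nlp = spacy.blank("tl")
doc_bin = DocBin().from_disk("corpus/spacy/train.spacy")

with open("corpus/iob/train.iob", "w", encoding="utf-8") as f:
    for doc in doc_bin.get_docs(nlp.vocab):
        for token in doc:
            # token.ent_iob_ is "B", "I", or "O"; append the entity type when present.
            tag = f"{token.ent_iob_}-{token.ent_type_}" if token.ent_iob_ != "O" else "O"
            f.write(f"{token.text}\t{tag}\n")
        f.write("\n")  # blank line between documents
```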
## 📋 project.yml
The [`project.yml`](project.yml) defines the data assets required by the
project, as well as the available commands and workflows. For details, see the
[spaCy projects documentation](https://spacy.io/usage/projects).
### ⏯ Commands

The following commands are defined by the project. They
can be executed using [`spacy project run [name]`](https://spacy.io/api/cli#project-run).
Commands are only re-run if their inputs have changed.

| Command | Description |
| --- | --- |
| `setup-data` | Prepare the Tagalog corpora used for training various spaCy components |
| `upload-to-hf` | Upload dataset to HuggingFace Hub |
### ⏭ Workflows

The following workflows are defined by the project. They
can be executed using [`spacy project run [name]`](https://spacy.io/api/cli#project-run)
and will run the specified commands in order. Commands are only re-run if their
inputs have changed.

| Workflow | Steps |
| --- | --- |
| `all` | `setup-data` → `upload-to-hf` |
### 🗂 Assets

The following assets are defined by the project. They can
be fetched by running [`spacy project assets`](https://spacy.io/api/cli#project-assets)
in the project directory.

| File | Source | Description |
| --- | --- | --- |
| `assets/corpus.tar.gz` | URL | Annotated TLUnified corpora in spaCy format with train, dev, and test splits. |
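
In practice, rebuilding the dataset from scratch should amount to fetching the
asset and then running the full workflow, i.e. `python -m spacy project assets`
followed by `python -m spacy project run all` from the project root (the
`upload-to-hf` step additionally requires HuggingFace Hub credentials).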
<!-- SPACY PROJECT: AUTO-GENERATED DOCS END (do not remove) -->