---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: label
    dtype:
      class_label:
        names:
          '0': broadleaved_indigenous_hardwood
          '1': deciduous_hardwood
          '2': grose_broom
          '3': harvested_forest
          '4': herbaceous_freshwater_vege
          '5': high_producing_grassland
          '6': indigenous_forest
          '7': lake_pond
          '8': low_producing_grassland
          '9': manuka_kanuka
          '10': shortrotation_cropland
          '11': urban_build_up
          '12': urban_parkland
  - name: caption
    dtype: string
  - name: token
    dtype: string
  splits:
  - name: train
    num_bytes: 21868272
    num_examples: 260
  download_size: 21854979
  dataset_size: 21868272
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
task_categories:
- text-to-image
language:
- en
tags:
- climate
pretty_name: Waikato Aerial 2017 Sample with Blip Captions
size_categories:
- n<1K
---
This is a re-upload of a random sample of 260 images and their corresponding BLIP captions from the original waikato_aerial_imagery_2017
classification dataset hosted at https://datasets.cms.waikato.ac.nz/taiao/waikato_aerial_imagery_2017/, redistributed under the same license. Additional dataset information is available at that URL.
The BLIP model used for captioning: Salesforce/blip-image-captioning-large
The images belong to 13 unique categories, and each caption contains a unique token for its class. These tokens are useful for fine-tuning text-to-image
models such as Stable Diffusion.
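
As a sketch of how the per-class tokens could be used when preparing fine-tuning prompts: each caption embeds one class token, so the class can be recovered from the caption text alone. The token strings below are placeholders for illustration; the actual tokens are stored in this dataset's `token` column.

```python
# Hypothetical token-to-class mapping; the real tokens come from the
# dataset's `token` column and differ from these placeholders.
CLASS_TOKENS = {
    "<tok_indigenous_forest>": "indigenous_forest",
    "<tok_lake_pond>": "lake_pond",
}

def class_for_caption(caption: str):
    """Return the class name whose token appears in the caption, or None."""
    for token, class_name in CLASS_TOKENS.items():
        if token in caption:
            return class_name
    return None

print(class_for_caption("aerial view of dense trees <tok_indigenous_forest>"))
```

The same lookup can be run in reverse when building prompts: pick the token for a target class and append it to a generic caption so the fine-tuned model associates that token with the class.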