---
language:
- ar
configs:
- config_name: default
  data_files:
  - split: Amiri
    path: Amiri/*.csv
  - split: Sakkal_Majalla
    path: Sakkal_Majalla/*.csv
  - split: Arial
    path: Arial/*.csv
  - split: Calibri
    path: Calibri/*.csv
  - split: Scheherazade_New
    path: Scheherazade_New/*.csv
features:
  text:
    dtype: string
tags:
- dataset
---
# Dataset Card for xya22er/text_to_image
## Dataset Details

### Dataset Description
This dataset is designed for training and evaluating Optical Character Recognition (OCR) models for Arabic text. It extends an open-source dataset and includes text rendered in multiple Arabic fonts (Amiri, Sakkal Majalla, Arial, Calibri, and Scheherazade New). The rendered samples simulate real-world book layouts to improve OCR accuracy on printed material.
## Dataset Structure
The dataset is divided into five splits, one per font (Sakkal_Majalla, Amiri, Arial, Calibri, and Scheherazade_New). Each split contains data rendered in a single font, with the following attributes:

- `image_name`: Unique identifier for each image.
- `chunk`: The text content associated with the image.
- `font_name`: The font used to render the text.
- `image_base64`: Base64-encoded representation of the rendered image.
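A quick way to confirm this schema is to load a single split without streaming and inspect its columns. The snippet below is a minimal sketch: the split and column names are those listed above, and the repository id is the one used in the usage example further down.

```python
from datasets import load_dataset

# Load only the Amiri split (non-streaming) and inspect its schema.
amiri = load_dataset("xya22er/text_to_image", split="Amiri")

print(amiri.column_names)  # expected: image_name, chunk, font_name, image_base64
print(amiri.num_rows)      # number of rendered samples in this split
```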
## How to Use
```python
from datasets import load_dataset
import base64
from io import BytesIO
from PIL import Image

# Load the dataset with streaming enabled
ds = load_dataset("xya22er/text_to_image", streaming=True)
print(ds)

# Iterate over a specific font split (e.g., Amiri)
for sample in ds["Amiri"]:
    image_name = sample["image_name"]
    chunk = sample["chunk"]          # Arabic text transcription
    font_name = sample["font_name"]

    # Decode the Base64-encoded image
    image_data = base64.b64decode(sample["image_base64"])
    image = Image.open(BytesIO(image_data))

    # Show the image (optional)
    image.show()

    # Print the details
    print(f"Image Name: {image_name}")
    print(f"Font Name: {font_name}")
    print(f"Text Chunk: {chunk}")

    # Break after one sample for testing
    break
```
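For training, it is usually more convenient to work with decoded PIL images than with Base64 strings. One option, sketched below under the same field-name assumptions as the example above, is to map the decoding step over a streamed split so images are decoded on the fly.

```python
import base64
from io import BytesIO

from datasets import load_dataset
from PIL import Image

def decode_image(sample):
    # Replace the Base64 string with a decoded PIL image for downstream use.
    sample["image"] = Image.open(BytesIO(base64.b64decode(sample["image_base64"])))
    return sample

# Stream the Amiri split and decode images lazily during iteration.
amiri = load_dataset("xya22er/text_to_image", split="Amiri", streaming=True)
amiri = amiri.map(decode_image)

for sample in amiri:
    print(sample["chunk"], sample["image"].size)
    break
```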