Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed with a cast error (error code: DatasetGenerationCastError). All the data files must have the same columns, but at some point there is 1 new column ({'Unnamed: 0'}) and 1 missing column ({'source'}). This happened while the csv dataset builder was generating data using hf://datasets/ankitkushwaha90/cmd_linux_macos_other_all/cmd_commands.csv (at revision f621ca47d5b9d55033b3b86684792db13a3bdec8). Please either edit the data files to have matching columns, or separate them into different configurations (see the docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).
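One way to resolve the error above is to normalize the offending CSV before uploading: drop the stray `Unnamed: 0` index column and add the missing `source` column so the schema matches the other data files. A minimal sketch using the standard-library `csv` module on an in-memory example (the sample row and the `cmd` value are illustrative, not taken from the file):

```python
import csv
import io

# Illustrative input mimicking the broken file: a stray "Unnamed: 0" index
# column and no "source" column.
src = io.StringIO("Unnamed: 0,name,description\n0,ARP,Address Resolution Protocol\n")
dst = io.StringIO()

reader = csv.DictReader(src)
writer = csv.DictWriter(dst, fieldnames=["name", "description", "source"])
writer.writeheader()
for row in reader:
    row.pop("Unnamed: 0", None)  # drop the stray pandas index column
    row["source"] = "cmd"        # add the column the other files have
    writer.writerow(row)

print(dst.getvalue().splitlines()[0])  # -> name,description,source
```

In practice you would read and write real files instead of `StringIO` objects; the schema fix is the same either way.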
| name | description | source |
|---|---|---|
| AccessChk | Get the security descriptor (SDDL) for an object (file, directory, reg key). | cmd |
| ADDUSERS | Add or list users to/from a CSV file | cmd |
| ADmodcmd | Active Directory Bulk Modify | cmd |
| ARP | Address Resolution Protocol | cmd |
| ASSOC | Change file extension associations • | cmd |
| ATTRIB | Change file attributes | cmd |
| BCDBOOT | Create or repair a system partition | cmd |
| BCDEDIT | Manage Boot Configuration Data | cmd |
| BITSADMIN | Background Intelligent Transfer Service | cmd |
| BOOTREC | Repair or replace a partition boot sector | cmd |
| BREAK | Do nothing, successfully • | cmd |
| BROWSTAT | Get domain, browser and PDC info | cmd |
| CACLS | Change file permissions | cmd |
| CALL | Call one batch program from another • | cmd |
| CERTREQ | Request certificate from a certification authority | cmd |
| CERTUTIL | Manage certification authority (CA) files and services | cmd |
| CD | Change Directory - move to a specific Folder • | cmd |
| CHANGE | Change Terminal Server Session properties | cmd |
| CHANGEPK | Upgrade device Edition/Product Key | cmd |
| CHCP | Change the active console Code Page | cmd |
| CHDIR | Change Directory - move to a specific Folder • | cmd |
| CHKDSK | Check Disk - check and repair disk problems | cmd |
| CHKNTFS | Check the NTFS file system | cmd |
| CHOICE | Accept keyboard input to a batch file | cmd |
| CIPHER | Encrypt or Decrypt files/folders, overwrite data. | cmd |
| CleanMgr | Automated cleanup of Temp files, recycle bin | cmd |
| CLIP | Copy STDIN to the Windows clipboard | cmd |
| CLS | Clear the screen • | cmd |
| CMD | Start a new CMD shell | cmd |
| CMDKEY | Manage stored usernames/passwords | cmd |
| COLOR | Change colors of the CMD window • | cmd |
| COMP | Compare the contents of two files or sets of files | cmd |
| COMPACT | Compress files or folders on an NTFS partition | cmd |
| COMPRESS | Compress one or more files | cmd |
| CON | Console input | cmd |
| CONVERT | Convert a FAT drive to NTFS | cmd |
| COPY | Copy one or more files to another location • | cmd |
| Coreinfo | Show the mapping between logical & physical processors | cmd |
| CSCcmd | Client-side caching (Offline Files) | cmd |
| CSVDE | Import or Export Active Directory data | cmd |
| CURL | Transfer data from or to a server | cmd |
| DATE | Display or set the date • | cmd |
| DEFRAG | Defragment hard drive | cmd |
| DEL | Delete one or more files • | cmd |
| DELPROF | Delete user profiles | cmd |
| DELTREE | Delete a folder and all subfolders | cmd |
| DevCon | Device Manager Command Line Utility | cmd |
| DIR | Display a list of files and folders • | cmd |
| DIRUSE | Display directory sizes/usage | cmd |
| DISKPART | Disk Administration | cmd |
| DISKSHADOW | Volume Shadow Copy Service | cmd |
| DISKUSE | Show the space used in folders | cmd |
| DISM | Deployment Image Servicing and Management | cmd |
| DisplaySwitch | Specify which display to use and how to use it | cmd |
| DNSCMD | Manage DNS servers | cmd |
| DOSKEY | Edit command line, recall commands, and create macros | cmd |
| DriverQuery | Display installed device drivers | cmd |
| DSACLs | Active Directory ACLs | cmd |
| DSAdd | Add items to Active Directory (user group computer) | cmd |
| DSGet | View items in Active Directory (user group computer) | cmd |
| DSQuery | Search Active Directory (user group computer) | cmd |
| DSMod | Modify items in Active Directory (user group computer) | cmd |
| DSMove | Move an Active Directory Object | cmd |
| DSRM | Remove items from Active Directory | cmd |
| DSREGCMD | Directory Service Registration | cmd |
| DU | Display directory sizes/usage | cmd |
| ECHO | Display message on screen • | cmd |
| ENDLOCAL | End localisation of the environment in a batch file • | cmd |
| ERASE | Delete one or more files • | cmd |
| EVENTCREATE | Add a message to the Windows event log | cmd |
| EXIT | Quit the current script/routine and set an errorlevel • | cmd |
| EXPAND | Uncompress CAB files | cmd |
| EXPLORER | Open Windows Explorer | cmd |
| EXTRACT | Uncompress CAB files | cmd |
| FC | Compare two files | cmd |
| FIND | Search for a text string in a file | cmd |
| FINDSTR | Search for strings in files | cmd |
| FLTMC | Manage MiniFilter drivers | cmd |
| FONDUE | Features on Demand User Experience Tool | cmd |
| FOR /F | Loop command: against a set of files • | cmd |
| FOR /F | Loop command: against the results of another command • | cmd |
| FOR | Loop command: all options Files, Directory, List • | cmd |
| FORFILES | Batch process multiple files | cmd |
| FORMAT | Format a disk | cmd |
| FREEDISK | Check free disk space | cmd |
| FSUTIL | File and Volume utilities | cmd |
| FTP | File Transfer Protocol | cmd |
| FTYPE | File extension file type associations • | cmd |
| GETMAC | Display the Media Access Control (MAC) address | cmd |
| GOTO | Direct a batch program to jump to a labelled line • | cmd |
| GPRESULT | Display Resultant Set of Policy information | cmd |
| GPUPDATE | Update Group Policy settings | cmd |
| HELP | Online Help | cmd |
| HOSTNAME | Display the host name of the computer | cmd |
| iCACLS | Change file and folder permissions | cmd |
| IEXPRESS | Create a self extracting ZIP file archive | cmd |
| IF | Conditionally perform a command • | cmd |
| IFMEMBER | Is the current user a member of a group | cmd |
| IPCONFIG | Configure IP | cmd |
| INUSE | Replace files that are in use by the OS | cmd |
End of preview.
linux command
windows command
mac command
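Once the rows are loaded, the `source` column lets the three platforms above be tallied or filtered. A minimal sketch with illustrative rows (not actual rows from the dataset):

```python
from collections import Counter

# Illustrative rows mimicking the dataset schema (name, description, source).
rows = [
    {"name": "DIR", "description": "Display a list of files and folders", "source": "cmd"},
    {"name": "ls", "description": "List directory contents", "source": "linux"},
    {"name": "open", "description": "Open files and applications", "source": "mac"},
    {"name": "COPY", "description": "Copy one or more files", "source": "cmd"},
]

# Count commands per platform tag.
counts = Counter(row["source"] for row in rows)
print(counts["cmd"])  # -> 2
```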
# Complete code to fine-tune a T5 model using the provided "all_commands.csv" dataset.
# Task: We define a text-to-text task where the input is a prompt like "Describe the command: {name} in {source}"
# and the target output is the description.
# This turns the model into a command description generator/query tool.
#
# Prerequisites:
# - Install required libraries: pip install transformers datasets torch sentencepiece
# - Save the provided CSV content as "all_commands.csv" in the working directory.
# The CSV has columns: name, description, source
#
# Notes:
# - Using T5-small for efficiency; change to t5-base or larger if needed.
# - Training on a GPU is recommended for larger models/epochs.
# - Adjust hyperparameters as needed (e.g., epochs, batch size).
from transformers import T5ForConditionalGeneration, T5Tokenizer, Trainer, TrainingArguments
from datasets import load_dataset
import torch
# Load model and tokenizer
model_name = "t5-small" # You can use "t5-base" for better performance (requires more resources)
model = T5ForConditionalGeneration.from_pretrained(model_name)
tokenizer = T5Tokenizer.from_pretrained(model_name)
# Load the dataset from CSV
# Assuming "all_commands.csv" is in the current directory with columns: name, description, source
dataset = load_dataset("csv", data_files={"train": "all_commands.csv"})
# Split into train and validation (80/20 split for simplicity; adjust as needed)
dataset = dataset["train"].train_test_split(test_size=0.2)
dataset["validation"] = dataset["test"] # Rename 'test' to 'validation' for Trainer
# Preprocess function: Create input-output pairs
# Input: "Describe the command: {name} in {source}"
# Target: "{description}"
def preprocess_function(examples):
    inputs = [f"Describe the command: {name} in {source}" for name, source in zip(examples["name"], examples["source"])]
    targets = examples["description"]
    # Tokenize inputs
    model_inputs = tokenizer(inputs, max_length=128, truncation=True, padding="max_length")
    # Tokenize targets (labels), masking padding tokens with -100 so the loss ignores them
    labels = tokenizer(targets, max_length=256, truncation=True, padding="max_length")
    model_inputs["labels"] = [
        [(tok if tok != tokenizer.pad_token_id else -100) for tok in seq]
        for seq in labels["input_ids"]
    ]
    return model_inputs
# Apply preprocessing to the dataset
tokenized_dataset = dataset.map(preprocess_function, batched=True, remove_columns=dataset["train"].column_names)
# Training arguments
training_args = TrainingArguments(
    output_dir="./new_cmd_model",          # Directory to save checkpoints
    evaluation_strategy="epoch",           # Evaluate at the end of each epoch (renamed to eval_strategy in newer transformers)
    learning_rate=5e-5,                    # Learning rate
    per_device_train_batch_size=8,         # Batch size for training (adjust based on GPU memory)
    per_device_eval_batch_size=8,          # Batch size for evaluation
    num_train_epochs=3,                    # Number of epochs (increase for better results)
    weight_decay=0.01,                     # Weight decay for regularization
    save_strategy="epoch",                 # Save a checkpoint at the end of each epoch
    load_best_model_at_end=True,           # Load the best model at the end
    metric_for_best_model="eval_loss",     # Use evaluation loss to pick the best model
    greater_is_better=False,               # Lower loss is better
    fp16=torch.cuda.is_available(),        # Mixed precision only when a CUDA GPU is available
)
# Initialize Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["validation"],
)
# Train the model
trainer.train()
# Save the fine-tuned model and tokenizer
model.save_pretrained("./new_cmd_model")
tokenizer.save_pretrained("./new_cmd_model")
print("Fine-tuning complete. Model saved to './new_cmd_model'.")
# Optional: Example inference code to test the model
# Load the fine-tuned model for testing
fine_tuned_model = T5ForConditionalGeneration.from_pretrained("./new_cmd_model")
fine_tuned_tokenizer = T5Tokenizer.from_pretrained("./new_cmd_model")
# Example prompt
prompt = "Describe the command: ls in linux"
# Tokenize and generate
inputs = fine_tuned_tokenizer(prompt, return_tensors="pt")
outputs = fine_tuned_model.generate(inputs["input_ids"], max_length=100, num_beams=4, early_stopping=True)
generated_text = fine_tuned_tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Example generated description:", generated_text)
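The prompt/target mapping built inside `preprocess_function` can be sanity-checked without loading the tokenizer. A small sketch on illustrative rows (the `to_pair` helper is for demonstration and not part of the script above):

```python
# Illustrative rows following the dataset schema.
rows = [
    {"name": "ARP", "description": "Address Resolution Protocol", "source": "cmd"},
    {"name": "ls", "description": "List directory contents", "source": "linux"},
]

def to_pair(row):
    """Build the (input prompt, target description) pair used for T5 training."""
    prompt = f"Describe the command: {row['name']} in {row['source']}"
    return prompt, row["description"]

pairs = [to_pair(r) for r in rows]
print(pairs[1][0])  # -> Describe the command: ls in linux
```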
Spiritual text dataset used for fine-tuning
# Fine-Tuning Code for Understanding English and Sanskrit
# Focused on Stotrams of Mahakali and Lord Shiva, Sanatan Hindu Calendar, and Astrology
# This script uses Hugging Face Transformers to fine-tune a language model (distilgpt2) on collected texts.
# The dataset includes Sanskrit stotrams (transliterated/Devanagari), English meanings, and explanations.
# Run this in an environment with Python, and install required libraries if not present.
# Step 1: Install required libraries (if not already installed)
# !pip install transformers datasets torch
import torch
from transformers import (
    GPT2Tokenizer,
    GPT2LMHeadModel,
    TextDataset,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
# Step 2: Prepare the dataset
# Collected texts from sources:
# - Sri Maha Kali Stotram (transliterated Sanskrit)
# - Shiva Tandava Stotram (Sanskrit in Devanagari with English meanings)
# - Detailed explanation of Sanatan Hindu Calendar (Panchang)
# - Basics of Hindu Astrology (Jyotisha)
mahakali_stotram = """
Dhyanam:
śavārūḍhāṃ mahābhīmāṃ ghōradaṃṣṭrāṃ varapradāṃ hāsyayuktāṃ triṇētrāñcha kapāla kartrikā karām ।
muktakēśīṃ lalajjihvāṃ pibantīṃ rudhiraṃ muhuḥ chaturbāhuyutāṃ dēvīṃ varābhayakarāṃ smarēt ॥
śavārūḍhāṃ mahābhīmāṃ ghōradaṃṣṭrāṃ hasanmukhīṃ chaturbhujāṃ khaḍgamuṇḍavarābhayakarāṃ śivām ।
muṇḍamālādharāṃ dēvīṃ lalajjihvāṃ digambarāṃ ēvaṃ sañchintayētkāḻīṃ śmaśanālayavāsinīm ॥
Stotram:
1. viśvēśvarīṃ jagaddhātrīṃ sthitisaṃhārakāriṇīm ।
nidrāṃ bhagavatīṃ viṣṇōratulāṃ tējasaḥ prabhām ॥ 1 ॥
2. tvaṃ svāhā tvaṃ svadhā tvaṃ hi vaṣaṭkāraḥ svarātmikā ।
sudhā tvamakṣarē nityē tridhā mātrātmikā sthitā ॥ 2 ॥
3. arthamātrāsthitā nityā yānuchchāryā viśēṣataḥ ।
tvamēva sandhyā sāvitrī tvaṃ dēvī jananī parā ॥ 3 ॥
4. tvayaitaddhāryatē viśvaṃ tvayaitadsṛjyatē jagat ।
tvayaitatpālyatē dēvi tvamatsyantē cha sarvadā ॥ 4 ॥
5. visṛṣṭau sṛṣṭirūpā tvaṃ sthitirūpā cha pālanē ।
tathā saṃhṛtirūpāntē jagatō'sya jaganmayē ॥ 5 ॥
6. mahāvidyā mahāmāyā mahāmēdhā mahāsmṛtiḥ ।
mahāmōhā cha bhavatī mahādēvī mahēśvarī ॥ 6 ॥
7. prakṛtistvaṃ cha sarvasya guṇatrayavibhāvinī ।
kālarātrirmahārātrirmōharātriścha dāruṇā ॥ 7 ॥
8. tvaṃ śrīstvamīśvarī tvaṃ hrīstvaṃ buddhirbōdhalakṣaṇā ।
lajjā puṣṭistathā tuṣṭistvaṃ śāntiḥ kṣāntirēva cha ॥ 8 ॥
9. khaḍginī śūlinī ghōrā gadinī chakriṇī tathā ।
śaṅkhinī chāpinī bāṇabhuśuṇḍīparighāyudhā ॥ 9 ॥
10. saumyā saumyatarāśēṣā saumyēbhyastvatisundarī ।
parāparāṇāṃ paramā tvamēva paramēśvarī ॥ 10 ॥
11. yachcha kiñchit kvachidvastu sadasadvākhilātmikē ।
tasya sarvasya yā śaktiḥ sā tvaṃ kiṃ stūyasē tadā ॥ 11 ॥
12. yayā tvayā jagatsraṣṭā jagatpātyatti yō jagat ।
sō'pi nidrāvaśaṃ nītaḥ kastvāṃ stōtumihēśvaraḥ ॥ 12 ॥
13. viṣṇuḥ śarīragrahaṇamahamīśāna ēva cha
"""
shiva_tandava_stotram = """
Verse 1:
Sanskrit Text:
जटाटवीगलज्जलप्रवाहपावितस्थले गलेऽवलम्ब्य लम्बितां भुजङ्गतुङ्गमालिकाम् ।
डमड्डमड्डमड्डमन्निनादवड्डमर्वयं चकार चण्डताण्डवं तनोतु नः शिवः शिवम् ॥१॥
English Meaning:
(My Prostrations to Lord Shiva, the description of whose great Tandava Dance sends a thrill of Blessedness through the Devotees)
1.1: (There dances Shiva His Great Tandava) From His Huge Matted Hair like a Forest, is Pouring out and Flowing down the Sacred Water of the River Ganges, and making the Ground Holy; on that Holy Ground Shiva is dancing His Great Tandava Dance;
1.2: Supporting His Neck and Hanging down are the Lofty Serpents which are Adorning His Neck like Lofty Garlands,
1.3: His Damaru is continuously Weaving out the Sound - Damad, Damad, Damad, Damad - and filling the Air all around,
1.4: Shiva Performed such a Passionate Tandava; O my Lord Shiva, Please Extend the Auspicious Tandava Dance within our beings also.
Verse 2:
Sanskrit Text:
जटाकटाहसम्भ्रमभ्रमन्निलिम्पनिर्झरी विलोलवीचिवल्लरीविराजमानमूर्धनि ।
धगद्धगद्धगज्जलल्ललाटपट्टपावके किशोरचन्द्रशेखरे रतिः प्रतिक्षणं मम ॥२॥
English Meaning:
(My Prostrations to Lord Shiva, the description of whose great Tandava Dance sends a thrill of Blessedness through the Devotees)
2.1: (There dances Shiva His Great Tandava) His Huge Matted Hair like a Caldron is Revolving round and round; and Whirling with it is the Great River Goddess Ganga, ...
2.2: ... and the Strands of His Matted Hair which are like Huge Creepers are Waving like Huge Waves; His Forehead is Brilliantly Effulgent and ...
2.3: ... on the Surface of that Huge Forehead is Burning a Blazing Fire with the sound - Dhagad, Dhagad, Dhagad (referring to His Third Eye), ...
2.4: ... and a Young Crescent Moon is Shining on the Peak (i.e. on His Head); O my Lord Shiva, Your Great Tandava Dance is passing a surge of Delight Every Moment through my being.
Verse 3:
Sanskrit Text:
धराधरेन्द्रनन्दिनीविलासबन्धुबन्धुर स्फुरद्दिगन्तसन्ततिप्रमोदमानमानसे ।
कृपाकटाक्षधोरणीनिरुद्धदुर्धरापदि क्वचिद्दिगम्बरे मनो विनोदमेतु वस्तुनि ॥३॥
English Meaning:
(My Prostrations to Lord Shiva, the description of whose great Tandava Dance sends a thrill of Blessedness through the Devotees)
3.1: (There dances Shiva His Great Tandava) And Now He is accompanied by the Beautiful Divine Mother Who is the Supporter of the Earth and the Daughter of the Mountain King; She is ever His Companion in His various Divine Sports,
3.2: The Entire Horizon is Shaking with the force of that Tandava, and the subtle waves of the Tandava is entering the sphere of the Mind and raising waves of Excessive Joy,
3.3: That Shiva, the Flow of whose Graceful Side Glance can Restrain even the Unrestrainable Calamities, and ...
3.4: ... Who is Digambara (clothed with sky signifying He is ever-free and without any desire), Sometimes in His Mind Materializes the wish to Play the Divine Sports (and hence this Great Tandava).
Verse 4:
Sanskrit Text:
जटाभुजङ्गपिङ्गलस्फुरत्फणामणिप्रभा कदम्बकुङ्कुमद्रव
"""
hindu_calendar_explanation = """
The Hindu calendar is a traditional lunisolar calendar system used in the Indian subcontinent and Southeast Asia, with regional variations for social and religious purposes. It is based on the sidereal year for the solar cycle and adjusts lunar cycles approximately every three years to align with seasonal and agricultural needs. Unlike the Gregorian calendar, which adds days to months, the Hindu calendar inserts an extra full month, known as Adhika Masa, every 32–33 months to ensure festivals and rituals occur in the appropriate season. This system has been in use since Vedic times and remains significant for setting Hindu festival dates, as well as being adopted by early Buddhist and Jain communities for their calendars.
Lunisolar Nature
The Hindu calendar combines lunar and solar elements, using the lunar cycle for months and days, and the solar cycle for the year. It accounts for the mismatch between the lunar year (approximately 354 days) and the solar year (approximately 365 days) through intercalary months, ensuring alignment with natural and agricultural cycles.
Components of the Panchang
The Panchang, or Panjika in Eastern India, is a comprehensive almanac with five key components (angas):
- Tithi: Lunar day, based on the angular distance between the Sun and Moon, varying between 21.5 and 26 hours, used for timing rituals and festivals.
- Vara: Weekday, corresponding to celestial bodies (e.g., Ravi for Sunday, Soma for Monday), with names aligned with Indo-European calendars, divided into 60 ghatika (24 minutes each).
- Nakshatra: Divisions of the ecliptic, each 13° 20', starting from 0° Aries, used for astrological and ritual purposes.
- Yoga: Calculated by adding the longitudes of the Sun and Moon, normalized to 0°–360°, and divided into 27 parts, each 800 arcminutes, influencing auspicious timings.
- Karana: Half of a tithi, defined by the angular distance between the Sun and Moon increasing by 6°, with 11 preset karaṇas (4 fixed and 7 repeating) to cover 30 tithis.
Months
The Hindu calendar has both solar and lunar months:
- Solar Months: Based on the Sun’s transit through the zodiac (rāśi), divided into 12 parts of 30° each, named after constellations (e.g., Mesha for Aries, Vṛṣabha for Taurus). Regional calendars like Tamil, Bengali, and Malayalam adapt these names, with variations in pronunciation and order. Solar months correspond to seasons (ṛtu), such as Vasanta (spring) and Grīṣma (summer), and are linked to Gregorian months (e.g., Mesha spans April–May).
- Lunar Months: Based on lunar cycles, with two fortnights (pakṣa): Shukla Paksha (waxing, bright half ending in full moon) and Krishna Paksha (waning, dark half ending in new moon). There are two traditions:
- Amānta: Ends on new moon day, followed by most peninsular states except Tamil Nadu, Kerala, etc.
- Purnimānta: Ends on full moon day, followed in northern India and Nepal, restored by King Vikramaditya in 57 BCE. Lunar months are named variably across regions (e.g., Chaitra, Vaishakha), with the Sun’s transit into Mesha marking Chaitra as the first month.
To align lunar and solar calendars, an extra month (Adhika Masa or Purushottam Masa) is inserted every 32.5 months on average, with complex rules to avoid repetition of certain months (Mārgaśīrṣa, Pausha, Magha). Rare corrections include dropping a month (kshaya month) under specific astronomical conditions, such as in 1 BCE when Pausha was omitted.
Eras (Samvat)
The calendar uses several eras (samvat), with three significant ones:
- Vikram Samvat: Starts in 57 BCE, linked to King Vikramaditya, common in northern, western, central, and eastern India, with the new year in Vaishakha.
- Shaka Samvat: Includes the Old Shaka Era (epoch uncertain, possibly 1st millennium BCE) and Saka Era of 78 CE, prevalent in southern India and Southeast Asia, with inscriptions like Kedukan Bukit (682 CE) using it.
- Indian National Calendar: A modern standardized calendar combining Hindu systems, though traditional calendars remain in use.
Regional Variations
The Hindu Calendar Reform Committee identified over 30 variants, broadly categorized as:
- Lunar Emphasizing: Vikrama calendar (western, northern India, Nepal) and Shalivahana Shaka (Deccan region, e.g.
"""
hindu_astrology_explanation = """
Jyotisha, also known as Vedic astrology or Hindu astrology, is a traditional system rooted in the study of the Vedas and is one of the six auxiliary disciplines, or Vedangas, in Hinduism. It is derived from the Sanskrit word "jyotish," meaning light, such as that of the sun, moon, or heavenly bodies, and encompasses astronomy, astrology, and timekeeping using celestial movements. Its primary historical purpose was to maintain calendars and predict auspicious times for Vedic rituals.
History
Jyotisha's earliest texts are found in the Vedanga Jyotisha, linked to the Rigveda and Yajurveda, with versions consisting of 36 and 43 verses, respectively. Early jyotisha focused on calendar preparation for sacrificial rituals, with no mention of planets, and included references to eclipse-causing entities like Rahu and Svarbhānu in texts like the Atharvaveda and Chandogya Upanishad. The practice is based on the Vedic concept of bandhu, connecting the microcosm and macrocosm. There is debate over its origins, with some scholars suggesting Hellenistic influences, particularly after the Yavanajātaka (2nd century CE), while others argue for independent development, possibly with interactions with Greek astrology. The Āryabhaṭīya (5th century CE) and later texts like the Bṛhat Parāśara Horāśāstra (7th-8th centuries CE) and Sārāvalī (around 800 CE) form the basis of classical Indian astrology.
The Six Limbs (Vedangas)
Jyotisha is one of the six Vedangas, which are auxiliary disciplines supporting Vedic rituals. The other Vedangas include phonetics, grammar, etymology, metrics, and ritual. Jyotisha specifically deals with astronomy and astrology for timekeeping and ritual timing, but the content does not detail the other five Vedangas beyond their mention as part of the six.
Core Elements
Grahas (Planets)
The navagraha, or nine celestial bodies, include Surya (Sun), Chandra (Moon), Budha (Mercury), Shukra (Venus), Mangala (Mars), Bṛhaspati or Guru (Jupiter), Shani (Saturn), Rahu (North node of the Moon), and Ketu (South node of the Moon). These are believed to influence human affairs, with Rahu and Ketu, known as shadow planets, associated with eclipses and having an 18-year orbital cycle. Planets are considered indicators (karakas) of major life aspects like profession, marriage, and longevity, with Atmakaraka being the most significant for broad life contours.
Rashis (Zodiac Signs)
Jyotisha uses the sidereal zodiac, an imaginary 360-degree belt divided into 12 equal parts called rashis, each spanning 30 degrees. Unlike Western astrology, which uses the tropical zodiac, Jyotisha accounts for the precession of the equinoxes with an ayanāṃśa adjustment, aligning with constellations. The rashis, with their Sanskrit names and corresponding English signs, include Aries (Meṣa), Taurus (Vṛṣabha), Gemini (Mithuna), Cancer (Karka), Leo (Siṃha), Virgo (Kanyā), Libra (Tulā), Scorpio (Vṛścika), Sagittarius (Dhanuṣa), Capricorn (Makara), Aquarius (Kumbha), and Pisces (Mīna). Each rashi is associated with elements (fire, earth, air, water), qualities (movable, fixed, dual), and ruling bodies, excluding Uranus, Neptune, and Pluto, which are disregarded.
Bhavas (Houses)
The birth chart, or bhāva chakra, divides the 360-degree circle into 12 houses, each spanning 30 degrees, representing different life aspects. Bhavas are categorized into four purusharthas (aims in life): Dharma (duty, 1st, 5th, 9th houses), Artha (resources, 2nd, 6th, 10th), Kama (pleasure, 3rd, 7th, 11th), and Moksha (liberation, 4th, 8th, 12th). These houses personalize astrological signs to the individual, with each house influenced by associated karaka planets.
Nakshatras
Nakshatras, or lunar mansions, are 27 equal divisions of the night sky, each covering 13° 20′ of the ecliptic, identified by prominent stars. Historically, 28 nakshatras were enumerated, but modern practice uses 27, with the missing 28th being Abhijeeta. Each nakshatra is divided into four padas (quarters) of 3° 20′. The junction of
"""
# Concatenate all texts into one large corpus for language modeling
full_text = mahakali_stotram + "\n\n" + shiva_tandava_stotram + "\n\n" + hindu_calendar_explanation + "\n\n" + hindu_astrology_explanation
# Save to a file for TextDataset
with open("fine_tune_dataset.txt", "w", encoding="utf-8") as f:
    f.write(full_text)
# Step 3: Load tokenizer and model
model_name = "distilgpt2" # Small model for fine-tuning; supports English and can learn Sanskrit patterns
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
# Add padding token if not present
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
# Step 4: Create dataset
# Note: TextDataset is deprecated in recent transformers releases; it still works,
# but the datasets library is the modern route for this step.
dataset = TextDataset(
    tokenizer=tokenizer,
    file_path="fine_tune_dataset.txt",
    block_size=128,  # Adjust based on context length
)
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=False,  # Causal LM, not masked
)
# Step 5: Set up training arguments
training_args = TrainingArguments(
    output_dir="./fine_tuned_model",
    overwrite_output_dir=True,
    num_train_epochs=3,  # Adjust epochs as needed
    per_device_train_batch_size=4,
    save_steps=500,
    save_total_limit=2,
    logging_steps=100,
    learning_rate=5e-5,
)
# Step 6: Initialize Trainer and fine-tune
trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=dataset,
)
trainer.train()
# Step 7: Save the fine-tuned model
trainer.save_model("./fine_tuned_model")
tokenizer.save_pretrained("./fine_tuned_model")
print("Fine-tuning completed. The model can now be used for generation tasks related to English, Sanskrit stotrams, Hindu calendar, and astrology.")
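`TextDataset` above slices the tokenized corpus into fixed-length blocks of `block_size` tokens, dropping any remainder that does not fill a full block. A minimal sketch of that chunking on a toy token list (the `chunk` helper is illustrative, not part of the library):

```python
def chunk(tokens, block_size):
    """Slice a flat token list into consecutive full blocks; the tail is dropped."""
    return [
        tokens[i:i + block_size]
        for i in range(0, len(tokens) - block_size + 1, block_size)
    ]

toy_tokens = list(range(10))
blocks = chunk(toy_tokens, 4)
print(blocks)  # -> [[0, 1, 2, 3], [4, 5, 6, 7]]  (tokens 8 and 9 are dropped)
```

This is why a small `block_size` relative to the corpus wastes little data, while a very large one can silently discard a long tail.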
Downloads last month: 7