
FarisHijazi/kajiwoto.ai-chat

Dataset Description

This dataset is a processed version of the original FarisHijazi/kajiwoto.ai-chat data, containing extracted and re-formatted conversation data from Kajiwoto. Kajiwoto is a platform where users can create and interact with AI character datasets. This version focuses on providing the core conversational data in a ShareGPT-like format.

Important Note: This data is still in a raw format and requires careful consideration before use in training.

  • Size: 13,784 rows

Data Fields

The dataset contains the following fields, mirroring the original Kajiwoto dataset structure:

  • id: Unique identifier for the character.
  • name: Name of the character.
  • description: Character description (often empty).
  • profilePhotoUri: URI of the character's profile photo (may be null).
  • dominantColors: List of dominant colors in the profile photo (may be null).
  • personalities: Personality tags (may be null).
  • nsfw: Boolean indicating whether the character is marked as "not safe for work."
  • conversations: A list of turns representing a conversation. Each turn has:
    • from: Either "human" or "gpt" indicating the speaker.
    • value: The text of the message.
  • upvotes: Number of upvotes the character has received.
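For orientation, here is a minimal sketch of loading the records and accessing these fields. It assumes the data is distributed as a JSON Lines file; the filename used below is a placeholder, not the actual file name.

import json

# Minimal loading sketch (JSON Lines assumed; filename is a placeholder).
records = []
with open("kajiwoto.jsonl", encoding="utf-8") as f:
    for line in f:
        if line.strip():
            records.append(json.loads(line))

example = records[0]
print(example["id"], example["name"], example["nsfw"], example["upvotes"])
for turn in example["conversations"]:
    print(f'{turn["from"]}: {turn["value"]}')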

Data Format

The conversations field is formatted in a ShareGPT-like structure, where each turn indicates the speaker ("human" or "gpt") and the message content:

{
  "id": "M51k",
  "name": "Damian ",
  "description": "",
  "profilePhotoUri": null,
  "dominantColors": null,
  "personalities": null,
  "nsfw": false,
  "conversations": [
    {
      "from": "human",
      "value": "Still a horrific introvert?"
    },
    {
      "from": "gpt",
      "value": "I know you wanted me to find some friends, but I've already got one here in Gotham and one is more than enough"
    },
    {
      "from": "human",
      "value": "How is the friend making going "
    },
    {
      "from": "gpt",
      "value": "I know you wanted me to find some friends, but I've already got one here in Gotham and one is more than enough"
    },
    {
      "from": "human",
      "value": "You're not very good at this relax and have fun thing are you"
    },
    {
      "from": "gpt",
      "value": "I wasn't raised to have fun"
    },
   ...
  ],
  "upvotes": 0
}
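As a usage illustration, a small helper like the one below can flatten one record's turns into a plain transcript for inspection or prompt construction. This is a sketch, not part of the dataset: the speaker labels are arbitrary choices, and record is assumed to be a dict shaped like the example above.

def to_transcript(record, human_tag="User", gpt_tag="Character"):
    """Render one record's ShareGPT-style turns as a plain text transcript."""
    lines = []
    for turn in record["conversations"]:
        speaker = human_tag if turn["from"] == "human" else gpt_tag
        lines.append(f"{speaker}: {turn['value']}")
    return "\n".join(lines)

# Example: print the transcript of a loaded record.
# print(to_transcript(example))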

Intended Uses

This dataset is intended for research and experimentation in areas such as:

  • AI character development
  • Dialogue generation
  • Role-playing AI
  • Analyzing conversational patterns

Caveats and Recommendations

Preprocessing Considerations:

  • Filtering:
    • Upvotes: Consider filtering by the number of upvotes. Be aware, however, that characters with fewer views tend to have fewer upvotes even when their quality is high.
    • NSFW: The nsfw field may not be completely reliable. Some users engage in NSFW chats with characters not marked as NSFW. Manual inspection may be required.
  • Text Processing:
    • The text contains special markup that needs to be handled (see the sketch after this list):
      • #scene=happy: Scene descriptions.
      • %{Howdy|G'day|Yo}: Multiple possible options for a phrase.
      • Pronouns: require handling for consistency.
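The sketch below combines these steps: filtering on upvotes and the nsfw flag, stripping #scene= markers, and picking one option from each %{...|...} group. The thresholds and regular expressions are illustrative assumptions, not rules defined by the dataset, and records is assumed to be the list loaded earlier.

import random
import re

SCENE_RE = re.compile(r"#scene=\w+\s*")   # e.g. "#scene=happy"
CHOICE_RE = re.compile(r"%\{([^}]*)\}")   # e.g. "%{Howdy|G'day|Yo}"

def clean_text(text, rng=random):
    text = SCENE_RE.sub("", text)  # drop scene markers
    # replace each %{a|b|c} group with one randomly chosen option
    text = CHOICE_RE.sub(lambda m: rng.choice(m.group(1).split("|")), text)
    return text.strip()

def keep(record, min_upvotes=1, allow_nsfw=False):
    # Illustrative filter: drop NSFW characters and low-upvote characters.
    if record["nsfw"] and not allow_nsfw:
        return False
    return record["upvotes"] >= min_upvotes

filtered = [r for r in records if keep(r)]
for r in filtered:
    for turn in r["conversations"]:
        turn["value"] = clean_text(turn["value"])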

Limitations

  • The data may not be suitable for training without further filtering and processing.
  • Many short, repetitive messages may not accurately represent character behavior.
  • The dataset contains a mixture of good and bad quality conversations.