# Steam Universe Co-Review Network
A graph dataset connecting 25,996 Steam games through 861,232 weighted edges based on shared player reviews. If two games have many of the same reviewers, they're linked. The edge weight is the number of shared reviewers.
Paired with a full catalog of 82,928 Steam games (2005-2025) with genres, tags, ratings, prices, and developer info.
Live visualization: dr.eamer.dev/datavis/interactive/steam-network
## Files

### steam_network.json (41 MB)

The co-review graph:
- 25,996 nodes (games with enough reviews to form connections)
- 861,232 edges (weighted by shared reviewer count)
Node fields:
```json
{
  "id": "1000010",
  "title": "Crown Trick",
  "year": "2020",
  "rating": "Very Positive",
  "ratio": 85,
  "reviews": 5263,
  "price": 4.99,
  "genres": [3, 0, 6, 5],
  "tags": [40, 7],
  "developer": "NEXT Studios"
}
```
Link fields: source (node index), target (node index), weight (shared reviewers).
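The node and link fields above can be read into a plain adjacency map. A minimal sketch, assuming the file's top level is a `{"nodes": [...], "links": [...]}` object (the common D3 force-graph shape; the wrapper key names are an assumption, not stated in this card). The inline sample stands in for the real 41 MB file:

```python
from collections import defaultdict

# Tiny inline sample in the same shape as steam_network.json.
sample = {
    "nodes": [
        {"id": "1000010", "title": "Crown Trick"},
        {"id": "730", "title": "Counter-Strike 2"},
    ],
    "links": [{"source": 0, "target": 1, "weight": 12}],
}

adjacency = defaultdict(dict)
for link in sample["links"]:
    # source/target are indices into the nodes array, per the card.
    a = sample["nodes"][link["source"]]["id"]
    b = sample["nodes"][link["target"]]["id"]
    adjacency[a][b] = link["weight"]  # undirected graph: store both directions
    adjacency[b][a] = link["weight"]

print(adjacency["1000010"]["730"])  # 12
```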
### steam_all_2005.json (6.2 MB)

Full catalog of 82,928 games in a packed array format for compact transfer. Each record is:

`[name, year, approval_ratio, review_count, price, rating_index, genre_indices, tag_indices, developer]`
Includes lookup tables for 9 rating tiers, 33 genres, and 50 tags.
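Unpacking one record is a matter of indexing into the lookup tables. A minimal sketch with toy tables (the real file carries 9 rating tiers, 33 genres, and 50 tags; the table contents below are hypothetical):

```python
# Toy lookup tables standing in for the real ones in steam_all_2005.json.
ratings = ["Negative", "Mixed", "Very Positive"]
genres = ["Action", "Adventure", "RPG"]
tags = ["Roguelike", "Turn-Based"]

# One packed record, in the field order documented above.
record = ["Crown Trick", 2020, 85, 5263, 4.99, 2, [2, 0], [0, 1], "NEXT Studios"]

name, year, ratio, n_reviews, price, rating_idx, genre_idx, tag_idx, dev = record
game = {
    "name": name,
    "year": year,
    "approval_ratio": ratio,
    "reviews": n_reviews,
    "price": price,
    "rating": ratings[rating_idx],               # index into rating tiers
    "genres": [genres[i] for i in genre_idx],    # indices into genre table
    "tags": [tags[i] for i in tag_idx],          # indices into tag table
    "developer": dev,
}
print(game["rating"], game["genres"])  # Very Positive ['RPG', 'Action']
```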
### steam_force_layout.json (252 KB)
Pre-computed force-directed layout coordinates. Saves about 30 seconds of simulation time when loading the visualization.
## Pipeline scripts
Four Python scripts to rebuild the dataset from source:
| Script | Purpose |
|---|---|
| `build_network_v2.py` | Scans the 2 GB `recommendations.csv` to find co-reviewers and build the edge list |
| `enrich_data.py` | Processes the FronkonGames enriched CSV into the compact JSON catalog |
| `compute_layout.py` | Runs a force simulation in Python to pre-compute node positions |
| `build_all_games.py` | Legacy catalog builder (superseded by `enrich_data.py`) |
## Network Construction
The network is built from 41 million Steam user review records. Two games share an edge when 5 or more users reviewed both. Key parameters:
- `MIN_SHARED = 5` -- minimum shared reviewers for an edge
- `TOP_K = 50` -- maximum neighbors retained per node
- `MAX_USER_GAMES = 75` -- caps per-user pair generation to prevent combinatorial blowup
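The construction above can be sketched as pair counting over each user's review list. This toy version works on an in-memory `{user: [game_ids]}` mapping rather than streaming the 2 GB CSV, lowers `MIN_SHARED` so the tiny sample produces an edge, and omits the `TOP_K` pruning step (keeping only the 50 heaviest neighbors per node):

```python
from collections import Counter
from itertools import combinations

MIN_SHARED = 2       # 5 in the real pipeline; lowered for the toy data
MAX_USER_GAMES = 75  # skip huge libraries: pairs grow as O(n^2) per user

reviews_by_user = {
    "u1": ["a", "b", "c"],
    "u2": ["a", "b"],
    "u3": ["a", "b", "d"],
}

pair_counts = Counter()
for games in reviews_by_user.values():
    if len(games) > MAX_USER_GAMES:
        continue  # cap per-user pair generation
    for g1, g2 in combinations(sorted(games), 2):
        pair_counts[(g1, g2)] += 1  # one shared reviewer for this pair

# Keep only pairs co-reviewed by at least MIN_SHARED users.
edges = {pair: w for pair, w in pair_counts.items() if w >= MIN_SHARED}
print(edges)  # {('a', 'b'): 3}
```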
## Use Cases
- Game recommendations -- collaborative filtering through graph traversal
- Community detection -- find genre clusters, indie vs. AAA ecosystems
- Network analysis -- centrality measures, bridge games connecting disparate genres
- Market analysis -- price, rating, and genre distributions across 82K titles
- Visualization -- the companion interactive viz has 8 rendering modes
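The simplest form of the recommendation use case is ranking a game's neighbors by edge weight, since weight counts shared reviewers. A minimal sketch; the adjacency data here is illustrative, not taken from the real file:

```python
# Hypothetical slice of the adjacency map: neighbor -> shared-reviewer count.
adjacency = {
    "Crown Trick": {"Hades": 40, "Slay the Spire": 35, "Dota 2": 6},
}

def recommend(game, adj, k=2):
    """Return the k neighbors with the most shared reviewers."""
    neighbors = adj.get(game, {})
    return sorted(neighbors, key=neighbors.get, reverse=True)[:k]

print(recommend("Crown Trick", adjacency))  # ['Hades', 'Slay the Spire']
```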
## Sources
- Game metadata: FronkonGames/steam-games-dataset (January 2026 snapshot)
- User reviews: Kaggle Steam recommendations.csv (41M review records)
- Network: Derived from the review data
## Quick Stats
| Metric | Value |
|---|---|
| Games in catalog | 82,928 |
| Network nodes | 25,996 |
| Network edges | 861,232 |
| Genres | 33 |
| Tags | 50 |
| Rating tiers | 9 |
| Year range | 2005-2025 |
| Most reviewed | Counter-Strike 2 (8.8M reviews) |
## Author
Luke Steuber
- Website: lukesteuber.com
- Bluesky: @lukesteuber.com