Dataset Preview
Only a preview of the rows is shown; the full dataset viewer is unavailable for this dataset.
id (int64) | node_id (string) | title (string) | body (string) | state (string) | url (string) | number (int64) | comments (list) | labels (list)
---|---|---|---|---|---|---|---|---
3,322,672,274 |
PR_kwDOAA0YD86jtSEC
|
ENH: EA._cast_pointwise_result
|
- [x] closes #59895
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Still have a bunch of pyarrow tests involving duration/timestamp dtypes failing. Also need to update/remove the test files' _cast_pointwise_result methods.
xref #56430, could close that with a little effort.
I suspect a bunch of "pyarrow dtype retention" tests are solved by this, will update as I check.
|
open
|
https://github.com/pandas-dev/pandas/pull/62105
| 62,105 |
[
"The pyarrow duration stuff is caused by an upstream issue https://github.com/apache/arrow/issues/40620",
"@rhshadrach i think you had a recent issue/pr involving retaining nullable dtypes in a .map?"
] |
[] |
3,319,749,137 |
PR_kwDOAA0YD86jjfl1
|
RFC: Introduce `pandas.col`
|
xref @jbrockmendel 's comment https://github.com/pandas-dev/pandas/issues/56499#issuecomment-3180770808
I'd also discussed this with @phofl , @WillAyd , and @jorisvandenbossche (who originally showed us something like this in Basel at euroscipy 2023)
Demo:
```python
import pandas as pd
from datetime import datetime
df = pd.DataFrame(
    {
        "a": [1, -2, 3],
        "b": [4, 5, 6],
        "c": [datetime(2020, 1, 1), datetime(2025, 4, 2), datetime(2026, 12, 3)],
        "d": ["fox", "beluga", "narwhal"],
    }
)
result = df.assign(
    # The usual Series methods are supported
    a_abs=pd.col("a").abs(),
    # And can be combined
    a_centered=pd.col("a") - pd.col("a").mean(),
    a_plus_b=pd.col("a") + pd.col("b"),
    # Namespaces are supported too
    c_year=pd.col("c").dt.year,
    c_month_name=pd.col("c").dt.strftime("%B"),
    d_upper=pd.col("d").str.upper(),
).loc[pd.col("a_abs") > 1]  # This works in `loc` too
print(result)
```
Output:
```text
a b c d a_abs a_centered a_plus_b c_year c_month_name d_upper
1 -2 5 2025-04-02 beluga 2 -2.666667 3 2025 April BELUGA
2 3 6 2026-12-03 narwhal 3 2.333333 9 2026 December NARWHAL
```
Repr demo:
```python
In [4]: pd.col('value')
Out[4]: col('value')
In [5]: pd.col('value') * pd.col('weight')
Out[5]: (col('value') * col('weight'))
In [6]: (pd.col('value') - pd.col('value').mean()) / pd.col('value').std()
Out[6]: ((col('value') - col('value').mean()) / col('value').std())
In [7]: pd.col('timestamp').dt.strftime('%B')
Out[7]: col('timestamp').dt.strftime('%B')
```
What's here should be enough for it to be usable. For the type hints to show up correctly, extra work will be needed in `pandas-stubs`. But I think it should be possible to develop tooling to automate the `Expr` docs and types based on the `Series` ones (going to cc @Dr-Irv here too then)
As for the "`col`" name, that's what PySpark, Polars, Daft, and Datafusion use, so I think it'd make sense to follow the convention
---
I'm opening as a request for comments. Would people want this API to be part of pandas?
One of my main motivations for introducing it is that it avoids common scoping issues. For example, if you use `assign` to increment two columns' values by 10 and write `df.assign(**{col: lambda df: df[col] + 10 for col in ('a', 'b')})`, you'll be in for a surprise: each lambda captures the loop variable `col` late, so both lambdas end up reading its final value
```python
In [19]: df = pd.DataFrame({'a': [1,2,3], 'b': [4,5,6]})
In [20]: df.assign(**{col: lambda df: df[col] + 10 for col in ('a', 'b')})
Out[20]:
a b
0 14 14
1 15 15
2 16 16
```
whereas with `pd.col`, you get what you were probably expecting:
```python
In [4]: df.assign(**{col: pd.col(col) + 10 for col in ('a', 'b')})
Out[4]:
a b
0 11 14
1 12 15
2 13 16
```
Further advantages:
- expressions are introspectable, so the repr can be made to look nice, whereas an anonymous lambda always renders as something like `<function __main__.<lambda>(df)>`
- the syntax looks more modern and more aligned with modern tools
Expected objections:
- this expands the pandas API even further. Sure, I don't disagree, but I think this is a common enough and longstanding enough request that it's worth expanding it for this
---
TODO:
- tests, API docs, user guide. But first, I just wanted to get a feel for people's thoughts, and to see if anyone's opposed to it
|
open
|
https://github.com/pandas-dev/pandas/pull/62103
| 62,103 |
[
"> For the type hints to show up correctly, extra work should be done in `pandas-stubs`. But, I think it should be possible to develop tooling to automate the `Expr` docs and types based on the `Series` ones (going to cc @Dr-Irv here too then)\r\n\r\nWhen this is added, and then released, `pandas-stubs` can be updated with proper stubs.\r\n\r\nOne comment is that I'm not sure it will support some basic arithmetic, such as:\r\n```python\r\nresult = df.assign(addcon=pd.col(\"a\") + 10)\r\n```\r\nOr alignment with other series:\r\n```python\r\nb = df[\"b\"] # or this could be from a different DF\r\nresult = df.assign(add2=pd.col(\"a\") + b)\r\n```\r\n\r\nAlso, don't you need to add some tests??\r\n",
"Thanks for taking a look!\r\n\r\n> One comment is that I'm not sure it will support some basic arithmetic [...] Or alignment with other series:\r\n\r\nYup, they're both supported:\r\n```python\r\nIn [8]: df = pd.DataFrame({'a': [1,2,3]})\r\n\r\nIn [9]: s = pd.Series([90,100,110], index=[2,1,0])\r\n\r\nIn [10]: df.assign(\r\n ...: b=pd.col('a')+10,\r\n ...: c=pd.col('a')+s,\r\n ...: )\r\nOut[10]: \r\n a b c\r\n0 1 11 111\r\n1 2 12 102\r\n2 3 13 93\r\n```\r\n\r\n> Also, don't you need to add some tests??\r\n\r\n😄 Definitely, I just wanted to test the waters first, as I think this would be perceived as a significant API change",
"> Definitely, I just wanted to test the waters first, as I think this would be perceived as a significant API change\r\n\r\nI don't see it as a \"change\", more like an addition to the API that makes it easier to use. The existing way of using `df.assign(foo=lambda df: df[\"a\"] + df[\"b\"])` would still work, but `df.assign(foo=pd.col(\"a\") + pd.col(\"b\"))` is cleaner.\r\n",
"Is assign the main use case?",
"Currently it would only work in places that accept `DataFrame -> Series` callables which, as far as I know, is only `DataFrame.assign` and filtering with `DataFrame.loc`\r\n\r\nGetting it to work in `GroupBy.agg` is more complex, but [it is possible](https://narwhals-dev.github.io/narwhals/api-reference/dataframe/#narwhals.dataframe.DataFrame.group_by), albeit with some restrictions",
"I haven't seen any objections, so I'll work on adding docs + user guide + tests\r\n\r\nIf anyone intends to block this then I'd appreciate it if you could speak out as soon as possible (also going to cc @mroeschke here in case you were against this)",
"I would be OK adding this API. "
] |
[] |
3,318,801,943 |
PR_kwDOAA0YD86jgVck
|
ENH: Warn on unused arguments to resample
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
closed
|
https://github.com/pandas-dev/pandas/pull/62101
| 62,101 |
[
"Thanks @jbrockmendel "
] |
[
{
"id": 74975453,
"node_id": "MDU6TGFiZWw3NDk3NTQ1Mw==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Resample",
"name": "Resample",
"color": "207de5",
"default": false,
"description": "resample method"
},
{
"id": 1628184320,
"node_id": "MDU6TGFiZWwxNjI4MTg0MzIw",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Warnings",
"name": "Warnings",
"color": "f2f074",
"default": false,
"description": "Warnings that appear or should be added to pandas"
}
] |
3,316,066,112 |
PR_kwDOAA0YD86jXGRL
|
DOC: Sort the pandas API reference navbar in alphabetical order
|
- [x] closes #59164
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Changes made:
- Added JavaScript to sort only the items in each section of the sidebar and not the section headers.
- This is a re-submission of my previous pull request #62069, which was closed because issue #59164 had a "Needs Discussion" tag. Since then, the maintainer who placed the tag has acknowledged my previous PR and is okay with merging it. I've also added a comment in #59164 answering another maintainer's question about why JavaScript was used to fix the issue instead of Sphinx.
|
open
|
https://github.com/pandas-dev/pandas/pull/62099
| 62,099 |
[
"pre-commit.ci autofix"
] |
[] |
3,312,016,591 |
PR_kwDOAA0YD86jJqft
|
DOC: Add to_julian_date to DatetimeIndex methods listing
|
First time contributor here.
- ~[ ] closes #xxxx (Replace xxxx with the GitHub issue number)~ No issue related
- ~[ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature~
- ~[ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).~
- ~[ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~ Nothing new
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Add `to_julian_date` documentation entry to `pandas.DatetimeIndex`.
Currently, it is neither listed nor discoverable in the documentation: https://pandas.pydata.org/docs/reference/api/pandas.DatetimeIndex.html
`pandas.Timestamp` already has it linked https://pandas.pydata.org/docs/reference/api/pandas.Timestamp.to_julian_date.html
from the time it was added v0.14.0 https://pandas.pydata.org/docs/whatsnew/v0.14.0.html
I haven't read through the contributing section yet; please point out anything I'm missing. If it looks okay, tell me and I'll proceed with the whatsnew task.
Docs artifact for second commit (it works): https://github.com/pandas-dev/pandas/actions/runs/16895776565/job/47865152612?pr=62090
|
closed
|
https://github.com/pandas-dev/pandas/pull/62090
| 62,090 |
[
"Thanks @echedey-ls "
] |
[
{
"id": 134699,
"node_id": "MDU6TGFiZWwxMzQ2OTk=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Docs",
"name": "Docs",
"color": "3465A4",
"default": false,
"description": null
}
] |
3,310,931,686 |
PR_kwDOAA0YD86jGDwI
|
DEPS: update python version in dockerfile
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
commit -> https://github.com/pandas-dev/pandas/commit/7863029c3084c79b226c0a1c3daeac82929534f1
pull request -> https://github.com/pandas-dev/pandas/pull/62066
Due to the recent bump of the minimum Python version from 3.10 to 3.11, the gitpod and docker deployments are failing with the following error:
```bash
meson-python: error: Unsupported Python version 3.10.8, expected >=3.11
error: subprocess-exited-with-error
× Preparing editable metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /usr/local/bin/python /usr/local/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py prepare_metadata_for_build_editable /tmp/tmpxivcdhnp
cwd: /workspace/pandas
Preparing editable metadata (pyproject.toml) ... error
```
Updating the pinned 3.10.8 in the Dockerfile to 3.11.13 solves the problem.
|
closed
|
https://github.com/pandas-dev/pandas/pull/62088
| 62,088 |
[
"Thanks @surenpoghosian "
] |
[] |
3,310,831,988 |
PR_kwDOAA0YD86jFvX6
|
REF: split big method in pyarrow csv wrapper
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
closed
|
https://github.com/pandas-dev/pandas/pull/62087
| 62,087 |
[
"Thanks @jbrockmendel "
] |
[
{
"id": 47229171,
"node_id": "MDU6TGFiZWw0NzIyOTE3MQ==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/IO%20CSV",
"name": "IO CSV",
"color": "5319e7",
"default": false,
"description": "read_csv, to_csv"
},
{
"id": 3303158446,
"node_id": "MDU6TGFiZWwzMzAzMTU4NDQ2",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Arrow",
"name": "Arrow",
"color": "f9d0c4",
"default": false,
"description": "pyarrow functionality"
}
] |
3,308,017,837 |
PR_kwDOAA0YD86i8rnx
|
DEPS: Bump meson-python, lower pin Cython
|
Since we adopted building with Cython 3 a while back, we don't want to allow users to build pandas with a Cython version older than that
|
open
|
https://github.com/pandas-dev/pandas/pull/62086
| 62,086 |
[
" pandas/_libs/tslibs/ccalendar.cp313-win_amd64.pyd.p/pandas/_libs/tslibs/ccalendar.pyx.c(9749): error C2220: the following warning is treated as an error\r\n pandas/_libs/tslibs/ccalendar.cp313-win_amd64.pyd.p/pandas/_libs/tslibs/ccalendar.pyx.c(9749): warning C4244: '=': conversion from 'Py_ssize_t' to 'long', possible loss of data\r\n",
"Is it just me or is the meson system harder to debug than the old system?",
"> Is it just me or is the meson system harder to debug than the old system?\r\n\r\nYeah there's a little more indirection than before (meson-python -> meson vs just setuptools), but I understand the \"standardization\" of meson/cmake is better than the wild-west of using setuptools ",
"I don't have a strong point of view on bumping Meson, although I think it would be good to keep the lower bound as is until a new feature is required. (I definitely agree with un-pinning)\r\n\r\nIf we do need the 1.6 bump, you will also want to add that change to the `meson_version` key in the `project` function call in meson.build",
"https://github.com/mesonbuild/meson-python/issues/716 suggested to use the jinja2 FileSystemLoader instead of PackageLoader, and it seems that @WillAyd tried that in https://github.com/pandas-dev/pandas/pull/60681. @WillAyd do you remember if that worked?",
"> [mesonbuild/meson-python#716](https://github.com/mesonbuild/meson-python/issues/716) suggested to use the jinja2 FileSystemLoader instead of PackageLoader, and it seems that @WillAyd tried that in #60681. @WillAyd do you remember if that worked?\r\n\r\nJust testing it out directly, seems to work fine -> https://github.com/pandas-dev/pandas/pull/62123",
"> Or is there a good reason to still bump that one? \r\n\r\nI didn't have a particular reason besides using a newer version of meson-python (which should be less consequential than bumping meson), especially for our minimum dependency build (where I should have pinned meson-python)"
] |
[
{
"id": 129350,
"node_id": "MDU6TGFiZWwxMjkzNTA=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Build",
"name": "Build",
"color": "75507B",
"default": false,
"description": "Library building on various platforms"
},
{
"id": 527603109,
"node_id": "MDU6TGFiZWw1Mjc2MDMxMDk=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Dependencies",
"name": "Dependencies",
"color": "d93f0b",
"default": false,
"description": "Required and optional dependencies"
}
] |
3,307,955,823 |
PR_kwDOAA0YD86i8g6B
|
BUG: Preserve column names in DataFrame.from_records when nrows=0
|
Follow up on #61143
- [x] closes #61140
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
closed
|
https://github.com/pandas-dev/pandas/pull/62085
| 62,085 |
[
"Thanks @yuanx749 "
] |
[
{
"id": 1049312478,
"node_id": "MDU6TGFiZWwxMDQ5MzEyNDc4",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/DataFrame",
"name": "DataFrame",
"color": "370f77",
"default": false,
"description": "DataFrame data structure"
}
] |
3,306,156,698 |
PR_kwDOAA0YD86i3tjR
|
BUG: Preserve full float precision when double_precision=15 in to_jso…
|
…n; avoid truncation before serialization (#62072)
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
open
|
https://github.com/pandas-dev/pandas/pull/62083
| 62,083 |
[
"Can you add tests for the fixed behavior? What is the performance impact?"
] |
[] |
3,305,822,802 |
PR_kwDOAA0YD86i2xN9
|
BUG: Series.round with pd.NA no longer raises; coerce object+NA to nullable float and round; add test (#61712)
|
Summary:
Fixes `Series.round` raising `TypeError` for object-dtype Series containing `pd.NA` by coercing to nullable `Float64` before rounding, preserving `pd.NA` values.
Description:
This PR fixes an issue where calling `Series.round` on an object-dtype Series containing `pd.NA` would raise a `TypeError`. The method now attempts to safely cast to the nullable `Float64` dtype before rounding, preserving `pd.NA` values.
Includes:
- Bug fix in `Series.round` to coerce object with `pd.NA` to `Float64` and round as expected.
- New test in `test_round.py` covering the case of object-dtype Series with `pd.NA`.
- Changelog entry in `doc/source/whatsnew/v3.0.0.rst`.
closes #61712
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests)
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit)
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) (N/A – no new functions/args)
- [x] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file
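The coercion idea described above can be sketched at the user level with today's public API (a workaround sketch, not the PR's internal patch):

```python
import pandas as pd

# object-dtype Series holding floats and pd.NA; .round(2) on this
# raised TypeError before the fix
s = pd.Series([1.234, 5.678, pd.NA], dtype="object")

# coercing to the nullable Float64 dtype first makes rounding work
# while preserving pd.NA
rounded = s.astype("Float64").round(2)
print(rounded)
```

The same cast-then-round sequence is what the PR performs internally, per its description.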
|
open
|
https://github.com/pandas-dev/pandas/pull/62081
| 62,081 |
[
"Simpler to just operate point wise?",
"> Simpler to just operate point wise?\r\n\r\nThanks for the feedback! I went with the dtype coercion approach since it keeps the operation vectorized and avoids Python-level iteration, which I assumed would be more consistent with how similar numeric conversions are handled in Pandas.\r\n\r\nA pointwise implementation could definitely work for handling pd.NA cases explicitly, but I was aiming to minimize performance overhead and preserve existing behavior for other object-typed numeric data. If you think a pointwise approach would be preferable for clarity or maintainability here, I’m happy to prototype it so we can compare.",
"The trouble with the casting approach is it only works in castable cases. Things like Decimal objects don't fit that.",
"> The trouble with the casting approach is it only works in castable cases. Things like Decimal objects don't fit that.\r\n\r\nThanks for pointing that out, I see what you mean about the limitation with non-castable types like Decimal. That’s a good case where the dtype coercion approach wouldn’t behave correctly.\r\n\r\nI can prototype a pointwise implementation so we can compare behavior and performance, and make sure it handles cases like Decimal properly. If the trade-offs look acceptable, we could go with that approach to ensure correctness across all object-typed inputs.\r\n\r\nWould you prefer I push that as an additional commit here, or open a separate PR so we can compare more cleanly?",
"Just update this PR"
] |
[] |
3,305,688,661 |
PR_kwDOAA0YD86i2UbJ
|
BUG: creating Categorical from pandas Index/Series with "object" dtype infers string
|
- [x] closes #61778
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Preserve `object` dtype for categories when constructing `Categorical` from pandas objects.
This PR fixes an inconsistency in how pandas infers the dtype of categories when constructing a `Categorical` from different input types:
- When constructing a `Categorical` from a pandas `Series` or `Index` with `dtype="object"`, the categories' dtype is now preserved as `object`.
- When constructing from a NumPy array with `dtype="object"` or a raw Python sequence, pandas continues to infer the most specific dtype for the categories (e.g., `str` if all elements are strings).
This change brings the behavior of `Categorical` in line with how `Series` and `Index` handle dtype preservation, making the API more consistent and predictable.
Example
```python
pd.options.future.infer_string = True
ser = pd.Series(["foo", "bar", "baz"], dtype="object")
idx = pd.Index(["foo", "bar", "baz"], dtype="object")
arr = np.array(["foo", "bar", "baz"], dtype="object")
pylist = ["foo", "bar", "baz"]
cat_from_ser = pd.Categorical(ser)
cat_from_idx = pd.Categorical(idx)
cat_from_arr = pd.Categorical(arr)
cat_from_list = pd.Categorical(pylist)
# Series/Index with object dtype: preserve object dtype
assert cat_from_ser.categories.dtype == "object"
assert cat_from_idx.categories.dtype == "object"
# Numpy array or list: infer string dtype
assert cat_from_arr.categories.dtype == "str"
assert cat_from_list.categories.dtype == "str"
```
Documentation and release notes have been updated.
Closes: https://github.com/pandas-dev/pandas/issues/61778
|
open
|
https://github.com/pandas-dev/pandas/pull/62080
| 62,080 |
[
"@jbrockmendel Regarding this [bug](https://github.com/pandas-dev/pandas/issues/61778), the change to always preserve object dtype for categories when constructing a Categorical from a pandas Series or Index with dtype=\"object\" is a behavioral change that affects a wide range of pandas internals and user-facing APIs. Hence I am seeing a lot of failures.\r\n\r\nI see two ways to resolve without changing overall behavior. \r\n1. Only Preserve object Dtype When All Elements Are Not Strings\r\n\r\n- If the input is a pandas Series/Index with dtype=\"object\", only preserve object dtype for categories if not all elements are strings.\r\n- If all elements are strings, allow inference to str (the current behavior).\r\n\r\n2. Add a Keyword Argument to Categorical (e.g., preserve_object_dtype=False)\r\n\r\n- Add an explicit option to the Categorical constructor to preserve the object dtype for categories.\r\n- Default to the current behavior, but allow users to opt in to preservation.\r\n\r\nLet me know your thoughts. "
] |
[] |
3,305,401,725 |
PR_kwDOAA0YD86i1eGf
|
BUG: Bug fix for implicit upcast to float64 for large series (more than 1000000 rows)
|
## PR Requirements
- [x] closes #61951
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature.
## Issue Description
Performing binary operations on larger `Series` with `dtype == 'float32'` leads to unexpected upcasts to float64.
The reproducing example in #61951 prints `float32 float64`.
Using `to_numpy()` on the series before the addition inhibits the implicit upcast.
## Expected Behavior
I expect that snippet to print `float32 float32`.
## Bug Fix
Changed scalar conversion logic for NumPy floating scalars to avoid automatic conversion to Python `float`. Now returns a scalar of the original NumPy dtype to preserve type and prevent unintended dtype upcasts.
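A sketch of the reported behavior and the `to_numpy()` workaround. The row count just above one million is an assumption based on the PR title; only the NumPy path is asserted, since the pandas result depends on whether the fix is in:

```python
import numpy as np
import pandas as pd

n = 1_000_001  # just above the threshold from the PR title
s = pd.Series(np.ones(n, dtype="float32"))

# reported behavior: on large Series this comes out float64 (float32 once fixed)
reported = (s + 1.0).dtype

# workaround: operate on the raw NumPy array, which preserves float32
preserved = s.to_numpy() + np.float32(1.0)
print(reported, preserved.dtype)
```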
|
open
|
https://github.com/pandas-dev/pandas/pull/62077
| 62,077 |
[
"pre-commit.ci autofix"
] |
[] |
3,304,976,846 |
PR_kwDOAA0YD86i0Ips
|
BUG: Fix tz_localize(None) with Arrow timestamp
|
- [x] closes #61780
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
closed
|
https://github.com/pandas-dev/pandas/pull/62076
| 62,076 |
[
"Thanks @yuanx749 "
] |
[
{
"id": 60458168,
"node_id": "MDU6TGFiZWw2MDQ1ODE2OA==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Timezones",
"name": "Timezones",
"color": "5319e7",
"default": false,
"description": "Timezone data dtype"
},
{
"id": 3303158446,
"node_id": "MDU6TGFiZWwzMzAzMTU4NDQ2",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Arrow",
"name": "Arrow",
"color": "f9d0c4",
"default": false,
"description": "pyarrow functionality"
}
] |
3,304,386,279 |
PR_kwDOAA0YD86iyNVi
|
BUG(CoW): also raise for chained assignment for .at / .iat
|
Noticed while working on https://github.com/pandas-dev/pandas/issues/61368. I assume this was an oversight in the original PR https://github.com/pandas-dev/pandas/pull/49467 adding those checks. Also added some more explicit tests for all of loc/iloc/at/iat
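For illustration, a sketch of the chained `.at` pattern this PR targets versus the supported direct form (assuming standard Copy-on-Write semantics; only the supported form is executed here):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})

# chained form: df["a"].at[0] = 10 assigns into a temporary object under
# Copy-on-Write; with this PR it is flagged like .loc/.iloc chains

# supported, non-chained form: go through the parent DataFrame directly
df.at[0, "a"] = 10
print(df)
```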
|
closed
|
https://github.com/pandas-dev/pandas/pull/62074
| 62,074 |
[
"Thanks @jorisvandenbossche ",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 b84ac0ec647902b9dd0c1bcda790b289f577b617\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #62074: BUG(CoW): also raise for chained assignment for .at / .iat'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-62074-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #62074 on branch 2.3.x (BUG(CoW): also raise for chained assignment for .at / .iat)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"Manual backport -> https://github.com/pandas-dev/pandas/pull/62115"
] |
[
{
"id": 1792318342,
"node_id": "MDU6TGFiZWwxNzkyMzE4MzQy",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Still%20Needs%20Manual%20Backport",
"name": "Still Needs Manual Backport",
"color": "ededed",
"default": false,
"description": null
},
{
"id": 2085877452,
"node_id": "MDU6TGFiZWwyMDg1ODc3NDUy",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Copy%20/%20view%20semantics",
"name": "Copy / view semantics",
"color": "70e5ca",
"default": false,
"description": ""
}
] |
3,304,365,588 |
PR_kwDOAA0YD86iyI7J
|
MNT: migrate from codecs.open to open
|
c.f. #61950, towards supporting Python 3.14
See [the 3.14 deprecations](https://docs.python.org/3.14/whatsnew/3.14.html#deprecated) and https://github.com/python/cpython/issues/133036 for more details.
There is one spot that was nontrivial: the use in `pandas.util._print_versions`, which I think was buggy. It wanted to write a file in bytes mode while also specifying an encoding. `codecs.open` didn't blink at this, but regular `open` is more restrictive. I think what I updated it to is correct and that this was a latent bug, probably introduced long ago in the 2->3 transition.
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
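The binary-mode restriction mentioned above can be demonstrated with a short sketch (illustrative only; `os.devnull` is just a safe target path):

```python
import os

# Plain open() rejects an encoding argument in binary mode, whereas
# codecs.open() silently accepted the combination.
try:
    open(os.devnull, "wb", encoding="utf-8")
except ValueError as exc:
    print(exc)  # binary mode doesn't take an encoding argument
```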
|
closed
|
https://github.com/pandas-dev/pandas/pull/62073
| 62,073 |
[
"Thanks @ngoldbaum "
] |
[
{
"id": 127685,
"node_id": "MDU6TGFiZWwxMjc2ODU=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Testing",
"name": "Testing",
"color": "C4A000",
"default": false,
"description": "pandas testing functions or related to the test suite"
}
] |
3,303,183,371 |
PR_kwDOAA0YD86iuUZ2
|
Test chained assignment detection for Python 3.14
|
Draft exploring a solution for https://github.com/pandas-dev/pandas/issues/61368
|
open
|
https://github.com/pandas-dev/pandas/pull/62070
| 62,070 |
[
"Some commented-out code needs to be removed, otherwise looks nice"
] |
[] |
3,302,395,020 |
PR_kwDOAA0YD86ir16y
|
DOC: Sort the pandas API reference navbar in alphabetical order
|
- [x] closes #59164
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Changes made:
- Added JavaScript to sort only the items in each section of the sidebar and not the section headers.
|
closed
|
https://github.com/pandas-dev/pandas/pull/62069
| 62,069 |
[
"Thanks for the PR, but as noted in the issue it still `needs discussion` on the solution to move forward so closing"
] |
[] |
3,302,254,495 |
PR_kwDOAA0YD86irZ9r
|
Update pytables.py to include a more thorough description of the min_itemsize variable
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
I am a student contributing for the first time, none of the additions were generated with AI and this PR description is not generated by AI.
I am addressing Issue 14601, where the current documentation for 'min_itemsize' is not thorough enough. Based on my research, the issue seems to be that people treat it as counting characters rather than the byte size of strings being written to a table.
I kept the additions simple to avoid any confusion and am open to suggestions on how I can improve this contribution. If there are any specific issues, please point me to the location in your contribution guide so I can better understand any errors I might have made.
|
open
|
https://github.com/pandas-dev/pandas/pull/62068
| 62,068 |
[
"Let’s start with the CI failures. Can you address them?"
] |
[] |
3,302,011,263 |
PR_kwDOAA0YD86iqp5K
|
DOC: Docstring additions for min_itemsize
|
## Problem Summary
The current pandas documentation for `min_itemsize` in HDFStore methods doesn’t clearly explain that it refers to byte length, not character length. This causes confusion when working with multi-byte characters.
## Proposed Addition to HDFStore.put() and HDFStore.append() docstrings
Add this clarification to the `min_itemsize` parameter description in the appropriate methods:
```
min_itemsize : int, dict, or None, default None
Minimum size in bytes for string columns when format='table'.
int - Apply the same minimum size to all string columns,
dict - Map column names to their minimum sizes or,
None - use the default sizing
Important: This specifies the byte length after encoding, not the
character count. For multi-byte characters, calculate the required
size using the encoded byte length.
See examples below for use.
```
And adding this to the example section for each docstring:
```
Examples:
- ASCII 'hello' = 5 bytes
- UTF-8 '香' = 3 bytes (though only 1 character)
- To find byte length: len(string.encode('utf-8'))
```
## Why This Helps
- Directly addresses the confusion in issue #14601
- Provides practical guidance for users working with international text
- Keeps the explanation concise and focused
- Includes actionable examples
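A quick pure-Python check of the byte-vs-character distinction described above (illustrative only):

```python
# min_itemsize counts encoded bytes, not characters: a single CJK
# character occupies three bytes in UTF-8, while ASCII is one byte each.
assert len("hello") == 5 and len("hello".encode("utf-8")) == 5
assert len("香") == 1
assert len("香".encode("utf-8")) == 3
```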
|
closed
|
https://github.com/pandas-dev/pandas/pull/62067
| 62,067 |
[
"AI-generated pull requests are not welcome in this project, closing"
] |
[] |
3,301,539,329 |
PR_kwDOAA0YD86ipIVT
|
DEPS: Drop Python 3.10
|
- [ ] closes https://github.com/pandas-dev/pandas/issues/60059 (Replace xxxx with the GitHub issue number)
|
closed
|
https://github.com/pandas-dev/pandas/pull/62066
| 62,066 |
[
"Thanks @mroeschke "
] |
[
{
"id": 129350,
"node_id": "MDU6TGFiZWwxMjkzNTA=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Build",
"name": "Build",
"color": "75507B",
"default": false,
"description": "Library building on various platforms"
},
{
"id": 87485152,
"node_id": "MDU6TGFiZWw4NzQ4NTE1Mg==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Deprecate",
"name": "Deprecate",
"color": "5319e7",
"default": false,
"description": "Functionality to remove in pandas"
},
{
"id": 2955636717,
"node_id": "MDU6TGFiZWwyOTU1NjM2NzE3",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Python%203.10",
"name": "Python 3.10",
"color": "fef2c0",
"default": false,
"description": ""
}
] |
3,301,491,156 |
PR_kwDOAA0YD86io-Tu
|
DEPS: Bump adbc-driver-postgresql/sqlite to 1.2
|
These will have been out for ~1 year by the time (hopefully) pandas 3.0 comes out
|
closed
|
https://github.com/pandas-dev/pandas/pull/62065
| 62,065 |
[
"needs rebase, otherwise lgtm"
] |
[
{
"id": 527603109,
"node_id": "MDU6TGFiZWw1Mjc2MDMxMDk=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Dependencies",
"name": "Dependencies",
"color": "d93f0b",
"default": false,
"description": "Required and optional dependencies"
}
] |
3,297,104,474 |
PR_kwDOAA0YD86iaIkv
|
API: make construct_array_type non-classmethod
|
- [x] closes #58663 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Makes the method robust to the possibility of keywords (e.g. na_value, storage) that determine what EA subclass you get. StringDtype already does this.
|
closed
|
https://github.com/pandas-dev/pandas/pull/62060
| 62,060 |
[
"Thanks @jbrockmendel "
] |
[
{
"id": 849023693,
"node_id": "MDU6TGFiZWw4NDkwMjM2OTM=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/ExtensionArray",
"name": "ExtensionArray",
"color": "6138b5",
"default": false,
"description": "Extending pandas with custom dtypes or arrays."
}
] |
3,296,689,449 |
PR_kwDOAA0YD86iYsiO
|
DOC: add 'Try Pandas Online' section to the 'Getting Started' page
|
- [x] closes #61060
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v2.3.2.rst` file if fixing a bug or adding a new feature.
|
closed
|
https://github.com/pandas-dev/pandas/pull/62058
| 62,058 |
[
  "Thanks for the pull request, but I think this is being worked on as a part of https://github.com/pandas-dev/pandas/pull/61061, so closing, since that contributor also added this module to link to our docs. Happy to have your contribution on other unassigned issues"
] |
[] |
3,295,060,237 |
PR_kwDOAA0YD86iTMjX
|
BUG: Fix all-NaT when ArrowEA.astype to categorical
|
- [x] closes #62051 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] ~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
open
|
https://github.com/pandas-dev/pandas/pull/62055
| 62,055 |
[
"Using `astype` with `CategoricalDtype` coerces values into `Index`, where the code tries to compare two `Index` objects with numpy and pyarrow types. \r\n\r\nIn the first example, the code does not think they are comparable in `Index._should_compare`, and returns all -1s. In the second example, the code correctly determines that they are comparable, but they try to compare it (coercing both indexes to object dtypes with `Index._find_common_type_compat`) and fail. Hence, both cases return all -1s. ",
"So we should start by fixing should_compare?",
"> So we should start by fixing should_compare?\n\nSorry I missed this earlier. The fix I pushed addresses that and resolves the second issue too. "
] |
[] |
3,294,962,830 |
PR_kwDOAA0YD86iS4z9
|
DOC: Inform users that incomplete reports will generally be closed
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
closed
|
https://github.com/pandas-dev/pandas/pull/62054
| 62,054 |
[
"Thanks @rhshadrach "
] |
[
{
"id": 32933285,
"node_id": "MDU6TGFiZWwzMjkzMzI4NQ==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Admin",
"name": "Admin",
"color": "DDDDDD",
"default": false,
"description": "Administrative tasks related to the pandas project"
}
] |
3,294,821,413 |
PR_kwDOAA0YD86iSb8e
|
BUG: read_csv with engine=pyarrow and numpy-nullable dtype
|
- [x] closes #56136 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Also makes this code path robust to always-distinguish behavior in #62040
|
closed
|
https://github.com/pandas-dev/pandas/pull/62053
| 62,053 |
[
"> From the original issue, do you know where we are introducing float to lose precision when wanting the result type to be int?\r\n\r\nIn `arrow_table_to_pandas` the pyarrow[int64] columns get converted to np.float64, then in finalize_pandas_output that gets cast back to Int64.\r\n",
"OK I see, it's `pyarrow.Table.to_pandas` casting the int to float when there's `null`.\r\n\r\nWhat if in `arrow_table_to_pandas`, we always provide fallback `type_mapper={pyarrow ints : pandas nullable ints}` to avoid the lossy conversions, then afterwards we cast the pandas nullable ints to the appropriate type?",
"That’s basically what this is currently doing, just not in that function since it is also called from other places.\r\n\r\nI’m out of town for a few days. If you feel strongly that this logic should live inside that function I’ll move it when I get back",
"Looking at this again, I'm skeptical of moving the logic into arrow_table_to_pandas. The trouble is that between the table.to_pandas() and the .astype conversions, we have to do a bunch of other csv-keyword-specific stuff like set_index and column renaming. (Just opened #62087 to clean that up a bit). Shoe-horning all of that into arrow_table_to_pandas would make it a really big function in a way that i think is a net negative.",
"Sorry in https://github.com/pandas-dev/pandas/pull/62053#issuecomment-3160877705, I meant for `arrow_table_to_pandas` to just have this change\r\n\r\n```diff\r\ndiff --git a/pandas/io/_util.py b/pandas/io/_util.py\r\nindex 6827fbe9c9..2e15bd3749 100644\r\n--- a/pandas/io/_util.py\r\n+++ b/pandas/io/_util.py\r\n@@ -85,7 +85,14 @@ def arrow_table_to_pandas(\r\n else:\r\n types_mapper = None\r\n elif dtype_backend is lib.no_default or dtype_backend == \"numpy\":\r\n- types_mapper = None\r\n+ # Avoid lossy conversion to float64\r\n+ # Caller is responsible for converting to numpy type if needed\r\n+ types_mapper = {\r\n+ pa.int8(): pd.Int8Dtype(),\r\n+ pa.int16(): pd.Int16Dtype(),\r\n+ pa.int32(): pd.Int32Dtype(),\r\n+ pa.int64(): pd.Int64Dtype(),\r\n+ }\r\n else:\r\n raise NotImplementedError\r\n```\r\n\r\nAnd then each IO parser is responsible for manipulating this result based on the IO arguments",
"> And then each IO parser is responsible for manipulating this result based on the IO arguments\r\n\r\nThat would mean adding that logic to each of the 7 places where arrow_table_to_pandas is called, so we would almost-certainly be better off having it centralized.\r\n\r\nIf we get #62087 in then moving all the logic into arrow_table_to_pandas at least gets a little bit less bulky, so I can give it a try.\r\n",
"This is plausibly nicer.",
"Thanks @jbrockmendel "
] |
[
{
"id": 47229171,
"node_id": "MDU6TGFiZWw0NzIyOTE3MQ==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/IO%20CSV",
"name": "IO CSV",
"color": "5319e7",
"default": false,
"description": "read_csv, to_csv"
}
] |
3,293,700,389 |
PR_kwDOAA0YD86iOsD-
|
REF: simplify mask_missing
|
In the past this had to handle list-like values_to_mask but that is no longer the case, so this can be simplified a bit. The edit in dtypes.common makes `dtype.kind in ...` checks very slightly faster.
|
closed
|
https://github.com/pandas-dev/pandas/pull/62049
| 62,049 |
[
"Thanks @jbrockmendel "
] |
[
{
"id": 127681,
"node_id": "MDU6TGFiZWwxMjc2ODE=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Refactor",
"name": "Refactor",
"color": "FCE94F",
"default": false,
"description": "Internal refactoring of code"
}
] |
3,290,359,243 |
PR_kwDOAA0YD86iDbG1
|
API: rank with nullable dtypes preserve NA
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
closed
|
https://github.com/pandas-dev/pandas/pull/62043
| 62,043 |
[
"Thanks @jbrockmendel "
] |
[
{
"id": 1817503692,
"node_id": "MDU6TGFiZWwxODE3NTAzNjky",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/NA%20-%20MaskedArrays",
"name": "NA - MaskedArrays",
"color": "8cc645",
"default": false,
"description": "Related to pd.NA and nullable extension arrays"
}
] |
3,290,320,726 |
PR_kwDOAA0YD86iDSsg
|
REF: Avoid/defer `dtype=object` containers in plotting
|
Probably best to avoid operations on containers with these types unless needed/expected
|
closed
|
https://github.com/pandas-dev/pandas/pull/62042
| 62,042 |
[
"pandas/plotting/_matplotlib/boxplot.py:246: error: Argument \"labels\" to \"_set_ticklabels\" has incompatible type \"list[Hashable]\"; expected \"list[str]\" [arg-type]\r\n",
"Seems benign",
"thanks @mroeschke "
] |
[
{
"id": 2413328,
"node_id": "MDU6TGFiZWwyNDEzMzI4",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Visualization",
"name": "Visualization",
"color": "8AE234",
"default": false,
"description": "plotting"
}
] |
3,289,945,128 |
PR_kwDOAA0YD86iCBHW
|
API: mode.nan_is_na to consistently distinguish NaN-vs-NA
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
As discussed on the last dev call, this implements `"mode.nan_is_na"` (default `True`) to consider NaN as either always-equivalent or never-equivalent to NA.
This sits on top of
- #62021, which trims the diff here by updating some tests to use NA instead of NaN.
- #61732 which implements the option but only for pyarrow dtypes.
- #62038 which addresses an issue in `DataFrame.where`
- #62053 which addresses a kludge in read_csv with engine="pyarrow"
Still need to
- [x] Add docs for the new option, including whatsnew section
- [x] deal with a kludge in algorithms.rank; fixed by #62043
- [x] deal with a kludge in read_csv with engine="pyarrow"; fixed by #62053
- [ ] Add tests for the issues this addresses
|
open
|
https://github.com/pandas-dev/pandas/pull/62040
| 62,040 |
[
"Discussed in the dev call before last where I, @mroeschke, and @Dr-Irv were +1. Joris was unenthused but \"not necessarily opposed\". On slack @rhshadrach expressed a +1. All those opinions were to the concept, not the execution."
] |
[] |
3,287,540,739 |
PR_kwDOAA0YD86h55WI
|
API: improve dtype in df.where with EA other
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Improves the patch-job done by #38742. Also makes the affected test robust to always-distinguish NAN-vs-NA behavior.
|
closed
|
https://github.com/pandas-dev/pandas/pull/62038
| 62,038 |
[
"Looks like [`87d5fdf`](https://github.com/pandas-dev/pandas/pull/62038/commits/87d5fdfd1983c6408033f22009bab7b5b0d1be07) undid all your changes before",
"Woops. Looks better now.",
"```\r\nruff format.............................................................................................Failed\r\n- hook id: ruff-format\r\n- files were modified by this hook\r\n```\r\n\r\nAny idea how to make it tell me what it wants to change?",
"You can comment `pre-commit.ci autofix` in this PR to get pre-commit to add a commit to fix it for you.\r\n\r\nOtherwise if you have the pre-commit hooks installed, to show the fixes you'll probably need to add `--show-fixes` on this line \r\n\r\nhttps://github.com/pandas-dev/pandas/blob/84757581420cfcc79448aa3274e28d90aaf75c87/.pre-commit-config.yaml#L25",
"running `ruff format` locally on the affected files says it leaves them unchanged",
"What about running `pre-commit run ruff --all-files`? (Generally `pre-commit run <id to check>` is the source of truth for the linting checks)",
"That did it, thanks. Lets see if the CI agrees",
"Booyah, did it. Thanks for help troubleshooting",
"Np! Thanks @jbrockmendel "
] |
[
{
"id": 31404521,
"node_id": "MDU6TGFiZWwzMTQwNDUyMQ==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Dtype%20Conversions",
"name": "Dtype Conversions",
"color": "e102d8",
"default": false,
"description": "Unexpected or buggy dtype conversions"
},
{
"id": 42670965,
"node_id": "MDU6TGFiZWw0MjY3MDk2NQ==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Error%20Reporting",
"name": "Error Reporting",
"color": "ffa0ff",
"default": false,
"description": "Incorrect or improved errors from pandas"
}
] |
3,287,397,246 |
PR_kwDOAA0YD86h5eic
|
BUG: Fix repeated rolling mean assignment causing all-NaN values
|
## Fix repeated rolling mean assignment causing all-NaN values
- Closes #<issue_number> (if there’s an issue, otherwise leave this out)
- This PR fixes a regression where assigning the result of `.rolling().mean()` to a DataFrame column more than once caused all values in the column to become NaN (see pandas-dev/pandas#61841).
- The bug was due to pandas reusing memory blocks when overwriting an existing column with a rolling result Series, leading to incorrect block alignment.
- The fix is to make a defensive `.copy()` of the Series when overwriting an existing column, ensuring correct assignment.
### Example
```python
df = pd.DataFrame({"A": range(30)})
df["SMA"] = df["A"].rolling(20).mean()
df["SMA"] = df["A"].rolling(20).mean()
print(df["SMA"].notna().sum()) # should be > 0, not all NaN
```
### Tests
- Added a regression test in `pandas/tests/window/test_rolling.py`.
- All tests pass locally.
---
Thanks for your consideration!
|
closed
|
https://github.com/pandas-dev/pandas/pull/62037
| 62,037 |
[
"Is this AI? The claims about tests in the OP are obviously false.",
"@mroeschke can we block a person? Looking at their PR history it has “AI spam” written all over it",
"Agreed, blocking and closing this PR"
] |
[] |
3,286,989,398 |
PR_kwDOAA0YD86h4MdD
|
ENH: return early when the result is None
|
There's no need to continue the loop when the result is destined to be None.
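The early-exit pattern under discussion can be sketched in plain Python; `find_shared_key` is a made-up helper, not the pandas code:

```python
def find_shared_key(dicts):
    # Illustration only: once the candidate set is empty, the result is
    # destined to be None, so we break out rather than keep scanning.
    shared = None
    for d in dicts:
        keys = set(d)
        shared = keys if shared is None else shared & keys
        if not shared:
            shared = None
            break  # no point inspecting the remaining dicts
    return next(iter(shared)) if shared else None

print(find_shared_key([{"a": 1, "b": 2}, {"b": 3}]))  # b
```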
|
closed
|
https://github.com/pandas-dev/pandas/pull/62032
| 62,032 |
[
"I prefer single-return, but not a huge deal",
"> I prefer single-return, but not a huge deal\r\n\r\nSwitched to using `break` instead.",
"Thanks @zhiqiangxu "
] |
[
{
"id": 1218227310,
"node_id": "MDU6TGFiZWwxMjE4MjI3MzEw",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Index",
"name": "Index",
"color": "e99695",
"default": false,
"description": "Related to the Index class or subclasses"
}
] |
3,286,799,825 |
PR_kwDOAA0YD86h3tDR
|
API: timestamp resolution inference: default to microseconds when possible
|
Draft PR for https://github.com/pandas-dev/pandas/issues/58989/
This should already make sure that we consistently use 'us' when converting non-numeric data in `pd.to_datetime` and `pd.Timestamp`, but if we want to do this, this PR still requires updating lots of tests and docs (and whatsnew) and cleaning up.
Currently the changes here will ensure that we use microseconds more consistently when inferring the resolution while creating datetime64 data. Exceptions: if the data don't fit in the range of us (either because out of bounds (use ms or s) or because it has nanoseconds or below (use ns)), or if the input data already has a resolution defined (for Timestamp objects, or numpy datetime64 data).
|
open
|
https://github.com/pandas-dev/pandas/pull/62031
| 62,031 |
[
"@jbrockmendel would you have time to give this a review?",
"Yes, but its in line behind a few other reviews i owe."
] |
[
{
"id": 211840,
"node_id": "MDU6TGFiZWwyMTE4NDA=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Datetime",
"name": "Datetime",
"color": "AFEEEE",
"default": false,
"description": "Datetime data dtype"
},
{
"id": 3713792788,
"node_id": "LA_kwDOAA0YD87dW_sU",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Non-Nano",
"name": "Non-Nano",
"color": "006b75",
"default": false,
"description": "datetime64/timedelta64 with non-nanosecond resolution"
},
{
"id": 3877216987,
"node_id": "LA_kwDOAA0YD87nGaLb",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Timestamp",
"name": "Timestamp",
"color": "5DE3E8",
"default": false,
"description": "pd.Timestamp and associated methods"
}
] |
3,286,617,297 |
PR_kwDOAA0YD86h3GtE
|
BUG: Catch TypeError in _is_dtype_type when converting abstract numpy types (#62018)
|
- Closes #62018
- Wrap np.dtype() call in try/except to handle abstract numpy types (e.g. np.floating, np.inexact).
- On TypeError, return condition(type(None)) to indicate mismatch rather than raising.
This prevents `is_signed_integer_dtype` and similar functions from raising on abstract NumPy classes and restores expected behaviour.
|
open
|
https://github.com/pandas-dev/pandas/pull/62030
| 62,030 |
[
"This is a small perf hit that will add up in a ton of places, all for something that we shouldn't expect to work anyway."
] |
[] |
3,286,518,961 |
PR_kwDOAA0YD86h20DY
|
DOC: fix mask/where docstring alignment note (#61781)
|
The explanatory paragraph wrongly said that alignment is between `other` and `cond`. It is between *self* and `cond`; values fall back to *self* for mis-aligned positions. Update both generic docstring templates so all Series/DataFrame variants inherit the correct wording.
Closes #61781
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
open
|
https://github.com/pandas-dev/pandas/pull/62029
| 62,029 |
[
"How do I merge this? "
] |
[] |
3,286,290,983 |
PR_kwDOAA0YD86h2BLY
|
TST: Speed up hypothesis and slow tests
|
* Hypothesis tests seem to be the slowest-running tests in CI. Limiting `max_examples` is IMO OK, as we're looking to exercise some edge cases
* Avoiding some work being done in `test_*number_of_levels_larger_than_int32` as we're just looking to check a warning
|
closed
|
https://github.com/pandas-dev/pandas/pull/62028
| 62,028 |
[
"thanks @mroeschke "
] |
[
{
"id": 127685,
"node_id": "MDU6TGFiZWwxMjc2ODU=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Testing",
"name": "Testing",
"color": "C4A000",
"default": false,
"description": "pandas testing functions or related to the test suite"
}
] |
3,286,157,068 |
PR_kwDOAA0YD86h1mGX
|
BUG: Fix DataFrame reduction to preserve NaN vs <NA> in mixed dtypes (GH#62024)
|
(GH#62024)
This PR fixes a bug in `DataFrame._reduce` where reductions on DataFrames with mixed dtypes (e.g., `float64` and nullable `Int64`) would incorrectly upcast all results to use `pd.NA` and the `Float64` dtype if any column was a pandas extension type.
Please let me know if my approach or fix needs any improvements. I'm open to feedback and happy to make changes based on suggestions.
Thank you!
|
closed
|
https://github.com/pandas-dev/pandas/pull/62027
| 62,027 |
[
"This is not the correct approach. Please look for issues with the Good First Issue label.",
  "Thank you for the feedback. I will look for good first issues"
] |
[] |
3,286,155,407 |
PR_kwDOAA0YD86h1lxB
|
BUG: groupby.idxmin/idxmax with all NA values should raise
|
- [x] closes #57745
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
closed
|
https://github.com/pandas-dev/pandas/pull/62026
| 62,026 |
[
"Couple of small questions, assuming no surprises on those: LGTM",
"Thanks @rhshadrach "
] |
[
{
"id": 233160,
"node_id": "MDU6TGFiZWwyMzMxNjA=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Groupby",
"name": "Groupby",
"color": "729FCF",
"default": false,
"description": null
},
{
"id": 1741841389,
"node_id": "MDU6TGFiZWwxNzQxODQxMzg5",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/API%20-%20Consistency",
"name": "API - Consistency",
"color": "b60205",
"default": false,
"description": "Internal Consistency of API/Behavior"
},
{
"id": 2365504383,
"node_id": "MDU6TGFiZWwyMzY1NTA0Mzgz",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Reduction%20Operations",
"name": "Reduction Operations",
"color": "547c03",
"default": false,
"description": "sum, mean, min, max, etc."
}
] |
3,286,153,785 |
PR_kwDOAA0YD86h1lcy
|
BUG: Change default of observed in Series.groupby
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Fix for #57330. In all tests where `observed` makes a difference, we explicitly specify `observed` so this wasn't noticed. The deprecation itself was properly done (saying that we were changing the default to True), it was only the enforcement of the deprecation that had a mistake.
|
closed
|
https://github.com/pandas-dev/pandas/pull/62025
| 62,025 |
[
"Thanks @rhshadrach "
] |
[
{
"id": 76811,
"node_id": "MDU6TGFiZWw3NjgxMQ==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Bug",
"name": "Bug",
"color": "e10c02",
"default": false,
"description": null
},
{
"id": 233160,
"node_id": "MDU6TGFiZWwyMzMxNjA=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Groupby",
"name": "Groupby",
"color": "729FCF",
"default": false,
"description": null
},
{
"id": 78527356,
"node_id": "MDU6TGFiZWw3ODUyNzM1Ng==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Categorical",
"name": "Categorical",
"color": "e11d21",
"default": false,
"description": "Categorical Data Type"
}
] |
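The `observed` distinction the PR above enforces can be seen with a small sketch (hypothetical data; category `"b"` is declared but never appears):

```python
import pandas as pd

# Hypothetical data: category "b" is declared but never observed.
cat = pd.Categorical(["a", "a"], categories=["a", "b"])
df = pd.DataFrame({"c": cat, "v": [1, 2]})

# observed=True: only categories actually present form groups.
print(df.groupby("c", observed=True)["v"].sum())   # one group: "a"

# observed=False: every declared category forms a group, even empty ones.
print(df.groupby("c", observed=False)["v"].sum())  # two groups: "a" and "b"
```

Passing `observed` explicitly, as above, gives the same result regardless of which default a given pandas version uses.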
3,285,544,351 |
PR_kwDOAA0YD86hzpYd
|
continue from #61957 which closed with unmerged commit
|
Using `Markup()` due to ASCII reading on attribute tags. Can check in #61957.
|
open
|
https://github.com/pandas-dev/pandas/pull/62023
| 62,023 |
[
"for #51536 "
] |
[] |
3,285,422,535 |
PR_kwDOAA0YD86hzP1-
|
TST: nan->NA in non-construction tests
|
Significantly trim the diff for PR(s) implementing always-distinguish behavior.
|
closed
|
https://github.com/pandas-dev/pandas/pull/62021
| 62,021 |
[
"Thanks @jbrockmendel "
] |
[
{
"id": 127685,
"node_id": "MDU6TGFiZWwxMjc2ODU=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Testing",
"name": "Testing",
"color": "C4A000",
"default": false,
"description": "pandas testing functions or related to the test suite"
},
{
"id": 2822342,
"node_id": "MDU6TGFiZWwyODIyMzQy",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Missing-data",
"name": "Missing-data",
"color": "d7e102",
"default": false,
"description": "np.nan, pd.NaT, pd.NA, dropna, isnull, interpolate"
}
] |
3,284,873,938 |
PR_kwDOAA0YD86hxZLS
|
BUG: Fix is_signed_integer_dtype to handle abstract floating types (GH 62018)
|
(GH 62018)
This PR fixes a bug in `is_signed_integer_dtype` where abstract NumPy floating types would raise a TypeError.
Now, the function returns False for these types, as expected.
Please let me know if my approach or fix needs any improvements. I'm open to feedback and happy to make changes based on suggestions.
Thank you!
|
open
|
https://github.com/pandas-dev/pandas/pull/62020
| 62,020 |
[
"Please wait for discussion on the issue as to whether this is worth doing."
] |
[] |
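For context on the `is_signed_integer_dtype` row above, the documented behavior for concrete dtypes is stable; the PR only concerns abstract types like `np.floating`, so this sketch sticks to the concrete cases:

```python
import numpy as np
from pandas.api.types import is_signed_integer_dtype

# Concrete NumPy dtypes behave as documented:
print(is_signed_integer_dtype(np.int64))    # True
print(is_signed_integer_dtype(np.uint64))   # False (unsigned)
print(is_signed_integer_dtype(np.float64))  # False (floating)
```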
3,284,662,373 |
PR_kwDOAA0YD86hwq_Z
|
REF: make copy keyword in recode_for_categories keyword only
|
Follows up to https://github.com/pandas-dev/pandas/pull/62000
`recode_for_categories` had a default `copy=True` to copy the passed codes if the codes didn't need re-coding. This PR makes this argument keyword-only to make it explicit whether the caller wants to copy, to avoid unnecessary copying when blindly using `recode_for_categories`.
|
closed
|
https://github.com/pandas-dev/pandas/pull/62019
| 62,019 |
[
"Thanks @mroeschke "
] |
[
{
"id": 78527356,
"node_id": "MDU6TGFiZWw3ODUyNzM1Ng==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Categorical",
"name": "Categorical",
"color": "e11d21",
"default": false,
"description": "Categorical Data Type"
}
] |
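What "re-coding" means in the row above can be illustrated with a simplified stand-in; `recode_codes` here is a hypothetical sketch, not pandas' internal `recode_for_categories`:

```python
import numpy as np

def recode_codes(codes, old_categories, new_categories):
    """Map integer codes defined against old_categories onto
    new_categories, keeping -1 (missing) intact.  Hypothetical,
    simplified stand-in for pandas' internal recode_for_categories."""
    # Position of each old category inside the new ordering (-1 if absent).
    lookup = {c: i for i, c in enumerate(new_categories)}
    mapping = np.array([lookup.get(c, -1) for c in old_categories])
    # Preserve -1 (missing) codes instead of letting them index the mapping.
    return np.where(codes == -1, -1, mapping[codes])

codes = np.array([0, 1, -1, 1])
print(recode_codes(codes, ["a", "b"], ["b", "a"]))  # [ 1  0 -1  0]
```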
3,281,851,417 |
PR_kwDOAA0YD86hnEnT
|
DOC: Add SSLCertVerificationError warning message for documentation b…
|
…uild fail
- [ ] closes #61975
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
`pre-commit run --files doc/source/development/contributing_documentation.rst` PASSED locally
|
closed
|
https://github.com/pandas-dev/pandas/pull/62015
| 62,015 |
[
"Here is a screenshot of the changed section in Contributing to the Documentation, if you chose to merge.\r\n\r\n<img width=\"825\" height=\"360\" alt=\"Screenshot 2025-08-02 at 10 34 19 PM\" src=\"https://github.com/user-attachments/assets/7cf8297b-3781-42f6-b705-bb63f8fc90bb\" />\r\n",
"Thanks @jeffersbaxter "
] |
[
{
"id": 134699,
"node_id": "MDU6TGFiZWwxMzQ2OTk=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Docs",
"name": "Docs",
"color": "3465A4",
"default": false,
"description": null
}
] |
3,280,429,368 |
PR_kwDOAA0YD86hiOaY
|
BUG: Raise TypeError for invalid calendar types in CustomBusinessDay (#60647)
|
- Closes #60647
### Bug Description
Previously, if an invalid `calendar` argument was passed to `CustomBusinessDay` (e.g., a `pandas_market_calendars` object), it was silently ignored. This resulted in potentially incorrect behavior without warning, which could lead to confusion and incorrect results.
### What This Fix Does
- Adds a strict type check in `offsets.pyx` to ensure the `calendar` parameter is either a `numpy.busdaycalendar` or `AbstractHolidayCalendar`.
- If the type is invalid, a `TypeError` is raised with a clear error message.
- This aligns with expected behavior and helps prevent incorrect usage.
### Tests Added
- ✅ New unit test `test_invalid_calendar_raises_typeerror` added to `test_custom_business_day.py` to assert that an invalid calendar raises a `TypeError`.
- ✅ Existing test `test_calendar` was updated to construct a valid `np.busdaycalendar` from `USFederalHolidayCalendar` dates.
- ✅ All 8 tests in this module now pass successfully.
### Why This Matters
Silently ignoring invalid input is dangerous and can introduce subtle bugs. This fix ensures strict input validation and protects downstream consumers from incorrect assumptions.
### Checklist
- [x] Bug fix added and tested
- [x] New test added for reproducibility
- [x] All existing + new tests pass locally via `pytest`
- [x] Clear commit message: `"BUG: Raise TypeError for invalid calendar types in CustomBusinessDay (#60647)"`
- [x] pandas test structure followed
|
closed
|
https://github.com/pandas-dev/pandas/pull/62012
| 62,012 |
[
"We don't accept AI generated pull requests so closing"
] |
[] |
3,280,258,859 |
PR_kwDOAA0YD86hho3R
|
BUG: Fix assert_series_equal for categoricals with nulls and check_category_order=False (#62008)
|
### Description
This PR fixes an issue where `pd.testing.assert_series_equal` fails when comparing Series with categorical values containing NaNs when using `check_category_order=False`.
### Problem
When using `left.categories.take(left.codes)` for comparing category values, null codes (-1) were not handled correctly, causing incorrect comparisons.
Please let me know if my approach or fix needs any improvements. I'm open to feedback and happy to make changes based on suggestions.
|
closed
|
https://github.com/pandas-dev/pandas/pull/62011
| 62,011 |
[
"Hi @mroeschke\r\nThis PR fixes an issue where pd.testing.assert_series_equal fails when comparing Series with categorical values containing NaNs when using check_category_order=False.\r\n\r\nI'd really appreciate it if you could take a look and provide feedback .\r\nPlease let me know if anything needs to be improved or clarified.\r\n\r\nThanks!",
"Hi @jorisvandenbossche,\r\nCould you please review this PR and let me know if any changes are needed.\r\nAlso, I want to ask: if the issue is already assigned to someone, should I continue working on it or leave it to the current assignee?\r\nThanks!"
] |
[] |
3,278,195,740 |
PR_kwDOAA0YD86ha15j
|
Fix for issue 62001; ENH: Context-aware error messages for optional dependencies
|
#62001
Summary
This PR enhances import_optional_dependency() to provide context-aware error messages that suggest relevant alternatives when
dependencies are missing, addressing issue #62001.
Before:
Missing optional dependency 'openpyxl'. Use pip or conda to install openpyxl.
After:
Missing optional dependency 'openpyxl'. For Excel file operations, try installing xlsxwriter, calamine, xlrd, pyxlsb, or odfpy.
Use pip or conda to install openpyxl.
Implementation Details
- Core Enhancement: Added operation_context parameter to import_optional_dependency() with 13 predefined contexts (excel,
plotting, html, xml, sql, performance, compression, cloud, formats, computation, timezone, testing, development)
- Smart Alternative Filtering: Excludes the failed dependency from suggestions to avoid confusion
- Backward Compatibility: All existing calls work unchanged; new parameter is optional
- Strategic Implementation: Updated high-impact locations where users commonly encounter missing dependencies:
- Excel operations (5 readers: openpyxl, xlrd, calamine, pyxlsb, odf)
- Plotting operations (matplotlib)
- HTML parsing operations (html5lib)
Files Modified
1. pandas/compat/_optional.py: Core enhancement with context mapping and message building
2. pandas/tests/test_optional_dependency.py: Updated test patterns and added comprehensive context tests
3. pandas/io/excel/_*.py: Added context to 5 Excel readers
4. pandas/plotting/_core.py: Added plotting context
5. pandas/io/html.py: Added HTML parsing context
6. doc/source/whatsnew/v3.0.0.rst: Added whatsnew entry
Testing
- All existing tests pass with updated patterns
- New tests verify context functionality works correctly
- Manual verification confirms all files compile successfully
- Backward compatibility maintained for existing calls
- [x] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Please make sure to double check I am still new to contributions and let me know if there are any mistakes.
|
closed
|
https://github.com/pandas-dev/pandas/pull/62003
| 62,003 |
[
"Also I suspect this PR was AI generated so closing. We discourage heavily AI generated pull requests",
"It partially was, I appreciate the review, it makes sense, thank you"
] |
[] |
3,277,510,423 |
PR_kwDOAA0YD86hYhwM
|
DOC: Simplify footer text in pandas documentation
|
This PR simplifies the documentation footer template for clarity.
(Note: This is unrelated to issue #60647, which is about CustomBusinessDay.)
|
closed
|
https://github.com/pandas-dev/pandas/pull/62002
| 62,002 |
[
"This is fine as is, closing"
] |
[] |
3,276,652,040 |
PR_kwDOAA0YD86hVik8
|
BUG: Avoid copying categorical codes if `copy=False`
|
Categorical codes are always copied by `recode_for_categories` regardless of the copy argument. This fixes it by passing the copy argument down to `recode_for_categories`
- ~[ ] closes #xxxx (Replace xxxx with the GitHub issue number)~
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
closed
|
https://github.com/pandas-dev/pandas/pull/62000
| 62,000 |
[
"Thanks @fjetter "
] |
[
{
"id": 78527356,
"node_id": "MDU6TGFiZWw3ODUyNzM1Ng==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Categorical",
"name": "Categorical",
"color": "e11d21",
"default": false,
"description": "Categorical Data Type"
}
] |
3,275,089,979 |
PR_kwDOAA0YD86hQVRM
|
DOC: add button to edit on GitHub
|
- [x] closes #39859
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Changes made:
- Added an extension that allows to have a sidebar with extra "Show on GitHub" and "Edit on GitHub" links. Found [here](https://mg.pov.lt/blog/sphinx-edit-on-github.html).
- Modified conf.py to make sure the extension is added and links direct to editable pages.
|
closed
|
https://github.com/pandas-dev/pandas/pull/61997
| 61,997 |
[
"Hi [mroeschke](https://github.com/mroeschke), would you mind taking a look at this? My fix should have all the links direct-able and I added an extension so it resembles the issue description #39859. This is my first issue so please let me know if there's any convention I'm missing. Also, I said in the issue that I was working on this, but I had a ton of issues building pandas for the first time, so that's why my update is delayed. Thanks in advance!",
"pre-commit.ci autofix",
"Thanks for the PR, but someone is already working on this in https://github.com/pandas-dev/pandas/pull/61956 so closing to let them have a change to finish. But happy to have contributions labeled `good first issue` that doesn't have a linked PR open"
] |
[] |
3,274,486,584 |
PR_kwDOAA0YD86hOT8C
|
BUG/DEPR: logical operation with bool and string
|
- [x] closes #60234 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
closed
|
https://github.com/pandas-dev/pandas/pull/61995
| 61,995 |
[
"Thanks @jbrockmendel ",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 36b8f20e06d3a322890173e6f520ed108825ea02\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61995: BUG/DEPR: logical operation with bool and string'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61995-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61995 on branch 2.3.x (BUG/DEPR: logical operation with bool and string)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"Manual backport -> https://github.com/pandas-dev/pandas/pull/62114"
] |
[
{
"id": 47223669,
"node_id": "MDU6TGFiZWw0NzIyMzY2OQ==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Numeric%20Operations",
"name": "Numeric Operations",
"color": "006b75",
"default": false,
"description": "Arithmetic, Comparison, and Logical operations"
},
{
"id": 57522093,
"node_id": "MDU6TGFiZWw1NzUyMjA5Mw==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Strings",
"name": "Strings",
"color": "5319e7",
"default": false,
"description": "String extension data type and string data"
}
] |
3,272,100,794 |
PR_kwDOAA0YD86hGKPD
|
BUG: Fix ExtensionArray binary op protocol
|
- [x] closes #61866
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- Updated pandas/core/arrays/boolean.py to return NotImplemented in binary operations where appropriate, following Python's operator protocol.
- Added and updated tests to ensure correct error handling and array interaction behavior.
|
closed
|
https://github.com/pandas-dev/pandas/pull/61990
| 61,990 |
[
"pre-commit.ci autofix",
"\r\npre-commit.ci autofix\r\n\r\n",
"pre-commit.ci autofix\r\n\r\n",
"pre-commit.ci autofix\r\n\r\n",
"pre-commit.ci autofix\r\n\r\n",
"@jbrockmendel can you please review at your convenience? thanks!",
"I'm out of town for a few days, will look at this when i get back.",
"pre-commit.ci autofix\r\n\r\n",
"@jbrockmendel changes you requested have been made. please take a look",
"Hi @mroeschke , I think this pr is ready, would you mind reviewing it if you get a chance?",
"thanks @tisjayy "
] |
[
{
"id": 47223669,
"node_id": "MDU6TGFiZWw0NzIyMzY2OQ==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Numeric%20Operations",
"name": "Numeric Operations",
"color": "006b75",
"default": false,
"description": "Arithmetic, Comparison, and Logical operations"
},
{
"id": 849023693,
"node_id": "MDU6TGFiZWw4NDkwMjM2OTM=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/ExtensionArray",
"name": "ExtensionArray",
"color": "6138b5",
"default": false,
"description": "Extending pandas with custom dtypes or arrays."
}
] |
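The operator protocol the row above refers to is standard Python: `__add__` should return `NotImplemented` (not raise) for operands it doesn't recognize, so the interpreter can try the other operand's reflected method. A minimal sketch with made-up classes:

```python
class Meters:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        if isinstance(other, Meters):
            return Meters(self.value + other.value)
        # Signal "I don't handle this operand" so Python can try
        # the other operand's __radd__ instead of failing here.
        return NotImplemented

class Feet:
    def __init__(self, value):
        self.value = value

    def __radd__(self, other):
        if isinstance(other, Meters):
            return Meters(other.value + self.value * 0.3048)
        return NotImplemented

# Meters.__add__ defers, then Feet.__radd__ handles the operation.
total = Meters(1.0) + Feet(10.0)
print(round(total.value, 4))  # 4.048
```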
3,271,674,434 |
PR_kwDOAA0YD86hEuGU
|
Fix warning for extra fields in read_csv with on_bad_lines callable
|
- [ ] closes #61837 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
open
|
https://github.com/pandas-dev/pandas/pull/61987
| 61,987 |
[
"\r\npre-commit.ci autofix\r\n\r\n",
"pre-commit.ci autofix\r\n\r\n",
"\r\npre-commit.ci autofix\r\n\r\n",
"pre-commit.ci autofix\r\n\r\n",
"\r\npre-commit.ci autofix\r\n\r\n",
"\r\npre-commit.ci autofix\r\n\r\n"
] |
[] |
3,271,508,973 |
PR_kwDOAA0YD86hELM3
|
API: offsets.Day is always calendar-day
|
- [x] closes #44823
- [x] closes #55502
- [x] closes #41943
- [x] closes #51716
- [x] closes #35388
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Alternative to #55502 discussed at last week's dev meeting. This allows TimedeltaIndex.freq to be a`Day` even though it is not a Tick.
|
closed
|
https://github.com/pandas-dev/pandas/pull/61985
| 61,985 |
[
"`/home/runner/work/pandas/pandas/doc/source/whatsnew/v3.0.0.rst:345: WARNING: Bullet list ends without a blank line; unexpected unindent. [docutils]`\r\n\r\nis stumping me. Who is our go-to person for debugging these?\r\n",
"Still seeing \r\n\r\n```\r\n/home/runner/work/pandas/pandas/doc/source/whatsnew/v3.0.0.rst:345: WARNING: Bullet list ends without a blank line; unexpected unindent. [docutils]\r\n```\r\n\r\nTried building the docs locally and jinja2 is complaining `ValueError: PackageLoader could not find a 'io/formats/templates' directory in the 'pandas' package.` so doing some yak-shaving.",
"Looks like there was even more whitespace I messed up. It's happy now.",
"Thanks @jbrockmendel. Good to finally have this change! ",
"Looks like we missed a usage of freq.nanos in the window code. I don't know that code too well. Does that need updating too?",
"Hmm I think we always treated `\"D\"` as 24 hours before this change, and you defined `Day.nanos` in this PR so that's probably why the tests were still passing (and that we might not having any rolling tests with DST?).\r\n\r\nI guess technically with this change `rolling(\"D\")` shouldn't work since `Day` isn't a fixed frequency anymore, but maybe we should keep allowing this case?\r\n\r\n",
"> and that we might not having any rolling tests with DST?\r\n\r\nlooks like we have one rolling test (test_rolling_datetime) with Day and tzaware self._on but i don't think it passes over a DST transition"
] |
[
{
"id": 53181044,
"node_id": "MDU6TGFiZWw1MzE4MTA0NA==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Frequency",
"name": "Frequency",
"color": "0052cc",
"default": false,
"description": "DateOffsets"
}
] |
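The fixed-24-hours versus calendar-day distinction behind the `offsets.Day` row above is visible with long-standing public API, using a DST transition (spring-forward in US/Pacific on 2023-03-12):

```python
import pandas as pd

ts = pd.Timestamp("2023-03-11 12:00", tz="US/Pacific")

# Timedelta("1D") is exactly 24 absolute hours, so the wall clock shifts
# across the DST transition:
print(ts + pd.Timedelta("1D"))      # 2023-03-12 13:00:00-07:00

# DateOffset(days=1) is a calendar day and preserves the wall clock:
print(ts + pd.DateOffset(days=1))   # 2023-03-12 12:00:00-07:00
```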
3,271,225,087 |
PR_kwDOAA0YD86hDOo_
|
MNT: simplify `cibuildwheel` configuration
|
follow up to https://github.com/pandas-dev/pandas/pull/61981#discussion_r2237723118
This reduces the maintenance burden for `cibuildwheel` config parameters:
- cibw takes `project.requires-python` into account for target selection, so there is no need for explicitly excluding unsupported versions
- using `test-extras` instead of `test-requires` avoids a repetition and keeps `project.optional-dependencies` as the one source of truth in this area
- [N/A] closes #xxxx (Replace xxxx with the GitHub issue number)
- [N/A] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [N/A] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [N/A] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
closed
|
https://github.com/pandas-dev/pandas/pull/61984
| 61,984 |
[
"Thanks @neutrinoceros "
] |
[
{
"id": 129350,
"node_id": "MDU6TGFiZWwxMjkzNTA=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Build",
"name": "Build",
"color": "75507B",
"default": false,
"description": "Library building on various platforms"
}
] |
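The two simplifications in the row above roughly translate to a `pyproject.toml` of this shape (a hedged sketch; extra names and test command are illustrative, not pandas' actual configuration):

```toml
[project]
requires-python = ">=3.10"          # cibuildwheel skips unsupported CPythons

[project.optional-dependencies]
test = ["pytest", "hypothesis"]

[tool.cibuildwheel]
# Reuses the "test" extra above instead of repeating the dependency
# list in a separate test-requires entry.
test-extras = ["test"]
test-command = "pytest {project}/tests"
```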
3,269,821,531 |
PR_kwDOAA0YD86g-YZn
|
Bump pypa/cibuildwheel from 2.23.3 to 3.1.1
|
Bumps [pypa/cibuildwheel](https://github.com/pypa/cibuildwheel) from 2.23.3 to 3.1.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pypa/cibuildwheel/releases">pypa/cibuildwheel's releases</a>.</em></p>
<blockquote>
<h2>v3.1.1</h2>
<ul>
<li>🐛 Fix a bug showing an incorrect wheel count at the end of execution, and misrepresenting test-only runs in the GitHub Action summary (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2512">#2512</a>)</li>
<li>📚 Docs fix (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2510">#2510</a>)</li>
</ul>
<h2>v3.1.0</h2>
<ul>
<li>🌟 CPython 3.14 wheels are now built by default - without the <code>"cpython-prerelease"</code> <code>enable</code> set. It's time to build and upload these wheels to PyPI! This release includes CPython 3.14.0rc1, which is guaranteed to be ABI compatible with the final release. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2507">#2507</a>) Free-threading is no longer experimental in 3.14, so you have to skip it explicitly with <code>'cp31?t-*'</code> if you don't support it yet. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2503">#2503</a>)</li>
<li>🌟 Adds the ability to <a href="https://cibuildwheel.pypa.io/en/stable/platforms/#android">build wheels for Android</a>! Set the <a href="https://cibuildwheel.pypa.io/en/stable/options/#platform"><code>platform</code> option</a> to <code>android</code> on Linux or macOS to try it out! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2349">#2349</a>)</li>
<li>🌟 Adds Pyodide 0.28, which builds 3.13 wheels (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2487">#2487</a>)</li>
<li>✨ Support for 32-bit <code>manylinux_2_28</code> (now a consistent default) and <code>manylinux_2_34</code> added (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2500">#2500</a>)</li>
<li>🛠 Improved summary, will also use markdown summary output on GHA (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2469">#2469</a>)</li>
<li>🛠 The riscv64 images now have a working default (as they are now part of pypy/manylinux), but are still experimental (and behind an <code>enable</code>) since you can't push them to PyPI yet (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2506">#2506</a>)</li>
<li>🛠 Fixed a typo in the 3.9 MUSL riscv64 identifier (<code>cp39-musllinux_ricv64</code> -> <code>cp39-musllinux_riscv64</code>) (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2490">#2490</a>)</li>
<li>🛠 Mistyping <code>--only</code> now shows the correct possibilities, and even suggests near matches on Python 3.14+ (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2499">#2499</a>)</li>
<li>🛠 Only support one output from the repair step on linux like other platforms; auditwheel fixed this over four years ago! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2478">#2478</a>)</li>
<li>🛠 We now use pattern matching extensively (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2434">#2434</a>)</li>
<li>📚 We now have platform maintainers for our special platforms and interpreters! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2481">#2481</a>)</li>
</ul>
<h2>v3.0.1</h2>
<ul>
<li>🛠 Updates CPython 3.14 prerelease to 3.14.0b3 (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2471">#2471</a>)</li>
<li>✨ Adds a CPython 3.14 prerelease iOS build (only when prerelease builds are <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable">enabled</a>) (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2475">#2475</a>)</li>
</ul>
<h2>v3.0.0</h2>
<p>See <a href="https://github.com/henryiii"><code>@henryiii</code></a>'s <a href="https://iscinumpy.dev/post/cibuildwheel-3-0-0/">release post</a> for more info on new features!</p>
<ul>
<li>
<p>🌟 Adds the ability to <a href="https://cibuildwheel.pypa.io/en/stable/platforms/#ios">build wheels for iOS</a>! Set the <a href="https://cibuildwheel.pypa.io/en/stable/options/#platform"><code>platform</code> option</a> to <code>ios</code> on a Mac with the iOS toolchain to try it out! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2286">#2286</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2363">#2363</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2432">#2432</a>)</p>
</li>
<li>
<p>🌟 Adds support for the GraalPy interpreter! Enable for your project using the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a>. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/1538">#1538</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2411">#2411</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2414">#2414</a>)</p>
</li>
<li>
<p>✨ Adds CPython 3.14 support, under the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a> <code>cpython-prerelease</code>. This version of cibuildwheel uses 3.14.0b2. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2390">#2390</a>)</p>
<p><em>While CPython is in beta, the ABI can change, so your wheels might not be compatible with the final release. For this reason, we don't recommend distributing wheels until RC1, at which point 3.14 will be available in cibuildwheel without the flag.</em> (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2390">#2390</a>)</p>
</li>
<li>
<p>✨ Adds the <a href="https://cibuildwheel.pypa.io/en/stable/options/#test-sources">test-sources option</a>, and changes the working directory for tests. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2062">#2062</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2284">#2284</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2437">#2437</a>)</p>
<ul>
<li>If this option is set, cibuildwheel will copy the files and folders specified in <code>test-sources</code> into the temporary directory we run from. This is required for iOS builds, but also useful for other platforms, as it allows you to avoid placeholders.</li>
<li>If this option is not set, behaviour matches v2.x - cibuildwheel will run the tests from a temporary directory, and you can use the <code>{project}</code> placeholder in the <code>test-command</code> to refer to the project directory. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2420">#2420</a>)</li>
</ul>
</li>
<li>
<p>✨ Adds <a href="https://cibuildwheel.pypa.io/en/stable/options/#dependency-versions"><code>dependency-versions</code></a> inline syntax (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2122">#2122</a>)</p>
</li>
<li>
<p>✨ Improves support for Pyodide builds and adds the experimental <a href="https://cibuildwheel.pypa.io/en/stable/options/#pyodide-version"><code>pyodide-version</code></a> option, which allows you to specify the version of Pyodide to use for builds. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2002">#2002</a>)</p>
</li>
<li>
<p>✨ Add <code>pyodide-prerelease</code> <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable">enable</a> option, with an early build of 0.28 (Python 3.13). (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2431">#2431</a>)</p>
</li>
<li>
<p>✨ Adds the <a href="https://cibuildwheel.pypa.io/en/stable/options/#test-environment"><code>test-environment</code></a> option, which allows you to set environment variables for the test command. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2388">#2388</a>)</p>
</li>
<li>
<p>✨ Adds the <a href="https://cibuildwheel.pypa.io/en/stable/options/#xbuild-tools"><code>xbuild-tools</code></a> option, which allows you to specify tools safe for cross-compilation. Currently only used on iOS; will be useful for Android in the future. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2317">#2317</a>)</p>
</li>
<li>
<p>🛠 The default <a href="https://cibuildwheel.pypa.io/en/stable/options/#linux-image">manylinux image</a> has changed from <code>manylinux2014</code> to <code>manylinux_2_28</code>. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2330">#2330</a>)</p>
</li>
<li>
<p>🛠 EOL images <code>manylinux1</code>, <code>manylinux2010</code>, <code>manylinux_2_24</code> and <code>musllinux_1_1</code> can no longer be specified by their shortname. The full OCI name can still be used for these images, if you wish. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2316">#2316</a>)</p>
</li>
<li>
<p>🛠 Invokes <code>build</code> rather than <code>pip wheel</code> to build wheels by default. You can control this via the <a href="https://cibuildwheel.pypa.io/en/stable/options/#build-frontend"><code>build-frontend</code></a> option. You might notice that you can see your build log output now! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2321">#2321</a>)</p>
</li>
<li>
<p>🛠 Build verbosity settings have been reworked to have consistent meanings between build backends when non-zero. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2339">#2339</a>)</p>
</li>
<li>
<p>🛠 Removed the <code>CIBW_PRERELEASE_PYTHONS</code> and <code>CIBW_FREE_THREADED_SUPPORT</code> options - these have been folded into the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code></a> option instead. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2095">#2095</a>)</p>
</li>
<li>
<p>🛠 Build environments no longer have setuptools and wheel preinstalled. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2329">#2329</a>)</p>
</li>
<li>
<p>🛠 Use the standard Schema line for the integrated JSONSchema. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2433">#2433</a>)</p>
</li>
<li>
<p>⚠️ Dropped support for building Python 3.6 and 3.7 wheels. If you need to build wheels for these versions, use cibuildwheel v2.23.3 or earlier. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2282">#2282</a>)</p>
</li>
<li>
<p>⚠️ The minimum Python version required to run cibuildwheel is now Python 3.11. You can still build wheels for Python 3.8 and newer. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/1912">#1912</a>)</p>
</li>
<li>
<p>⚠️ 32-bit Linux wheels no longer built by default - the <a href="https://cibuildwheel.pypa.io/en/stable/options/#archs">arch</a> was removed from <code>"auto"</code>. It now requires explicit <code>"auto32"</code>. Note that modern manylinux images (like the new default, <code>manylinux_2_28</code>) do not have 32-bit versions. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2458">#2458</a>)</p>
</li>
<li>
<p>⚠️ PyPy wheels no longer built by default, due to a change to our options system. To continue building PyPy wheels, you'll now need to set the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a> to <code>pypy</code> or <code>pypy-eol</code>. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2095">#2095</a>)</p>
</li>
</ul>
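<p>The <code>enable</code> and skip knobs described above can be combined in <code>pyproject.toml</code>; the fragment below is an illustrative sketch (not taken from the changelog) for a project that still wants PyPy wheels and does not yet support free-threaded CPython 3.14:</p>
<pre><code>[tool.cibuildwheel]
# PyPy wheels are opt-in as of v3
enable = ["pypy"]
# free-threaded 3.14 builds are selected by default; skip explicitly if unsupported
skip = "cp31?t-*"
</code></pre>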
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pypa/cibuildwheel/blob/main/docs/changelog.md">pypa/cibuildwheel's changelog</a>.</em></p>
<blockquote>
<h3>v3.1.1</h3>
<p><em>24 July 2025</em></p>
<ul>
<li>🐛 Fix a bug showing an incorrect wheel count at the end of execution, and misrepresenting test-only runs in the GitHub Action summary (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2512">#2512</a>)</li>
<li>📚 Docs fix (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2510">#2510</a>)</li>
</ul>
<h3>v3.1.0</h3>
<p><em>23 July 2025</em></p>
<ul>
<li>🌟 CPython 3.14 wheels are now built by default - without the <code>"cpython-prerelease"</code> <code>enable</code> set. It's time to build and upload these wheels to PyPI! This release includes CPython 3.14.0rc1, which is guaranteed to be ABI compatible with the final release. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2507">#2507</a>) Free-threading is no longer experimental in 3.14, so you have to skip it explicitly with <code>'cp31?t-*'</code> if you don't support it yet. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2503">#2503</a>)</li>
<li>🌟 Adds the ability to <a href="https://cibuildwheel.pypa.io/en/stable/platforms/#android">build wheels for Android</a>! Set the <a href="https://cibuildwheel.pypa.io/en/stable/options/#platform"><code>platform</code> option</a> to <code>android</code> on Linux or macOS to try it out! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2349">#2349</a>)</li>
<li>🌟 Adds Pyodide 0.28, which builds 3.13 wheels (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2487">#2487</a>)</li>
<li>✨ Support for 32-bit <code>manylinux_2_28</code> (now a consistent default) and <code>manylinux_2_34</code> added (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2500">#2500</a>)</li>
<li>🛠 Improved summary, will also use markdown summary output on GHA (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2469">#2469</a>)</li>
<li>🛠 The riscv64 images now have a working default (as they are now part of pypy/manylinux), but are still experimental (and behind an <code>enable</code>) since you can't push them to PyPI yet (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2506">#2506</a>)</li>
<li>🛠 Fixed a typo in the 3.9 MUSL riscv64 identifier (<code>cp39-musllinux_ricv64</code> -> <code>cp39-musllinux_riscv64</code>) (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2490">#2490</a>)</li>
<li>🛠 Mistyping <code>--only</code> now shows the correct possibilities, and even suggests near matches on Python 3.14+ (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2499">#2499</a>)</li>
<li>🛠 Only support one output from the repair step on linux like other platforms; auditwheel fixed this over four years ago! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2478">#2478</a>)</li>
<li>🛠 We now use pattern matching extensively (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2434">#2434</a>)</li>
<li>📚 We now have platform maintainers for our special platforms and interpreters! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2481">#2481</a>)</li>
</ul>
<h3>v3.0.1</h3>
<p><em>5 July 2025</em></p>
<ul>
<li>🛠 Updates CPython 3.14 prerelease to 3.14.0b3 (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2471">#2471</a>)</li>
<li>✨ Adds a CPython 3.14 prerelease iOS build (only when prerelease builds are <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable">enabled</a>) (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2475">#2475</a>)</li>
</ul>
<h3>v3.0.0</h3>
<p><em>11 June 2025</em></p>
<p>See <a href="https://github.com/henryiii"><code>@henryiii</code></a>'s <a href="https://iscinumpy.dev/post/cibuildwheel-3-0-0/">release post</a> for more info on new features!</p>
<ul>
<li>
<p>🌟 Adds the ability to <a href="https://cibuildwheel.pypa.io/en/stable/platforms/#ios">build wheels for iOS</a>! Set the <a href="https://cibuildwheel.pypa.io/en/stable/options/#platform"><code>platform</code> option</a> to <code>ios</code> on a Mac with the iOS toolchain to try it out! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2286">#2286</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2363">#2363</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2432">#2432</a>)</p>
</li>
<li>
<p>🌟 Adds support for the GraalPy interpreter! Enable for your project using the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a>. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/1538">#1538</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2411">#2411</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2414">#2414</a>)</p>
</li>
<li>
<p>✨ Adds CPython 3.14 support, under the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a> <code>cpython-prerelease</code>. This version of cibuildwheel uses 3.14.0b2. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2390">#2390</a>)</p>
<p><em>While CPython is in beta, the ABI can change, so your wheels might not be compatible with the final release. For this reason, we don't recommend distributing wheels until RC1, at which point 3.14 will be available in cibuildwheel without the flag.</em> (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2390">#2390</a>)</p>
</li>
<li>
<p>✨ Adds the <a href="https://cibuildwheel.pypa.io/en/stable/options/#test-sources">test-sources option</a>, which copies files and folders into the temporary working directory we run tests from. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2062">#2062</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2284">#2284</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2420">#2420</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2437">#2437</a>)</p>
<p>This is particularly important for iOS builds, which do not support placeholders in the <code>test-command</code>, but can also be useful for other platforms.</p>
</li>
<li>
<p>✨ Adds <a href="https://cibuildwheel.pypa.io/en/stable/options/#dependency-versions"><code>dependency-versions</code></a> inline syntax (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2122">#2122</a>)</p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pypa/cibuildwheel/commit/e6de07ed3921b51089aae6981989889cf1eddd0c"><code>e6de07e</code></a> Bump version: v3.1.1</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/2ca692b1e55a1f924bfb460099c9d7e015671a8d"><code>2ca692b</code></a> docs: iOS typo fix in docs (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2510">#2510</a>)</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/1ac7fa7f004958fbde774ee89523c446a5d99934"><code>1ac7fa7</code></a> fix: report defects in logs and HTML summaries (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2512">#2512</a>)</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/ffd835cef18fa11522f608fc0fa973b89f5ddc87"><code>ffd835c</code></a> Bump version: v3.1.0</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/3e2a9aa6e85824999f897fc2c060ca12a5113ef6"><code>3e2a9aa</code></a> fix: regenerate schema</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/10c727eed9fc962f75d33d472272e3ad78c3e707"><code>10c727e</code></a> feat: Python 3.14rc1 build by default (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2507">#2507</a>)</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/f628c9dd23fe6e263cb91cef755a51a0af3bcddc"><code>f628c9d</code></a> [pre-commit.ci] pre-commit autoupdate (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2505">#2505</a>)</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/0f487ee2cb00876d95290da49d04208c91237857"><code>0f487ee</code></a> feat: add support for building Android wheels (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2349">#2349</a>)</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/e2e24882d8422e974295b1b9079d4ce80a5098a4"><code>e2e2488</code></a> feat: add default riscv64 images (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2506">#2506</a>)</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/a8bff94dbb5f3a4a914e29cf8353c2f6f1b9ab8b"><code>a8bff94</code></a> [Bot] Update dependencies (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2504">#2504</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/pypa/cibuildwheel/compare/v2.23.3...v3.1.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details>
|
closed
|
https://github.com/pandas-dev/pandas/pull/61981
| 61,981 |
[
"@mroeschke Could this change be backported to the `2.3.x` branch? cibuildwheel `3.1.1` will be a requirement for cp314 wheels.",
"Just noting that 2.3.x will likely not add additional Python version support as 2.3.x releases are only meant to address regressions and fixes for major pandas 3.0 features.\r\n\r\npandas 3.0 _may_ be the first pandas version to support 3.14",
"> Just noting that 2.3.x will likely not add additional Python version support as 2.3.x releases are only meant to address regressions and fixes for major pandas 3.0 features.\r\n> \r\n> pandas 3.0 _may_ be the first pandas version to support 3.14\r\n\r\nOh, I've started testing 3.14 for Home Assistant already and the test suite passes, including some (albeit very) limited tests with pandas `2.3.1`. Not sure how much effort it will actually take to make it fully compatible, I've seen there is some work in #61950.\r\n\r\nJust from a downstream package perspective, in general I prefer it if packages don't couple new Python version support with a new major revision / breaking changes. It just makes upgrading more difficult. _I'm aware that's often just the nature of things line up. Just wanted to share my experience._"
] |
[
{
"id": 129350,
"node_id": "MDU6TGFiZWwxMjkzNTA=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Build",
"name": "Build",
"color": "75507B",
"default": false,
"description": "Library building on various platforms"
},
{
"id": 48070600,
"node_id": "MDU6TGFiZWw0ODA3MDYwMA==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/CI",
"name": "CI",
"color": "a2bca7",
"default": false,
"description": "Continuous Integration"
},
{
"id": 527603109,
"node_id": "MDU6TGFiZWw1Mjc2MDMxMDk=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Dependencies",
"name": "Dependencies",
"color": "d93f0b",
"default": false,
"description": "Required and optional dependencies"
}
] |
3,269,017,396 |
PR_kwDOAA0YD86g7jZQ
|
DOC: Update documentation for using natural sort with `sort_values`
|
The previous documentation recommended using the lambda function `lambda x: np.argsort(index_natsorted(x))` as the key argument to `sort_values`. While this works when sorting on a single column, it produces incorrect results when sorting on multiple columns with duplicated values. For example:
```
>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame(
... {
... "hours": ["0hr", "128hr", "0hr", "64hr", "64hr", "128hr"],
... "mins": ["10mins", "40mins", "40mins", "40mins", "10mins", "10mins"],
... "value": [10, 20, 30, 40, 50, 60],
... }
... )
>>> df
hours mins value
0 0hr 10mins 10
1 128hr 40mins 20
2 0hr 40mins 30
3 64hr 40mins 40
4 64hr 10mins 50
5 128hr 10mins 60
>>> from natsort import index_natsorted
>>> df.sort_values(
... by=["hours", "mins"],
... key=lambda x: np.argsort(index_natsorted(x)),
... )
hours mins value
0 0hr 10mins 10
2 0hr 40mins 30
3 64hr 40mins 40
4 64hr 10mins 50
1 128hr 40mins 20
5 128hr 10mins 60
```
Note how the `hours` column is sorted correctly, but the `mins` column is not.
This PR updates the documentation to use `natsort_keygen`, which remains correct when sorting on multiple columns.
Commit 2: removes the calls to `natsort_keygen()` from the example code, as the generated output was too long and doctest did not handle the tuple formatting well.
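The multi-column behaviour can be sketched without natsort by applying a natural-sort key element-wise, the way `natsort_keygen()` does; the hand-rolled `natural_key` below is a hypothetical stand-in for the natsort key function:

```python
import re

import numpy as np
import pandas as pd

def natural_key(s: str):
    # Split digit runs out of the string so "64hr" sorts numerically,
    # e.g. "128hr" -> ("", 128, "hr")
    return tuple(int(p) if p.isdigit() else p for p in re.split(r"(\d+)", s))

df = pd.DataFrame(
    {
        "hours": ["0hr", "128hr", "0hr", "64hr", "64hr", "128hr"],
        "mins": ["10mins", "40mins", "40mins", "40mins", "10mins", "10mins"],
        "value": [10, 20, 30, 40, 50, 60],
    }
)

# Mapping the key element-wise keeps each by-column's sort key independent,
# so the lexicographic multi-column sort stays correct.
out = df.sort_values(by=["hours", "mins"], key=lambda col: col.map(natural_key))
print(out)
```

Unlike `np.argsort(index_natsorted(x))`, which collapses each column to positions computed in isolation, the element-wise key preserves the pairing between columns.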
|
closed
|
https://github.com/pandas-dev/pandas/pull/61979
| 61,979 |
[
"Thanks @marc-jones "
] |
[
{
"id": 134699,
"node_id": "MDU6TGFiZWwxMzQ2OTk=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Docs",
"name": "Docs",
"color": "3465A4",
"default": false,
"description": null
}
] |
3,267,416,510 |
PR_kwDOAA0YD86g2G4Y
|
BUG: Fix infer_dtype result for complex with pd.NA
|
- [x] closes #61976
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Fixes a bug in `api.types.infer_dtype` that returned "mixed" for a mix of complex values and `pd.NA`.
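A quick illustration of the behaviour (hedged: on versions without this fix, the `pd.NA` case prints "mixed"):

```python
import numpy as np
import pandas as pd
from pandas.api.types import infer_dtype

# np.nan is skipped by default (skipna=True), so this has always inferred "complex"
kind = infer_dtype([1 + 1j, np.nan])
print(kind)

# The reported bug: the same mix with pd.NA fell back to "mixed" before the fix
print(infer_dtype([1 + 1j, pd.NA]))
```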
|
closed
|
https://github.com/pandas-dev/pandas/pull/61977
| 61,977 |
[
"Thanks @yuanx749 "
] |
[
{
"id": 31404521,
"node_id": "MDU6TGFiZWwzMTQwNDUyMQ==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Dtype%20Conversions",
"name": "Dtype Conversions",
"color": "e102d8",
"default": false,
"description": "Unexpected or buggy dtype conversions"
}
] |
3,267,163,751 |
PR_kwDOAA0YD86g1UuB
|
ENH: Include line number and number of fields when read_csv() callable with `engine="python"` raises ParserWarning
|
- [X] closes #61838
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
## Description of the change
`read_csv()` currently provides a description of an invalid row (expected_columns, actual_columns, number, text) when a row has too many elements and `engine="pyarrow"`, but with `engine="python"` the callable only receives the contents of the row.
(For more details on `pyarrow.csv.InvalidRow`, see the [pyarrow documentation](https://arrow.apache.org/docs/python/generated/pyarrow.csv.ParseOptions.html#pyarrow.csv.ParseOptions.invalid_row_handler))
This PR proposes to additionally pass `expected_columns`, `actual_columns` and `row` when `on_bad_lines` is a callable and `engine="python"`, so that users can describe the invalid row in more detail.
The order of the arguments has been aligned with `pyarrow`.
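For reference, the existing `engine="python"` callable form (before this proposal) receives only the split row; a minimal sketch:

```python
import io

import pandas as pd

csv_data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10\n"

def handle_bad_line(bad_line):
    # Today the callable receives only the list of fields in the bad row;
    # this PR proposed also passing expected_columns, actual_columns and row.
    print("skipping row with", len(bad_line), "fields:", bad_line)
    return None  # returning None drops the row

df = pd.read_csv(io.StringIO(csv_data), engine="python", on_bad_lines=handle_bad_line)
print(df)
```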
|
closed
|
https://github.com/pandas-dev/pandas/pull/61974
| 61,974 |
[
"Thanks for the PR, but this enhancement needs more discussion before moving forward with a PR. Additionally this approach.\r\n\r\n1. Is an API breaking change for user pass the older form of the callable\r\n2. You callable description doesn't seem to match PyArrow from the example in https://arrow.apache.org/docs/python/generated/pyarrow.csv.ParseOptions.html#pyarrow.csv.ParseOptions\r\n\r\nso closing",
"Many thanks @mroeschke ,\n\n>1. Is an API breaking change for user pass the older form of the callable\n\nUnderstood. Maybe there could be some further discussions regarding this in the near future considering there are some suggestions at #61978 .\n\n\n>2. You callable description doesn't seem to match PyArrow from the example in https://arrow.apache.org/docs/python/generated/pyarrow.csv.ParseOptions.html#pyarrow.csv.ParseOptions\n\nI've meant the callable has been aligned with `pyarrow.csv.InvalidRow`, but as you mentioned, this also needs to be considered in terms of backwards compatibility."
] |
[] |
3,267,036,631 |
PR_kwDOAA0YD86g06pG
|
BUG: Series.replace with CoW when made from an Index
|
- [x] closes #61622 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
When we create a Series from an Index, it is zero-copy, which means that with CoW there are weak refs to the Index. Comparison of these weak refs uses `Index.__eq__`, which operates on the array (unlike `Block.__eq__`, which is merely `is`). This leads to a failure in `Series.replace`.
Instead, we replace the equality checks with `is`, plus some additional logic for performance. I believe this is the only place where we are using `__eq__` on these references.
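The pattern in question is straightforward to reproduce (sketch; on affected versions under CoW, the `replace` call raised instead of returning):

```python
import pandas as pd

idx = pd.Index([1, 2, 3])
ser = pd.Series(idx)  # zero-copy under CoW, so a weak reference to idx is tracked
result = ser.replace(2, 20)
print(result.tolist())
```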
|
closed
|
https://github.com/pandas-dev/pandas/pull/61972
| 61,972 |
[
"Thanks @rhshadrach "
] |
[
{
"id": 76811,
"node_id": "MDU6TGFiZWw3NjgxMQ==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Bug",
"name": "Bug",
"color": "e10c02",
"default": false,
"description": null
},
{
"id": 1652721180,
"node_id": "MDU6TGFiZWwxNjUyNzIxMTgw",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/replace",
"name": "replace",
"color": "01a886",
"default": false,
"description": "replace method"
},
{
"id": 2085877452,
"node_id": "MDU6TGFiZWwyMDg1ODc3NDUy",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Copy%20/%20view%20semantics",
"name": "Copy / view semantics",
"color": "70e5ca",
"default": false,
"description": ""
}
] |
3,266,955,396 |
PR_kwDOAA0YD86g0qHV
|
contributing codebase is revised
|
- Issue: #61968 (DOC: code coverage app provided in documentation is invalid)
- [https://github.com/pandas-dev/pandas/blob/main/doc/source/development/contributing_codebase.rst](https://github.com/pandas-dev/pandas/blob/main/doc/source/development/contributing_codebase.rst)
|
closed
|
https://github.com/pandas-dev/pandas/pull/61971
| 61,971 |
[
"@vishwajeetsinghrana8 - why are you removing these lines?",
"These lines don't make sense.",
"Thanks for the PR but the changes are not applicable to the linked issues so closing"
] |
[] |
3,266,897,434 |
PR_kwDOAA0YD86g0eCV
|
DOC: rephrase CoW ChainedAssignmentError message now CoW is always enabled
|
The "When using the Copy-on-Write mode" phrase can be updated now that Copy-on-Write is no longer an opt-in mode, but the only behaviour.
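Note that despite its name, `ChainedAssignmentError` is emitted as a warning; a quick check:

```python
import pandas as pd

# ChainedAssignmentError is raised via warnings.warn (it subclasses Warning),
# even though the message wording treats chained assignment as an error to fix.
is_warning = issubclass(pd.errors.ChainedAssignmentError, Warning)
print(is_warning)
```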
|
closed
|
https://github.com/pandas-dev/pandas/pull/61970
| 61,970 |
[
"> Related, I noticed we use a `ChainedAssignmentError` with these messages to raise a warning, not an exception. Do you think we should change the name of this subclass to `ChainedAssignmentWarning` as a clearer name\r\n\r\nYeah, it's probably confusing .. \r\nSo it was originally an exception (and was then called that way), but then we changed it to a warning because of some false positives that occurred in cython code (https://github.com/pandas-dev/pandas/pull/51926). You asked the question at the time (https://github.com/pandas-dev/pandas/pull/51926#discussion_r1134827320), and so the idea is that it really is an error that you should fix (typically, except for those false positives if you are writing cython code), and that was the reasoning to prefer the \"stronger\" wording about an error. But of course a warning class being called `Error` is also confusing.."
] |
[
{
"id": 134699,
"node_id": "MDU6TGFiZWwxMzQ2OTk=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Docs",
"name": "Docs",
"color": "3465A4",
"default": false,
"description": null
}
] |
3,266,178,695 |
PR_kwDOAA0YD86gyHda
|
BUG: Fix Series.reindex losing values when reindexing to MultiIndex
|
- [X] closes #60923
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [X] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature.
## Series.reindex()
### Before
```
import numpy as np
import pandas as pd

# Create a Series with a named Index
series = pd.Series([26.73, 24.255], index=pd.Index([81, 82], name='a'))
# Create a MultiIndex with level names 'a', 'b', 'c'
target = pd.MultiIndex.from_product(
[[81, 82], [np.nan], ["2018-06-01", "2018-07-01"]],
names=["a", "b", "c"]
)
# This would incorrectly set all values to NaN
series.reindex(target)
# a b c
# 81 NaN 2018-06-01 NaN
# 2018-07-01 NaN
# 82 NaN 2018-06-01 NaN
# 2018-07-01 NaN
# But this works correctly
series.reindex(target, level="a")
# a b c
# 81 NaN 2018-06-01 26.73
# 2018-07-01 26.73
# 82 NaN 2018-06-01 24.255
# 2018-07-01 24.255
```
### After
```
# Same setup as before
series = pd.Series([26.73, 24.255], index=pd.Index([81, 82], name='a'))
target = pd.MultiIndex.from_product(
[[81, 82], [np.nan], ["2018-06-01", "2018-07-01"]],
names=["a", "b", "c"]
)
# Now both produce the same correct result
series.reindex(target) # Automatically detects level='a'
# a b c
# 81 NaN 2018-06-01 26.73
# 2018-07-01 26.73
# 82 NaN 2018-06-01 24.255
# 2018-07-01 24.255
```
## DataFrame.reindex()
```
df = pd.DataFrame({
'value': [26.73, 24.255],
'other': ['A', 'B']
}, index=pd.Index([81, 82], name='a'))
target = pd.MultiIndex.from_product(
[[81, 82], [np.nan], ["2018-06-01", "2018-07-01"]],
names=["a", "b", "c"]
)
```
### Before
```
df.reindex(index=target)
value other
a b c
81 NaN 2018-06-01 NaN NaN
2018-07-01 NaN NaN
82 NaN 2018-06-01 NaN NaN
2018-07-01 NaN NaN
```
### After
```
df.reindex(index=target)
value other
a b c
81 NaN 2018-06-01 26.730 A
2018-07-01 26.730 A
82 NaN 2018-06-01 24.255 B
2018-07-01 24.255 B
```
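On releases without this fix, passing `level` explicitly is the portable workaround; a sketch repeating the Series example above:

```python
import numpy as np
import pandas as pd

series = pd.Series([26.73, 24.255], index=pd.Index([81, 82], name="a"))
target = pd.MultiIndex.from_product(
    [[81, 82], [np.nan], ["2018-06-01", "2018-07-01"]], names=["a", "b", "c"]
)

# Joining on the shared level name broadcasts each value across the new levels
out = series.reindex(target, level="a")
print(out)
```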
|
closed
|
https://github.com/pandas-dev/pandas/pull/61969
| 61,969 |
[
"> * reviewers\r\n\r\n\r\n\r\n> Does `DataFrame.reindex` also need the same handling?\r\n\r\nYes, Dataframe with single index is having the same issue\r\n```\r\ndf = pd.DataFrame({\r\n 'value': [26.73, 24.255],\r\n 'other': ['A', 'B']\r\n}, index=pd.Index([81, 82], name='a'))\r\n\r\n# Create a MultiIndex with level names 'a', 'b', 'c'\r\ntarget = pd.MultiIndex.from_product(\r\n [[81, 82], [np.nan], [\"2018-06-01\", \"2018-07-01\"]], \r\n names=[\"a\", \"b\", \"c\"]\r\n)\r\n\r\n\r\n\r\ndf.reindex(target)\r\n value other\r\na b c\r\n81 NaN 2018-06-01 NaN NaN\r\n 2018-07-01 NaN NaN\r\n82 NaN 2018-06-01 NaN NaN\r\n 2018-07-01 NaN NaN\r\n\r\ndf.reindex(target, level=\"a\")\r\n value other\r\na b c\r\n81 NaN 2018-06-01 26.730 A\r\n 2018-07-01 26.730 A\r\n82 NaN 2018-06-01 24.255 B\r\n 2018-07-01 24.255 B\r\n```\r\n\r\n\r\nHow its the same scenario for multiindex, reindex only works if all index are matching. Infact specifying level for multiIndex dataframe is raising TypeError\r\n\r\n```\r\nraise TypeError(\"Join on level between two MultiIndex objects is ambiguous\")\r\nTypeError: Join on level between two MultiIndex objects is ambiguous\r\n```\r\n\r\n```\r\n source_idx = pd.MultiIndex.from_product(\r\n [[81, 82], [\"2018-06-01\"]],\r\n names=[\"a\", \"c\"]\r\n )\r\n df = pd.DataFrame(\r\n {\"value\": [26.73, 24.255]},\r\n index=source_idx\r\n )\r\n\r\n # Create target with same level names but different structure\r\n target_idx = pd.MultiIndex.from_product(\r\n [[81, 82], [np.nan], [\"2018-06-01\", \"2018-07-01\"]],\r\n names=[\"a\", \"b\", \"c\"]\r\n )\r\n\r\n \r\n>>> df.reindex(target_idx) # Reindexing doesnt copy matching index values\r\n value\r\na b c\r\n81 NaN 2018-06-01 NaN\r\n 2018-07-01 NaN\r\n82 NaN 2018-06-01 NaN\r\n 2018-07-01 NaN\r\n```\r\n\r\nReindex MultiIndex dataframe works iff all indexes match.\r\n\r\nI will leave the multiIndex dataframe functionality as is and address the issue in single index dataframe like the example above. lmk what you think.",
"@mroeschke could you please review it when you get a chance",
"@mroeschke , sorry for tagging again. I have addressed all comments from what I can see, but still seeing \"Changes Requested\" \"[mroeschke](https://github.com/mroeschke) Requested changes\". Have I missed addressing any of your comments? Do you mind pointing which comment is not being addressed. \r\n",
"Thanks @Roline-Stapny "
] |
[
{
"id": 71268330,
"node_id": "MDU6TGFiZWw3MTI2ODMzMA==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/MultiIndex",
"name": "MultiIndex",
"color": "207de5",
"default": false,
"description": null
},
{
"id": 1218227310,
"node_id": "MDU6TGFiZWwxMjE4MjI3MzEw",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Index",
"name": "Index",
"color": "e99695",
"default": false,
"description": "Related to the Index class or subclasses"
}
] |
3,265,665,525 |
PR_kwDOAA0YD86gwfgz
|
BUG: fix Series.str.fullmatch() and Series.str.match() with a compiled regex failing with arrow strings
|
Fixes: #61952
After the fix:
```python
import re
import pandas as pd

DATA = ["applep", "bananap", "Cherryp", "DATEp", "eGGpLANTp", "123p", "23.45p"]
s = pd.Series(DATA)
s.str.fullmatch(re.compile(r"applep"))
Output:
0 True
1 False
2 False
3 False
4 False
5 False
6 False
dtype: bool
```
```python
import re
import pandas as pd

DATA = ["applep", "bananap", "Cherryp", "DATEp", "eGGpLANTp", "123p", "23.45p"]
sa = pd.Series(DATA, dtype="string[pyarrow]")
sa.str.match(re.compile(r"applep"))
Output:
0 True
1 False
2 False
3 False
4 False
5 False
6 False
dtype: boolean
```
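For comparison, the object-dtype path has always handled compiled patterns through Python's `re` module directly; a sketch (the arrow-backed fix extracts the pattern string before handing it to pyarrow):

```python
import re

import pandas as pd

data = ["applep", "bananap", "Cherryp", "DATEp", "eGGpLANTp", "123p", "23.45p"]
s = pd.Series(data, dtype=object)  # object dtype dispatches to Python's re

pat = re.compile(r"applep")
matched = s.str.fullmatch(pat).tolist()
print(matched)
```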
- [x] closes #61952
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
|
closed
|
https://github.com/pandas-dev/pandas/pull/61964
| 61,964 |
[
"@jorisvandenbossche Moved tests to `pandas/tests/strings/test_find_replace.py` and made a minor change to the docstring. I’m not sure what changes need to be made in docs. could you please provide more details?",
"> I’m not sure what changes need to be made in docs. could you please provide more details?\r\n\r\nThe suggestions of @yuanx749 are in the good direction\r\n",
"Thanks @khemkaran10 ",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 3cefa1ee6b30843a24065fa67392fbfa63d0769b\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61964: BUG: fix Series.str.fullmatch() and Series.str.match() with a compiled regex failing with arrow strings '\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61964-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61964 on branch 2.3.x (BUG: fix Series.str.fullmatch() and Series.str.match() with a compiled regex failing with arrow strings )\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"Manual backport -> https://github.com/pandas-dev/pandas/pull/62113"
] |
[
{
"id": 57522093,
"node_id": "MDU6TGFiZWw1NzUyMjA5Mw==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Strings",
"name": "Strings",
"color": "5319e7",
"default": false,
"description": "String extension data type and string data"
},
{
"id": 3303158446,
"node_id": "MDU6TGFiZWwzMzAzMTU4NDQ2",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Arrow",
"name": "Arrow",
"color": "f9d0c4",
"default": false,
"description": "pyarrow functionality"
}
] |
3,265,397,831 |
PR_kwDOAA0YD86gvqH0
|
BUG: fix .str.isdigit to honor unicode superscript for older pyarrow
|
- [x] closes #61466
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
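
For reference, Python's built-in `str.isdigit()` already treats unicode superscripts as digits, which is the behaviour the PR aligns the older-pyarrow path with (a quick stdlib check, not pandas code):

```python
# "²" (U+00B2, SUPERSCRIPT TWO): a digit, but not a decimal
sup = "\u00b2"
assert sup.isdigit() is True
assert sup.isdecimal() is False
assert sup.isnumeric() is True
```

The divergence between `isdigit`, `isdecimal`, and `isnumeric` for such characters is what makes the python/pyarrow fallback behaviour subtle.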
|
open
|
https://github.com/pandas-dev/pandas/pull/61962
| 61,962 |
[
"pandas/tests/strings/test_strings.py::test_isnumeric_unicode",
"> pandas/tests/strings/test_strings.py::test_isnumeric_unicode\r\n\r\nYeah, see https://github.com/pandas-dev/pandas/issues/61466#issuecomment-3121827923 (but I suppose the best option is just to accept that difference and update the test to reflect it. Alternatively we could still only use pyarrow for ascii, and always fall back to python for unicode, if we really want consistent behaviour)"
] |
[
{
"id": 57522093,
"node_id": "MDU6TGFiZWw1NzUyMjA5Mw==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Strings",
"name": "Strings",
"color": "5319e7",
"default": false,
"description": "String extension data type and string data"
},
{
"id": 3303158446,
"node_id": "MDU6TGFiZWwzMzAzMTU4NDQ2",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Arrow",
"name": "Arrow",
"color": "f9d0c4",
"default": false,
"description": "pyarrow functionality"
}
] |
3,265,364,121 |
PR_kwDOAA0YD86gvkQu
|
DOC: update .str.contains/match/startswith docstring examples for default behaviour
|
Updating the docstrings of `.str.` predicate methods that have the `na` keyword.
For the examples, the current text is no longer correct (because the default behaviour with str dtype is now to already return False).
For now I just removed those examples. I could instead update the example to create an object-dtype Series to still show the `na` behaviour, but personally I feel that would make the docstring examples more complex than needed, and that it is fine to let them focus on just the default dtype. But no strong opinion ;)
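
One way to see the `na` keyword in action on an object-dtype Series (a minimal sketch; the exact default for missing values depends on the dtype and pandas version, which is what this docstring update is about):

```python
import pandas as pd

s = pd.Series(["apple", None], dtype="object")

# with na=False, missing values are reported as False instead of NaN
result = s.str.contains("app", na=False)
assert result.tolist() == [True, False]
```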
|
closed
|
https://github.com/pandas-dev/pandas/pull/61960
| 61,960 |
[
"Thanks @jorisvandenbossche "
] |
[
{
"id": 134699,
"node_id": "MDU6TGFiZWwxMzQ2OTk=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Docs",
"name": "Docs",
"color": "3465A4",
"default": false,
"description": null
},
{
"id": 57522093,
"node_id": "MDU6TGFiZWw1NzUyMjA5Mw==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Strings",
"name": "Strings",
"color": "5319e7",
"default": false,
"description": "String extension data type and string data"
}
] |
3,265,174,806 |
PR_kwDOAA0YD86gu8QD
|
Flattened footer
|
Flattened the footer so that the pandas custom footer, sphinx-version, and theme-version appear on a single line, as shown
<img width="1433" height="71" alt="Screenshot 2025-07-26 at 15 51 18" src="https://github.com/user-attachments/assets/f45acd94-dd78-44e9-b026-20191153a9e8" />
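
A hedged sketch of the kind of `conf.py` change involved, using pydata-sphinx-theme's built-in footer components (component names are the theme's, assumed from its layout docs, not pandas-specific):

```python
# doc/source/conf.py (sketch): place all footer items in one row
html_theme_options = {
    # built-in pydata-sphinx-theme footer components
    "footer_start": ["copyright", "sphinx-version", "theme-version"],
    "footer_end": [],
}
```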
|
closed
|
https://github.com/pandas-dev/pandas/pull/61957
| 61,957 |
[
"#51536 ",
"Thanks could you remove the custom template as described in that issue",
"hi @mroeschke thanks for checking out! I read their doc about, but in order to have the pandas copyright, we will have to use their customized template approach as mentioned https://pydata-sphinx-theme.readthedocs.io/en/stable/user_guide/layout.html#add-your-own-html-templates-to-theme-sections.\r\n\r\nAlthough in their library, the \"copyright\" keyword is in the package and if you were to do without custom template, you can access into their library on server side to change the copyright.html as shown https://github.com/pydata/pydata-sphinx-theme/blob/main/src/pydata_sphinx_theme/theme/pydata_sphinx_theme/components/copyright.html\r\n\r\nHope this clear things up. ",
"No you can add their copyright component to add into the `html_theme_options` config: https://pydata-sphinx-theme.readthedocs.io/en/stable/user_guide/layout.html#built-in-components-to-insert-into-sections\r\n\r\nIt uses the `copyright` variable defined in `conf.py` ",
"Hi, you were right, it can be done without _template, here's the work i had. Hope this solves it!",
"Hi, the reason I used Markup() because without it the footer would read the code in ascii and show it as this\r\n \r\n<img width=\"2938\" height=\"200\" alt=\"image\" src=\"https://github.com/user-attachments/assets/56d8a45f-8ebf-4afe-b2c3-db7a506dc949\" />\r\n\r\nGenerated source code:\r\n<img width=\"2938\" height=\"326\" alt=\"image\" src=\"https://github.com/user-attachments/assets/7eba5b38-c92c-4e80-b4c7-8cad8e89b1b4\" />\r\n\r\n"
] |
[
{
"id": 134699,
"node_id": "MDU6TGFiZWwxMzQ2OTk=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Docs",
"name": "Docs",
"color": "3465A4",
"default": false,
"description": null
}
] |
3,265,051,457 |
PR_kwDOAA0YD86gujQx
|
DOC: added button to edit on GitHub
|
- [x] closes #39859 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
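
A hedged sketch of how pydata-sphinx-theme enables this button (the option and `html_context` keys are assumed from the theme's documentation, not taken from this PR's diff):

```python
# doc/source/conf.py (sketch): enable the theme's "edit this page" button
html_theme_options = {
    "use_edit_page_button": True,
}
# tells the theme where the sources live so it can build the GitHub edit URL
html_context = {
    "github_user": "pandas-dev",
    "github_repo": "pandas",
    "github_version": "main",
    "doc_path": "doc/source",
}
```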
|
open
|
https://github.com/pandas-dev/pandas/pull/61956
| 61,956 |
[
"pre-commit.ci autofix",
"hey @afeld, could you take a look at this? TIA!",
"/preview",
"Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/61956/",
"> Thanks but the links do not direct to editable pages\r\n\r\nIt seems that’s the only page that isn’t working. Sorry, but could you explain how `index.rst` works? I have the file locally for some reason, but it’s not on GitHub. Is it supposed to redirect to `index.rst.template`?",
"Ah OK, yes I see this works for some straightforward `.rst` pages.\r\n\r\nYes ideally we would only only want this button on pages that are not templates or API pages. Is there a straightforward way in pydata-sphinx-theme to only add this button to select pages? ",
"Sadly, there is no straightforward way to exclude some pages. But, I am gonna try to make a extension for this (might take some time as I am new to sphinx lol).",
"Ok, so I added a new list called exclude_edit_page_button and it will exclude adding the button to those pages. But, I have some questions:\r\n\r\n1. Would you like the button to be in the This Page menu? (like how it was in #61997)\r\n2. What do you mean by API page? Should the button be excluded in every page in /reference?",
"1. I don't know what 'This Page' refers to, so no?\r\n2. The `/reference` pages can have the button a page for a particular pandas API e.g. https://pandas.pydata.org/preview/pandas-dev/pandas/61956/docs/reference/api/pandas.melt.html#pandas.melt should not have the button",
"<img width=\"152\" height=\"151\" alt=\"Screenshot 2025-08-01 at 9 52 48 AM\" src=\"https://github.com/user-attachments/assets/7272c6ca-bec6-40cc-8eb2-83364dd917d0\" /> \r\n\r\nI was referring to this - though I noticed it's not in production?\r\n\r\nI’ve updated the logic to exclude the index page and any pages that include api in their path.\r\nLet me know what you think!",
"pre-commit.ci autofix"
] |
[
{
"id": 134699,
"node_id": "MDU6TGFiZWwxMzQ2OTk=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Docs",
"name": "Docs",
"color": "3465A4",
"default": false,
"description": null
}
] |
3,263,981,792 |
PR_kwDOAA0YD86grBAe
|
docs: Improve README with helpful contributor resources
|
Added a small section to the end of the README that provides useful resources for new contributors, including:
- Official Pandas cheat sheet
- Beginner tutorials
- “Good first issues” link
- Slack community link
This addition aims to encourage and guide new contributors without altering any of the existing README content.
Let me know if this fits the community guidelines — happy to adjust!
|
closed
|
https://github.com/pandas-dev/pandas/pull/61954
| 61,954 |
[
"Thanks for the PR.\r\n\r\nSince there's no issue discussing this inclusion, I don't think we necessarily need to add this at this time so closing. If interested in contributing feel free to tackle issues labeled `good first issue`"
] |
[] |
3,263,634,189 |
PR_kwDOAA0YD86gp1ir
|
TST: run python-dev CI on 3.14-dev
|
I'd like to see how widespread the test breakage is due to https://github.com/pandas-dev/pandas/issues/61368.
Also, 3.14rc1 came out earlier this week, so pandas should probably start thinking about 3.14 support soonish.
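
Many of the resulting failures are tests whose `pytest.raises(match=...)` regex no longer matches CPython's reworded error messages. A version-tolerant fix is a regex alternation that accepts both wordings; a stdlib sketch using messages copied from the CI log below:

```python
import re

# accepts both the pre-3.14 and 3.14 out-of-range-day wording
msg = r"day is out of range for month|day \d+ must be in range"

old = "day is out of range for month: 2015-02-29"
new = "day 29 must be in range 1..28 for month 2 in year 2015: 2015-02-29"

assert re.search(msg, old)
assert re.search(msg, new)
```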
|
closed
|
https://github.com/pandas-dev/pandas/pull/61950
| 61,950 |
[
"After turning off the warning the tests results look much more reasonable. Here's the summary on Linux CI:\r\n\r\n<details>\r\n\r\n```\r\n=========================== short test summary info ============================\r\nFAILED pandas/tests/copy_view/test_chained_assignment_deprecation.py::test_series_setitem[0] - Failed: DID NOT WARN. No warnings of type (<class 'Warning'>,) were emitted.\r\n Emitted warnings: [].\r\nFAILED pandas/tests/copy_view/test_chained_assignment_deprecation.py::test_series_setitem[indexer1] - Failed: DID NOT WARN. No warnings of type (<class 'Warning'>,) were emitted.\r\n Emitted warnings: [].\r\nFAILED pandas/tests/copy_view/test_chained_assignment_deprecation.py::test_series_setitem[indexer2] - Failed: DID NOT WARN. No warnings of type (<class 'Warning'>,) were emitted.\r\n Emitted warnings: [].\r\nFAILED pandas/tests/copy_view/test_chained_assignment_deprecation.py::test_series_setitem[indexer3] - Failed: DID NOT WARN. No warnings of type (<class 'Warning'>,) were emitted.\r\n Emitted warnings: [].\r\nFAILED pandas/tests/copy_view/test_chained_assignment_deprecation.py::test_frame_setitem[a] - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_chained_assignment_deprecation.py::test_frame_setitem[indexer1] - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_chained_assignment_deprecation.py::test_frame_setitem[indexer2] - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_chained_assignment_deprecation.py::test_frame_setitem[indexer3] - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_clip.py::test_clip_chained_inplace - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED 
pandas/tests/copy_view/test_interp_fillna.py::test_fillna_chained_assignment - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_interp_fillna.py::test_interpolate_chained_assignment[interpolate] - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_interp_fillna.py::test_interpolate_chained_assignment[ffill] - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_interp_fillna.py::test_interpolate_chained_assignment[bfill] - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_methods.py::test_chained_where_mask[mask] - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_methods.py::test_chained_where_mask[where] - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_methods.py::test_update_chained_assignment - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/copy_view/test_replace.py::test_replace_chained_assignment - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/reshape/merge/test_merge.py::test_merge_suffix_length_error[a-a-suffixes0-too many values to unpack \\\\(expected 2\\\\)] - AssertionError: Regex pattern did not match.\r\n Regex: 'too many values to unpack \\\\(expected 2\\\\)'\r\n Input: 'too many values to unpack (expected 2, got 3)'\r\nFAILED pandas/tests/scalar/period/test_period.py::TestPeriodConstruction::test_invalid_arguments - AssertionError: Regex pattern did not match.\r\n Regex: 'day is out of range for month'\r\n Input: 'day 0 must be in range 1..31 for month 1 in year 1: 0'\r\nFAILED 
pandas/tests/scalar/timestamp/test_constructors.py::TestTimestampConstructorPositionalAndKeywordSupport::test_constructor_positional - AssertionError: Regex pattern did not match.\r\n Regex: 'day is out of range for month'\r\n Input: 'day 0 must be in range 1..31 for month 1 in year 2000'\r\nFAILED pandas/tests/scalar/timestamp/test_constructors.py::TestTimestampConstructorPositionalAndKeywordSupport::test_constructor_keyword - AssertionError: Regex pattern did not match.\r\n Regex: 'day is out of range for month'\r\n Input: 'day 0 must be in range 1..31 for month 1 in year 2000'\r\nFAILED pandas/tests/series/accessors/test_dt_accessor.py::TestSeriesDatetimeValues::test_dt_accessor_not_writeable - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/series/indexing/test_indexing.py::test_underlying_data_conversion - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/series/methods/test_update.py::TestUpdate::test_update - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexes/test_indexing.py::TestContains::test_contains_requires_hashable_raises[interval] - AssertionError: Regex pattern did not match.\r\n Regex: \"unhashable type: 'dict'|must be real number, not dict|an integer is required|\\\\{\\\\}|pandas\\\\._libs\\\\.interval\\\\.IntervalTree' is not iterable\"\r\n Input: \"argument of type 'pandas._libs.interval.IntervalTree' is not a container or iterable\"\r\nFAILED pandas/tests/indexing/multiindex/test_chaining_and_caching.py::test_detect_chained_assignment - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/multiindex/test_chaining_and_caching.py::test_cache_updating - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/multiindex/test_partial.py::TestMultiIndexPartial::test_partial_set 
- AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/multiindex/test_setitem.py::test_frame_setitem_copy_raises - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/multiindex/test_setitem.py::test_frame_setitem_copy_no_write - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestCaching::test_setitem_cache_updating_slices - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_setitem_chained_setfault - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_detect_chained_assignment - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_detect_chained_assignment_raises - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_detect_chained_assignment_fails - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_detect_chained_assignment_doc_example - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_detect_chained_assignment_object_dtype - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_detect_chained_assignment_undefined_column - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED 
pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_detect_chained_assignment_changing_dtype - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_setting_with_copy_bug - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_detect_chained_assignment_warnings_errors - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_iloc_setitem_chained_assignment - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/indexing/test_chaining_and_caching.py::TestChaining::test_getitem_loc_assignment_slice_state - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestToDatetime::test_datetime_invalid_scalar[None-00:01:99] - AssertionError: Regex pattern did not match.\r\n Regex: '^time data \"a\" doesn\\\\\\'t match format \"%H:%M:%S\". You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\\\'ISO8601\\\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\\\'mixed\\\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$|^Given date string \"a\" not likely a datetime$|^unconverted data remains when parsing with format \"%H:%M:%S\": \"9\". 
You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\\\'ISO8601\\\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\\\'mixed\\\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$|^second must be in 0..59: 00:01:99$'\r\n Input: 'second must be in 0..59, not 99: 00:01:99'\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestToDatetime::test_datetime_invalid_index[None-values1] - AssertionError: Regex pattern did not match.\r\n Regex: '^Given date string \"a\" not likely a datetime$|^time data \"a\" doesn\\\\\\'t match format \"%H:%M:%S\". You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\\\'ISO8601\\\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\\\'mixed\\\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$|^unconverted data remains when parsing with format \"%H:%M:%S\": \"9\". You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\\\'ISO8601\\\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\\\'mixed\\\\\\'`, and the format will be inferred for each element individually. 
You might want to use `dayfirst` alongside this.$|^second must be in 0..59: 00:01:99$'\r\n Input: 'second must be in 0..59, not 99: 00:01:99'\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestDaysInMonth::test_day_not_in_month_raise[True] - AssertionError: Regex pattern did not match.\r\n Regex: 'day is out of range for month: 2015-02-29'\r\n Input: 'day 29 must be in range 1..28 for month 2 in year 2015: 2015-02-29'\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestDaysInMonth::test_day_not_in_month_raise[False] - AssertionError: Regex pattern did not match.\r\n Regex: 'day is out of range for month: 2015-02-29'\r\n Input: 'day 29 must be in range 1..28 for month 2 in year 2015: 2015-02-29'\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestDaysInMonth::test_day_not_in_month_raise_value[True-2015-02-29-%Y-%m-%d-^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$] - AssertionError: Regex pattern did not match.\r\n Regex: \"^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$\"\r\n Input: \"day 29 must be in range 1..28 for month 2 in year 2015. 
You might want to try:\\n - passing `format` if your strings have a consistent format;\\n - passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\n - passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.\"\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestDaysInMonth::test_day_not_in_month_raise_value[True-2015-29-02-%Y-%d-%m-^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$] - AssertionError: Regex pattern did not match.\r\n Regex: \"^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$\"\r\n Input: \"day 29 must be in range 1..28 for month 2 in year 2015. You might want to try:\\n - passing `format` if your strings have a consistent format;\\n - passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\n - passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.\"\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestDaysInMonth::test_day_not_in_month_raise_value[True-2015-04-31-%Y-%m-%d-^day is out of range for month. 
You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$] - AssertionError: Regex pattern did not match.\r\n Regex: \"^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$\"\r\n Input: \"day 31 must be in range 1..30 for month 4 in year 2015. You might want to try:\\n - passing `format` if your strings have a consistent format;\\n - passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\n - passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.\"\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestDaysInMonth::test_day_not_in_month_raise_value[True-2015-31-04-%Y-%d-%m-^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$] - AssertionError: Regex pattern did not match.\r\n Regex: \"^day is out of range for month. 
You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$\"\r\n Input: \"day 31 must be in range 1..30 for month 4 in year 2015. You might want to try:\\n - passing `format` if your strings have a consistent format;\\n - passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\n - passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.\"\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestDaysInMonth::test_day_not_in_month_raise_value[False-2015-02-29-%Y-%m-%d-^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$] - AssertionError: Regex pattern did not match.\r\n Regex: \"^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$\"\r\n Input: \"day 29 must be in range 1..28 for month 2 in year 2015. 
You might want to try:\\n - passing `format` if your strings have a consistent format;\\n - passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\n - passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.\"\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestDaysInMonth::test_day_not_in_month_raise_value[False-2015-29-02-%Y-%d-%m-^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$] - AssertionError: Regex pattern did not match.\r\n Regex: \"^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$\"\r\n Input: \"day 29 must be in range 1..28 for month 2 in year 2015. You might want to try:\\n - passing `format` if your strings have a consistent format;\\n - passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\n - passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.\"\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestDaysInMonth::test_day_not_in_month_raise_value[False-2015-04-31-%Y-%m-%d-^day is out of range for month. 
You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$] - AssertionError: Regex pattern did not match.\r\n Regex: \"^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$\"\r\n Input: \"day 31 must be in range 1..30 for month 4 in year 2015. You might want to try:\\n - passing `format` if your strings have a consistent format;\\n - passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\n - passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.\"\r\nFAILED pandas/tests/tools/test_to_datetime.py::TestDaysInMonth::test_day_not_in_month_raise_value[False-2015-31-04-%Y-%d-%m-^day is out of range for month. You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$] - AssertionError: Regex pattern did not match.\r\n Regex: \"^day is out of range for month. 
You might want to try:\\\\n - passing `format` if your strings have a consistent format;\\\\n - passing `format=\\\\'ISO8601\\\\'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\\\n - passing `format=\\\\'mixed\\\\'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.$\"\r\n Input: \"day 31 must be in range 1..30 for month 4 in year 2015. You might want to try:\\n - passing `format` if your strings have a consistent format;\\n - passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;\\n - passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.\"\r\nFAILED pandas/tests/frame/indexing/test_setitem.py::TestDataFrameSetitemCopyViewSemantics::test_setitem_column_update_inplace - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/frame/indexing/test_xs.py::TestXS::test_xs_view - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/frame/methods/test_fillna.py::TestFillNA::test_fillna_on_column_view - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/frame/methods/test_interpolate.py::TestDataFrameInterpolate::test_interp_inplace - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/util/test_show_versions.py::test_show_versions - DeprecationWarning: codecs.open() is deprecated. Use open() instead.\r\nFAILED pandas/tests/util/test_show_versions.py::test_json_output_match - DeprecationWarning: codecs.open() is deprecated. Use open() instead.\r\nFAILED pandas/tests/io/parser/test_quoting.py::test_bad_quote_char[python-kwargs0-\"quotechar\" must be a(n)? 
1-character string] - AssertionError: Regex pattern did not match.\r\n Regex: '\"quotechar\" must be a(n)? 1-character string'\r\n Input: '\"quotechar\" must be a unicode character or None, not a string of length 3'\r\nFAILED pandas/tests/io/parser/test_quoting.py::test_bad_quote_char[python-kwargs2-\"quotechar\" must be string( or None)?, not int] - AssertionError: Regex pattern did not match.\r\n Regex: '\"quotechar\" must be string( or None)?, not int'\r\n Input: '\"quotechar\" must be a unicode character or None, not int'\r\nFAILED pandas/tests/io/parser/test_quoting.py::test_null_quote_char[python--0] - AssertionError: Regex pattern did not match.\r\n Regex: '\"quotechar\" must be a 1-character string'\r\n Input: '\"quotechar\" must be a unicode character or None, not a string of length 0'\r\nFAILED pandas/tests/io/test_common.py::test_codecs_encoding[csv-None] - DeprecationWarning: codecs.open() is deprecated. Use open() instead.\r\nFAILED pandas/tests/io/test_common.py::test_codecs_encoding[csv-utf-8] - DeprecationWarning: codecs.open() is deprecated. Use open() instead.\r\nFAILED pandas/tests/io/test_common.py::test_codecs_encoding[json-None] - DeprecationWarning: codecs.open() is deprecated. Use open() instead.\r\nFAILED pandas/tests/io/test_common.py::test_codecs_encoding[json-utf-8] - DeprecationWarning: codecs.open() is deprecated. Use open() instead.\r\nFAILED pandas/tests/frame/test_block_internals.py::TestDataFrameBlockInternals::test_stale_cached_series_bug_473 - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/frame/test_block_internals.py::test_update_inplace_sets_valid_block_values - AssertionError: Did not see expected warning of class 'ChainedAssignmentError'\r\nFAILED pandas/tests/generic/test_generic.py::TestGeneric::test_nonzero[DataFrame] - ValueError: The truth value of a DataFrame is ambiguous. 
Use a.empty, a.bool(), a.item(), a.any() or a.all().\r\nFAILED pandas/tests/generic/test_generic.py::TestGeneric::test_nonzero[Series] - ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().\r\n= 72 failed, 167066 passed, 24154 skipped, 781 xfailed, 83 xpassed, 31 warnings in 453.44s (0:07:33) =\r\n```\r\n\r\n</details>\r\n\r\nBesides the tests looking for warnings but not seeing any, I see some failures due to new deprecations in Python, some that look like changes in the regex and datetime modules maybe and a few other failures that I can't classify just looking at the failure report.\r\n\r\n@jorisvandenbossche did you ever have time to look closer at generating the chained assignment warning on 3.14 since it was reported in April? Unfortunately we're probably past the time when we can get C API changes merged into CPython to support this use-case, so it may not be easily feasible to detect what you're looking for just based on refcounts in 3.14 and newer.",
"> @jorisvandenbossche did you ever have time to look closer at generating the chained assignment warning on 3.14 since it was reported in April? Unfortunately we're probably past the time when we can get C API changes merged into CPython to support this use-case, so it may not be easily feasible to detect what you're looking for just based on refcounts in 3.14 and newer.\r\n\r\nI didn't get to it yet, but now installed python 3.14 to try myself and took a first look. I added some more context to the issue https://github.com/pandas-dev/pandas/issues/61368. Based on that I am also afraid we won't be able to \"fix\" this (but let's further discuss that on the issue). \r\nBut in any case, to start testing Python 3.14, certainly fine to disable those warnings for now (and then the tests that currently check for the presence of a warning can just be skipped, I think)",
"OK, I think I've gotten everything except for the two test failures in `pandas/tests/generic/test_generic.py`, which I don't understand. It looks like `pytest.raises` is broken somehow or it's broken as a side effect of something else? Because the exception should be getting caught as far as I can see but it's not.\r\n\r\n<details>\r\n\r\n```\r\ngoldbaum at Nathans-MBP in ~/Documents/pandas on 3.14-ci\r\n± pytest pandas/tests/generic/test_generic.py\r\n============================= test session starts ==============================\r\nplatform darwin -- Python 3.14.0rc1, pytest-8.4.1, pluggy-1.6.0\r\nrootdir: /Users/goldbaum/Documents/pandas\r\nconfigfile: pyproject.toml\r\nplugins: xdist-3.8.0, hypothesis-6.136.4, cov-6.2.1, run-parallel-0.5.1.dev0\r\ncollected 79 items\r\nCollected 0 items to run in parallel\r\n\r\npandas/tests/generic/test_generic.py .........FF....................................................................\r\n\r\n=================================== FAILURES ===================================\r\n_____________________ TestGeneric.test_nonzero[DataFrame] ______________________\r\n\r\nself = <pandas.tests.generic.test_generic.TestGeneric object at 0x10aa25a90>\r\nframe_or_series = <class 'pandas.DataFrame'>\r\n\r\n def test_nonzero(self, frame_or_series):\r\n # GH 4633\r\n # look at the boolean/nonzero behavior for objects\r\n obj = construct(frame_or_series, shape=4)\r\n msg = f\"The truth value of a {frame_or_series.__name__} is ambiguous\"\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 0)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 1)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj)\r\n\r\n obj = construct(frame_or_series, shape=4, value=1)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 0)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 1)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj)\r\n\r\n obj = construct(frame_or_series, shape=4, 
value=np.nan)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 0)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 1)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj)\r\n\r\n # empty\r\n obj = construct(frame_or_series, shape=0)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj)\r\n\r\n # invalid behaviors\r\n\r\n obj1 = construct(frame_or_series, shape=4, value=1)\r\n obj2 = construct(frame_or_series, shape=4, value=1)\r\n\r\n with pytest.raises(ValueError, match=msg):\r\n if obj1:\r\n pass\r\n\r\n with pytest.raises(ValueError, match=msg):\r\n> obj1 and obj2\r\n\r\npandas/tests/generic/test_generic.py:152:\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nself = 0 1 2 3\r\n0 1.0 1.0 1.0 1.0\r\n1 1.0 1.0 1.0 1.0\r\n2 1.0 1.0 1.0 1.0\r\n3 1.0 1.0 1.0 1.0\r\n\r\n @final\r\n def __bool__(self) -> NoReturn:\r\n> raise ValueError(\r\n f\"The truth value of a {type(self).__name__} is ambiguous. \"\r\n \"Use a.empty, a.bool(), a.item(), a.any() or a.all().\"\r\n )\r\nE ValueError: The truth value of a DataFrame is ambiguous. 
Use a.empty, a.bool(), a.item(), a.any() or a.all().\r\n\r\npandas/core/generic.py:1503: ValueError\r\n_______________________ TestGeneric.test_nonzero[Series] _______________________\r\n\r\nself = <pandas.tests.generic.test_generic.TestGeneric object at 0x10aa25b80>\r\nframe_or_series = <class 'pandas.Series'>\r\n\r\n def test_nonzero(self, frame_or_series):\r\n # GH 4633\r\n # look at the boolean/nonzero behavior for objects\r\n obj = construct(frame_or_series, shape=4)\r\n msg = f\"The truth value of a {frame_or_series.__name__} is ambiguous\"\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 0)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 1)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj)\r\n\r\n obj = construct(frame_or_series, shape=4, value=1)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 0)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 1)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj)\r\n\r\n obj = construct(frame_or_series, shape=4, value=np.nan)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 0)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj == 1)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj)\r\n\r\n # empty\r\n obj = construct(frame_or_series, shape=0)\r\n with pytest.raises(ValueError, match=msg):\r\n bool(obj)\r\n\r\n # invalid behaviors\r\n\r\n obj1 = construct(frame_or_series, shape=4, value=1)\r\n obj2 = construct(frame_or_series, shape=4, value=1)\r\n\r\n with pytest.raises(ValueError, match=msg):\r\n if obj1:\r\n pass\r\n\r\n with pytest.raises(ValueError, match=msg):\r\n> obj1 and obj2\r\n\r\npandas/tests/generic/test_generic.py:152:\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nself = 0 1.0\r\n1 1.0\r\n2 1.0\r\n3 1.0\r\ndtype: float64\r\n\r\n @final\r\n def __bool__(self) -> NoReturn:\r\n> raise ValueError(\r\n f\"The truth value of a {type(self).__name__} is ambiguous. 
\"\r\n \"Use a.empty, a.bool(), a.item(), a.any() or a.all().\"\r\n )\r\nE ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().\r\n\r\npandas/core/generic.py:1503: ValueError\r\n------ generated xml file: /Users/goldbaum/Documents/pandas/test-data.xml ------\r\n============================= slowest 30 durations =============================\r\n0.01s call pandas/tests/generic/test_generic.py::TestGeneric::test_truncate_out_of_bounds[DataFrame]\r\n\r\n(29 durations < 0.005s hidden. Use -vv to show these durations.)\r\n=========================== short test summary info ============================\r\nFAILED pandas/tests/generic/test_generic.py::TestGeneric::test_nonzero[DataFrame] - ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.boo...\r\nFAILED pandas/tests/generic/test_generic.py::TestGeneric::test_nonzero[Series] - ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool()...\r\n========================= 2 failed, 77 passed in 0.19s =====\r\n```\r\n\r\n</details>",
"> Because the exception should be getting caught as far as I can see but it's not.\r\n\r\nOK, here's a weird one. This script runs without error on Python 3.13 but dies with an uncaught `ValueError` on 3.14.0rc1:\r\n\r\n```python\r\nimport pandas as pd\r\nobj1 = pd.DataFrame({'0': [1, 1, 1, 1], '1': [1, 1, 1, 1]})\r\nobj2 = pd.DataFrame({'0': [1, 1, 1, 1], '1': [1, 1, 1, 1]})\r\ntry:\r\n obj1 and obj2\r\nexcept ValueError:\r\n pass\r\n```\r\n\r\n```\r\ngoldbaum at Nathans-MBP in ~/Documents/test\r\n○ python test.py\r\nTraceback (most recent call last):\r\n File \"/Users/goldbaum/Documents/test/test.py\", line 5, in <module>\r\n obj1 and obj2\r\n File \"/Users/goldbaum/.pyenv/versions/3.14.0rc1/lib/python3.14/site-packages/pandas/core/generic.py\", line 1577, in __nonzero__\r\n raise ValueError(\r\n ...<2 lines>...\r\n )\r\nValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().\r\n```\r\n\r\nSeems kinda like a Python bug to me?",
"@ngoldbaum I've been tracking this PR and happened to see your comments tonight and thought \"no way, that can't be\", but yeah I don't understand how that's possible without it being a python. Even if pandas is doing something wrong somehow it shouldn't be magically getting around a try/except.",
"> Seems kinda like a Python bug to me?\r\n\r\nI think this is https://github.com/python/cpython/issues/137288, which I think should be fixed in 3.14.0rc2. It's a little tricky to ignore these test failures because I can't actually catch these particular exceptions... I guess I can just skip them for `sys.version_info == (3, 14, 0, 'candidate', 1)` and then we can reassess when rc2 comes out?",
"@ngoldbaum thanks for finding that upstream issue, good to see it is already fixed. FWIW, numpy has the same issue (`obj1 and obj2` where those objects are numpy arrays also bypasses the `except ValueError`)",
"Yup! Not really surprising to me that the Pandas test suite caught the upstream bug but the NumPy tests missed it, Pandas has much more comprehensive tests...\n\nI think I might split off the changes for the new error messages and `codecs.open` into their own PR so they can be merged separately. If you decide we ultimately need to disable the warning and workarounds in C aren't possible, we can merge this and then work with @mpage to get a fix in for 3.14.1. But hopefully you figure out how to get the warning working again!",
"Actually on second thought I don't think it makes sense to PR the warnings changes without any 3.14 testing, so I'll leave that here. @jorisvandenbossche please feel free to cherry-pick fc51e5f6fa5a8573db4c7e00750f4d9499c029a7 if you end up coming up with a better approach. I'll go ahead and re-enable all the CI to make sure I didn't break anything on older Python versions.",
"Closing in favor of Joris' PRs. Please feel free to cherry-pick [180081b](https://github.com/pandas-dev/pandas/commit/180081b04fa9c18ebec787b1c40b321f93a0dce2)"
] |
[
{
"id": 48070600,
"node_id": "MDU6TGFiZWw0ODA3MDYwMA==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/CI",
"name": "CI",
"color": "a2bca7",
"default": false,
"description": "Continuous Integration"
}
] |
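The comments in the row above discuss a CPython 3.14.0rc1 regression (python/cpython#137288) in which a `ValueError` raised from `__bool__` during an `and` expression escaped an enclosing `except ValueError`. The control flow can be reproduced without pandas; the sketch below uses a hypothetical `Ambiguous` class standing in for `NDFrame`, and shows the behavior a correct interpreter is expected to have:

```python
class Ambiguous:
    """Minimal stand-in for pandas' NDFrame, whose __bool__ always raises."""

    def __bool__(self):
        raise ValueError("The truth value is ambiguous.")


a, b = Ambiguous(), Ambiguous()

caught = False
try:
    a and b  # `and` evaluates bool(a), which raises ValueError
except ValueError:
    caught = True  # on a correct interpreter the exception is caught here

print(caught)  # → True
```

On 3.14.0rc1 the `except` block was bypassed entirely, which is why `pytest.raises` appeared broken in the test report above; the fix landed for rc2.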
3,263,291,757 |
PR_kwDOAA0YD86goqyh
|
CI: enable doctest errors again + fixup categorical examples
|
Updating the categorical docstring examples after https://github.com/pandas-dev/pandas/pull/61891
This now closes https://github.com/pandas-dev/pandas/issues/61886 and enables the doctests again
|
closed
|
https://github.com/pandas-dev/pandas/pull/61947
| 61,947 |
[
"Thanks @jorisvandenbossche "
] |
[
{
"id": 134699,
"node_id": "MDU6TGFiZWwxMzQ2OTk=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Docs",
"name": "Docs",
"color": "3465A4",
"default": false,
"description": null
},
{
"id": 48070600,
"node_id": "MDU6TGFiZWw0ODA3MDYwMA==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/CI",
"name": "CI",
"color": "a2bca7",
"default": false,
"description": "Continuous Integration"
},
{
"id": 57522093,
"node_id": "MDU6TGFiZWw1NzUyMjA5Mw==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Strings",
"name": "Strings",
"color": "5319e7",
"default": false,
"description": "String extension data type and string data"
}
] |
3,262,892,048 |
PR_kwDOAA0YD86gnTqi
|
BUG: Fix Series.str.contains with compiled regex on Arrow string dtype
|
closes #61942
This PR fixes an issue in `Series.str.contains()` where passing a compiled regex object failed when the underlying string data is backed by PyArrow.
Please, provide feedback if my approach is not correct , I would love to improve and contribute in this.
|
closed
|
https://github.com/pandas-dev/pandas/pull/61946
| 61,946 |
[
"Hi @mroeschke \r\nI've worked on the issue\r\nBUG: Fix Series.str.contains with compiled regex on Arrow string dtype ([#61942])\r\nand have opened a pull request for it.\r\n\r\nI'd appreciate it if you could take a look and share your feedback.\r\nPlease let me know if anything needs to be improved or clarified.\r\n\r\nThanks!",
"Thankyou for the feedback!\r\nI will update that.",
"Additionally, if this is something that is not implemented by pyarrow, we should not raise a NotImplementedError, but fall back on the python object implementation (you can see a similar pattern in some other str methods, like `ArrowStringArray._str_replace`)",
"@jorisvandenbossche Thank you for the feedback! I will update the PR accordingly.\r\n\r\nWould you mind letting me know the reason behind the one failing check (pre-commit.ci)?\r\nThanks again!",
"> Would you mind letting me know the reason behind the one failing check (pre-commit.ci)?\r\n\r\nruff is failing, which is used for auto formatting. I would recommend to install the pre-commit locally to avoid having this fail on CI: https://pandas.pydata.org/docs/dev/development/contributing_codebase.html#pre-commit",
"hi @jorisvandenbossche\r\nPlease review this PR, and if area needs changes please suggest.\r\nAlso I want to know if i would need to write unit test for this .\r\n\r\nThankyou!",
"Can you try to run the test you added locally? Then you can make sure to get it working correctly. Right now it is still failing according to CI",
"Sure, I will try to run tests locally and update this PR .\r\nThankyou !",
"Thanks @Aniketsy",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 1d2233185083423b8ecb27986f11175b2d6e8fa6\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61946: BUG: Fix Series.str.contains with compiled regex on Arrow string dtype'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61946-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61946 on branch 2.3.x (BUG: Fix Series.str.contains with compiled regex on Arrow string dtype)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"@jorisvandenbossche \r\nBig thanks for patiently guiding me at every step and helping me get this right. I learned a lot from this, and I’m glad the PR is now merged!\r\nThankyou .",
"@Aniketsy you're welcome!",
"Manual backport -> https://github.com/pandas-dev/pandas/pull/62116"
] |
[
{
"id": 76811,
"node_id": "MDU6TGFiZWw3NjgxMQ==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Bug",
"name": "Bug",
"color": "e10c02",
"default": false,
"description": null
},
{
"id": 57522093,
"node_id": "MDU6TGFiZWw1NzUyMjA5Mw==",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Strings",
"name": "Strings",
"color": "5319e7",
"default": false,
"description": "String extension data type and string data"
},
{
"id": 1792318342,
"node_id": "MDU6TGFiZWwxNzkyMzE4MzQy",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Still%20Needs%20Manual%20Backport",
"name": "Still Needs Manual Backport",
"color": "ededed",
"default": false,
"description": null
},
{
"id": 3303158446,
"node_id": "MDU6TGFiZWwzMzAzMTU4NDQ2",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Arrow",
"name": "Arrow",
"color": "f9d0c4",
"default": false,
"description": "pyarrow functionality"
}
] |
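The fix discussed in the row above makes Arrow-backed `str.contains` fall back to the Python object implementation when given a compiled pattern, rather than raising. A pure-Python sketch of that dispatch pattern follows — the function name and structure are illustrative, not pandas' actual internals:

```python
import re


def str_contains(values, pat, regex=True):
    """Illustrative fallback dispatch: a fast path handles plain-string
    patterns, while compiled regex objects take the element-wise Python
    path, mirroring how ArrowStringArray methods fall back when the
    pyarrow kernel cannot handle the input."""
    if isinstance(pat, re.Pattern):
        # pyarrow's regex kernels only accept a string pattern, so a
        # pre-compiled pattern must be applied element by element
        return [None if v is None else bool(pat.search(v)) for v in values]
    if regex:
        compiled = re.compile(pat)
        return [None if v is None else bool(compiled.search(v)) for v in values]
    return [None if v is None else pat in v for v in values]


result = str_contains(["apple", "banana", None], re.compile("an"))
print(result)  # → [False, True, None]
```

Missing values propagate as `None`, matching the NaN propagation of `Series.str.contains` on object dtype.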
3,261,447,335 |
PR_kwDOAA0YD86gihiD
|
BUG: Fix TypeError in assert_index_equal when comparing CategoricalIndex with check_categorical=True and exact=False
|
#61935
- Fixes a bug where `assert_index_equal` raises a `TypeError` instead of `AssertionError` when comparing two `CategoricalIndex` objects with `check_categorical=True` and `exact=False`.
- Ensures consistency with expected testing behavior by properly raising an `AssertionError` in these cases.
Please let me know if my approach or fix needs any improvements . I’m open to feedback and happy to make changes based on suggestions.
|
open
|
https://github.com/pandas-dev/pandas/pull/61941
| 61,941 |
[
"Hi @mroeschke\r\nI've opened a pull request addressing\r\nBUG: Fix TypeError in assert_index_equal when comparing CategoricalIndex with check_categorical=True and exact=False ([#61941])\r\nThe changes are ready for review.\r\n\r\nI'd really appreciate it if you could take a look and provide feedback .\r\nPlease let me know if anything needs to be improved or clarified.\r\n\r\nThanks!",
"Hi @mroeschke,\r\n\r\nThank you for your review. I’ve updated the PR based on your feedback ,please have a look when convenient.\r\n\r\nAdditionally, I noticed one check failure (pre-commit.ci-pr) and wanted to ask if you could help clarify the reason behind it. Apologies if this isn't the appropriate way to raise this, please do let me know the correct approach if needed.\r\n\r\nThanks again!",
"<img width=\"885\" height=\"477\" alt=\"Checks fail\" src=\"https://github.com/user-attachments/assets/f5ab18a6-533e-4641-a10c-8a533d596ab2\" />\r\n\r\nHi @jorisvandenbossche, I ran pre-commit locally and all hooks passed. However, the GitHub checks are still showing a failure. Could you please advise if I’ve missed something?\r\n",
"Hi @mroeschke\r\nWhen you have a moment, could you please review this PR? I've been working on resolving the check failure, but haven't been able to pinpoint the issue yet. Any insights or suggestions you could provide would be greatly appreciated.\r\n\r\nThank you!",
"Hi @mroeschke \r\nI just wanted to check in on this PR to see if there’s anything further you’d like me to update or improve.\r\nThankyou !"
] |
[
{
"id": 127685,
"node_id": "MDU6TGFiZWwxMjc2ODU=",
"url": "https://api.github.com/repos/pandas-dev/pandas/labels/Testing",
"name": "Testing",
"color": "C4A000",
"default": false,
"description": "pandas testing functions or related to the test suite"
}
] |
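The PR in the row above converts an unexpected `TypeError` from comparing category values into the `AssertionError` that testing helpers are documented to raise. A minimal sketch of that error-translation pattern — the helper name and messages are hypothetical, not pandas' actual implementation:

```python
def assert_categories_equal(left, right):
    """Compare two category listings, translating comparison TypeErrors
    (e.g. unorderable mixed types) into AssertionError."""
    try:
        equal = sorted(left) == sorted(right)
    except TypeError as err:
        raise AssertionError(
            f"categories could not be compared: {err}"
        ) from err
    if not equal:
        raise AssertionError(f"categories differ: {left!r} != {right!r}")


# A mixed-type category list makes sorted() raise TypeError, which the
# helper surfaces as AssertionError instead of letting TypeError escape.
try:
    assert_categories_equal([1, "a"], [1, "a"])
except AssertionError as exc:
    print(type(exc).__name__)  # → AssertionError
```

Callers such as `pytest.raises(AssertionError)` then see a single, consistent exception type regardless of the category dtypes involved.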