id int64 | number int64 | title string | state string | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | html_url string | is_pull_request bool | pull_request_url string | pull_request_html_url string | user_login string | comments_count int64 | body string | labels list | reactions_plus1 int64 | reactions_minus1 int64 | reactions_laugh int64 | reactions_hooray int64 | reactions_confused int64 | reactions_heart int64 | reactions_rocket int64 | reactions_eyes int64 | comments list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,190,536,978
| 57,873
|
ENH: rolling window, make the very first window have the first element of my series on its left
|
open
| 2024-03-17T08:05:40
| 2025-07-15T20:42:11
| null |
https://github.com/pandas-dev/pandas/issues/57873
| true
| null | null |
abdullam
| 0
|
### Feature Type
- [ ] Adding new functionality to pandas
- [X] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
For a data series with a datetime index... I wish there were an easy way to make the very first element of my series be on the left side of the window when rolling starts.
As of pandas 2.1.4 I can align a rolling window so the very first element of my series is on the right side of the window when rolling starts (i.e. center=False).
I can also align the very first element to be in the middle of the window when rolling starts (i.e. center=True).
But there is no easy way to make the very first element be on the left side of the window when rolling starts.
For an integer window I can see that I can use FixedForwardWindowIndexer, as shown on the following page:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.api.indexers.FixedForwardWindowIndexer.html#pandas.api.indexers.FixedForwardWindowIndexer
But that doesn't seem to work when the index is datetime.
### Feature Description
```
import pandas as pd
# create series
s = pd.Series(range(5), index=pd.date_range('2020-01-01', periods=5, freq='1D'))
# create a function to call for each window
def roll_apply(window):
    # return first element of current window
    return window[:1].iloc[0]
# if center=False, the very first window will have the first element of my series on its right-most side
right = s.rolling('4D', center=False).apply(roll_apply)
# if center=True, the very first window will have the first element of my series at its center
center = s.rolling('4D', center=True).apply(roll_apply)
# there is no easy way to have the very first window have the first element of my series on its left
# ???
print("********* original *********")
print(s)
print("********* right aligned *********")
print(right)
print("********* center aligned *********")
print(center)
print("********* left aligned *********")
print("Not Implemented Yet!")
```
# output
```
********* original *********
2020-01-01 0
2020-01-02 1
2020-01-03 2
2020-01-04 3
2020-01-05 4
Freq: D, dtype: int64
********* right aligned *********
2020-01-01 0.0
2020-01-02 0.0
2020-01-03 0.0
2020-01-04 0.0
2020-01-05 1.0
Freq: D, dtype: float64
********* center aligned *********
2020-01-01 0.0
2020-01-02 0.0
2020-01-03 1.0
2020-01-04 2.0
2020-01-05 3.0
Freq: D, dtype: float64
********* left aligned *********
Not Implemented Yet!
```
### Alternative Solutions
For an integer window I can see that I can use FixedForwardWindowIndexer, as shown on the following page:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.api.indexers.FixedForwardWindowIndexer.html#pandas.api.indexers.FixedForwardWindowIndexer
But that doesn't seem to work when the index is datetime.
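One possible workaround today, shown as a hedged sketch rather than pandas API: subclass `pandas.api.indexers.BaseIndexer` (which stores extra keyword arguments as attributes) and compute forward-looking bounds from the datetime index yourself. The class name `ForwardTimeWindowIndexer` and its keyword attributes are invented for illustration, and the windows here are closed on the left and open on the right (`[t, t + 4D)`):
```python
import numpy as np
import pandas as pd
from pandas.api.indexers import BaseIndexer

class ForwardTimeWindowIndexer(BaseIndexer):
    # self.index_values and self.window_ns are set by BaseIndexer.__init__
    def get_window_bounds(self, num_values=0, min_periods=None, center=None, closed=None, step=None):
        start = np.arange(num_values, dtype=np.int64)
        # each window runs from t to the first timestamp >= t + window
        end = np.searchsorted(self.index_values, self.index_values + self.window_ns, side="left")
        return start, end.astype(np.int64)

s = pd.Series(range(5), index=pd.date_range('2020-01-01', periods=5, freq='1D'))
indexer = ForwardTimeWindowIndexer(
    index_values=s.index.values.view("i8"),  # timestamps as int64 nanoseconds
    window_ns=pd.Timedelta('4D').value,      # window length in nanoseconds
)
left = s.rolling(indexer, min_periods=1).apply(lambda w: w.iloc[0])
```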
### Additional Context
_No response_
|
[
"Enhancement",
"Window",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
2,190,357,851
| 57,872
|
DEP: Deprecate CoW and no_silent_downcasting options
|
closed
| 2024-03-16T23:39:08
| 2025-04-15T16:29:42
| 2025-04-15T16:29:41
|
https://github.com/pandas-dev/pandas/pull/57872
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57872
|
https://github.com/pandas-dev/pandas/pull/57872
|
phofl
| 3
|
- [ ] closes #59502 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Deprecate",
"Copy / view semantics",
"Stale"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Looks good, but appears to be valid test failures",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.",
"I think we would want this, but it appears this has gone stale so going to close to clear the queue"
] |
2,190,315,646
| 57,871
|
DOC: Fix rendering of whatsnew entries
|
closed
| 2024-03-16T23:01:40
| 2024-03-19T20:08:53
| 2024-03-17T00:32:48
|
https://github.com/pandas-dev/pandas/pull/57871
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57871
|
https://github.com/pandas-dev/pandas/pull/57871
|
phofl
| 4
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
cc @rhshadrach those render in a weird box if there is a space at the beginning
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"/preview",
"Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/57871/",
"We tend to keep those and clean them up when releasing, it’s a good placeholder until we cut the release ",
"Thanks @phofl"
] |
2,190,309,539
| 57,870
|
DEPR: Deprecate remaining copy usages
|
closed
| 2024-03-16T22:56:33
| 2024-03-26T20:25:58
| 2024-03-26T20:25:50
|
https://github.com/pandas-dev/pandas/pull/57870
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57870
|
https://github.com/pandas-dev/pandas/pull/57870
|
phofl
| 1
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Deprecate",
"Copy / view semantics"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @phofl "
] |
2,190,295,446
| 57,869
|
CLN: enforce deprecation of `NDFrame.interpolate` with `ffill/bfill/pad/backfill` methods
|
closed
| 2024-03-16T22:41:14
| 2024-03-22T22:00:26
| 2024-03-22T20:30:20
|
https://github.com/pandas-dev/pandas/pull/57869
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57869
|
https://github.com/pandas-dev/pandas/pull/57869
|
natmokval
| 2
|
xref #53607
enforced deprecation of `NDFrame.interpolate` with `"ffill" / "bfill" / "pad" / "backfill"` methods
|
[
"Missing-data",
"Clean"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for sticking with this @natmokval! ",
"Thanks @mroeschke for helping me with this PR. Enforcing this deprecation was more complicated than I expected."
] |
2,190,176,894
| 57,868
|
Fix stubname equal suffix in wide to long function
|
closed
| 2024-03-16T18:29:46
| 2025-05-26T09:17:08
| 2025-05-26T09:15:44
|
https://github.com/pandas-dev/pandas/pull/57868
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57868
|
https://github.com/pandas-dev/pandas/pull/57868
|
tqa236
| 4
|
- [x] closes #46939 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. (Will add after review)
As `pandas` allows an index in a multiindex and a column to have the same name, I think it's reasonable to allow a stubname equal to a suffix.
However, I think the root cause is in the `melt` function. It's stated in the docstring that
```
value_name : scalar, default 'value'
Name to use for the 'value' column, can't be an existing column label.
```
But if `value_name == var_name`, an illogical result is returned. Example:
```
df = pd.DataFrame(
    {
        "A": {0: "a", 1: "b", 2: "c"},
        "B": {0: 1, 1: 3, 2: 5},
        "C": {0: 2, 1: 4, 2: 6},
    }
)
pd.melt(
    df,
    id_vars=["A"],
    value_vars=["B"],
    var_name="myVarname",
    value_name="myVarname",
)
```
Result:

I think it's more reasonable to also raise an error if `value_name == var_name` in `melt`.
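A sketch of the proposed guard (hypothetical code, not the current `melt` implementation):
```python
# hypothetical validation at the top of pandas.core.reshape.melt.melt
if var_name == value_name:
    raise ValueError(
        "var_name and value_name cannot be equal, as the result "
        "would contain two columns with the same name"
    )
```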
|
[
"Reshaping"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Hi @mroeschke, can you take a look at this PR, please? It should be ready for review.",
"hi @datapythonista, thank you for the review. I also force pushed to the branch because I noticed that you also didn't make any comment on the actual content of the PR. Now this PR should be ready for review.",
"Thanks for the update here @tqa236. The fix you implemented seems a hack, not the proper fix. You should identify what's causing the exception, which seem kind of unrelated, and fix that. Please let me know if you want to have a look more in detail to the actual problem, otherwise we can close this PR.",
"thank you for taking a look @datapythonista, unfortunately it's been a year and the context of this issue is no longer fresh in my mind. Let's close this PR."
] |
2,190,128,166
| 57,867
|
DOC: Add examples for creating an index accessor
|
closed
| 2024-03-16T16:39:14
| 2024-04-03T18:12:59
| 2024-04-03T18:12:52
|
https://github.com/pandas-dev/pandas/pull/57867
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57867
|
https://github.com/pandas-dev/pandas/pull/57867
|
sjalkote
| 9
|
Picked up issue #49202. This adds examples for creating index, dataframe, and series accessors.
- [X] closes #49202
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"pre-commit.ci autofix",
"pre-commit.ci autofix",
"pre-commit.ci autofix",
"Can you have a look at the CI errors please?",
"The checks are still failing, could you please fix them?",
"Hi @datapythonista I believe we have fixed all the CI errors. Could you please re-review?",
"/preview",
"Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/57867/",
"Thanks @sjalkote "
] |
2,190,027,652
| 57,866
|
DOC: Update CoW user guide docs
|
closed
| 2024-03-16T14:17:51
| 2024-03-16T17:24:08
| 2024-03-16T17:24:05
|
https://github.com/pandas-dev/pandas/pull/57866
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57866
|
https://github.com/pandas-dev/pandas/pull/57866
|
phofl
| 0
|
- [ ] closes #57828 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Just a few clean-ups after the deprecation
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
2,190,008,434
| 57,865
|
BUG: Replace on Series/DataFrame stops replacing after first NA
|
closed
| 2024-03-16T13:59:56
| 2025-08-16T15:56:42
| 2024-03-20T17:06:56
|
https://github.com/pandas-dev/pandas/pull/57865
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57865
|
https://github.com/pandas-dev/pandas/pull/57865
|
asishm
| 8
|
- [x] closes #56599
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
The issue was that the line `a = a[mask]` was only triggered for `ndarray` and wasn't hit when `dtype='string'`, but the `np.place` logic applied as long as the result was an `ndarray`.
The old behavior produced things like the following (depending on the number and location of NAs):
```py
In [2]: s = pd.Series(['m', 'm', pd.NA, 'm', 'm', 'm'], dtype='string')
In [3]: s.replace({'m': 't'}, regex=True)
Out[3]:
0 t
1 t
2 <NA>
3 m
4 t
5 t
dtype: string
In [4]: s = pd.Series(['m', 'm', pd.NA, pd.NA, 'm', 'm', 'm'], dtype='string')
In [5]: s.replace({'m': 't'}, regex=True)
Out[5]:
0 t
1 t
2 <NA>
3 <NA>
4 m
5 m
6 t
dtype: string
In [6]: s = pd.Series(['m', 'm', pd.NA, 'm', 'm', pd.NA, 'm', 'm'], dtype='string')
In [7]: s.replace({'m': 't'}, regex=True)
Out[7]:
0 t
1 t
2 <NA>
3 m
4 t
5 <NA>
6 t
7 m
dtype: string
```
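For context, a minimal demonstration of why the alignment breaks: `np.place` consumes its replacement values sequentially over the `True` slots of the mask, so the replacements must be computed from the masked array to line up (the arrays below are invented for illustration):
```python
import numpy as np

out = np.array(["a", "b", "c", "d"], dtype=object)
mask = np.array([True, False, True, True])
# values are written into the True positions in order: 0 -> "x", 2 -> "y", 3 -> "z"
np.place(out, mask, ["x", "y", "z"])
print(out)  # ['x' 'b' 'y' 'z']
```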
|
[
"Missing-data",
"replace"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"pre-commit.ci autofix",
"Thanks @asishm ",
"Along with other string type fixes, I think it'd be good to backport this to 2.3.x. Any objection @mroeschke @jorisvandenbossche?",
"No objection from me",
"@MeeseeksDev backport to 2.3.x",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 0f7ded2a3a637b312f6ad454fd6c0b89d3d3e7aa\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #57865: BUG: Replace on Series/DataFrame stops replacing after first NA'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-57865-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #57865 on branch 2.3.x (BUG: Replace on Series/DataFrame stops replacing after first NA)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"@rhshadrach I see there is commit linking to this PR, but not an actual open PR. Did you start backporting but didn't finish? (I was just planning to backport this one, but want to avoid double work)",
"Ah, I see this is already applied to the 2.3.x branch in https://github.com/pandas-dev/pandas/commit/8d1cc9088d358a4fb364a0a75fa4fed834211e21 (just not through a PR, and now I remember you mentioning you did an accidental push to 2.3.x). \r\nSo no need to backport anymore. Reminder to try to think about removing the label (and ideally referencing the backport PR/commit, so things can be tracked later if needed)"
] |
2,189,955,940
| 57,864
|
Fix 28021: Add 5% padding in timeseries line plot
|
closed
| 2024-03-16T12:56:24
| 2024-08-15T04:05:32
| 2024-04-17T16:51:33
|
https://github.com/pandas-dev/pandas/pull/57864
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57864
|
https://github.com/pandas-dev/pandas/pull/57864
|
Compro-Prasad
| 3
|
- [x] closes #28021
- [x] All code checks passed
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` (**will do this once this PR is reviewed**)
- [x] Tests passed (`pytest pandas/tests/plotting/test_datetimelike.py`)
|
[
"Stale"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Was hunting through commits and found https://github.com/pandas-dev/pandas/commit/3795272ad8987f5af22c5887fc95bb35c1b6bb69#diff-b742bb6314b1f7b55dd7a0987ae6102f529a8d4faa6a5524ba9122d47138ad23R393 which does the exact thing that I have done. But I don't know if this is the right way to do it because it was removed. I am new to this and would like some references to do this better.",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.",
"Thanks for the pull request, but it appears to have gone stale. Additionally, I don't think adding arbitrary tolerances is a sufficient solution here. Closing but let's continue to discuss solutions in the original issue"
] |
2,189,943,034
| 57,863
|
DOC: Fix F821 error in docstring
|
closed
| 2024-03-16T12:21:02
| 2024-03-19T19:54:09
| 2024-03-19T19:51:22
|
https://github.com/pandas-dev/pandas/pull/57863
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57863
|
https://github.com/pandas-dev/pandas/pull/57863
|
tqa236
| 4
|
- [ ] xref #22900 (I think this PR can also close the linked issue, as `pandas` and `numpy` are now considered default imports before linting docstrings)
The command below returns no error now.
```
flake8 --doctest --ignore="" --select=F821 pandas/ | grep -v "F821 undefined name 'pd'" | grep -v "F821 undefined name 'np'"
```
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@tqa236 should we add a check in the CI for this?",
"Hello @datapythonista, I fully agree that fixing linting errors without enforcing them is quite inefficient. But I wonder if the CI check is a long term solution as it's based on `flake8 --doctest` while we're moving more and more to `ruff`.\r\n\r\nThere's an open issue on `ruff` to track this feature: https://github.com/astral-sh/ruff/issues/3542\r\n\r\nIn my opinion, we can wait for the feature to be implemented in `ruff` and enable it here rather than implementing our own check with `flake8 --doctest` for 2 reasons\r\n- The code in all of the official docstrings are already run and checked if there's any syntax errors already.\r\n- What's left for this CI check is only the internal docstring (in `conftest`,... ), and the number of errors is quite low (it in fact reduces compared to your result last year from the issue). \r\n\r\nLet me know what you think.",
"Sounds good, up to you, I was just wondering if you considered it. Thanks for all the details and for the continued work on this.",
"Thanks @tqa236 "
] |
2,189,776,102
| 57,862
|
DOC: RT03 fix for various DataFrameGroupBy and SeriesGroupBy methods
|
closed
| 2024-03-16T04:53:08
| 2024-04-01T18:41:19
| 2024-04-01T18:41:12
|
https://github.com/pandas-dev/pandas/pull/57862
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57862
|
https://github.com/pandas-dev/pandas/pull/57862
|
quangngd
| 5
|
- [x] xref https://github.com/pandas-dev/pandas/issues/57416
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"/preview",
"Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/57862/",
"@quangngd we changed how we specify the pending errors in `code_checks.sh` and you'll have to resolve the conflicts. Sorry for the inconvenience.",
"@datapythonista resolved",
"Thanks @quangngd "
] |
2,189,515,562
| 57,861
|
Backport PR #57843: DOC: Remove Dask and Modin sections in scale.rst in favor of linking to ecosystem docs.
|
closed
| 2024-03-15T21:46:21
| 2024-03-15T22:43:42
| 2024-03-15T22:43:38
|
https://github.com/pandas-dev/pandas/pull/57861
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57861
|
https://github.com/pandas-dev/pandas/pull/57861
|
mroeschke
| 0
| null |
[
"Docs",
"Dependencies"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
2,189,446,849
| 57,860
|
CLN: Remove unused code
|
closed
| 2024-03-15T20:48:01
| 2024-03-15T22:20:49
| 2024-03-15T21:41:36
|
https://github.com/pandas-dev/pandas/pull/57860
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57860
|
https://github.com/pandas-dev/pandas/pull/57860
|
tqa236
| 1
|
I conclude my `vulture` check (for now)
|
[
"Clean"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for all your work here!"
] |
2,189,411,580
| 57,859
|
BUG: Regression in v2.2.x Pandas DataFrame Raises a __deepcopy__ error when class placed in `attrs`
|
open
| 2024-03-15T20:24:35
| 2024-03-29T12:47:55
| null |
https://github.com/pandas-dev/pandas/issues/57859
| true
| null | null |
achapkowski
| 6
|
### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import json as _ujson
from collections import OrderedDict
from collections.abc import MutableMapping, Mapping
###########################################################################
class InsensitiveDict(MutableMapping):
    """
    A case-insensitive ``dict``-like object used to update and alter JSON.
    A variant of a case-less dictionary that allows for dot and bracket notation.
    """
    # ----------------------------------------------------------------------
    def __init__(self, data=None):
        self._store = OrderedDict()
        if data is None:
            data = {}
        self.update(data)
        self._to_isd(self)
    # ----------------------------------------------------------------------
    def __repr__(self):
        return str(dict(self.items()))
    # ----------------------------------------------------------------------
    def __str__(self):
        return str(dict(self.items()))
    # ----------------------------------------------------------------------
    def __setitem__(self, key, value):
        if str(key).lower() in {"_store"}:
            super(InsensitiveDict, self).__setattr__(key, value)
        else:
            if isinstance(value, dict):
                self._store[str(key).lower()] = (key, InsensitiveDict(value))
            else:
                self._store[str(key).lower()] = (key, value)
    # ----------------------------------------------------------------------
    def __getitem__(self, key):
        return self._store[str(key).lower()][1]
    # ----------------------------------------------------------------------
    def __delitem__(self, key):
        del self._store[str(key).lower()]
    # ----------------------------------------------------------------------
    def __getattr__(self, key):
        if str(key).lower() in {"_store"}:
            return self._store
        else:
            return self._store[str(key).lower()][1]
    # ----------------------------------------------------------------------
    def __setattr__(self, key, value):
        if str(key).lower() in {"_store"}:
            super(InsensitiveDict, self).__setattr__(key, value)
        else:
            if isinstance(value, dict):
                self._store[str(key).lower()] = (key, InsensitiveDict(value))
            else:
                self._store[str(key).lower()] = (key, value)
    # ----------------------------------------------------------------------
    def __dir__(self):
        return self.keys()
    # ----------------------------------------------------------------------
    def __iter__(self):
        return (casedkey for casedkey, mappedvalue in self._store.values())
    # ----------------------------------------------------------------------
    def __len__(self):
        return len(self._store)
    # ----------------------------------------------------------------------
    def _lower_items(self):
        """Like iteritems(), but with all lowercase keys."""
        return (
            (lowerkey, keyval[1])
            for (lowerkey, keyval) in self._store.items()
        )
    # ----------------------------------------------------------------------
    def __setstate__(self, d):
        """unpickle support"""
        self.__dict__.update(InsensitiveDict(d).__dict__)
        self = InsensitiveDict(d)
    # ----------------------------------------------------------------------
    def __getstate__(self):
        """pickle support"""
        return _ujson.loads(self.json)
    # ----------------------------------------------------------------------
    @classmethod
    def from_dict(cls, o):
        """Converts a dict to an InsensitiveDict"""
        return cls(o)
    # ----------------------------------------------------------------------
    @classmethod
    def from_json(cls, o):
        """Converts a JSON string to an InsensitiveDict"""
        if isinstance(o, str):
            o = _ujson.loads(o)
            return InsensitiveDict(o)
        return InsensitiveDict.from_dict(o)
    # ----------------------------------------------------------------------
    def __eq__(self, other):
        if isinstance(other, Mapping):
            other = InsensitiveDict(other)
        else:
            return NotImplemented
        return dict(self._lower_items()) == dict(other._lower_items())
    # ----------------------------------------------------------------------
    def copy(self):
        return InsensitiveDict(self._store.values())
    # ---------------------------------------------------------------------
    def _to_isd(self, data):
        """converts a dictionary from InsensitiveDict to a dictionary"""
        for k, v in data.items():
            if isinstance(v, (dict, InsensitiveDict)):
                data[k] = self._to_isd(v)
            elif isinstance(v, (list, tuple)):
                l = []
                for i in v:
                    if isinstance(i, dict):
                        l.append(InsensitiveDict(i))
                    else:
                        l.append(i)
                if isinstance(v, tuple):
                    l = tuple(l)
                data[k] = l
        return data
    # ---------------------------------------------------------------------
    def _json(self):
        """converts an InsensitiveDict to a dictionary"""
        d = {}
        for k, v in self.items():
            if isinstance(v, InsensitiveDict):
                d[k] = v._json()
            elif type(v) in (list, tuple):
                l = []
                for i in v:
                    if isinstance(i, InsensitiveDict):
                        l.append(i._json())
                    else:
                        l.append(i)
                if type(v) is tuple:
                    v = tuple(l)
                else:
                    v = l
                d[k] = v
            else:
                d[k] = v  # not list, tuple, or InsensitiveDict
        return d
    # ----------------------------------------------------------------------
    @property
    def json(self):
        """returns the value as a JSON string"""
        o = self._json()  # dict(self.copy())
        return _ujson.dumps(dict(o))

data = {
    "A": [1, 2, 3],
    "B": ['a', 'b', None],
}
df = pd.DataFrame(data=data)
df.attrs["metadata"] = InsensitiveDict(data)
print(df.head())  # error here
```
### Issue Description
In pandas 2.1 and below, we could put whatever we wanted in `attrs` and the data would not throw a `__deepcopy__` error. We would like `attrs` not to require its contents to be deep-copyable, as in previous versions.
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.attrs.html
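The failure can be reproduced without the full class above; a minimal sketch (the class name `Undeepcopyable` is invented and simulates any object whose deep copy fails):
```python
import pandas as pd

class Undeepcopyable:
    def __deepcopy__(self, memo):
        # stands in for any third-party object that cannot be deep-copied
        raise TypeError("cannot deepcopy this object")

df = pd.DataFrame({"A": [1, 2, 3]})
df.attrs["metadata"] = Undeepcopyable()
df.head()  # pandas >= 2.2 deep-copies attrs when propagating them, so this raises
```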
### Expected Behavior
If any custom class stored in `attrs` doesn't implement `__deepcopy__`, the pandas data is essentially useless: any operation raises.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : bdc79c146c2e32f2cab629be240f01658cfb6cc2
python : 3.9.18.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : Intel64 Family 6 Model 154 Stepping 3, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.2.1
numpy : 1.23.5
pytz : 2023.3
dateutil : 2.8.2
setuptools : 67.6.1
pip : 23.3.1
Cython : 0.29.34
pytest : None
hypothesis : 6.72.0
sphinx : None
blosc : None
feather : None
xlsxwriter : 3.1.0
lxml.etree : 4.9.2
html5lib : 1.1
pymysql : 1.0.3
psycopg2 : 2.9.6
jinja2 : 3.1.2
IPython : 8.18.1
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.2
bottleneck : 1.3.7
dataframe-api-compat : None
fastparquet : 2023.2.0
fsspec : 2023.4.0
gcsfs : 2023.4.0
matplotlib : 3.7.1
numba : 0.56.4
numexpr : 2.8.4
odfpy : None
openpyxl : 3.1.2
pandas_gbq : 0.19.1
pyarrow : 15.0.1
pyreadstat : 1.2.1
python-calamine : None
pyxlsb : 1.0.10
s3fs : None
scipy : 1.10.1
sqlalchemy : 2.0.9
tables : 3.8.0
tabulate : 0.9.0
xarray : 2023.4.1
xlrd : 2.0.1
zstandard : 0.21.0
tzdata : 2023.3
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"To follow up, this PR https://github.com/pandas-dev/pandas/pull/55314 is the breaking change. ",
"As outlined in the PR, deepcopy was introduced to prevent unwanted side effects when attrs are used and propagated. \n\nThe failure mode of using a class without `__deepcopy__` is transparent, just like the fix required (implement deepcopy) to be able to use attrs with the custom class.\n\nOn the other hand, side effects from not deepcopy'ing attrs in the background are intransparent to the end user. So in the end using deepcopy is probably preferable.",
"@achapkowski Maybe try `df.attrs[\"metadata\"] = {**InsensitiveDict(data)}`.",
"@achapkowski Since you are making a custom implementation of a dictionary, you can implement a `__deepcopy__` like this:\r\n\r\n```python\r\n def __deepcopy__(self, memo):\r\n result = InsensitiveDict(deepcopy({**self}))\r\n memo[id(self)] = result\r\n return result\r\n```",
"I know I can implement deepcopy, but the issue is if I use any 3rd party library, like this class from requests, I have to monkey patch it. \r\n",
"How is it problematic? From what I read it's a problem with geopandas not pandas. This is a major regression because most classes and packages do not implement deepcopy. "
] |
2,189,401,535
| 57,858
|
CLN: Remove unused code
|
closed
| 2024-03-15T20:16:22
| 2024-03-15T22:20:55
| 2024-03-15T21:07:26
|
https://github.com/pandas-dev/pandas/pull/57858
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57858
|
https://github.com/pandas-dev/pandas/pull/57858
|
tqa236
| 1
|
I find it a bit hard to believe that this function is unused, but it really _does_ look like that
|
[
"Clean"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @tqa236 "
] |
2,189,381,012
| 57,857
|
REF: Don't materialize range if not needed
|
closed
| 2024-03-15T19:58:55
| 2024-03-16T22:51:38
| 2024-03-16T22:46:31
|
https://github.com/pandas-dev/pandas/pull/57857
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57857
|
https://github.com/pandas-dev/pandas/pull/57857
|
mroeschke
| 1
| null |
[
"Clean"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"thx @mroeschke "
] |
2,189,223,515
| 57,856
|
PERF: Remove slower short-circuit checks in RangeIndex.__getitem__
|
closed
| 2024-03-15T18:18:02
| 2024-03-16T22:25:04
| 2024-03-16T13:41:10
|
https://github.com/pandas-dev/pandas/pull/57856
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57856
|
https://github.com/pandas-dev/pandas/pull/57856
|
mroeschke
| 1
|
- [ ] closes #57712 (Replace xxxx with the GitHub issue number)
Although the `all()` and `not any()` checks were supposed to short-circuit, I think these cases are not common enough to always check in this if branch. The `not any()` check gets subsumed by `np.flatnonzero`. The `all()` check cannot short-circuit, unlike the check in `_shallow_copy`.
A probably more common case:
```python
In [1]: from pandas import *; import numpy as np
+ /opt/miniconda3/envs/pandas-dev/bin/ninja
[1/1] Generating write_version_file with a custom command
In [2]: N = 10**6
In [3]: s = Series(date_range("2000-01-01", freq="s", periods=N))
In [4]: s[np.random.randint(1, N, 100)] = NaT
In [6]: %timeit s.dropna()
7.49 ms ± 333 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) # main
In [5]: %timeit s.dropna()
7.22 ms ± 239 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) # PR
```
The worst case where `if key.all()` would have gotten hit (will be improved by https://github.com/pandas-dev/pandas/pull/57812)
```python
In [1]: import pandas as pd, numpy as np, pyarrow as pa
+ /opt/miniconda3/envs/pandas-dev/bin/ninja
[1/1] Generating write_version_file with a custom command
In [2]: ri = pd.RangeIndex(range(100_000))
In [3]: mask = [True] * 100_000
In [4]: %timeit ri[mask]
2.82 ms ± 31.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) # PR
In [4]: %timeit ri[mask]
2.41 ms ± 23.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) # main
```
|
[
"Performance",
"Index"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"thx @mroeschke "
] |
2,189,173,280
| 57,855
|
PERF: Skip _shallow_copy for RangeIndex._join_empty where other is RangeIndex
|
closed
| 2024-03-15T17:46:12
| 2024-03-16T22:25:50
| 2024-03-16T13:54:07
|
https://github.com/pandas-dev/pandas/pull/57855
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57855
|
https://github.com/pandas-dev/pandas/pull/57855
|
mroeschke
| 1
|
- [ ] closes #57852 (Replace xxxx with the GitHub issue number)
```python
In [1]: from pandas import *; import numpy as np
+ /opt/miniconda3/envs/pandas-dev/bin/ninja
[1/1] Generating write_version_file with a custom command
In [2]: df = DataFrame({"A": np.arange(100_000)})
In [3]: df_empty = DataFrame(columns=["B", "C"], dtype="int64")
In [4]: %timeit df_empty.join(df, how="inner")
369 µs ± 4.17 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) # main
In [4]: %timeit df_empty.join(df, how="inner")
126 µs ± 1.74 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each) # PR
```
|
[
"Performance",
"Index"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"thx @mroeschke "
] |
2,188,990,897
| 57,854
|
Backport PR #57848 on branch 2.2.x (DOC: Remove duplicated Series.dt.normalize from docs)
|
closed
| 2024-03-15T16:09:57
| 2024-03-15T18:16:46
| 2024-03-15T18:16:46
|
https://github.com/pandas-dev/pandas/pull/57854
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57854
|
https://github.com/pandas-dev/pandas/pull/57854
|
meeseeksmachine
| 0
|
Backport PR #57848: DOC: Remove duplicated Series.dt.normalize from docs
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
2,188,710,653
| 57,853
|
BUG: upcasting when assigning to an enlarged series does not produce a FutureWarning
|
open
| 2024-03-15T14:46:32
| 2024-03-15T14:46:54
| null |
https://github.com/pandas-dev/pandas/issues/57853
| true
| null | null |
arnaudlegout
| 0
|
### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
s = pd.Series([1, 2, 3], index=list('abc'), dtype='UInt8')
print(s.dtype)
# we upcast on enlargement
s.loc[4] = 'x'
print(s.dtype)
```
### Issue Description
According to https://pandas.pydata.org/pdeps/0006-ban-upcasting.html
raising a `FutureWarning` on upcasting during enlargement is out of the scope of this PDEP.
However, it is inconsistent with the new behavior. I expected to see an issue report discussing this problem, but I did not find one (searching for `upcasting` in the issue reports). I created this one to either start the discussion or be redirected to where it is discussed.
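For contrast, a sketch of the inconsistency with numpy-backed dtypes (assuming pandas 2.2 behavior under PDEP-6):
```python
import pandas as pd

s1 = pd.Series([1, 2, 3], index=list('abc'), dtype='uint8')
s1.loc['a'] = 'x'   # existing label: PDEP-6 emits a FutureWarning before upcasting

s2 = pd.Series([1, 2, 3], index=list('abc'), dtype='uint8')
s2.loc['d'] = 'x'   # enlargement: silently upcasts, no warning
print(s2.dtype)     # object
```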
### Expected Behavior
I would expect to have a FutureWarning in this case (and an error in the future).
### Installed Versions
<details>
In [412]: pd.show_versions()
INSTALLED VERSIONS
------------------
commit : bdc79c146c2e32f2cab629be240f01658cfb6cc2
python : 3.12.0.final.0
python-bits : 64
OS : Windows
OS-release : 11
Version : 10.0.22631
machine : AMD64
processor : Intel64 Family 6 Model 140 Stepping 1, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : fr_FR.cp1252
pandas : 2.2.1
numpy : 1.26.4
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 68.2.2
pip : 23.3.1
Cython : 3.0.8
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.3
IPython : 8.20.0
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.2
bottleneck : 1.3.7
dataframe-api-compat : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : 3.8.0
numba : 0.59.0
numexpr : 2.8.7
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 14.0.2
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.11.3
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.3
qtpy : 2.4.1
pyqt5 : None
</details>
|
[
"Bug",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
2,188,494,399
| 57,852
|
Potential regression induced by "PERF: RangeIndex.__getitem__ with integers return RangeIndex"
|
closed
| 2024-03-15T13:16:07
| 2024-03-16T13:54:08
| 2024-03-16T13:54:08
|
https://github.com/pandas-dev/pandas/issues/57852
| true
| null | null |
DeaMariaLeon
| 0
|
PR #57770
If it is expected, please ignore this issue.
Benchmark:
`join_merge.JoinEmpty.time_inner_join_left_empty` - no permutations
@mroeschke

|
[
"Reshaping"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
2,187,766,731
| 57,851
|
CLN: Remove unused code
|
closed
| 2024-03-15T05:41:57
| 2024-03-15T16:07:14
| 2024-03-15T16:04:49
|
https://github.com/pandas-dev/pandas/pull/57851
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57851
|
https://github.com/pandas-dev/pandas/pull/57851
|
tqa236
| 3
| null |
[
"Clean"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Hi Marc, I think it's indeed unrelated and it's also beyond my understanding to debug it as well.\r\n\r\nJust FYI, Matthew is debugging it here: https://github.com/pandas-dev/pandas/pull/57845",
"Thanks. Yes, I meant to have a look to make sure it's unrelated.",
"Thanks @tqa236 "
] |
2,187,722,576
| 57,850
|
DOC: update link in benchmarks.md
|
closed
| 2024-03-15T04:55:55
| 2024-03-18T23:54:47
| 2024-03-18T23:54:47
|
https://github.com/pandas-dev/pandas/issues/57850
| true
| null | null |
rootsmusic
| 1
|
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://github.com/pandas-dev/pandas/blob/main/web/pandas/community/benchmarks.md#community-benchmarks
### Documentation problem
The H2O.ai benchmark was deprecated. It has been forked by DuckDB, which maintains [updated benchmarks](https://duckdb.org/2023/11/03/db-benchmark-update).
### Suggested fix for documentation
Replace H2O.ai with the DuckDB Labs [database-like ops benchmark](https://duckdblabs.github.io/db-benchmark/).
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"take"
] |
2,187,677,007
| 57,849
|
ENH: Bring back append() on data frames
|
closed
| 2024-03-15T04:02:34
| 2024-03-16T13:54:57
| 2024-03-16T13:54:57
|
https://github.com/pandas-dev/pandas/issues/57849
| true
| null | null |
arencambre
| 1
|
### Feature Type
- [ ] Adding new functionality to pandas
- [X] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
I liked what the .append() method did for data frames. Please bring it back.
Per #35407, this was the tenth most sought-after method in the pandas API, per traffic on its documentation page. That makes sense, as this was a sensible method whose name made clear what it does.
Concerns about the performance of .append() could be true. That is an invitation to improve append().
Concerns about a poor name chosen for a method on a different class could be true. Fix the other class.
I am currently wrapping my head around switching my perfectly fine, performant code to use pandas.concat(). While I managed it, it was not a good use of my time. Let's not make others suffer through this.
### Feature Description
See **Problem Description** section.
### Alternative Solutions
See **Problem Description** section.
### Additional Context
_No response_
|
[
"Enhancement",
"Needs Triage"
] | 3
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"This was discussed and rejected a long time ago. Nothing changed with respect to the reasons that made us remove it"
] |
2,187,490,523
| 57,848
|
DOC: Remove duplicated Series.dt.normalize from docs
|
closed
| 2024-03-15T00:23:54
| 2024-03-15T16:09:31
| 2024-03-15T16:09:24
|
https://github.com/pandas-dev/pandas/pull/57848
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57848
|
https://github.com/pandas-dev/pandas/pull/57848
|
datapythonista
| 2
|
`Series.dt.normalize` is listed both as an attribute and as a method. Removing the attribute instance, since it's a method.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"> Does this need to be backported to 2.2.x?\r\n\r\nNo big difference, but if we do the docs of the 2.2 version won't have the duplicate. So I added it to the milestone.",
"Thanks @datapythonista "
] |
2,187,385,020
| 57,847
|
DEPR: don't silently drop Nones in concat
|
closed
| 2024-03-14T22:32:10
| 2024-04-07T10:59:11
| 2024-03-16T20:16:11
|
https://github.com/pandas-dev/pandas/pull/57847
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57847
|
https://github.com/pandas-dev/pandas/pull/57847
|
twoertwein
| 0
|
- [x] closes #57846 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Might need a deprecation cycle? Or can we get away with this change in 3.0?
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
2,187,315,539
| 57,846
|
Deprecate accepting None in pd.concat
|
open
| 2024-03-14T21:42:31
| 2025-07-16T15:44:15
| null |
https://github.com/pandas-dev/pandas/issues/57846
| true
| null | null |
twoertwein
| 4
|
`pd.concat` accepts iterables that may contain `Series`, `DataFrame`, and `None`, where `None`s are simply ignored.
```py
pd.concat([None, series, None, series, None]) # works, same as pd.concat([series, series])
pd.concat([None]) # raises
```
It would be great to deprecate/remove accepting `None`s. This would help to better annotate `concat`: https://github.com/pandas-dev/pandas-stubs/issues/888
pandas-stubs currently has to choose between either false positives (accepting `concat([None])`) or false negatives (rejecting some `concat(Iterable[Any/Unknown])` calls that could succeed).
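For reference, a sketch of the kind of use case discussed in the comments below (the loader function and file names are hypothetical):
```python
import pandas as pd

def maybe_load(path: str) -> pd.DataFrame | None:
    # hypothetical loader: files may be missing, and that is allowed
    try:
        return pd.read_csv(path)
    except FileNotFoundError:
        return None

frames = [maybe_load(p) for p in ["a.csv", "b.csv", "c.csv"]]
# today the Nones are silently dropped; pd.concat raises only if *all*
# entries are None
result = pd.concat(frames)
```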
|
[
"Reshaping",
"Deprecate",
"Needs Discussion"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"I am -1 on simply deprecating this because of typing reasons. Let's think about use cases before we make a decision. We definitely can't do this without a deprecation",
"I agree, it would be good first to understand which use case supporting `None` enables!\r\n\r\n@Dr-Irv found a way to get it to type check without having to decide between false negatives/positives :)",
"If the function should accept None's in `objs`, then it seems most natural to me for it to return an empty DataFrame if `objs` contains **only** None's.\r\n\r\nThis is backwards compatible and would also resolve the typing issue.\r\n\r\nA hypothetical use case for `Iterable[DataFrame | None]`: you have a list of files to load and parse and you want to put them into a single DataFrame, but not all of the filenames you've been given exist, or maybe parsing is allowed to fail. I could see someone implementing this as a function that returns `DataFrame | None`, and putting that into a list comprehension. So a situation where you just want to put everything you have into one table, and if everything you have is None, then your table is empty.",
"I was surprised to find this works. Looks like Nones are filtered out in _clean_keys_and_objs, which has a comment pointing to #1649 back in 2012. I don't see much conversation there about whether this should be allowed.\n\n+1 on deprecating."
] |
2,187,304,348
| 57,845
|
Debug ASAN build
|
closed
| 2024-03-14T21:33:34
| 2024-03-19T17:02:40
| 2024-03-19T17:02:38
|
https://github.com/pandas-dev/pandas/pull/57845
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57845
|
https://github.com/pandas-dev/pandas/pull/57845
|
mroeschke
| 9
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"When I try locally on main I see the following:\r\n\r\n```\r\n$ pip install -ve . --no-build-isolation --config-settings=setup-args=\"-Dbuildtype=debug\" --config-settings editable-verbose=true --config-settings=builddir=\"asan\" --config-settings=setup-args=\"-Db_sanitize=address\"\r\n$ ASAN_OPTIONS=detect_leaks=0 LD_PRELOAD=$(gcc -print-file-name=libasan.so) python -m pytest pandas/tests\r\n+ /home/willayd/mambaforge/envs/pandas-dev/bin/ninja\r\n[1/1] Generating write_version_file with a custom command\r\nFAILED: _version_meson.py \r\n/home/willayd/mambaforge/envs/pandas-dev/bin/python3.10 ../generate_version.py -o _version_meson.py\r\nAddressSanitizer:DEADLYSIGNAL\r\nAddressSanitizer:DEADLYSIGNAL\r\nAddressSanitizer:DEADLYSIGNAL\r\nAddressSanitizer:DEADLYSIGNAL\r\nninja: build stopped: subcommand failed.\r\nImportError while loading conftest '/home/willayd/clones/pandas/pandas/conftest.py'.\r\n../../mambaforge/envs/pandas-dev/lib/python3.10/site-packages/_pandas_editable_loader.py:268: in find_spec\r\n tree = self.rebuild()\r\n../../mambaforge/envs/pandas-dev/lib/python3.10/site-packages/_pandas_editable_loader.py:309: in rebuild\r\n subprocess.run(self._build_cmd, cwd=self._build_path, env=env, stdout=stdout, check=True)\r\n../../mambaforge/envs/pandas-dev/lib/python3.10/subprocess.py:526: in run\r\n raise CalledProcessError(retcode, process.args,\r\nE subprocess.CalledProcessError: Command '['/home/willayd/mambaforge/envs/pandas-dev/bin/ninja']' returned non-zero exit status 1.\r\n```\r\n\r\nSo _might_ have something to do with the meson step we have to generate a version file up front",
"cc @lithomas1 if you have any thoughts here",
"Also I notice that for the version file in `meson.build` , `build_by_default: true` and `build_always_stale: true` which seems like the reason why the version file is always runs when pytest runs. Shouldn't the file version only be build if the git commit changes?",
"I don't think so. There's no inbuilt mechanism to detect a git commit change.",
"My hypothesis is that maybe our ``json`` module is getting imported instead of the stdlib one by versioneer.\r\n(but maybe we renamed our extension module).\r\n\r\nI'm travelling today, but I'll look over the weekend/on Monday.",
"@WillAyd \r\n\r\nDo you have time to debug this further?\r\n\r\nI can't really reproduce ASAN stuff locally on my mac and trying to reproduce on Gitpod seems to hang it. \r\n\r\nAFAICT, the version script at least runs successfully once. \r\n(since the pandas version is correctly set to ``3.0.0dev0`` in the wheel filename)",
"Yep will try to take a look in the next few days",
"Does anyone know any tricks to isolating what happens during fixture collection? I am seeing the DEADLYSIGNAL relatively often during fixture collection, but am unsure where all that messaging is going",
"I think `--setup-show` is the only way to gain insights into what happens during collection"
] |
2,187,080,099
| 57,844
|
CLN: Remove unused functions
|
closed
| 2024-03-14T19:01:45
| 2024-03-15T05:01:50
| 2024-03-14T20:46:17
|
https://github.com/pandas-dev/pandas/pull/57844
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57844
|
https://github.com/pandas-dev/pandas/pull/57844
|
tqa236
| 4
|
It's pretty hard to be 100% sure that these non-private functions are unused, but I'm pretty sure that they are not commonly used, if at all, after some searches in Google, GitHub, and the pandas docs.
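For reference, a typical `vulture` invocation of the kind presumably used for these clean-up PRs (the flags are standard `vulture` options, not taken from this PR):
```
vulture pandas/ --min-confidence 80 --sort-by-size
```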
|
[
"Clean"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Keep them coming! @tqa236 ",
"@mroeschke thanks! I'm combing through `vulture` to see what can be removed. May I know where the documentation about the private directories is, please? It'll surely help me decide if a report is a true or false positive",
"> May I know where the documentation about the private directories is, please?\r\n\r\nhttps://pandas.pydata.org/docs/reference/index.html#api-reference",
"Thank you!"
] |
2,187,018,797
| 57,843
|
DOC: Remove Dask and Modin sections in scale.rst in favor of linking to ecosystem docs.
|
closed
| 2024-03-14T18:34:00
| 2024-03-15T22:03:31
| 2024-03-15T21:43:32
|
https://github.com/pandas-dev/pandas/pull/57843
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57843
|
https://github.com/pandas-dev/pandas/pull/57843
|
yukikitayama
| 3
|
- [x] closes #57831
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.2.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 d4ddc805a03586f9ce0cc1cc541709419ae47c4a\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #57843: DOC: Remove Dask and Modin sections in scale.rst in favor of linking to ecosystem docs.'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.2.x:auto-backport-of-pr-57843-on-2.2.x\n```\n\n5. Create a PR against branch 2.2.x, I would have named this PR:\n\n> \"Backport PR #57843 on branch 2.2.x (DOC: Remove Dask and Modin sections in scale.rst in favor of linking to ecosystem docs.)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"Thanks @yukikitayama \r\n\r\n(Backporting since the dask dependency changes were backported too)",
"Thank you for reviewing @mroeschke !"
] |
2,186,890,670
| 57,842
|
CLN: Remove private unused code
|
closed
| 2024-03-14T17:21:56
| 2024-03-14T18:37:40
| 2024-03-14T18:25:20
|
https://github.com/pandas-dev/pandas/pull/57842
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57842
|
https://github.com/pandas-dev/pandas/pull/57842
|
tqa236
| 1
|
Remove some more private unused code
|
[
"Clean"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @tqa236 "
] |
2,186,874,137
| 57,841
|
improve accuracy of to_pytimedelta
|
closed
| 2024-03-14T17:12:55
| 2024-03-25T23:04:19
| 2024-03-25T18:10:55
|
https://github.com/pandas-dev/pandas/pull/57841
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57841
|
https://github.com/pandas-dev/pandas/pull/57841
|
rohanjain101
| 4
|
```
>>> a = pd.Timedelta(1152921504609987375)
>>> a.to_pytimedelta()
datetime.timedelta(days=13343, seconds=86304, microseconds=609988)
>>>
```
The microseconds field is 609988, but 609987 is expected.
```
>>> a = pd.Timedelta(1152921504609987374)
>>> a.to_pytimedelta()
datetime.timedelta(days=13343, seconds=86304, microseconds=609987)
>>>
```
In this example, a difference of 1 nanosecond shouldn't result in a difference of 1 microsecond when converting from the ns to the us unit.
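A minimal sketch of the precision loss, assuming the conversion goes through float64 true division (as discussed in the comments below):
```python
ns = 1152921504609987375
print(divmod(ns, 1000))   # (1152921504609987, 375): exact integer arithmetic
print(ns / 1000)          # 1152921504609987.5: float64 cannot represent ...987.375
print(round(ns / 1000))   # 1152921504609988: one microsecond too high
```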
|
[
"Timedelta"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Ok, but the reason this behavior is occurring I don't believe is related to python's round function. It's because python's integer division goes through a floating point which introduces accuracy issues for large integers (and in this case, result is rounded anyway). The issue is with the division, not the rounding done by timedelta.",
"Sorry, you are correct. I didn't realize the problem with the floating point precision.\r\n\r\nI wonder what's the impact on performance of this change, did you execute a benchmark to compare the execution times of both implementations?\r\n\r\n@MarcoGorelli @jbrockmendel thoughts on this change?",
"Added whatsnew, are there existing benchmarks for comparing performance?",
"Thanks @rohanjain101 "
] |
2,186,620,465
| 57,840
|
BUG: pyarrow.Array.from_pandas converts empty timestamp[s][pyarrow, UTC] pandas Series to ChunkedArray, not TimestampArray
|
open
| 2024-03-14T15:09:17
| 2024-03-14T15:33:58
| null |
https://github.com/pandas-dev/pandas/issues/57840
| true
| null | null |
Wainberg
| 1
|
### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
>>> import pandas as pd
>>> import pyarrow as pa
>>> pa.Array.from_pandas(pd.Series([], dtype=pd.ArrowDtype(pa.timestamp('s')))) # correct
<pyarrow.lib.TimestampArray object at 0x7fb665f77fa0>
[]
>>> pa.Array.from_pandas(pd.Series([], dtype=pd.ArrowDtype(pa.timestamp('s'))).dt.tz_localize('UTC')) # incorrect
<pyarrow.lib.ChunkedArray object at 0x7fb665fd8680>
[
]
```
Same issue with `pa.array()` instead of `pa.Array.from_pandas()`. The Arrow folks [say](https://github.com/apache/arrow/issues/40538) it's a pandas issue.
### Issue Description
`pyarrow.Array.from_pandas` converts an empty `timestamp[s][pyarrow, UTC]` pandas Series to a `ChunkedArray`, not a `TimestampArray`. It correctly converts to a `TimestampArray` when there is no timezone.
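A possible interim workaround on the caller's side (an assumption, not a fix for the underlying issue): collapse the `ChunkedArray` with pyarrow's `combine_chunks`:
```python
import pandas as pd
import pyarrow as pa

ser = pd.Series([], dtype=pd.ArrowDtype(pa.timestamp('s'))).dt.tz_localize('UTC')
arr = pa.Array.from_pandas(ser)
if isinstance(arr, pa.ChunkedArray):
    arr = arr.combine_chunks()  # flattens the chunks into a contiguous TimestampArray
```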
### Expected Behavior
```python
>>> import pandas as pd
>>> import pyarrow as pa
>>> pa.Array.from_pandas(pd.Series([], dtype=pd.ArrowDtype(pa.timestamp('s'))))
<pyarrow.lib.TimestampArray object at 0x7fb665f77fa0>
[]
>>> pa.Array.from_pandas(pd.Series([], dtype=pd.ArrowDtype(pa.timestamp('s'))).dt.tz_localize('UTC'))
<pyarrow.lib.TimestampArray object at 0x7fb665fd8680>
[]
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : f538741432edf55c6b9fb5d0d496d2dd1d7c2457
python : 3.12.2.final.0
python-bits : 64
OS : Linux
OS-release : 4.4.0-22621-Microsoft
Version : #2506-Microsoft Fri Jan 01 08:00:00 PST 2016
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : C.UTF-8
pandas : 2.2.0
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.8.2
setuptools : 69.1.0
pip : 24.0
Cython : 3.0.8
pytest : 8.0.0
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : 3.1.9
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.3
IPython : 8.21.0
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : 3.8.3
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
pyarrow : 14.0.2
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.12.0
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : 0.22.0
tzdata : 2024.1
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Timezones",
"Upstream issue",
"Arrow"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the report. Added a comment upstream to pyarrow that _might_ actually be a pyarrow bug https://github.com/apache/arrow/issues/40538#issuecomment-1997736174"
] |
2,186,467,224
| 57,839
|
CLN: Remove some unused code
|
closed
| 2024-03-14T14:02:25
| 2024-03-14T18:37:43
| 2024-03-14T18:23:48
|
https://github.com/pandas-dev/pandas/pull/57839
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57839
|
https://github.com/pandas-dev/pandas/pull/57839
|
tqa236
| 1
| null |
[
"Clean"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @tqa236 "
] |
2,186,294,542
| 57,838
|
BUG: Can't change datetime precision in columns/rows
|
open
| 2024-03-14T12:49:04
| 2024-04-06T12:14:29
| null |
https://github.com/pandas-dev/pandas/issues/57838
| true
| null | null |
erezinman
| 5
|
### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
# ONLY WORKING CONVERSION:
df = pd.DataFrame({'time': pd.to_datetime(['2021-01-01 12:00:00', '2021-01-01 12:00:01', '2021-01-01 12:00:02']), 'value': [1, 2, 3]})
df['time'] = df['time'].astype('M8[us]')
print(df.dtypes)
# time datetime64[us]
# value int64
# dtype: object
# NON-WORKING CONVERSIONS
df = pd.DataFrame({'time': pd.to_datetime(['2021-01-01 12:00:00', '2021-01-01 12:00:01', '2021-01-01 12:00:02']),
'value': [1, 2, 3]})
df.iloc[:, 0] = df.iloc[:, 0].astype('M8[us]')
print(df.dtypes)
# time datetime64[ns]
# value int64
# dtype: object
df = pd.DataFrame({'time': pd.to_datetime(['2021-01-01 12:00:00', '2021-01-01 12:00:01', '2021-01-01 12:00:02']),
'value': [1, 2, 3]})
df.loc[:, ['time']] = df.loc[:, ['time']].astype('M8[us]')
print(df.dtypes)
# time datetime64[ns]
# value int64
# dtype: object
df = pd.DataFrame({'time': pd.to_datetime(['2021-01-01 12:00:00', '2021-01-01 12:00:01', '2021-01-01 12:00:02']),
'value': [1, 2, 3]})
idxs = [0]
axis = 1
df.iloc(axis=axis)[idxs] = df.iloc(axis=axis)[idxs].astype('M8[us]')
print(df.dtypes)
# time datetime64[ns]
# value int64
# dtype: object
```
### Issue Description
Conversion of columns (or rows) between datetime dtypes with different precision does not change the datatype of the columns (except in the simplest case).
The absurd part is that if I were to change the dtype of the "value" column in the above example, all of these examples would have worked.
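A workaround, assuming the goal is simply to change the column's resolution: replace the column wholesale (plain column assignment or `DataFrame.astype`) instead of setting through `.loc`/`.iloc`, which writes into the existing ns-resolution values in place:
```python
import pandas as pd

df = pd.DataFrame({'time': pd.to_datetime(['2021-01-01 12:00:00']), 'value': [1]})
# swapping out the column replaces the underlying array, so the new dtype sticks
df = df.astype({'time': 'datetime64[us]'})
print(df.dtypes)  # time is datetime64[us]
```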
### Expected Behavior
All printouts should be the same as the first:
```
time datetime64[us]
value int64
dtype: object
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : bdc79c146c2e32f2cab629be240f01658cfb6cc2
python : 3.9.18.final.0
python-bits : 64
OS : Linux
OS-release : 5.15.0-91-generic
Version : #101~20.04.1-Ubuntu SMP Thu Nov 16 14:22:28 UTC 2023
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_IL
LOCALE : en_IL.UTF-8
pandas : 2.2.1
numpy : 1.24.4
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 68.2.2
pip : 23.3
Cython : 3.0.6
pytest : 7.4.3
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : None
numba : None
numexpr : 2.8.6
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Datetime"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"cc @MarcoGorelli is this pdep6 related? It looks like this is a case of upcasting\r\n\r\n@erezinman all cases that don't work set inplace instead of swapping out the underlying data, so different semantics can happen. ",
"thanks for the ping\r\n\r\nlooks like it's been like this since at least 2.0.2, so I don't think it's related to any pdep-6 work (which only started in 2.1):\r\n```python\r\n\r\nIn [2]: import pandas as pd\r\n\r\nIn [3]:\r\n ...: df = pd.DataFrame({'time': pd.to_datetime(['2021-01-01 12:00:00', '2021-01-01 12:00:01', '2021-01-01 12:00:02'])\r\n ...: ,\r\n ...: 'value': [1, 2, 3]})\r\n\r\nIn [4]: df.iloc[:, 0] = df.iloc[:, 0].astype('M8[us]')\r\n\r\nIn [5]: df.dtypes\r\nOut[5]:\r\ntime datetime64[ns]\r\nvalue int64\r\ndtype: object\r\n\r\nIn [6]: pd.__version__\r\nOut[6]: '2.0.2'\r\n```",
"take",
"Hello @MarcoGorelli and @phofl \r\n\r\nI believe I have corrected this bug, however one of the tests (pandas/tests/copy_view/test_indexing.py::test_subset_set_column_with_loc) seems to be failing with my solution. The output is as follows:\r\n\r\n```python\r\n\r\[email protected](\r\n \"dtype\", [\"int64\", \"float64\"], ids=[\"single-block\", \"mixed-block\"]\r\n )\r\n def test_subset_set_column_with_loc(backend, dtype):\r\n # Case: setting a single column with loc on a viewing subset\r\n # -> subset.loc[:, col] = value\r\n _, DataFrame, _ = backend\r\n df = DataFrame(\r\n {\"a\": [1, 2, 3], \"b\": [4, 5, 6], \"c\": np.array([7, 8, 9], dtype=dtype)}\r\n )\r\n df_orig = df.copy()\r\n subset = df[1:3]\r\n\r\n subset.loc[:, \"a\"] = np.array([10, 11], dtype=\"int64\")\r\n\r\n subset._mgr._verify_integrity()\r\n expected = DataFrame(\r\n {\"a\": [10, 11], \"b\": [5, 6], \"c\": np.array([8, 9], dtype=dtype)},\r\n index=range(1, 3),\r\n )\r\n> tm.assert_frame_equal(subset, expected)\r\nE AssertionError: Attributes of DataFrame.iloc[:, 0] (column name=\"a\") are different\r\nE\r\nE Attribute \"dtype\" are different\r\nE [left]: int64\r\nE [right]: Int64\r\n\r\n``` \r\n\r\nIf I switch the indexing method to subset[\"a\"] = np.array([10, 11], dtype=\"int64\") (instead of subset.loc[:, \"a\"]) and run the test with the original code (without my alterations), the test fails with the exact same error as mine.\r\n\r\nMy question is: if, according to the issue, the only indexing method providing the correct output is using the name of the column itself, i.e. subset[\"a\"], and when running it the test fails, could this test be wrong? \r\n\r\nThank you in advance",
"@MarcoGorelli I think this is a duplicate of https://github.com/pandas-dev/pandas/issues/52593 since the int equivalent of\r\n\r\n```py\r\ndf = pd.DataFrame({'a': [1,2,3]}, dtype='int64')\r\ndf.loc[:, 'a'] = df.loc[:, 'a'].astype('int32')\r\nprint(df.dtypes) # a is still int64\r\n```\r\nalso doesn't change the dtype"
] |
2,186,007,345
| 57,837
|
BUG: Using DateOffset with shift on a daylight savings transition produces error
|
open
| 2024-03-14T10:34:48
| 2025-07-15T21:01:58
| null |
https://github.com/pandas-dev/pandas/issues/57837
| true
| null | null |
martheveldhuis
| 3
|
### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
dt = pd.date_range("2024-03-31 00:00", "2024-03-31 07:00", freq="1h", tz="utc")
df = pd.DataFrame(index=dt, data={"A":range(0, len(dt))})
df_nl = df.tz_convert(tz="Europe/Amsterdam")
df_nl["B"] = df_nl["A"].shift(freq=pd.DateOffset(hours=1))
```
### Issue Description
This last line gives an error:
`pytz.exceptions.NonExistentTimeError: 2024-03-31 02:00:00`
With full traceback:
```
File "<stdin>", line 1, in <module>
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/pandas/core/generic.py", line 11230, in shift
return self._shift_with_freq(periods, axis, freq)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/pandas/core/generic.py", line 11263, in _shift_with_freq
new_ax = index.shift(periods, freq)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/pandas/core/indexes/datetimelike.py", line 503, in shift
return self + offset
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/pandas/core/ops/common.py", line 76, in new_method
return method(self, other)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/pandas/core/arraylike.py", line 186, in __add__
return self._arith_method(other, operator.add)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/pandas/core/indexes/base.py", line 7238, in _arith_method
return super()._arith_method(other, op)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/pandas/core/base.py", line 1382, in _arith_method
result = ops.arithmetic_op(lvalues, rvalues, op)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/pandas/core/ops/array_ops.py", line 273, in arithmetic_op
res_values = op(left, right)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/pandas/core/ops/common.py", line 76, in new_method
return method(self, other)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/pandas/core/arrays/datetimelike.py", line 1372, in __add__
result = self._add_offset(other)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/pandas/core/arrays/datetimes.py", line 828, in _add_offset
result = result.tz_localize(self.tz)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/pandas/core/arrays/_mixins.py", line 81, in method
return meth(self, *args, **kwargs)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/pandas/core/arrays/datetimes.py", line 1088, in tz_localize
new_dates = tzconversion.tz_localize_to_utc(
File "tzconversion.pyx", line 431, in pandas._libs.tslibs.tzconversion.tz_localize_to_utc
```
### Expected Behavior
This would be the desired output:
```
A B
2024-03-31 01:00:00+01:00 0 NaN
2024-03-31 03:00:00+02:00 1 NaN
2024-03-31 04:00:00+02:00 2 1
2024-03-31 05:00:00+02:00 3 2
2024-03-31 06:00:00+02:00 4 3
2024-03-31 07:00:00+02:00 5 4
2024-03-31 08:00:00+02:00 6 5
2024-03-31 09:00:00+02:00 7 6
```
The point of converting a UTC timeseries to Europe/Amsterdam time is that I want to look at people's behaviour, which stays consistent within their timezone. E.g. if someone goes to work every day at 08:00, that remains 08:00 in their timezone, even after the daylight savings shift. In UTC, that person appears to leave one hour earlier (at 07:00). Converting to Europe/Amsterdam time and then shifting should handle this correctly.
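Until `shift` handles this, a sketch of a manual wall-clock shift, assuming shifting nonexistent times forward and turning ambiguous ones into `NaT` is acceptable:
```python
import pandas as pd

dt = pd.date_range("2024-03-31 00:00", "2024-03-31 07:00", freq="1h", tz="utc")
df_nl = pd.DataFrame(index=dt, data={"A": range(len(dt))}).tz_convert("Europe/Amsterdam")
# drop to naive wall-clock time, add the offset, then re-localize with
# explicit handling for the DST gap and the ambiguous fall-back hour
shifted = (df_nl.index.tz_localize(None) + pd.DateOffset(hours=1)).tz_localize(
    "Europe/Amsterdam", nonexistent="shift_forward", ambiguous="NaT"
)
print(shifted)
```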
### Installed Versions
<details>
commit : bdc79c146c2e32f2cab629be240f01658cfb6cc2
python : 3.10.11.final.0
python-bits : 64
OS : Linux
OS-release : 5.15.0-1040-azure
Version : #47~20.04.1-Ubuntu SMP Fri Jun 2 21:38:08 UTC 2023
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.1
numpy : 1.25.0
pytz : 2024.1
dateutil : 2.8.2
setuptools : 67.8.0
pip : 23.1.2
Cython : 0.29.35
pytest : 8.1.1
hypothesis : 6.99.5
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.14.0
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2023.6.0
gcsfs : None
matplotlib : 3.7.1
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 15.0.1
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.10.1
sqlalchemy : 2.0.16
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Frequency",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"This works as a workaround:\r\n```\r\ndf_nl[\"B\"] = df_nl[\"A\"].shift(freq=pd.Timedelta(\"1h\"))\r\n```\r\nIdeally `shift` would handle `pd.DateOffset` DST jumps.",
"Another example:\r\n```python\r\npd.Timestamp(\"2024-04-25\", tz=\"Africa/Cairo\") + pd.DateOffset(days=1)\r\n```\r\nwhich raises\r\n```python\r\npytz.exceptions.NonExistentTimeError: 2024-04-26 00:00:00\r\n```\r\n\r\nI think the best solution would be to add the options 'nonexistent' and 'ambiguous' to `pd.DateOffset` (similar as we have for e.g. the [floor](https://pandas.pydata.org/docs/reference/api/pandas.Timestamp.floor.html) method), such that one can do:\r\n```python\r\npd.Timestamp(\"2024-04-25\", tz=\"Africa/Cairo\") + pd.DateOffset(days=1, nonexistent=\"shift_forward\", ambiguous=False)\r\n```\r\nand get as result:\r\n```python\r\nTimestamp('2024-04-26 01:00:00+0300', tz='Africa/Cairo')\r\n```\r\nI think that having this capability will also make it easier to resolve bugs like #58380 and #51211.",
"> This works as a workaround:\r\n> \r\n> ```\r\n> df_nl[\"B\"] = df_nl[\"A\"].shift(freq=pd.Timedelta(\"1h\"))\r\n> ```\r\n> \r\n> Ideally `shift` would handle `pd.DateOffset` DST jumps.\r\n\r\nUnfortunately, this doesn't solve the issue. I will provide a better example:\r\n\r\n\r\n```\r\nstart_date = pd.to_datetime(\"2024-03-30 07:00:00\").tz_localize(\"Europe/Amsterdam\")\r\nend_date = start_date + timedelta(weeks=1)\r\ndatetime_index = pd.date_range(start=start_date, end=end_date, freq=\"h\")\r\ndf = pd.DataFrame({\"A\": range(len(datetime_index))}, index=datetime_index)\r\ndf[\"B\"] = df[\"A\"].shift(freq=pd.Timedelta(weeks=1))\r\nprint(df)\r\n```\r\nWhich outputs:\r\n\r\n```\r\n A B\r\n2024-03-30 07:00:00+01:00 0 NaN\r\n2024-03-30 08:00:00+01:00 1 NaN\r\n2024-03-30 09:00:00+01:00 2 NaN\r\n2024-03-30 10:00:00+01:00 3 NaN\r\n2024-03-30 11:00:00+01:00 4 NaN\r\n... ... ...\r\n2024-04-06 04:00:00+02:00 164 NaN\r\n2024-04-06 05:00:00+02:00 165 NaN\r\n2024-04-06 06:00:00+02:00 166 NaN\r\n2024-04-06 07:00:00+02:00 167 NaN\r\n2024-04-06 08:00:00+02:00 168 0.0\r\n\r\n[169 rows x 2 columns]\r\n```\r\n\r\nEven though I would expect:\r\n\r\n```\r\n A B\r\n2024-03-30 07:00:00+01:00 0 NaN\r\n2024-03-30 08:00:00+01:00 1 NaN\r\n2024-03-30 09:00:00+01:00 2 NaN\r\n2024-03-30 10:00:00+01:00 3 NaN\r\n2024-03-30 11:00:00+01:00 4 NaN\r\n... ... ...\r\n2024-04-06 04:00:00+02:00 164 NaN\r\n2024-04-06 05:00:00+02:00 165 NaN\r\n2024-04-06 06:00:00+02:00 166 NaN\r\n2024-04-06 07:00:00+02:00 167 0.0\r\n2024-04-06 08:00:00+02:00 168 1.0\r\n\r\n[169 rows x 2 columns]\r\n```"
] |
2,185,630,618
| 57,836
|
read_csv date_format parameter should allow lambda as a value
|
closed
| 2024-03-14T07:27:32
| 2024-06-27T17:38:37
| 2024-06-27T17:38:37
|
https://github.com/pandas-dev/pandas/issues/57836
| true
| null | null |
hasandiwan
| 2
|
It's all well and good to deprecate `date_parser` and suggest using `date_format` or `pd.to_datetime`
instead, but why I should need to read in the column and then convert it to dates afterwards is beyond me. Hence, I would suggest that `date_format` be permitted to take a lambda as well.
If there are a few +1s to this issue, I will write a patch to sort the issue and attach it here.
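For reference, a minimal sketch of the recommended two-step pattern (the CSV contents and format string here are made up for illustration):
```python
import io
import pandas as pd

csv = io.StringIO("when,value\n2024/03/14-07:27,1\n")
df = pd.read_csv(csv)
# arbitrary parsing logic goes through to_datetime rather than read_csv
df["when"] = pd.to_datetime(df["when"], format="%Y/%m/%d-%H:%M")
print(df.dtypes)
```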
|
[
"Enhancement",
"IO CSV",
"Closing Candidate"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the suggestion. The idea is that `read_csv` should not handle complicated parsing and that you should use `to_datetime` for that.",
"But it is still possible to read the column as a date directly by using the `date_format` and `parse_dates` parameters.\n\nAdditionally it seems like not many people are interested in this feature so going to close unless you bring a good argument of why we should allow complex parsing directly"
] |
2,185,343,261
| 57,835
|
BUG: concat with datetime index returns Series instead of scalar if microsecond=0
|
closed
| 2024-03-14T03:52:26
| 2025-04-24T20:42:33
| 2025-04-24T20:42:33
|
https://github.com/pandas-dev/pandas/issues/57835
| true
| null | null |
davetapley
| 2
|
### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
from datetime import UTC, datetime
from pandas import DataFrame, concat
t1 = datetime.now(tz=UTC)
t2 = datetime.now(tz=UTC).replace(microsecond=0)
t1_str = str(t1)
t2_str = str(t2)
df1 = DataFrame({'a': [1]}, index=[t1])
print(type(df1.loc[t1].a))
print(type(df1.loc[t1_str].a))
df2 = DataFrame({'a': [2]}, index=[t2])
print(type(df2.loc[t2].a))
print(type(df2.loc[t2_str].a))
df = concat([df1, df2])
print(type(df.loc[t1].a))
print(type(df.loc[t1_str].a))
print(type(df.loc[t2].a))
print(type(df.loc[t2_str].a))
```
### Issue Description
`.a` is correctly returned as `numpy.int64` in all cases, except for the last line, where using `t2_str` suddenly yields a one-element `Series` containing the `numpy.int64`.
I have no idea what on earth is going on; it took a lot of 🔍 to get a repro.
I found it while writing a unit test where I was passing a timestamp from test data as a string.
If you remove the `.replace(microsecond=0)` you'll see it works as expected 🤯
### Expected Behavior
`.loc` should be consistent before and after a `concat`.
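As a workaround, assuming a scalar lookup is what's wanted, passing a `Timestamp` instead of a string sidesteps the resolution inference of partial string indexing (this builds on `df` and `t2_str` from the example above):
```python
from pandas import Timestamp

# an exact Timestamp label always resolves to a single row, while a string
# label may be treated as a partial-string slice at the inferred resolution
print(type(df.loc[Timestamp(t2_str), 'a']))
```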
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : bdc79c146c2e32f2cab629be240f01658cfb6cc2
python : 3.11.6.final.0
python-bits : 64
OS : Linux
OS-release : 6.2.0-1019-azure
Version : #19~22.04.1-Ubuntu SMP Wed Jan 10 22:57:03 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.1
numpy : 1.26.1
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 68.2.2
pip : 23.3.1
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : None
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
dataframe-api-compat : None
fastparquet : 2023.2.0
fsspec : 2023.10.0
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 14.0.1
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.10.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Datetime"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Maybe, @jbrockmendel knows more, he last changed the relevant code in `datetime._parse_with_reso`.\r\n\r\nThere is some additional logic for handling the lookup with `str` labels on a `DateTimeIndex` that infers a lookup-resolution from the string itself. In the end, the `str(t2)` has no zero'd microseconds in it, thus a sec resolution is inferred, matching both `t1`and `t2`.\r\n\r\nIf one wants to keep this feature of resolution dependent lookup, one would have to add the trailing zeros to times like `str(t2)` manually, since datetime always removes them from what I could see. ",
"Correct, this is a feature called “partial string slicing” on dateteindex"
] |
2,185,088,458
| 57,834
|
Backport PR #57796 on branch 2.2.x (Fix issue with Tempita recompilation)
|
closed
| 2024-03-13T23:29:56
| 2024-03-14T00:39:41
| 2024-03-14T00:39:41
|
https://github.com/pandas-dev/pandas/pull/57834
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57834
|
https://github.com/pandas-dev/pandas/pull/57834
|
meeseeksmachine
| 0
|
Backport PR #57796: Fix issue with Tempita recompilation
|
[
"Build"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
2,185,047,630
| 57,833
|
PERF: RangeIndex.insert maintains RangeIndex when empty
|
closed
| 2024-03-13T23:02:56
| 2024-03-17T00:35:06
| 2024-03-16T23:13:31
|
https://github.com/pandas-dev/pandas/pull/57833
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57833
|
https://github.com/pandas-dev/pandas/pull/57833
|
mroeschke
| 5
|
Discovered in https://github.com/pandas-dev/pandas/pull/57441
|
[
"Performance",
"Index"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"cc @WillAyd if you have time could you help look at the ASAN / UBSAN builds? It looks like recently they have been failing with `AddressSanitizer: DEADLY SIGNAL`?",
"That is definitely strange. Do you know when that first started? My guess is the error is coming from something in our conftest or a third party dependency, since it looks like it happens before the test suite even begins",
"This commit on main looks like it was the first one to exhibit the failure (but doesn't look like these changes should have caused it?) https://github.com/pandas-dev/pandas/commit/10f31f6a242fb01fdf37f5db2e8c6f4f82f5af16",
"Yea that looks unrelated. Guessing it's a dependency version issue",
"thx @mroeschke "
] |
2,184,656,042
| 57,832
|
Backport PR #57830 on branch 2.2.x (DOC: Pin dask/dask-expr for scale.rst)
|
closed
| 2024-03-13T18:35:03
| 2024-03-13T19:30:33
| 2024-03-13T19:30:33
|
https://github.com/pandas-dev/pandas/pull/57832
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57832
|
https://github.com/pandas-dev/pandas/pull/57832
|
meeseeksmachine
| 0
|
Backport PR #57830: DOC: Pin dask/dask-expr for scale.rst
|
[
"Docs",
"Dependencies"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
2,184,582,507
| 57,831
|
DOC: Remove Dask and Modin sections in `scale.rst` in favor of linking to ecosystem docs.
|
closed
| 2024-03-13T17:52:22
| 2024-03-15T21:43:33
| 2024-03-15T21:43:33
|
https://github.com/pandas-dev/pandas/issues/57831
| true
| null | null |
mroeschke
| 3
|
xref https://github.com/pandas-dev/pandas/pull/57586#pullrequestreview-1922295036
1. Remove `Use Dask` and `Use Modin` sections in `doc/source/user_guide/scale.rst`
2. Add a new section in `doc/source/user_guide/scale.rst` (`Use Other Libraries`) and link to `Out-of-core` section in `web/pandas/community/ecosystem.md`
3. Remove `dask-expr` in `environment.yml` and `requirements.txt`
4. Remove version pinning on `dask` post https://github.com/pandas-dev/pandas/pull/57830
|
[
"Docs",
"good first issue"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"take",
"Hi @mroeschke I don't know which code I need to update regarding \"4. Remove version pinnig on `dask` post\". Sorry if I'm asking too basic. I'm new to contribution.",
"`environment.yml` and `requirements.txt`"
] |
2,184,570,036
| 57,830
|
DOC: Pin dask/dask-expr for scale.rst
|
closed
| 2024-03-13T17:45:16
| 2024-03-13T18:34:36
| 2024-03-13T18:34:33
|
https://github.com/pandas-dev/pandas/pull/57830
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57830
|
https://github.com/pandas-dev/pandas/pull/57830
|
mroeschke
| 1
|
Currently failing on main e.g. https://github.com/pandas-dev/pandas/actions/runs/8268872378/job/22622790949?pr=57812
|
[
"Docs",
"Dependencies"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Looks like the doc build is passing here so going to merge to get back to green"
] |
2,183,819,691
| 57,829
|
Potential regression induced by PR "PERF: Categorical(range).categories returns RangeIndex instead of Index"
|
closed
| 2024-03-13T11:57:05
| 2024-05-05T13:43:33
| 2024-05-05T13:43:33
|
https://github.com/pandas-dev/pandas/issues/57829
| true
| null | null |
DeaMariaLeon
| 1
|
PR #57787
If this is expected please ignore the issue.
The regressions seem to be here:
`indexing.MultiIndexing.time_xs_full_key` (Python) with unique_levels=True
`indexing.MultiIndexing.time_loc_all_scalars` (Python) with unique_levels=True
@mroeschke
<img width="797" alt="Screenshot 2024-03-13 at 12 50 02" src="https://github.com/pandas-dev/pandas/assets/11835246/54a6a689-7ffc-4957-8e5c-5c1848efc1f0">
<img width="811" alt="Screenshot 2024-03-13 at 12 52 54" src="https://github.com/pandas-dev/pandas/assets/11835246/bcc22b80-e213-49ce-8635-dac8c66163f9">
|
[
"Performance",
"MultiIndex"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the report. After profiling for a bit, it looks like there's an slight increase when creating the `MulitiIndex._engine`, but given that this is cached I am not too concerned given the memory reduction of the Index level class."
] |
2,183,235,082
| 57,828
|
Copy-on-Write Guide - "Previous behavior" output is what would happen post-migration to CoW
|
closed
| 2024-03-13T06:42:29
| 2024-03-16T17:24:06
| 2024-03-16T17:24:06
|
https://github.com/pandas-dev/pandas/issues/57828
| true
| null | null |
gah-bo
| 1
|
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html
### Documentation problem
Copy-on-Write (CoW) section of "Previous behavior" states that the behavior before CoW leads to this output:
```
In [1]: df = pd.DataFrame({"foo": [1, 2, 3], "bar": [4, 5, 6]})
In [2]: subset = df["foo"]
In [3]: subset.iloc[0] = 100
In [4]: df
Out[4]:
foo bar
0 1 4
1 2 5
2 3 6
```
But that's the behavior that would happen after migrating to CoW.
Note: this code block seems to be used more than once, each time with this issue.
### Suggested fix for documentation
Actually show "Previous behavior" on the codeblock
|
[
"Docs",
"Copy / view semantics"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"cc @phofl"
] |
2,182,905,787
| 57,827
|
DOC: Fix remove_unused_levels doctest on main
|
closed
| 2024-03-13T00:45:40
| 2024-03-13T02:40:01
| 2024-03-13T02:39:58
|
https://github.com/pandas-dev/pandas/pull/57827
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57827
|
https://github.com/pandas-dev/pandas/pull/57827
|
mroeschke
| 1
|
e.g https://github.com/pandas-dev/pandas/actions/runs/8257178732/job/22587251828
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Merging to get back to green"
] |
2,182,894,169
| 57,826
|
CI: speedup docstring check consecutive runs
|
closed
| 2024-03-13T00:30:04
| 2024-03-17T22:02:17
| 2024-03-17T21:59:40
|
https://github.com/pandas-dev/pandas/pull/57826
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57826
|
https://github.com/pandas-dev/pandas/pull/57826
|
dontgoto
| 14
|
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
This PR brings the runtime of the docstring check CI down to 2-3 minutes from about 20 minutes.
Currently, `check_code.sh docstring` makes multiple calls to `validate_docstrings.py` because various error types have per-function exceptions. Each run of `validate_docstrings` takes about 2-3 minutes, leading to the current 20-minute runtime just for the docstring checks.
The runtime for consecutive calls is brought down to that of a single call by changing the argument parsing of `validate_docstrings`, adding `--` to separate parameters that previously were separate function calls. Additionally, a cache is added to reuse the parsing results between runs. The cache size should pose no problem; the Python instance run by `check_code.sh docstring` only reserves about 95MB of memory on my machine.
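A minimal sketch of the caching idea (hypothetical names, not the actual script):
```python
import functools

@functools.lru_cache(maxsize=None)
def validate_docstring(func_name: str) -> tuple[str, ...]:
    # stand-in for the expensive import/parse/validate step; consecutive
    # passes over different error codes then reuse the cached result
    print(f"parsing {func_name}")  # executed only once per function
    return (f"EX01 in {func_name}",)

validate_docstring("pandas.DataFrame.melt")
validate_docstring("pandas.DataFrame.melt")  # cache hit, no re-parse
```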
|
[
"Docs",
"CI"
] | 0
| 0
| 1
| 0
| 0
| 2
| 0
| 0
|
[
"> This PR brings the runtime of the docstring check CI down to 2-3 minutes from about 20 minutes.\r\n\r\nWow - my hat is raised above my head\r\n\r\n@datapythonista fancy taking a look?",
"Thanks @dontgoto for making the CI much faster.\r\n\r\nWhile I'm open to getting this merged, I'm a bit unsure about using this approach. In an ideal world we'd like to call validate_docstrings just once for all errors and for all files. This is clearly not the case today, but hopefully we'll eventually get there, and making things significantly more complex for something hopefully lasting only few months may not be worth.\r\n\r\nAlso, if we want to implement this I think I'd prefer another API where we can simply specify which files and errors to ignore together. Not sure exactly the best way to do this, but something like this should be clearer and simpler IMHO: `./validate_docstring.py --ignore pandas/core.py,PR03 --ignore pandas/frame.py,EX01,EX02`\r\n\r\nAs I said, I'm not opposed to merge this PR, but I think it's making the validator significantly more complex to understand, which is probably worth for the speedup now, but thinking longer term not so much.\r\n\r\nWhat do you think @dontgoto ?",
"I think I can change the command line args and their preprocessing quite easily to match your `--ignore` idea, but in the end still run everything sequentially. Your parameter variant is indeed easier to understand.\n\nI agree that making the rest of the code match this kind of structure and ditching the repeated sequential calls would be preferable, but maybe a bit much of a time investment.\n\nI might look into that in the future though, I have an idea for another PR that might make the docstring checks \"commit-hookably\" fast, but I still need to test it out.",
"I think implementing what I said is not trivial, but I think just one validate call per function would be enough if we do that, so I think it should be as fast as your implementation here, or maybe even a bit faster.",
"I pushed a first version of the `--ignore` parameter. In my opinion, the behavior of the CLI parameters is now easier to understand, but the parsing logic in the script got more complex.\n\nLet me know what you think about the changes to the `.py`. I am ambivalent, both versions have their own pain points. \n\nIf we go ahead with the current version, I'll tidy it up and add tests for the parsing. Merge conflicts for docstring exceptions removed from main in the meantime are to be expected.",
"> think just one validate call per function would be enough if we do that, so I think it should be as fast as your implementation here, or maybe even a bit faster.\n\nRegarding performance, everything outside the cached validate function is basically free. Calls of `main` finish in milliseconds or less once the validate document cache is filled in the first run. So no need to minimize calls of `main` just from the performance angle.\n",
"Thanks for the work on this @dontgoto. \r\n\r\nWhat I had in mind is different to what you implemented. And I think it should be simpler, and reasonably fast.\r\n\r\nRegardless of the command line API we implement, we would end up with a list of function names and which errors we need to ignore for them. We have this stored in a variable, and then we call the validator normally: https://github.com/pandas-dev/pandas/blob/main/scripts/validate_docstrings.py#L317\r\n\r\nAt this point, we have in the result the errors that have been found in the function. If the error is in the list of errors to ignore, we can just remove that from the result. This way we don't need cache, we don't need to much extra complexity, and we call the parsing and the validation just once.\r\n\r\nWhat do you think?",
"I agree. The version I previously pushed was a half measure, just adding a variant of the CMD parameter required for this solution, but shoehorning it into the old logic. I refactored everything to use the new parameter, simplifying the parsing. I am satisfied with this solution, let me know whether you agree.\r\n\r\nFor the new CMD parameter `for_error_ignore_functions` I find a mapping of `error: list[funcs_to_ignore]` to be the best fit when considering the maintenance of the exception lists in the `code_checks.sh`. Just the initial formatting changes there are not nice. \r\n\r\nOpen for a different name for the parameter though.",
"Looks great, thanks a lot for the work here @dontgoto \r\n\r\n@jordan-d-murphy @tqa236 do you mind having a look here and sharing any feedback on this change? Thanks!",
"This is such a refreshing PR! Love to see this. I am in full support of this new approach. I added one suggestion - but leave it up to you if you think it's valuable to include or not. \r\n\r\nMy main thoughts on this are: \r\n\r\n1) I love this new approach. I appreciate all the work that was done on this. Would love to see this merged in. \r\n\r\n2) Once this is merged in, I can close the following Issues which I opened based on the previous approach we were using in check_code.sh / validate_docstrings.py \r\n\r\n\r\n> [DOC: fix GL08 errors in docstrings](https://github.com/pandas-dev/pandas/issues/57443)\r\n> [DOC: fix PR01 errors in docstrings](https://github.com/pandas-dev/pandas/issues/57438)\r\n> [DOC: fix PR07 errors in docstrings](https://github.com/pandas-dev/pandas/issues/57420)\r\n> [DOC: fix SA01 errors in docstrings](https://github.com/pandas-dev/pandas/issues/57417)\r\n> [DOC: fix RT03 errors in docstrings](https://github.com/pandas-dev/pandas/issues/57416)\r\n> [DOC: fix PR02 errors in docstrings](https://github.com/pandas-dev/pandas/issues/57111)\r\n\r\n3) After closing the above issues, I can open a new issue to address fixing the docstrings that follows this new approach\r\n\r\n4) And finally, there seems to be one cryptic failing CI check, [ASAN / UBSAN](https://github.com/pandas-dev/pandas/actions/runs/8310857572/job/22743812047?pr=57826#logs) - would like to see this resolved and all green on the CI, but as the logs got deleted, it's hard to tell if this is related to this PR or some outside issue. \r\n\r\n\r\n",
"Thanks!\r\n\r\n> * And finally, there seems to be one cryptic failing CI check, [ASAN / UBSAN](https://github.com/pandas-dev/pandas/actions/runs/8310857572/job/22743812047?pr=57826#logs) - would like to see this resolved and all green on the CI, but as the logs got deleted, it's hard to tell if this is related to this PR or some outside issue.\r\n\r\nIt seems that this test is currently failing on this and [many other PRs](https://github.com/pandas-dev/pandas/actions/runs/8307926216/job/22737627560?pr=57864), unit tests are failing on main as well.\r\n\r\nI'd be happy to see this merged since resolving the conflicts with ongoing doc fix PRs is a hassle. Let me know if there is anything else blocking this.\r\n\r\nThanks again for all the great feedback and the welcoming atmosphere :)",
"Awesome! Lgtm 🙂",
"Hello, this is a great improvement! LGTM too.",
"Amazing job @dontgoto.\r\n\r\nI see the `code_checks` job takes 20 minutes instead of 40 after this, and I think this will help a lot with the efforts to fix errors in docstrings."
] |
2,182,861,259
| 57,825
|
PERF: Unary methods on RangeIndex returns RangeIndex
|
closed
| 2024-03-12T23:48:52
| 2024-03-14T16:27:25
| 2024-03-14T16:27:22
|
https://github.com/pandas-dev/pandas/pull/57825
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57825
|
https://github.com/pandas-dev/pandas/pull/57825
|
mroeschke
| 1
| null |
[
"Performance",
"Index"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Conflict but otherwise lgtm"
] |
2,182,799,548
| 57,824
|
PERF: RangeIndex.round returns RangeIndex when possible
|
closed
| 2024-03-12T22:35:08
| 2024-03-13T19:32:26
| 2024-03-13T19:32:22
|
https://github.com/pandas-dev/pandas/pull/57824
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57824
|
https://github.com/pandas-dev/pandas/pull/57824
|
mroeschke
| 0
| null |
[
"Performance",
"Index"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
2,182,684,864
| 57,823
|
PERF: RangeIndex.argmin/argmax
|
closed
| 2024-03-12T21:13:41
| 2024-03-14T15:07:45
| 2024-03-14T02:02:54
|
https://github.com/pandas-dev/pandas/pull/57823
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57823
|
https://github.com/pandas-dev/pandas/pull/57823
|
mroeschke
| 1
|
```python
In [1]: import pandas as pd
In [2]: ri = pd.RangeIndex(100_000)
In [3]: %timeit ri.argmin()
651 ns ± 8.43 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) # PR
In [3]: %timeit ri.argmin()
19.6 µs ± 73 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) # main
```
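The speedup is possible because the extremum position of a `RangeIndex` can be computed from its parameters alone; a simplified sketch of the idea (not the actual implementation):
```python
def range_argmin(start: int, stop: int, step: int) -> int:
    n = len(range(start, stop, step))
    if n == 0:
        raise ValueError("attempt to get argmin of an empty sequence")
    # increasing ranges have their minimum first, decreasing ones last
    return 0 if step > 0 else n - 1
```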
|
[
"Performance",
"Index"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @mroeschke "
] |
2,182,578,273
| 57,822
|
Backport PR #57821 on branch 2.2.x (Fix doc build)
|
closed
| 2024-03-12T20:06:11
| 2024-03-12T21:15:25
| 2024-03-12T21:15:24
|
https://github.com/pandas-dev/pandas/pull/57822
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57822
|
https://github.com/pandas-dev/pandas/pull/57822
|
meeseeksmachine
| 0
|
Backport PR #57821: Fix doc build
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
2,182,332,520
| 57,821
|
Fix doc build
|
closed
| 2024-03-12T17:55:31
| 2024-03-12T20:06:34
| 2024-03-12T20:05:36
|
https://github.com/pandas-dev/pandas/pull/57821
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57821
|
https://github.com/pandas-dev/pandas/pull/57821
|
tqa236
| 8
|
@mrocklin would you mind taking a look at this PR? I think it'll fix the doc build on `main`
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Sorry for the noise, the error seems to be more complicated than an error in a single file, will try something else",
"Could you add `dask-expr` to `environment.yml`. I think that's the error from what I get locally\r\n\r\n```python\r\nWARNING: ources... [ 95%] user_guide/scale\r\n>>>-------------------------------------------------------------------------\r\nException in /doc/source/user_guide/scale.rst at block ending on line None\r\nSpecify :okexcept: as an option in the ipython:: block to suppress this message\r\n---------------------------------------------------------------------------\r\nModuleNotFoundError Traceback (most recent call last)\r\nFile /opt/miniconda3/envs/pandas-dev/lib/python3.10/site-packages/dask/dataframe/__init__.py:22, in _dask_expr_enabled()\r\n 21 try:\r\n---> 22 import dask_expr # noqa: F401\r\n 23 except ImportError:\r\n\r\nModuleNotFoundError: No module named 'dask_expr'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nValueError Traceback (most recent call last)\r\nCell In[33], line 1\r\n----> 1 import dask.dataframe as dd\r\n\r\nFile /opt/miniconda3/envs/pandas-dev/lib/python3.10/site-packages/dask/dataframe/__init__.py:87\r\n 84 except ImportError:\r\n 85 pass\r\n---> 87 if _dask_expr_enabled():\r\n 88 import dask_expr as dd\r\n 90 # trigger loading of dask-expr which will in-turn import dask.dataframe and run remainder\r\n 91 # of this module's init updating attributes to be dask-expr\r\n 92 # note: needs reload, incase dask-expr imported before dask.dataframe; works fine otherwise\r\n\r\nFile /opt/miniconda3/envs/pandas-dev/lib/python3.10/site-packages/dask/dataframe/__init__.py:24, in _dask_expr_enabled()\r\n 22 import dask_expr # noqa: F401\r\n 23 except ImportError:\r\n---> 24 raise ValueError(\"Must install dask-expr to activate query planning.\")\r\n 25 return True\r\n\r\nValueError: Must install dask-expr to activate query planning.\r\n\r\n<<<-------------------------------------------------------------------------\r\n```",
"@mroeschke I added the package. If you don't mind me asking, is the reason \"single file\" build works for me locally after fixing the `pathlib` error is because I use an old environment? \r\n\r\nIs there an easy way to only test with the latest environment, or the one that's the same as CI?",
"> If you don't mind me asking, is the reason \"single file\" build works for me locally after fixing the pathlib error is because I use an old environment?\r\n\r\nAre you referring to build a doc build or a single doc page? I'm not sure if it has to do with environment, but the CI runs the doc build over all files, so building a single file of the docs is not equivalent to what is run on the CI",
"@mroeschke I think I would like to know how to update the local environment so that it'll use the latest versions as in the CI.\r\n\r\nCurrently I have `dask==2024.2.1` locally, but I think the CI fails because of the new release `dask=2024.3.0`\r\n\r\nLooking at the commit [history](https://github.com/dask/dask/commits/main/), some changes are directly related to `dask-expr` so it's probably the reason for the failed build on CI, but I can't reproduce locally because of my old `dask` version",
"Ah I see. In that case, if you're using a conda environment, you can run `conda env update -f environment.yml` and it _should_ update the dependency versions as well. If not, you can just recreate the environment.",
"@mroeschke thank you for your debugging and the update command. I think this PR should be ready for review, as the \"doc build\" job passes in the previous commit",
"Thanks for the quick fix here @tqa236 "
] |
2,181,624,666
| 57,820
|
CLN: enforce deprecation of `interpolate` with object dtype
|
closed
| 2024-03-12T13:27:42
| 2024-03-30T01:24:02
| 2024-03-15T16:01:10
|
https://github.com/pandas-dev/pandas/pull/57820
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57820
|
https://github.com/pandas-dev/pandas/pull/57820
|
natmokval
| 2
|
xref #53638
enforced deprecation of `interpolate` with object dtype
|
[
"Missing-data",
"Clean"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"I enforced deprecation of `interpolate` with `object` dtype. @phofl could you please take a look at this PR? I think CI failures are unrelated to my changes.",
"Thanks @natmokval "
] |
2,180,810,524
| 57,819
|
CLN: Remove unused private code in sas module
|
closed
| 2024-03-12T06:09:12
| 2024-03-12T17:46:01
| 2024-03-12T17:44:12
|
https://github.com/pandas-dev/pandas/pull/57819
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57819
|
https://github.com/pandas-dev/pandas/pull/57819
|
tqa236
| 1
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Clean",
"IO SAS"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @tqa236 "
] |
2,180,803,829
| 57,818
|
CLN: Remove unused private attributes in stata module
|
closed
| 2024-03-12T06:03:32
| 2024-03-12T17:46:03
| 2024-03-12T17:43:32
|
https://github.com/pandas-dev/pandas/pull/57818
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57818
|
https://github.com/pandas-dev/pandas/pull/57818
|
tqa236
| 1
|
According to https://github.com/pandas-dev/pandas/pull/49228, all of the internal state of the reader object is now `_private`, so I tried to remove the unused ones detected by `vulture`.
|
[
"IO Stata",
"Clean"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @tqa236 "
] |
2,180,589,157
| 57,817
|
DOC: Updated the returns for DataFrame.any/all to return either a Series or scalar
|
closed
| 2024-03-12T02:21:46
| 2024-03-19T17:19:41
| 2024-03-19T17:19:40
|
https://github.com/pandas-dev/pandas/pull/57817
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57817
|
https://github.com/pandas-dev/pandas/pull/57817
|
5ammiches
| 4
|
#57088
Updated the documentation to reflect the return types of DataFrame.any/all calls.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"/preview",
"Website preview of this PR available at: https://pandas.pydata.org/preview/pandas-dev/pandas/57817/",
"@sammig6i do you mind having a look at #57682 and see if the changes here conflict with the changes there please? I think you're modifying the same exact part of the documentation, and if that's the case it's probably better to discontinue this one (feel free to provide any feedback to that PR). Thanks!",
"Closing, as this seems to be duplicated and stale."
] |
2,180,187,600
| 57,816
|
Fix some typing errors
|
closed
| 2024-03-11T20:42:08
| 2024-03-12T23:43:24
| 2024-03-12T23:39:34
|
https://github.com/pandas-dev/pandas/pull/57816
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57816
|
https://github.com/pandas-dev/pandas/pull/57816
|
tqa236
| 2
| null |
[
"Typing"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"pre-commit.ci autofix",
"Thanks @tqa236 !"
] |
2,180,152,456
| 57,815
|
CLN: remove deprecated classes 'NumericBlock' and 'ObjectBlock'
|
closed
| 2024-03-11T20:24:05
| 2024-03-12T17:49:28
| 2024-03-12T17:49:21
|
https://github.com/pandas-dev/pandas/pull/57815
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57815
|
https://github.com/pandas-dev/pandas/pull/57815
|
natmokval
| 1
|
xref #52817
removed deprecated classes `NumericBlock` and `ObjectBlock`
|
[
"Clean"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @natmokval "
] |
2,180,137,032
| 57,814
|
Remove maybe unused function
|
closed
| 2024-03-11T20:16:50
| 2024-03-12T03:17:23
| 2024-03-11T21:18:13
|
https://github.com/pandas-dev/pandas/pull/57814
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57814
|
https://github.com/pandas-dev/pandas/pull/57814
|
tqa236
| 1
|
This is a private function that's only part of one `xfail` test. I wonder whether it's possible to remove it, given that the current minimum required version of `matplotlib` is already `3.6.3`.
|
[
"Testing",
"Visualization"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @tqa236 "
] |
2,180,108,115
| 57,813
|
CI: Fail tests on all builds for FutureWarning/DeprecationWarning from numpy or pyarrow
|
closed
| 2024-03-11T20:01:39
| 2024-04-01T18:39:49
| 2024-04-01T18:39:46
|
https://github.com/pandas-dev/pandas/pull/57813
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57813
|
https://github.com/pandas-dev/pandas/pull/57813
|
mroeschke
| 1
|
I don't think we should necessarily limit this to just the nightly builds of these libraries.
|
[
"Testing",
"CI"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"I think this may be a a little too cumbersome to do as we test downstream libraries and introduce our own deprecations that get reflected in those test so going to close for now"
] |
2,179,938,698
| 57,812
|
PERF: Avoid np.divmod in maybe_sequence_to_range
|
closed
| 2024-03-11T18:34:44
| 2024-03-21T17:14:48
| 2024-03-21T17:14:44
|
https://github.com/pandas-dev/pandas/pull/57812
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57812
|
https://github.com/pandas-dev/pandas/pull/57812
|
mroeschke
| 1
|
xref https://github.com/pandas-dev/pandas/pull/57534#issuecomment-1957832841
Made an `is_range` helper (like `is_range_indexer`) that avoids an `np.divmod` operation
```python
In [1]: from pandas import *; import numpy as np
...: np.random.seed(123)
...: size = 1_000_000
...: ngroups = 1000
...: data = Series(np.random.randint(0, ngroups, size=size))
In [2]: %timeit data.groupby(data).groups
14 ms ± 552 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) # PR
In [3]: %timeit data.groupby(data).groups
17.8 ms ± 84.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) # main
```
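A simplified NumPy rendering of the `is_range` idea (hypothetical, not the actual Cython helper): compare against the arithmetic sequence directly instead of computing `np.divmod` for every element:
```python
import numpy as np

def is_range(values: np.ndarray, start: int, step: int) -> bool:
    # True when values == [start, start + step, start + 2*step, ...]
    expected = start + step * np.arange(len(values), dtype=np.int64)
    return bool(np.array_equal(values, expected))
```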
|
[
"Performance",
"Index"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Any other feedback here @jbrockmendel @WillAyd?"
] |
2,179,797,779
| 57,811
|
BUG: improve pd.io.json_normalize
|
closed
| 2024-03-11T17:48:55
| 2024-05-15T21:05:26
| 2024-05-15T17:54:17
|
https://github.com/pandas-dev/pandas/pull/57811
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57811
|
https://github.com/pandas-dev/pandas/pull/57811
|
slavanorm
| 9
|
- [x] closes #57810
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v2.2.2.rst` file if fixing a bug or adding a new feature.
|
[
"Bug",
"IO JSON"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@slavanorm in case you haven't seen it, your changes are making the tests fail: https://github.com/pandas-dev/pandas/actions/runs/8239141116/job/22531710877?pr=57811#step:8:53",
"I'm not sure about this change - the point of the `errors` argument is to ignore missing keys. Shouldn't the test case you added still create the column but with all empty data?",
"yes it should but it creates rows only for the dictionaries with record path.\r\ni will edit the fixture and assertion in order to get into this case of if\r\n",
"This docs check is failing, and its not fixable. Wonder what should we do now ",
"I fixed the code, wish someone reviewed it",
"sorry i closed it by mistake, just reopened it",
"@WillAyd could you please review the code again, it's been 2 weeks i think",
"Thanks for the pull request, but it appears to have gone stale. If interested in continuing, please merge in the main branch, address any review comments and/or failing tests, and we can reopen.",
"thank you too.\r\nnow the code is obsolete vs main branch but its ok.\r\n\r\non second thought, the code does not actually require merging to pandas library.\r\nit got stale because of lack of interaction. \r\neverything was done on my side and all requests were adressed, but were not reviewed on time. \r\n\r\nbest regards"
] |
2,179,589,085
| 57,810
|
BUG: pd.json_normalize improvement
|
open
| 2024-03-11T16:36:25
| 2024-03-19T23:44:47
| null |
https://github.com/pandas-dev/pandas/issues/57810
| true
| null | null |
slavanorm
| 1
|
### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
pd.json_normalize(
data=dict(x=[1, 2], y=[]),
record_path='x',
meta=[['y', 'yy']])
```
```python-traceback
TypeError: list indices must be integers or slices, not str
```
### Issue Description
Hello.
`json_normalize` is a great function. It has a feature that allows coercing errors when the provided JSON structure has some missing keys.
I wish to improve the coercion algorithm.
### Expected Behavior
The above code should result in `{x: [1, 2], y: [NaN, NaN]}`.
Currently it just throws a `TypeError`.
I have already made a fix for that, but would be happy to discuss some other improvements before proposing a PR.
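Until then, a possible workaround, assuming `errors="ignore"` is acceptable: replace the empty list with an empty dict, so the meta lookup raises the `KeyError` that the ignore path already handles:
```python
import pandas as pd

data = dict(x=[1, 2], y=[])
if data["y"] == []:
    data["y"] = {}  # an empty dict turns the missing meta key into a KeyError
print(pd.json_normalize(data, record_path="x", meta=[["y", "yy"]], errors="ignore"))
```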
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : bdc79c146c2e32f2cab629be240f01658cfb6cc2
python : 3.11.3.final.0
python-bits : 64
OS : Darwin
OS-release : 21.4.0
Version : Darwin Kernel Version 21.4.0: Mon Feb 21 20:36:53 PST 2022; root:xnu-8020.101.4~2/RELEASE_ARM64_T8101
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : None
LOCALE : en_US.UTF-8
pandas : 2.2.1
numpy : 1.26.3
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 68.2.2
pip : 24.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 5.0.1
html5lib : 1.1
pymysql : None
psycopg2 : 2.9.9
jinja2 : 3.1.2
IPython : 8.19.0
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.2
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2023.12.2
gcsfs : None
matplotlib : 3.8.2
numba : 0.58.1
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.11.4
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
zstandard : 0.22.0
tzdata : 2023.4
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"IO JSON"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"I've edited the pd.io.json._normalize:394 \r\nfrom\r\n if result is None:\r\nto\r\n if result is None or result==[]:\r\n\r\nand it fixes. but maybe we could use something better"
] |
2,179,546,468
| 57,809
|
ENH: simple add row capactiy to dataframes
|
closed
| 2024-03-11T16:18:19
| 2024-03-11T21:15:20
| 2024-03-11T21:15:19
|
https://github.com/pandas-dev/pandas/issues/57809
| true
| null | null |
R3dan
| 2
|
### Feature Type
- [X] Adding new functionality to pandas
- [X] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
I wish to be able to easily add a new row to a dataframe with a function like 'append' or 'new_row'. I am fairly sure that this is already achievable with additional code, but I could not find how. If someone could tell me how to do it now, that would be great, but I also feel it could be made easier.
### Feature Description
An example could be:
```python
def add_new_row(self, data):  # hypothetical method; pandas has no .rows attribute
    self.rows.append(data)
```
or something similar (with additional checks and functionality), as I do not know how pandas stores indexes internally
### Alternative Solutions
```python
import pandas as pd
df = pd.DataFrame(columns=["a", "b", "c", "d"])
df.loc[0] = pd.Series([1, 2, 3, 4], index=df.columns)  # .iloc cannot enlarge a frame; .loc with a new label can
```
### Additional Context
An in use example could be like:
```python
import pandas as pd
df = pd.DataFrame(columns=["a", "b", "c", "d"])
df.new([1, 2, 3, 4])
# or:
df.new(pd.Series([1,2,3,4]))
# etc
```
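For reference, the two idioms suggested in the comments below look like this today:
```python
import pandas as pd

df = pd.DataFrame(columns=["a", "b", "c", "d"])
df.loc[len(df)] = [1, 2, 3, 4]  # label-based enlargement
df = pd.concat(
    [df, pd.DataFrame([[5, 6, 7, 8]], columns=df.columns)], ignore_index=True
)
print(df)
```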
|
[
"Closing Candidate"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Googling `add new row to pandas dataframe` gives the first result: https://www.geeksforgeeks.org/how-to-add-one-row-in-an-existing-pandas-dataframe/\r\n\r\nOne of methods contained within demonstrates `DataFrame.append(data)` which is basically what you are proposing.",
"Thanks for the suggestion but agreed with above. You can achieve this currently with indexing or `pd.concat` so closing"
] |
2,178,376,190
| 57,808
|
DOC: fix typo in `DataFrame.plot.hist` docstring
|
closed
| 2024-03-11T07:04:10
| 2024-03-12T04:50:06
| 2024-03-11T17:26:05
|
https://github.com/pandas-dev/pandas/pull/57808
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57808
|
https://github.com/pandas-dev/pandas/pull/57808
|
yuanx749
| 1
|
Spot a rendering issue [here](https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.plot.hist.html)

|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @yuanx749 "
] |
2,178,046,101
| 57,807
|
Doc: Fix GL08 error for pandas.ExcelFile.book
|
closed
| 2024-03-11T01:40:10
| 2024-03-12T06:33:22
| 2024-03-11T04:48:25
|
https://github.com/pandas-dev/pandas/pull/57807
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57807
|
https://github.com/pandas-dev/pandas/pull/57807
|
jordan-d-murphy
| 1
|
All GL08 Errors resolved in the following cases:
1. scripts/validate_docstrings.py --format=actions --errors=GL08 pandas.ExcelFile.book
- [x] xref #57443
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @jordan-d-murphy "
] |
2,177,990,166
| 57,806
|
Fix PR01 errors for melt, option_context, read_fwf, reset_option
|
closed
| 2024-03-11T00:27:14
| 2024-03-11T04:39:03
| 2024-03-11T04:35:40
|
https://github.com/pandas-dev/pandas/pull/57806
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57806
|
https://github.com/pandas-dev/pandas/pull/57806
|
jordan-d-murphy
| 1
|
All PR01 Errors resolved in the following cases:
1. scripts/validate_docstrings.py --format=actions --errors=PR01 pandas.melt
2. scripts/validate_docstrings.py --format=actions --errors=PR01 pandas.DataFrame.melt
3. scripts/validate_docstrings.py --format=actions --errors=PR01 pandas.option_context
4. scripts/validate_docstrings.py --format=actions --errors=PR01 pandas.read_fwf
5. scripts/validate_docstrings.py --format=actions --errors=PR01 pandas.reset_option
- [x] xref #57438
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @jordan-d-murphy "
] |
2,177,962,689
| 57,805
|
Doc: fix PR07 errors in DatetimeIndex - indexer_between_time, mean and HDFStore - append, get, put
|
closed
| 2024-03-10T23:22:57
| 2024-03-11T04:38:14
| 2024-03-11T04:34:09
|
https://github.com/pandas-dev/pandas/pull/57805
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57805
|
https://github.com/pandas-dev/pandas/pull/57805
|
jordan-d-murphy
| 1
|
All PR07 Errors resolved in the following cases:
1. scripts/validate_docstrings.py --format=actions --errors=PR07 pandas.DatetimeIndex.indexer_between_time
2. scripts/validate_docstrings.py --format=actions --errors=PR07 pandas.DatetimeIndex.mean
3. scripts/validate_docstrings.py --format=actions --errors=PR07 pandas.HDFStore.append
4. scripts/validate_docstrings.py --format=actions --errors=PR07 pandas.HDFStore.get
5. scripts/validate_docstrings.py --format=actions --errors=PR07 pandas.HDFStore.put
- [x] xref https://github.com/pandas-dev/pandas/issues/57420
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @jordan-d-murphy "
] |
2,177,935,110
| 57,804
|
Doc: fix PR07 errors for pandas.DataFrame get, rolling, to_hdf
|
closed
| 2024-03-10T22:33:49
| 2024-03-10T23:29:02
| 2024-03-10T23:27:06
|
https://github.com/pandas-dev/pandas/pull/57804
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57804
|
https://github.com/pandas-dev/pandas/pull/57804
|
jordan-d-murphy
| 1
|
All PR07 Errors resolved in the following cases:
1. scripts/validate_docstrings.py --format=actions --errors=PR07 pandas.DataFrame.get
2. scripts/validate_docstrings.py --format=actions --errors=PR07 pandas.DataFrame.rolling
3. scripts/validate_docstrings.py --format=actions --errors=PR07 pandas.DataFrame.to_hdf
- [x] xref #57420
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @jordan-d-murphy "
] |
2,177,925,490
| 57,803
|
Doc: fix SA01 errors for as_ordered and as_unordered
|
closed
| 2024-03-10T22:05:13
| 2024-03-10T23:08:59
| 2024-03-10T22:47:23
|
https://github.com/pandas-dev/pandas/pull/57803
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57803
|
https://github.com/pandas-dev/pandas/pull/57803
|
jordan-d-murphy
| 1
|
All SA01 Errors resolved in the following cases:
1. scripts/validate_docstrings.py --format=actions --errors=SA01 pandas.Categorical.as_ordered
2. scripts/validate_docstrings.py --format=actions --errors=SA01 pandas.Categorical.as_unordered
3. scripts/validate_docstrings.py --format=actions --errors=SA01 pandas.CategoricalIndex.as_ordered
4. scripts/validate_docstrings.py --format=actions --errors=SA01 pandas.CategoricalIndex.as_unordered
5. scripts/validate_docstrings.py --format=actions --errors=SA01 pandas.Series.cat.as_ordered
6. scripts/validate_docstrings.py --format=actions --errors=SA01 pandas.Series.cat.as_unordered
- [x] xref #57417
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @jordan-d-murphy "
] |
2,177,911,402
| 57,802
|
Doc: fix SA01 errors for pandas.BooleanDtype and pandas.StringDtype
|
closed
| 2024-03-10T21:23:43
| 2024-03-10T23:08:47
| 2024-03-10T22:46:25
|
https://github.com/pandas-dev/pandas/pull/57802
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57802
|
https://github.com/pandas-dev/pandas/pull/57802
|
jordan-d-murphy
| 1
|
All SA01 Errors resolved in the following cases:
1. scripts/validate_docstrings.py --format=actions --errors=SA01 pandas.BooleanDtype
2. scripts/validate_docstrings.py --format=actions --errors=SA01 pandas.StringDtype
- [x] xref #57417
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @jordan-d-murphy "
] |
2,177,897,584
| 57,801
|
Doc: Fix RT03 errors for read_orc, read_sas, read_spss, read_stata
|
closed
| 2024-03-10T20:46:29
| 2024-03-11T04:38:52
| 2024-03-11T04:34:44
|
https://github.com/pandas-dev/pandas/pull/57801
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57801
|
https://github.com/pandas-dev/pandas/pull/57801
|
jordan-d-murphy
| 2
|
All RT03 Errors resolved in the following cases:
1. scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.read_orc
2. scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.read_sas
3. scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.read_spss
4. scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.read_stata
- [x] xref #57416
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@mroeschke merge conflicts have been resolved 🙂",
"Thanks @jordan-d-murphy "
] |
2,177,894,665
| 57,800
|
BUG: #57775 Fix groupby apply in case func returns None for all groups
|
closed
| 2024-03-10T20:37:55
| 2024-03-12T17:27:32
| 2024-03-12T17:27:25
|
https://github.com/pandas-dev/pandas/pull/57800
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57800
|
https://github.com/pandas-dev/pandas/pull/57800
|
dontgoto
| 1
|
- [x] closes #57775
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added an entry in the latest `doc/source/whatsnew/v2.2.2.rst` file if fixing a bug or adding a new feature.
In case the func passed to DataFrameGroupBy.apply returns None for all groups (filtering out all groups), the original dataframe's columns and dtypes are now part of the empty result dataframe.
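A minimal sketch of the scenario this fixes, assuming the description above (the exact empty-result shape is whatever the PR's tests assert):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3.0, 4.0]})

# The applied function filters out every group by returning None.
result = df.groupby("a").apply(lambda g: None)
print(result.empty)   # True
print(result.dtypes)  # after the fix: the input's columns and dtypes
```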
|
[
"Groupby",
"Apply"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @dontgoto "
] |
2,177,882,691
| 57,799
|
Doc: fix RT03 pandas.timedelta_range and pandas.util.hash_pandas_object
|
closed
| 2024-03-10T20:04:00
| 2024-03-10T23:08:13
| 2024-03-10T22:44:35
|
https://github.com/pandas-dev/pandas/pull/57799
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57799
|
https://github.com/pandas-dev/pandas/pull/57799
|
jordan-d-murphy
| 1
|
All RT03 Errors resolved in the following cases:
1. scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.timedelta_range
2. scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.util.hash_pandas_object
- [x] xref #57416
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @jordan-d-murphy "
] |
2,177,827,298
| 57,798
|
ENH: DataFrame argument `columns` should accept dict/iterable type
|
closed
| 2024-03-10T17:51:13
| 2024-03-18T13:41:46
| 2024-03-18T01:07:19
|
https://github.com/pandas-dev/pandas/issues/57798
| true
| null | null |
5j9
| 8
|
### Feature Type
- [X] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
It is convenient to pass a dict to `columns` argument of `DataFrame` and use the same dict for `astype` conversions later. The following sample code works fine, but it fails type checking:
```python
from pandas import DataFrame
columns = {"A": 'int64', "B": 'int32', "C": 'int16'}
df = DataFrame([[1,2,3],[4,5,6]], columns=columns) # fails type check
df = df.astype(columns)
print(df.dtypes)
```
pyright output:
```bash
$ pyright 'filename.py'
filename.py
filename.py:4:43 - error: Argument of type "dict[str, str]" cannot be assigned to parameter "columns" of type "Axes | None" in function "__init__"
Type "dict[str, str]" cannot be assigned to type "Axes | None"
"dict[str, str]" is incompatible with "ExtensionArray"
"dict[str, str]" is incompatible with "ndarray[Unknown, Unknown]"
"dict[str, str]" is incompatible with "Index"
"dict[str, str]" is incompatible with "Series"
"dict[str, str]" is incompatible with protocol "SequenceNotStr[Unknown]"
"index" is not present
"count" is not present
... (reportArgumentType)
1 error, 0 warnings, 0 informations
```
### Feature Description
Change type annotations for columns to accept dict/iterable type.
### Alternative Solutions
Convert the columns value to a `Series`, a numpy array, or some other compatible type.
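A minimal sketch of that workaround, assuming nothing beyond the snippet above: materialize the dict's keys into a list for the constructor, and reuse the dict itself for `astype`:

```python
from pandas import DataFrame

columns = {"A": "int64", "B": "int32", "C": "int16"}
# list(columns) yields the keys as a plain sequence, which type-checks.
df = DataFrame([[1, 2, 3], [4, 5, 6]], columns=list(columns))
df = df.astype(columns)
print(df.dtypes)
```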
<!--
**Update**: The list issue has been fixed by [changing the index signature](https://github.com/pandas-dev/pandas/commit/486b44078135a3a2d69a4d544cfec7ad3f5a94fa#diff-80af4c7d6ece8cfeec6401a5f5babe27f19a7fc7476ad9148dca02f58a07a8dcL140).
interestingly, pyright raises error even for a list of strings (`columns=[*columns]`):
```
error: Argument of type "list[str]" cannot be assigned to parameter "columns" of type "Axes | None" in function "__init__"
Type "list[str]" cannot be assigned to type "Axes | None"
"list[str]" is incompatible with "ExtensionArray"
"list[str]" is incompatible with "ndarray[Unknown, Unknown]"
"list[str]" is incompatible with "Index"
"list[str]" is incompatible with "Series"
"list[str]" is incompatible with protocol "SequenceNotStr[Unknown]"
"index" is an incompatible type
Type "(__value: str, __start: SupportsIndex = 0, __stop: SupportsIndex = sys.maxsize, /) -> int" cannot be assigned to type "(value: Any, /, start: int = 0, stop: int = ...) -> int"
... (reportArgumentType)
1 error, 0 warnings, 0 informations
```
-->
### Additional Context
_No response_
|
[
"Enhancement",
"Typing"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"do you have `pandas-stubs` installed?\r\n\r\nthis only reproduces without `pandas-stubs` for me",
"> do you have `pandas-stubs` installed?\r\n> \r\n> this only reproduces without `pandas-stubs` for me\r\n\r\nI don't have pandas-stubs. You are right, works fine with stubs. I still think this should not raise error even without stubs given that pandas-stubs describes itself as \"narrower than what is possibly allowed by pandas\".",
"Would need to adjust\r\n\r\nhttps://github.com/pandas-dev/pandas/blob/10f31f6a242fb01fdf37f5db2e8c6f4f82f5af16/pandas/core/frame.py#L681\r\n\r\nto include `Mapping`.\r\n",
"It is completely conicidential that this works and nobody should rely on it. Closing",
"> It is completely conicidential that this works and nobody should rely on it. Closing\r\n\r\n@Dr-Irv pandas-stubs currently declares that any `dict` would be okay",
"> It is completely conicidential that this works and nobody should rely on it. Closing\r\n\r\n@phofl So should we test for a `dict` and then raise an Exception?",
"> > It is completely conicidential that this works and nobody should rely on it. Closing\r\n> \r\n> @Dr-Irv pandas-stubs currently declares that any `dict` would be okay\r\n\r\nI checked via Blame and I put that in there at some point without a test. I just tested removing `dict` from `Axes` in pandas-stubs and it works fine, so maybe we should remove it??\r\n\r\n",
"\r\n> I don't have pandas-stubs. You are right, works fine with stubs. I still think this should not raise error even without stubs given that pandas-stubs describes itself as \"narrower than what is possibly allowed by pandas\".\r\n\r\nYes, but the typing in the pandas source is much wider than what is actually allowed by pandas. Also, type checking is faster using the stubs than using the pandas source.\r\n\r\n"
] |
2,177,760,359
| 57,797
|
DOC: Remove RT03 docstring errors for selected methods
|
closed
| 2024-03-10T15:03:33
| 2024-03-10T22:44:09
| 2024-03-10T22:44:03
|
https://github.com/pandas-dev/pandas/pull/57797
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57797
|
https://github.com/pandas-dev/pandas/pull/57797
|
bergnerjonas
| 1
|
Resolve all RT03 errors for the following cases:
scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.DataFrame.pop
scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.DataFrame.reindex
scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.DataFrame.reorder_levels
scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.DataFrame.swapaxes - deprecated in favor of .transpose, which already has valid docstring
scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.DataFrame.to_numpy
scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.DataFrame.to_orc
xref DOC: fix RT03 errors in docstrings #57416
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @bergnerjonas "
] |
2,177,738,191
| 57,796
|
Fix issue with Tempita recompilation
|
closed
| 2024-03-10T14:10:26
| 2024-03-18T23:09:59
| 2024-03-13T23:29:49
|
https://github.com/pandas-dev/pandas/pull/57796
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57796
|
https://github.com/pandas-dev/pandas/pull/57796
|
WillAyd
| 4
|
xref https://github.com/mesonbuild/meson-python/issues/589#issuecomment-1987206217 @rhshadrach I think you mentioned this originally on Slack
|
[
"Build"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@lithomas1 - thought you might want to review.",
"No objections here - I think adding the files as sources was originally a hack to workaround a bug in meson.",
"Thanks @WillAyd ",
"The more I have used this the less I am sure that it actually fixed the issue. Whenever I edit a tempita file now it does _not_ trigger a recompilation as it should"
] |
2,177,622,239
| 57,795
|
Fix some typing errors
|
closed
| 2024-03-10T09:26:50
| 2024-03-11T05:40:23
| 2024-03-10T22:43:11
|
https://github.com/pandas-dev/pandas/pull/57795
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57795
|
https://github.com/pandas-dev/pandas/pull/57795
|
tqa236
| 1
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Typing"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@twoertwein thank you for your review. I addressed all the comments."
] |
2,177,595,898
| 57,794
|
DEPR: rename `startingMonth` to `starting_month` (argument in BQuarterBegin)
|
open
| 2024-03-10T08:13:29
| 2024-04-01T17:24:38
| null |
https://github.com/pandas-dev/pandas/issues/57794
| true
| null | null |
MarcoGorelli
| 1
|
Renamings should be done with care, but this one strikes me as especially odd
```python
pandas.tseries.offsets.QuarterBegin(startingMonth=1)
```
It looks very odd in Python to have a camelCase argument name... I thought this was probably a typo in the docs when I saw it, but no, it runs.
OK with deprecating in favour of `starting_month`? A possible shim is sketched after the grep output below.
I think this is the only place in pandas where this happens:
```console
$ git grep -E ' [a-z]+[A-Z][a-z]+: ' pandas
pandas/_libs/tslibs/offsets.pyi: self, n: int = ..., normalize: bool = ..., startingMonth: int | None = ...
pandas/_libs/tslibs/offsets.pyi: startingMonth: int = ...,
pandas/_libs/tslibs/offsets.pyi: startingMonth: int = ...,
$ git grep -E ' [a-z]+[A-Z][a-z]+ : ' pandas
pandas/_libs/tslibs/offsets.pyx: startingMonth : int, default 3
pandas/_libs/tslibs/offsets.pyx: startingMonth : int, default 3
pandas/_libs/tslibs/offsets.pyx: startingMonth : int, default 3
pandas/_libs/tslibs/offsets.pyx: startingMonth : int, default 3
pandas/_libs/tslibs/offsets.pyx: startingMonth : int {1, 2, ... 12}, default 1
pandas/_libs/tslibs/offsets.pyx: startingMonth : int {1, 2, ..., 12}, default 1
```
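A hypothetical sketch of what the deprecation shim could look like; the helper name and placement are illustrative only, not actual pandas internals:

```python
import warnings

def _resolve_starting_month(starting_month=None, startingMonth=None, default=3):
    # Accept the legacy camelCase keyword for one deprecation cycle.
    if startingMonth is not None:
        warnings.warn(
            "'startingMonth' is deprecated; use 'starting_month' instead",
            FutureWarning,
            stacklevel=2,
        )
        return startingMonth
    return default if starting_month is None else starting_month
```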
|
[
"Frequency",
"Deprecate"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"take"
] |
2,177,457,088
| 57,793
|
CLN: remove deprecated strings 'BA', 'BAS', 'AS' denoting frequencies for timeseries
|
closed
| 2024-03-10T00:15:59
| 2024-03-11T14:56:03
| 2024-03-11T14:56:02
|
https://github.com/pandas-dev/pandas/pull/57793
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57793
|
https://github.com/pandas-dev/pandas/pull/57793
|
natmokval
| 0
|
xref #55479
remove deprecated strings `'BA'`, `'BAS'`, `'AS'`, `'BA-DEC'`, `'BAS-DEC'`, `'AS-DEC'`, etc. denoting frequencies for timeseries
|
[
"Frequency",
"Clean"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
2,177,448,416
| 57,792
|
BUG: read_csv inconsistent behavior
|
closed
| 2024-03-09T23:44:19
| 2024-04-01T18:42:56
| 2024-04-01T18:42:55
|
https://github.com/pandas-dev/pandas/issues/57792
| true
| null | null |
fgr1986
| 7
|
### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
from io import StringIO
csv_data = """0,1.992422433360555e-06,0,1.992422433374335e-06,0,0,0,0.01867750880504259,0,0.8,0,0.002722413402762714,0,0.002722413278797551
3e-12,1.992418292306889e-06,3e-12,1.992418292319521e-06,3e-12,0,3e-12,0.01867750887016915,3e-12,0.8,3e-12,0.002722413686257528,3e-12,0.002722413562292257
9.000000000000001e-12,1.992416874989637e-06,9.000000000000001e-12,1.992416875007223e-06,9.000000000000001e-12,0,9.000000000000001e-12,0.01867750900743637,9.000000000000001e-12,0.8,9.000000000000001e-12,0.00272241426436429,9.000000000000001e-12,0.002722414140398993"""
# Use StringIO to create a file-like object from the string
csv_data_io1 = StringIO(csv_data)
csv_data_io2 = StringIO(csv_data)
# Use pandas.read_csv to read from the file-like object
df1 = pd.read_csv(csv_data_io1)
df2 = pd.read_csv(csv_data_io2, header=None)
# assertions
print(df1.columns.tolist())
print(df2.iloc[0].tolist())
```
### Issue Description
Columns are incorrectly read from CSV
### Expected Behavior
df1.columns.tolist() being the same as df2.iloc[0].tolist()
i.e.: [0.0, 1.992422433360555e-06, 0.0, 1.992422433374335e-06, 0.0, 0.0, 0.0, 0.0186775088050425, 0.0, 0.8, 0.0, 0.0027224134027627, 0.0, 0.0027224132787975]
However, it is:
['0', '1.992422433360555e-06', '0.1', '1.992422433374335e-06', '0.2', '0.3', '0.4', '0.01867750880504259', '0.5', '0.8', '0.6', '0.002722413402762714', '0.7', '0.002722413278797551']
which is not any row in the file
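For context (see the triage comments below), the suffixes come from pandas de-duplicating repeated header labels; a minimal sketch:

```python
import pandas as pd
from io import StringIO

# With the default header=0, repeated labels in the first row are
# de-duplicated by appending ".1", ".2", ...
print(pd.read_csv(StringIO("0,0,0\n1,2,3")).columns.tolist())
# ['0', '0.1', '0.2']
```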
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : bdc79c146c2e32f2cab629be240f01658cfb6cc2
python : 3.9.7.final.0
python-bits : 64
OS : Linux
OS-release : 5.15.133.1-microsoft-standard-WSL2
Version : #1 SMP Thu Oct 5 21:02:42 UTC 2023
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.1
numpy : 1.26.4
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 68.2.2
pip : 23.3.1
Cython : None
pytest : 7.4.1
hypothesis : None
sphinx : 5.0.2
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.9.3
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.3
IPython : 8.15.0
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.2
bottleneck : 1.3.7
dataframe-api-compat : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : 3.8.0
numba : None
numexpr : 2.8.7
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
pyarrow : None
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.11.4
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None
</details>
|
[
"Docs",
"IO CSV",
"good first issue"
] | 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"I believe in your df1, pandas has used the first line as the header. However, headers should be distinct and hence pandas has added the \".1\", \".2\", \".3\", etc. to all the headers that are the same, i.e. 0, to differentiate them.",
"Correct @wleong1 - I think this behavior should be documented in the docstring of read_csv for the header argument.",
"Noted, thanks!",
"Is anybody working on that? If Not, I could help here.",
"Hi @rhshadrach \r\n\r\nI noticed that this issue is currently unassigned, and I'm interested in working on it. Could you please assign it to me? I believe I can contribute to document in the docstring of read_csv for the header argument (as per you mentioned).\r\n\r\nThanks!",
"is this still open for contribution , im new to open source please be kind ?",
"> is this still open for contribution , im new to open source please be kind ?\r\n\r\n@Dxuian if you noticed above your message, it says that a PR has already been created by @quangngd "
] |
2,177,415,929
| 57,791
|
Migrate ruff config to the latest format
|
closed
| 2024-03-09T21:49:25
| 2024-03-17T07:56:12
| 2024-03-09T22:19:03
|
https://github.com/pandas-dev/pandas/pull/57791
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57791
|
https://github.com/pandas-dev/pandas/pull/57791
|
tqa236
| 1
|
Resolves these warnings (visible when running `ruff` directly) and fixes some formatting errors discovered with the new config, which should arguably be equivalent to the old one.
```bash
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section. Please update the following options in `pyproject.toml`:
- 'ignore' -> 'lint.ignore'
- 'select' -> 'lint.select'
- 'typing-modules' -> 'lint.typing-modules'
- 'unfixable' -> 'lint.unfixable'
- 'per-file-ignores' -> 'lint.per-file-ignores'
warning: `PGH001` has been remapped to `S307`.
```
|
[
"Code Style"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @tqa236 "
] |
2,177,354,563
| 57,790
|
Updated the pandas.DatetimeIndex.day_name and pandas.DatetimeIndex.month_name docstring
|
closed
| 2024-03-09T18:50:12
| 2024-03-21T00:09:56
| 2024-03-20T17:26:26
|
https://github.com/pandas-dev/pandas/pull/57790
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57790
|
https://github.com/pandas-dev/pandas/pull/57790
|
pmhatre1
| 2
|
- closes #57111 partially
- All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@mroeschke not sure why the Docstring validation is failing for a particular locale. However when I tested it locally using the command,scripts/validate_docstrings.py --format=actions --errors=PR02 pandas.DatetimeIndex.month_name\r\nit passed. Any suggestions?\r\n",
"Thanks for your contribution here and sorry for not responding earlier, but it appears this issue has already been addressed so closing. Happy to have your contributions toward other open issues"
] |
2,177,091,933
| 57,789
|
Small refactoring
|
closed
| 2024-03-09T07:01:18
| 2024-03-09T21:13:33
| 2024-03-09T19:51:00
|
https://github.com/pandas-dev/pandas/pull/57789
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57789
|
https://github.com/pandas-dev/pandas/pull/57789
|
tqa236
| 1
|
- [x] closes #52229
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Refactor"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @tqa236 "
] |
2,177,028,218
| 57,788
|
API: Revert 57042 - MultiIndex.names|codes|levels returns tuples
|
closed
| 2024-03-09T03:18:34
| 2024-04-11T21:31:05
| 2024-04-11T15:38:21
|
https://github.com/pandas-dev/pandas/pull/57788
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57788
|
https://github.com/pandas-dev/pandas/pull/57788
|
rhshadrach
| 6
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Manual revert of #57042
Closes #57607
The behavior described in that issue seems quite undesirable, especially for a breaking change.
cc @mroeschke
|
[
"MultiIndex"
] | 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"I would be OK reverting this but would like:\r\n\r\n1. To expose `FrozenList` in `pandas.api.typing`\r\n2. Deprecate the `union` and `difference` methods",
"> I would be OK reverting this but would like:\r\n> \r\n> 1. To expose `FrozenList` in `pandas.api.typing`\r\n> 2. Deprecate the `union` and `difference` methods\r\n\r\nOkay with doing this in a follow up? (I likely won't be able to return to this for ~1 week in any case)",
"> Okay with doing this in a follow up? (I likely won't be able to return to this for ~1 week in any case)\r\n\r\nYup that's good",
"> Deprecate the union and difference methods\r\n\r\nwhat's the reasoning for those deprecations?",
"> what's the reasoning for those deprecations?\r\n\r\nMainly cleanup. IMO if we keep `FrozenList` and make it available as a public API, that API should be minimal",
"Thanks @rhshadrach "
] |
2,176,932,349
| 57,787
|
PERF: Categorical(range).categories returns RangeIndex instead of Index
|
closed
| 2024-03-08T23:48:13
| 2024-03-11T04:32:33
| 2024-03-11T03:18:09
|
https://github.com/pandas-dev/pandas/pull/57787
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57787
|
https://github.com/pandas-dev/pandas/pull/57787
|
mroeschke
| 1
| null |
[
"Performance",
"Categorical",
"Index"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @mroeschke "
] |
2,176,910,155
| 57,786
|
PERF: Allow ensure_index_from_sequence to return RangeIndex
|
closed
| 2024-03-08T23:15:11
| 2024-03-19T20:37:37
| 2024-03-19T20:01:32
|
https://github.com/pandas-dev/pandas/pull/57786
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57786
|
https://github.com/pandas-dev/pandas/pull/57786
|
mroeschke
| 2
|
Discovered in https://github.com/pandas-dev/pandas/pull/57441
Builds on https://github.com/pandas-dev/pandas/pull/57752
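A hypothetical illustration of the payoff, not the internal API itself: a sequence that forms a consecutive range can be represented as a `RangeIndex` instead of a materialized `Index`:

```python
import pandas as pd

idx = pd.Index([0, 1, 2, 3])           # materialized integer index
rng = pd.RangeIndex(start=0, stop=4)   # equivalent, but O(1) storage
print(idx.equals(rng))  # True
```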
|
[
"Performance",
"Index"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Any other comments here @WillAyd?",
"Nope all good. And thanks for the ping - sorry sometimes fall behind on notifications"
] |
2,176,832,774
| 57,785
|
DOC: Resolve RT03 errors in several methods #2
|
closed
| 2024-03-08T21:54:56
| 2024-03-08T23:50:38
| 2024-03-08T23:50:31
|
https://github.com/pandas-dev/pandas/pull/57785
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57785
|
https://github.com/pandas-dev/pandas/pull/57785
|
bergnerjonas
| 1
|
Resolve all RT03 errors for the following cases:
scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.DataFrame.nsmallest
scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.DataFrame.nunique
scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.DataFrame.pipe
scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.DataFrame.plot.box
scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.DataFrame.plot.density
scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.DataFrame.plot.kde
scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.DataFrame.plot.scatter
- xref DOC: fix RT03 errors in docstrings #57416
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @bergnerjonas "
] |
2,176,722,102
| 57,784
|
Backport PR #57780 on branch 2.2.x (COMPAT: Adapt to Numpy 2.0 dtype changes)
|
closed
| 2024-03-08T20:28:21
| 2024-03-08T21:36:00
| 2024-03-08T21:36:00
|
https://github.com/pandas-dev/pandas/pull/57784
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57784
|
https://github.com/pandas-dev/pandas/pull/57784
|
meeseeksmachine
| 0
|
Backport PR #57780: COMPAT: Adapt to Numpy 2.0 dtype changes
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
2,176,593,127
| 57,783
|
Fix SparseDtype comparison
|
closed
| 2024-03-08T18:49:49
| 2024-03-09T21:13:09
| 2024-03-09T20:09:43
|
https://github.com/pandas-dev/pandas/pull/57783
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57783
|
https://github.com/pandas-dev/pandas/pull/57783
|
tqa236
| 2
|
- [x] closes #54770
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Sparse"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thank you for your review. I added a whatsnew note.",
"Thanks @tqa236 "
] |
2,176,538,074
| 57,782
|
DOC: Resolve RT03 errors for selected methods
|
closed
| 2024-03-08T18:15:20
| 2024-03-08T22:47:32
| 2024-03-08T22:47:25
|
https://github.com/pandas-dev/pandas/pull/57782
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57782
|
https://github.com/pandas-dev/pandas/pull/57782
|
bergnerjonas
| 2
|
Resolve all RT03 errors for the following cases:
scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.DataFrame.expanding
scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.DataFrame.filter
scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.DataFrame.first_valid_index
scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.DataFrame.last_valid_index
scripts/validate_docstrings.py --format=actions --errors=RT03 pandas.DataFrame.get
- xref DOC: fix RT03 errors in docstrings #57416
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Any idea why the unit tests / Numpy Dev failed? I only changed things in the docstrings not in the code itself.",
"Thanks @bergnerjonas "
] |
2,176,500,721
| 57,781
|
BUG: timedelta.round fails to round to nearest day
|
open
| 2024-03-08T17:48:05
| 2025-07-15T21:01:46
| null |
https://github.com/pandas-dev/pandas/issues/57781
| true
| null | null |
jbogaardt
| 5
|
### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [ ] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
It seems like timedelta rounding broke with the 2.2.0 release, or the naming convention for rounding frequency is not properly documented.
### Issue Description
Under pandas>=2.2.0, I cannot get any rounding to occur whether I use 'D', 'd', 'day', or 'days':
```python
>>> import pandas as pd
>>> print(pd.__version__)
2.2.1
>>> pd.Series([
... pd.Timedelta('1 days 23:18:00'),
... pd.Timedelta('2 days 07:00:00')]).round('d')
0 1 days 23:18:00
1 2 days 07:00:00
dtype: timedelta64[ns]
```
### Expected Behavior
Before 2.2.0 timedeltas rounded to the nearest day appropriately:
```python
>>> import pandas as pd
>>> print(pd.__version__)
2.1.4
>>> pd.Series([
... pd.Timedelta('1 days 23:18:00'),
... pd.Timedelta('2 days 07:00:00')]).round('d')
0 2 days
1 2 days
dtype: timedelta64[ns]
```
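As noted in the comments below, `Series.dt.round` still rounds as expected and can serve as a stopgap; a minimal sketch:

```python
import pandas as pd

s = pd.Series([pd.Timedelta("1 days 23:18:00"), pd.Timedelta("2 days 07:00:00")])
print(s.dt.round("d"))
# 0   2 days
# 1   2 days
# dtype: timedelta64[ns]
```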
### Installed Versions
INSTALLED VERSIONS
------------------
commit : bdc79c146c2e32f2cab629be240f01658cfb6cc2
python : 3.11.5.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : Intel64 Family 6 Model 126 Stepping 5, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : English_United States.1252
pandas : 2.2.1
numpy : 1.24.3
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 68.0.0
pip : 23.3
Cython : None
pytest : 7.4.0
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.9.3
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.15.0
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : 3.8.0
numba : 0.58.0
numexpr : 2.8.7
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 14.0.1
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.11.3
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None
|
[
"Bug",
"Timedelta",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Tried to reproduce. Your example for .round on a Series fails for me as well, however rounding single timedeltas (e.g. `pd.Timedelta('1 days 23:18:00').round('d')`) works fine.",
"Thanks for the report! Result of a git bisect:\r\n\r\n```\r\ncommit 6dbeeb4009bbfac5ea1ae2111346f5e9f05b81f4\r\nAuthor: Lumberbot (aka Jack)\r\nDate: Mon Jan 8 23:24:22 2024 +0100\r\n\r\n Backport PR #56767 on branch 2.2.x (BUG: Series.round raising for nullable bool dtype) (#56782)\r\n \r\n Backport PR #56767: BUG: Series.round raising for nullable bool dtype\r\n \r\n Co-authored-by: Patrick Hoefler\r\n```\r\n\r\ncc @phofl\r\n\r\nNote that `ser.dt.round('d')` still works. I don't believe that `Series.round` having special logic for different dtypes is documented anywhere.",
"Yeah this is a bit weird, DataFrame.round never worked for those, Series.round now follows this behavior. We can either make both work or keep as is",
"Making this work for DataFrame.round seems undesirable - numeric columns would take the number of digits as an argument whereas timedelta/datetime would take a frequency. I'd lean toward keeping this as-is.",
"I have encountered the same issue, but then with datetimes. And as @rhshadrach already mentioned, it matters if you use `.round()` or `.dt.round()`:\r\n```python\r\nimport pandas as pd\r\ndtindex = pd.date_range(\"2010-01-01\", \"2010-01-01 12:00\", periods=3)\r\ndf = pd.DataFrame()\r\ndf[\"datetimes\"] = dtindex\r\ndf[\"round_1\"] = df[\"datetimes\"].round('d') # this does not round in pandas 2.2.0, it did in pandas<=2.1.4\r\ndf[\"round_2\"] = df[\"datetimes\"].dt.round('d')\r\nprint(df)\r\n```\r\n\r\nThese two rounding methods produce different results with `pandas>=2.2.0`:\r\n```\r\n datetimes round_1 round_2\r\n0 2010-01-01 00:00:00 2010-01-01 00:00:00 2010-01-01\r\n1 2010-01-01 06:00:00 2010-01-01 06:00:00 2010-01-01\r\n2 2010-01-01 12:00:00 2010-01-01 12:00:00 2010-01-01\r\n```\r\n\r\nIn `pandas<=2.1.4` the two rounding methods produce the same results:\r\n```\r\n datetimes round_1 round_2\r\n0 2010-01-01 00:00:00 2010-01-01 2010-01-01\r\n1 2010-01-01 06:00:00 2010-01-01 2010-01-01\r\n2 2010-01-01 12:00:00 2010-01-01 2010-01-01\r\n```"
] |
2,176,481,125
| 57,780
|
COMPAT: Adapt to Numpy 2.0 dtype changes
|
closed
| 2024-03-08T17:34:01
| 2024-03-08T20:49:47
| 2024-03-08T20:27:52
|
https://github.com/pandas-dev/pandas/pull/57780
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57780
|
https://github.com/pandas-dev/pandas/pull/57780
|
seberg
| 2
|
Based on Thomas Li's work, but I wanted to try it and see that this is right.
Replaces gh-13466.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@lithomas1 this needs backport right?",
"Mhm"
] |
2,175,936,918
| 57,779
|
Fix rank method with nullable int
|
closed
| 2024-03-08T12:23:59
| 2024-03-17T07:53:37
| 2024-03-08T21:37:04
|
https://github.com/pandas-dev/pandas/pull/57779
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57779
|
https://github.com/pandas-dev/pandas/pull/57779
|
tqa236
| 2
|
We can't pass only `self._values_for_argsort()` to `rank` as the missing data info will be lost. Fortunately, passing `self` directly to `rank` seems to work just fine.
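A minimal sketch of the behavior this preserves, assumed from the description above: ranking a nullable integer Series must keep `<NA>` positions missing rather than ranking a filled sentinel value:

```python
import pandas as pd

s = pd.Series([3, 1, pd.NA, 2], dtype="Int64")
# The <NA> position should stay missing in the result instead of being
# ranked as if it were a sentinel fill value.
print(s.rank())
```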
- [x] closes #56976
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"NA - MaskedArrays",
"Reduction Operations"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thank you for the review. I added the note.",
"Great thanks @tqa236 "
] |
2,175,400,926
| 57,777
|
DOC: Clarify doc for converting timestamps to epoch
|
closed
| 2024-03-08T07:00:50
| 2024-04-23T19:01:22
| 2024-04-23T18:59:53
|
https://github.com/pandas-dev/pandas/pull/57777
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57777
|
https://github.com/pandas-dev/pandas/pull/57777
|
tqa236
| 3
|
- [x] closes #56343
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs",
"Stale"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Sorry, I was not meant to close this PR. I closed it by accident when cleaning up my fork",
"This pull request is stale because it has been open for thirty days with no activity. Please [update](https://pandas.pydata.org/pandas-docs/stable/development/contributing.html#updating-your-pull-request) and respond to this comment if you're still interested in working on this.",
"Thanks for the pull request, but it appears to have gone stale. Additionally it seems like more discussion is needed in the issue on what we actually want to include in the docs so closing for now"
] |
2,175,109,684
| 57,776
|
DOC: Fix description for pd.concat sort argument
|
closed
| 2024-03-08T01:28:07
| 2024-03-08T22:25:57
| 2024-03-08T17:52:32
|
https://github.com/pandas-dev/pandas/pull/57776
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57776
|
https://github.com/pandas-dev/pandas/pull/57776
|
Wikilicious
| 1
|
- [x] closes #57753
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @Wikilicious "
] |
2,175,056,455
| 57,775
|
BUG: groupby.apply when func always returns None returns empty dataframe
|
closed
| 2024-03-08T00:24:14
| 2024-03-12T17:27:27
| 2024-03-12T17:27:26
|
https://github.com/pandas-dev/pandas/issues/57775
| true
| null | null |
mvashishtha
| 3
|
### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
print(pd.DataFrame(['a']).groupby(0).apply(lambda x: None))
```
### Issue Description
Returns an empty dataframe with no columns or index
### Expected Behavior
Should return something like
```python
0
a None
dtype: object
```
### Installed Versions
<details>
```
INSTALLED VERSIONS
------------------
commit : bdc79c146c2e32f2cab629be240f01658cfb6cc2
python : 3.9.18.final.0
python-bits : 64
OS : Darwin
OS-release : 23.3.0
Version : Darwin Kernel Version 23.3.0: Wed Dec 20 21:31:00 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6020
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.1
numpy : 1.26.3
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 68.2.2
pip : 23.3.1
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 8.18.1
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.4
qtpy : None
pyqt5 : None
```
</details>
|
[
"Bug",
"Docs",
"Groupby",
"Apply"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Was this ever the case? Admittedly I've never done something like this so I just don't know. Been a user since 0.23, I can't think of a time I've used a function like this. Docs are unclear if its special cased or not.",
"Agreed this should be added to the documentation in the `Notes` section of apply. This is to allow certain groups to be filtered.\r\n\r\n```\r\ndf = pd.DataFrame({\"a\": [1, 2, 3], \"b\": [4, 5, 6]})\r\ngb = df.groupby(\"a\")\r\n\r\ndef foo(x):\r\n if x.iloc[0, 0] == 5:\r\n return None\r\n else:\r\n return x**2\r\n\r\nprint(gb.apply(foo, include_groups=False))\r\n# b\r\n# a \r\n# 1 0 16\r\n# 3 2 36\r\n```\r\n\r\nStill - in the OP case we should return a DataFrame with the same columns and dtypes as the input. So labeling this both doc and bug.",
"take"
] |
2,175,023,963
| 57,774
|
COMPAT: Adapt to Numpy 2.0 dtype changes
|
closed
| 2024-03-07T23:48:55
| 2024-03-08T18:43:05
| 2024-03-08T18:43:04
|
https://github.com/pandas-dev/pandas/pull/57774
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/57774
|
https://github.com/pandas-dev/pandas/pull/57774
|
lithomas1
| 16
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Compat"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"cc @seberg for any objections to the approach here.\r\n\r\n(I hope we aren't abusing numpy internals here).",
"I think the [migration guide](https://numpy.org/devdocs/numpy_2_0_migration_guide.html#c-api-changes) suggests using `#include <numpy/npy_2_compat.h>` and then the code change would use this macro, which includes any legacy/new dtype mangling. \r\n```diff\r\n- return (((PyArray_DatetimeDTypeMetaData *)dtype->c_metadata)->meta);\r\n+ return PyDataType_C_METADATA(dtype);\r\n```\r\n\r\nBut what you have should also work.",
"This is OK. I think slightly nicer would be to define `#define PyDataType_C_METADATA(descr) ((descr)->c_metadata)` instead and use that instead.\r\n(That way you also don't need to vendor `npy2_compat.h` which is a bit much for just this purpose.\r\n\r\nI thought there would be an `elsize` use in the json reader, but nice if not!",
"> I think slightly nicer would be to define ...\r\n\r\nso the complete diff would be like this?\r\n```diff\r\n- return (((PyArray_DatetimeDTypeMetaData *)dtype->c_metadata)->meta);\r\n+ #if NPY_ABI_VERSION < 0x02000000\r\n+ #define PyDataType_C_METADATA(descr) ((descr)->c_metadata)\r\n+ #endif\r\n+ return (PyDataType_C_METADATA(descr)->meta);\r\n```",
"Huh, ``PyDataType_C_METADATA`` doesn't seem to exist in the ``ndarraytypes.h`` include file.\r\n(I think that's why I copied the legacydtype thing from numpy's datetime.c file yesterday).\r\n\r\nDoes it need to be exposed on the numpy side, or am I just missing something obvious?",
"Maybe its something to do with the static inline declaration?",
"Hummmm, weird. Any chance there is some caching involved in CI and it is using an older NumPy wheel?",
"I had the same issue locally earlier today, and that was not because of caching I think (I tried with removing my pandas build directory multiple times)",
"No, I have it locally too\r\n\r\nHere's what the header looks like (ndarraytypes.h)\r\n\r\n```\r\n#define PyDataType_ISLEGACY(dtype) ((dtype)->type_num < NPY_VSTRING && ((dtype)->type_num >= 0))\r\n#define PyDataType_ISBOOL(obj) PyTypeNum_ISBOOL(((PyArray_Descr*)(obj))->type_num)\r\n#define PyDataType_ISUNSIGNED(obj) PyTypeNum_ISUNSIGNED(((PyArray_Descr*)(obj))->type_num)\r\n#define PyDataType_ISSIGNED(obj) PyTypeNum_ISSIGNED(((PyArray_Descr*)(obj))->type_num)\r\n#define PyDataType_ISINTEGER(obj) PyTypeNum_ISINTEGER(((PyArray_Descr*)(obj))->type_num )\r\n#define PyDataType_ISFLOAT(obj) PyTypeNum_ISFLOAT(((PyArray_Descr*)(obj))->type_num)\r\n#define PyDataType_ISNUMBER(obj) PyTypeNum_ISNUMBER(((PyArray_Descr*)(obj))->type_num)\r\n#define PyDataType_ISSTRING(obj) PyTypeNum_ISSTRING(((PyArray_Descr*)(obj))->type_num)\r\n#define PyDataType_ISCOMPLEX(obj) PyTypeNum_ISCOMPLEX(((PyArray_Descr*)(obj))->type_num)\r\n#define PyDataType_ISFLEXIBLE(obj) PyTypeNum_ISFLEXIBLE(((PyArray_Descr*)(obj))->type_num)\r\n#define PyDataType_ISDATETIME(obj) PyTypeNum_ISDATETIME(((PyArray_Descr*)(obj))->type_num)\r\n#define PyDataType_ISUSERDEF(obj) PyTypeNum_ISUSERDEF(((PyArray_Descr*)(obj))->type_num)\r\n#define PyDataType_ISEXTENDED(obj) PyTypeNum_ISEXTENDED(((PyArray_Descr*)(obj))->type_num)\r\n#define PyDataType_ISOBJECT(obj) PyTypeNum_ISOBJECT(((PyArray_Descr*)(obj))->type_num)\r\n#define PyDataType_MAKEUNSIZED(dtype) ((dtype)->elsize = 0)\r\n/*\r\n * PyDataType_* FLAGS, FLACHK, REFCHK, HASFIELDS, HASSUBARRAY, UNSIZED,\r\n * SUBARRAY, NAMES, FIELDS, C_METADATA, and METADATA require version specific\r\n * lookup and are defined in npy_2_compat.h.\r\n */\r\n\r\n\r\n#define PyArray_ISBOOL(obj) PyTypeNum_ISBOOL(PyArray_TYPE(obj))\r\n#define PyArray_ISUNSIGNED(obj) PyTypeNum_ISUNSIGNED(PyArray_TYPE(obj))\r\n#define PyArray_ISSIGNED(obj) PyTypeNum_ISSIGNED(PyArray_TYPE(obj))\r\n#define PyArray_ISINTEGER(obj) PyTypeNum_ISINTEGER(PyArray_TYPE(obj))\r\n```",
"The whitespace is where the definition of the static things should be, but they're just gone.\r\n\r\nWe still have macros like ``PyDataType_ISLEGACY`` that you added in https://github.com/numpy/numpy/pull/25802 though.",
"And I was certainly using an up to date numpy, because wasn't using a nightly wheel but locally built from latest main. For the changes in pyarrow, I did _not_ run into that error. I was trying to see the difference between both, but couldn't directly see anything.",
"Sorry I hadn't noticed: the definitions are moved, and you need `#include \"numpy/ndarrayobject.h\"` and import the NumPy API. I had added a new import helper to just do that locally as a quick fix if needed, but that requires copying in `npy2_compat.h`.\r\n\r\nI'll have a look at it.",
"Sorry, I'll need a bit more time to set up the env for NumPy 2, but this diff should be right, I think. The annoying part is that we need to use `import_array()`, but it seems there is a place for this already here.\r\n\r\n<details>\r\n\r\n```diff\r\ndiff --git a/pandas/_libs/src/datetime/pd_datetime.c b/pandas/_libs/src/datetime/pd_datetime.c\r\nindex 19de51be6e..f778d06607 100644\r\n--- a/pandas/_libs/src/datetime/pd_datetime.c\r\n+++ b/pandas/_libs/src/datetime/pd_datetime.c\r\n@@ -20,6 +20,8 @@ This file is derived from NumPy 1.7. See NUMPY_LICENSE.txt\r\n #include <Python.h>\r\n \r\n #include \"datetime.h\"\r\n+/* Need to import_array for np_datetime.c (for NumPy 1.x support) */\r\n+#include \"numpy/ndarrayobject.h\"\r\n #include \"pandas/datetime/pd_datetime.h\"\r\n #include \"pandas/portable.h\"\r\n \r\n@@ -255,5 +257,6 @@ static struct PyModuleDef pandas_datetimemodule = {\r\n \r\n PyMODINIT_FUNC PyInit_pandas_datetime(void) {\r\n PyDateTime_IMPORT;\r\n+ import_array();\r\n return PyModuleDef_Init(&pandas_datetimemodule);\r\n }\r\ndiff --git a/pandas/_libs/src/vendored/numpy/datetime/np_datetime.c b/pandas/_libs/src/vendored/numpy/datetime/np_datetime.c\r\nindex 277d01807f..d2853b4936 100644\r\n--- a/pandas/_libs/src/vendored/numpy/datetime/np_datetime.c\r\n+++ b/pandas/_libs/src/vendored/numpy/datetime/np_datetime.c\r\n@@ -16,7 +16,7 @@ This file is derived from NumPy 1.7. See NUMPY_LICENSE.txt\r\n \r\n // Licence at LICENSES/NUMPY_LICENSE\r\n \r\n-#define NO_IMPORT\r\n+#define NO_IMPORT_ARRAY\r\n \r\n #ifndef NPY_NO_DEPRECATED_API\r\n #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION\r\n@@ -25,7 +25,7 @@ This file is derived from NumPy 1.7. See NUMPY_LICENSE.txt\r\n #include <Python.h>\r\n \r\n #include \"pandas/vendored/numpy/datetime/np_datetime.h\"\r\n-#include <numpy/ndarraytypes.h>\r\n+#include <numpy/ndarrayobject.h>\r\n #include <numpy/npy_common.h>\r\n \r\n #if defined(_WIN32)\r\n```\r\n",
"> Sorry, I'll need a bit more time to set up the env for NumPy 2, but this diff should be right, I think. The annoying part is that we need to use `import_array()`, but it seems there is a place for this already here.\r\n\r\nThanks, I'll test this out.\r\n\r\nThinking out loud, maybe a simpler way could just be to use the newly exposed ``datetime.c`` functions from here https://github.com/numpy/numpy/pull/21199/files.\r\n\r\nWould there be any ABI problems with using numpy C API functions from a new version?",
"I opened gh-57780 to iterate myself quickly, probably done, but lets see what crops up next...\r\n\r\nSince they are proper API Functions, if you use them you cannot support NumPy 1.x unfortunately.",
"closing this then.\r\n\r\nThanks for helping debug this!\r\n"
] |
2,174,962,145
| 57,773
|
BUG: to_parquet on column containing StringArray objects fails
|
open
| 2024-03-07T22:50:44
| 2025-07-15T20:42:42
| null |
https://github.com/pandas-dev/pandas/issues/57773
| true
| null | null |
paavanb
| 2
|
### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
records = [
{"name": "A", "project": "foo"},
{"name": "A", "project": "bar"},
{"name": "A", "project": pd.NA},
]
df = pd.DataFrame.from_records(records).astype({"project": "string"})
agg_df = df.groupby("name").agg({"project": "unique"})
agg_df.to_parquet("test.parquet")
```
### Issue Description
It seems that when aggregating on a `"string"` type column (potentially nullable), the resulting dataframe uses `StringArray` to represent the aggregated value (here `project`). When attempting to write to parquet, this fails with the following error:
```
pyarrow.lib.ArrowInvalid: ("Could not convert <StringArray>\n['foo', 'bar', <NA>]\nLength: 3, dtype: string with type StringArray: did not recognize Python value type when inferring an Arrow data type", 'Conversion failed for column project with type object')
```
I'm not sure why this is happening, because `StringArray` clearly implements the `__arrow_array__` [protocol](https://arrow.apache.org/docs/python/extending_types.html), which should allow this conversion (illustrated below). The same failure occurs when the column is typed as `"string[pyarrow]"`.
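To make the asymmetry concrete, here is a minimal illustration (hypothetical, reusing `agg_df` from the example above): `pyarrow.array` handles the `StringArray` itself via `__arrow_array__`, but not the same array nested as a scalar inside an object-dtype column.
```python
import pandas as pd
import pyarrow as pa

arr = pd.array(["foo", "bar", pd.NA], dtype="string")

# Converting the extension array directly works: pyarrow invokes
# StringArray.__arrow_array__ and receives an Arrow string array.
pa.array(arr)

# The aggregated column, however, is object dtype whose *elements* are
# StringArrays; pyarrow sees each element as an opaque Python object
# and cannot infer an Arrow type for it.
# type(agg_df.loc["A", "project"])  # pandas.core.arrays.string_.StringArray
# pa.array(agg_df["project"])       # raises pyarrow.lib.ArrowInvalid
```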
The workaround I've been using so far is to re-cast the column to `object` before the `groupby` (see the sketch below). This yields aggregated values of type `np.ndarray`, which pyarrow can write to parquet. But it is suboptimal: we use `read_parquet` upstream, which types the column as `"string"`, so we have to remember to re-cast it every time we do this aggregation or the write fails downstream. It seems like this should just work with the column typed as `"string"`.
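A minimal sketch of that workaround, assuming the `df` from the reproducible example above:
```python
# Cast the nullable "string" column back to object before grouping, so
# the aggregated cells come out as plain numpy arrays rather than
# StringArrays, which pyarrow can convert when writing parquet.
agg_df = (
    df.astype({"project": "object"})
    .groupby("name")
    .agg({"project": "unique"})
)
agg_df.to_parquet("test.parquet")  # now succeeds
```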
### Expected Behavior
The parquet file should be written without errors.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : bdc79c146c2e32f2cab629be240f01658cfb6cc2
python : 3.9.17.final.0
python-bits : 64
OS : Darwin
OS-release : 23.3.0
Version : Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:44 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.1
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0.post0
setuptools : 58.1.0
pip : 23.0.1
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 8.18.1
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 15.0.0
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Strings",
"IO Parquet",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"take\r\n",
"Hm, this looks like Arrow is not liking having a StringArray \"scalar\" inside an object dtype column.\r\n\r\nCalling ``pyarrow.array`` on ``agg_df['project']`` fails (and seems to be the root error)\r\n\r\n```\r\n>>> pa.array(agg_df['project'])\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"pyarrow/array.pxi\", line 339, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 85, in pyarrow.lib._ndarray_to_array\r\n File \"pyarrow/error.pxi\", line 91, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Could not convert <StringArray>\r\n['foo', 'bar', <NA>]\r\nLength: 3, dtype: string with type StringArray: did not recognize Python value type when inferring an Arrow data type\r\n```\r\n\r\nTentatively labelling upstream issue, since the affected code paths go through pyarrow."
] |