Dataset columns:
- repo_name: string (length 9–75)
- topic: string (30 classes)
- issue_number: int64 (1–203k)
- title: string (length 1–976)
- body: string (length 0–254k)
- state: string (2 classes)
- created_at: string (length 20)
- updated_at: string (length 20)
- url: string (length 38–105)
- labels: list (length 0–9)
- user_login: string (length 1–39)
- comments_count: int64 (0–452)
facebookresearch/fairseq
pytorch
4,737
TypeError: cannot unpack non-iterable NoneType object
## 🐛 Bug

I hit this bug when trying to load a fairseq translation model. The same issue is still open in the fairseq GitHub repo. I tried installing the torch and torchvision packages as mentioned in https://github.com/facebookresearch/fairseq/issues/4214, but I still face the same error.

### To Reproduce

Running your [Colab tutorial](https://colab.research.google.com/github/pytorch/pytorch.github.io/blob/master/assets/hub/pytorch_fairseq_translation.ipynb) also reproduces the bug.

```
Using cache found in /root/.cache/torch/hub/pytorch_fairseq_main
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-21-61d0ed709261> in <module>
      1 # Load translation model
      2 # en2ru = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-ru.single_model', tokenizer='moses', bpe='fastbpe')
----> 3 ru2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.ru-en.single_model', tokenizer='moses', bpe='fastbpe')

7 frames
/usr/local/lib/python3.7/dist-packages/torch/hub.py in load(repo_or_dir, model, source, trust_repo, force_reload, verbose, skip_validation, *args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torch/hub.py in _load_local(hubconf_dir, model, *args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torch/hub.py in _import_module(name, path)
     87     return 'https://github.com/{}/{}/archive/{}.zip'.format(repo_owner, repo_name, branch)
     88
---> 89
     90 def _load_attr_from_module(module, func_name):
     91     # Check if callable is defined in the module
/usr/lib/python3.7/importlib/_bootstrap_external.py in exec_module(self, module)
/usr/lib/python3.7/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)
~/.cache/torch/hub/pytorch_fairseq_main/hubconf.py in <module>
     37
     38 # only do fairseq imports after checking for dependencies
---> 39 from fairseq.hub_utils import (  # noqa; noqa
     40     BPEHubInterface as bpe,
     41     TokenizerHubInterface as tokenizer,
~/.cache/torch/hub/pytorch_fairseq_main/fairseq/__init__.py in <module>
     31     hydra_init()
     32
---> 33 import fairseq.criterions  # noqa
     34 import fairseq.distributed  # noqa
     35 import fairseq.models  # noqa
~/.cache/torch/hub/pytorch_fairseq_main/fairseq/criterions/__init__.py in <module>
     22     CRITERION_DATACLASS_REGISTRY,
     23 ) = registry.setup_registry(
---> 24     "--criterion", base_class=FairseqCriterion, default="cross_entropy"
     25 )
     26
TypeError: cannot unpack non-iterable NoneType object
```

#### Code sample

Running the [Colab tutorial](https://colab.research.google.com/github/pytorch/pytorch.github.io/blob/master/assets/hub/pytorch_fairseq_translation.ipynb) produces the same bug.

### Expected behavior

The translation model loads successfully.

### Environment

- PyTorch Version (e.g., 1.6)
- Google Colab
open
2022-09-20T11:38:34Z
2024-02-13T02:29:05Z
https://github.com/facebookresearch/fairseq/issues/4737
[ "bug", "needs triage" ]
HarryHe11
8
torchbox/wagtail-grapple
graphql
313
Irrelevant fields appear for Rendition (they are Image-specific fields)
Fields like `title`, `focal_point_x`, `focal_point_y`, `focal_point_width`, `focal_point_height`, `file_hash`, `collection`, and `tags` are Image-specific, but they appear on the Rendition type and obviously cause errors. They come from `BaseImageObjectType`, whose main purpose appears to be serving as a base class for both Image and Rendition, due to this:

https://github.com/torchbox/wagtail-grapple/blob/2e7cb3e23f81c3c65e1fddc811aeaed99cd7743c/grapple/types/images.py#L62
https://github.com/torchbox/wagtail-grapple/blob/2e7cb3e23f81c3c65e1fddc811aeaed99cd7743c/grapple/types/images.py#L85
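The shape of the usual fix can be sketched with plain-Python stand-ins (these are NOT the actual graphene/grapple classes; all names below are hypothetical): keep only shared fields on the base class, and attach Image-only fields to the Image type so the Rendition type never inherits them.

```python
# Hypothetical stand-ins illustrating the proposed split: Rendition
# should not inherit Image-only fields from the shared base class.

class BaseImageObjectType:
    # fields meaningful for both Image and Rendition
    fields = ["id", "src", "width", "height"]

class ImageObjectType(BaseImageObjectType):
    # Image-only fields live on the Image type, not the base class
    fields = BaseImageObjectType.fields + [
        "title", "focal_point_x", "focal_point_y",
        "focal_point_width", "focal_point_height",
        "file_hash", "collection", "tags",
    ]

class ImageRenditionObjectType(BaseImageObjectType):
    # renditions expose only the shared fields plus rendition-specific ones
    fields = BaseImageObjectType.fields + ["url"]

# the rendition type no longer leaks Image-only fields
assert "focal_point_x" not in ImageRenditionObjectType.fields
```

The same restructuring applied to the real graphene `ObjectType` subclasses would stop the Image-only resolvers from appearing on Rendition.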
open
2023-02-11T08:13:58Z
2023-02-13T18:43:57Z
https://github.com/torchbox/wagtail-grapple/issues/313
[]
engAmirEng
2
littlecodersh/ItChat
api
947
Works great
It really does work great.
closed
2021-09-10T03:22:58Z
2023-11-16T12:33:52Z
https://github.com/littlecodersh/ItChat/issues/947
[]
2905683882
0
Significant-Gravitas/AutoGPT
python
8,999
Marketplace - creator page - increase margins between section title and agent list
### Describe your issue. Increase margins to 32px <img width="1122" alt="Screenshot 2024-12-16 at 21 25 24" src="https://github.com/user-attachments/assets/5c7b18a1-f882-497e-ab16-6de058c7e9a6" />
closed
2024-12-16T13:26:08Z
2024-12-20T13:47:21Z
https://github.com/Significant-Gravitas/AutoGPT/issues/8999
[ "good first issue", "UI", "platform/frontend" ]
ograce1421
0
bmoscon/cryptofeed
asyncio
424
Set create_db callback's parameter to False by default
Backend: InfluxDB v1.8

When authorizing with credentials for a _non-admin_ user that has access to a single database inside the InfluxDB instance, like so:

```python
from cryptofeed import FeedHandler
from cryptofeed.backends.influxdb import TradeInflux
from cryptofeed.defines import TRADES
from cryptofeed.exchanges import Coinbase

def main():
    f = FeedHandler()
    address = 'http://localhost:8086'
    db_name = 'some_db'
    username = 'some_user'
    password = 'some_pass'
    f.add_feed(Coinbase(channels=[TRADES], symbols=['BTC-USD'],
                        callbacks={TRADES: TradeInflux(address, db_name, username=username, password=password)}))
    f.run()

if __name__ == '__main__':
    main()
```

I got the error `requests.exceptions.HTTPError: 403 Client Error: Forbidden for url`.

The problem is that this error is somewhat misleading in this context. The actual cause is not incorrectly set user rights or wrong credentials, but the default value of `create_db`, which is `True`. Since creating databases is an _admin_ privilege, a regular user gets a 403. Disabling it for such calls:

```python
TradeInflux(address, db_name, create_db=False, username=username, password=password)
```

works as expected. In fact this "feature request" is almost a no-op, I think. I just spent some time investigating the problem, so I'm writing it up here; maybe it will help someone.
closed
2021-02-24T12:18:22Z
2021-02-24T22:21:50Z
https://github.com/bmoscon/cryptofeed/issues/424
[ "Feature Request" ]
BeforeFlight
1
man-group/arctic
pandas
73
Code quality metrics: add Codacy / Landscape to project
Hello, it would be nice to enable Landscape (https://landscape.io/). It can help track many bugs, even before you notice them. For example, this one:

https://github.com/manahl/arctic/issues/49
https://github.com/manahl/arctic/blob/master/arctic/tickstore/tickstore.py#L106

should be found easily by Landscape.

Codacy (https://www.codacy.com/) might also be something to consider. Coveralls (https://coveralls.io/) provides code coverage and is also worth considering. Landscape and Codacy are very easy to use, as they are services; Coveralls is a little harder to set up, but nothing impossible.

It can also be interesting to test code quality locally, so it may be worth setting up tox (https://testrun.org/tox/latest/) and flake8 (https://flake8.readthedocs.org/). Other tools are:

- pylint http://www.pylint.org/
- pychecker http://pychecker.sourceforge.net/
- pyflakes https://launchpad.net/pyflakes/
- flake8 https://flake8.readthedocs.org/
- pep8 http://pep8.readthedocs.org/
- mccabe http://www.mccabe.com/

But flake8 is a wrapper around PyFlakes, pep8, and Ned Batchelder's McCabe script, so it may be enough.

You can find interesting project templates with these tools enabled at https://github.com/audreyr/cookiecutter-pypackage

We use Travis, Landscape, and Coveralls in https://github.com/pydata/pandas-datareader

So a first step could be to enable Landscape, as it's easy and can help a lot.

Kind regards
closed
2015-12-28T08:33:35Z
2018-01-03T14:09:21Z
https://github.com/man-group/arctic/issues/73
[]
femtotrader
4
vaexio/vaex
data-science
2,382
[BUG-REPORT] pip install vaex-core==4.16.1 on Linux leads to build failure with gcc 13.1.1
When I try to run `pip install vaex` in a fresh virtualenv on Python 3.11.3 on Arch Linux, running gcc 13.1.1, I get the following build failure when trying to build vaex-core==4.16.1:

```
building 'vaex.superstrings' extension
gcc -DNDEBUG -g -fwrapv -O3 -Wall -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -ffat-lto-objects -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -fPIC -I/tmp/pip-build-env-5_4um6a5/overlay/lib/python3.11/site-packages/numpy/core/include -Ivendor/pybind11/include -Ivendor/pybind11/include -Ivendor/string-view-lite/include -Ivendor/boost -I.venv/include -I.venv/Library/include -I/tmp/pip-install-glb_ywl6/vaex-core_90c85da07235445f8a1d4c7fd6de4efc/vendor/pcre/Library/include -I.venv/include -I/usr/include/python3.11 -c src/string_utils.cpp -o build/temp.linux-x86_64-cpython-311/src/string_utils.o -std=c++11 -O3 -funroll-loops -Werror=return-type -Wno-unused-parameter -g
In file included from src/string_utils.cpp:3:
src/string_utils.hpp:8:14: error: ‘uint8_t’ does not name a type
    8 | extern const uint8_t category_index[CHARS >> 8];
      |              ^~~~~~~
src/string_utils.hpp:4:1: note: ‘uint8_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’?
    3 | #include <regex>
  +++ |+#include <cstdint>
    4 |
```

I have encountered similar errors in other C++ projects after updating to gcc 13: that release reshuffled some C++ standard library files, so you now need to explicitly include certain header files that were previously included implicitly.
open
2023-07-07T10:07:53Z
2025-01-17T16:37:25Z
https://github.com/vaexio/vaex/issues/2382
[]
Mortal
2
holoviz/panel
jupyter
7,025
No scrollbar (overflow: hidden) in dev documentation when using Fast template
When opening the `dev` docs on a page that uses the Fast template there is no scrollbar. For example https://holoviz-dev.github.io/panel/tutorials/basic/build_crossfilter_dashboard.html I believe it won't make sense to release Panel 1.4.5 or 1.5.0 before this is fixed @philippjfr . ![image](https://github.com/user-attachments/assets/852c971e-3267-4a5f-b41a-5c70de65ca29)
closed
2024-07-27T05:17:43Z
2024-07-29T10:31:34Z
https://github.com/holoviz/panel/issues/7025
[]
MarcSkovMadsen
0
iterative/dvc
data-science
10,701
Allow dvc.yaml templating from non default params files
Hi! First of all, thanks for your amazing work. I love it, and I'm trying to push our team to use it on a daily basis.

I'm trying to have multiple pipelines to handle different behaviors in a RAG library: synthetic dataset generation and RAG execution. I have divided a `pipelines` folder into 2 subfolders, `rag` and `synth_dataset_generation`, which have their own `dvc.yaml` and `params.yaml`, but they share some parameters in `global_params.yaml`, which I'm trying to use to template their respective `dvc.yaml` files. However, no matter how I structure the reference, I always get:

```
dvc.parsing.ResolveError: failed to parse 'stages.data_extraction.cmd' in 'pipelines/rag/dvc.yaml': Could not find 'input_data'
```

I've tried to debug it with `dvc repro -vvvv`, and it seems the context stays empty of any key/value from `global_params.yaml`:

```
Context during resolution of stage data_extraction: {}
```

I stumbled upon this issue while dividing my main pipeline in two, but I've explored this behaviour in a test repo, and it seems impossible to template my `dvc.yaml` with values from a params file other than the default one ([here is the link to it](https://github.com/Gwenn-LR/templating_dvc_with_parameters)).

So I was wondering whether this is expected behaviour, or whether you could add this feature? Thank you for your consideration.
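For what it's worth, DVC does document a top-level `vars` section in `dvc.yaml` for loading additional params files into the templating context; a sketch of what that might look like here (the file paths and `input_data` usage below are illustrative, matched to the layout described above, not taken from the reporter's repo):

```yaml
# pipelines/rag/dvc.yaml (paths are illustrative)
vars:
  - ../../global_params.yaml   # make shared params available for templating
  - params.yaml                # this pipeline's own params

stages:
  data_extraction:
    cmd: python extract.py --input ${input_data}
```

`${input_data}` then resolves from whichever listed vars file defines it, rather than only from the default `params.yaml`.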
closed
2025-03-05T12:36:31Z
2025-03-05T15:03:54Z
https://github.com/iterative/dvc/issues/10701
[ "awaiting response" ]
Gwenn-LR
7
labmlai/annotated_deep_learning_paper_implementations
pytorch
1
Save generator and load it only for prediction
Hello,

Thank you for your implementation of CycleGANs; it is very clear. I would like to ask if there is a way to save the generators every 500 iterations (exactly when they predict the test images) so I can load them at a later time and only perform prediction on a specific test set with the loaded model (in new code, independent of cycle_gan.py).

Thank you,
Agelos
closed
2020-10-15T04:12:30Z
2020-10-27T12:04:35Z
https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/1
[ "question" ]
agelosk
2
gee-community/geemap
streamlit
1,473
Add support for geetiles
https://github.com/rramosp/geetiles
closed
2023-03-21T03:37:15Z
2023-06-30T18:42:35Z
https://github.com/gee-community/geemap/issues/1473
[ "Feature Request" ]
giswqs
1
seleniumbase/SeleniumBase
web-scraping
3,063
Firefox Extensions
I don't see anything in the docs about using firefox extensions. Is that possible, or is it a chrome only feature?
closed
2024-08-28T15:45:46Z
2024-08-30T11:58:48Z
https://github.com/seleniumbase/SeleniumBase/issues/3063
[ "question" ]
JasonCrowe
2
serengil/deepface
machine-learning
1,099
Resizing of image in YuNet detector returns wrong results for eye positions
Consider [this code](https://github.com/serengil/deepface/blob/master/deepface/detectors/YuNet.py#L81-L124):

```python
for face in faces:
    # pylint: disable=W0105
    """
    The detection output faces is a two-dimension array of type CV_32F,
    whose rows are the detected face instances, columns are the location
    of a face and 5 facial landmarks. The format of each row is as
    follows: x1, y1, w, h, x_re, y_re, x_le, y_le, x_nt, y_nt,
    x_rcm, y_rcm, x_lcm, y_lcm, where x1, y1, w, h are the top-left
    coordinates, width and height of the face bounding box,
    {x, y}_{re, le, nt, rcm, lcm} stands for the coordinates of right eye,
    left eye, nose tip, the right corner and left corner of the mouth
    respectively.
    """
    (x, y, w, h, x_re, y_re, x_le, y_le) = list(map(int, face[:8]))
    left_eye = (x_re, y_re)
    right_eye = (x_le, y_le)

    # Yunet returns negative coordinates if it thinks part of
    # the detected face is outside the frame.
    # We set the coordinate to 0 if they are negative.
    x = max(x, 0)
    y = max(y, 0)
    if resized:
        img = original_image
        x, y, w, h = int(x / r), int(y / r), int(w / r), int(h / r)
        x_re, y_re, x_le, y_le = (
            int(x_re / r),
            int(y_re / r),
            int(x_le / r),
            int(y_le / r),
        )
    confidence = float(face[-1])
    facial_area = FacialAreaRegion(
        x=x,
        y=y,
        w=w,
        h=h,
        confidence=confidence,
        left_eye=left_eye,
        right_eye=right_eye,
    )
    resp.append(facial_area)
```

If you follow the flow, you see that `left_eye` and `right_eye` are assigned as tuples from `(x_re, y_re)` and `(x_le, y_le)` respectively, **but** in the resized-image case they are not recomputed: the coordinates are recomputed, yet **not reassigned to `left_eye` and `right_eye`**, which are the variables actually copied into `FacialAreaRegion`. This has an implication for face alignment (which is done elsewhere), as it is based on the coordinates of the eyes.
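The fix is to rebuild the eye tuples after rescaling. A minimal, self-contained sketch of the corrected logic (plain Python, detached from OpenCV/deepface; the helper name is hypothetical):

```python
def rescale_landmarks(x_re, y_re, x_le, y_le, r, resized):
    """Return (left_eye, right_eye) in original-image coordinates.

    r is the resize ratio applied before detection; when the image was
    resized, the raw detector coordinates must be divided by r AND the
    eye tuples rebuilt from the rescaled values.
    """
    if resized:
        x_re, y_re, x_le, y_le = (int(x_re / r), int(y_re / r),
                                  int(x_le / r), int(y_le / r))
    # rebuild the tuples AFTER rescaling -- this is the reassignment the
    # issue says is missing in YuNet.py
    left_eye = (x_re, y_re)
    right_eye = (x_le, y_le)
    return left_eye, right_eye

# e.g. the detector ran on an image scaled down by r = 0.5
assert rescale_landmarks(100, 80, 140, 80, 0.5, True) == ((200, 160), (280, 160))
```

With the original ordering, the function would return the detector-space values `(100, 80)` and `(140, 80)` even for a resized image, which is exactly the misalignment described above.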
closed
2024-03-11T11:12:13Z
2024-03-11T18:49:46Z
https://github.com/serengil/deepface/issues/1099
[ "bug" ]
AndreaLanfranchi
5
microsoft/unilm
nlp
766
Adding multiple classification heads to train in single model
I want to train a model based on the same architecture but with two different classification heads, where one would detect the layout of documents (table, text, title, figure, etc.) and the other would detect cells inside tables. Right now I have two different models, one for layout and one for table cells, based on the same architecture. Since I have used the same architecture for both use-cases, how can I train one single model that combines layout detection and table-cell detection?

NOTE: I already tried using OCR coordinates of the text inside tables, but the results were not good enough, so I don't want to use them.
open
2022-06-21T09:11:33Z
2022-06-22T06:48:16Z
https://github.com/microsoft/unilm/issues/766
[]
Atul997
6
amisadmin/fastapi-amis-admin
sqlalchemy
104
TypeError: 'default' is an invalid keyword argument for this function
Why does the example from the docs throw this error as soon as I run it?

```
INFO:     Started server process [28391]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
Error: 'default' is an invalid keyword argument for this function
Traceback:
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    raise e
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/routing.py", line 66, in app
    response = await func(request)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/fastapi/routing.py", line 237, in app
    raw_response = await run_endpoint_function(
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/fastapi/routing.py", line 163, in run_endpoint_function
    return await dependant.call(**values)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/fastapi_amis_admin/admin/admin.py", line 477, in route
    return await self.page_parser(request, page)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/fastapi_amis_admin/admin/admin.py", line 435, in page_parser
    result = page.amis_html(
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/fastapi_amis_admin/amis/components.py", line 131, in amis_html
    "AmisSchemaJson": self.amis_json(),
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/fastapi_amis_admin/amis/types.py", line 23, in amis_json
    return self.json(exclude_none=True, by_alias=True)
  File "pydantic/main.py", line 504, in pydantic.main.BaseModel.json
TypeError: 'default' is an invalid keyword argument for this function
INFO:     127.0.0.1:58286 - "GET /admin/ HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 398, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
    return await self.app(scope, receive, send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/uvicorn/middleware/debug.py", line 81, in __call__
    raise exc from None
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/uvicorn/middleware/debug.py", line 78, in __call__
    await self.app(scope, receive, inner_send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/fastapi/applications.py", line 276, in __call__
    await super().__call__(scope, receive, send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/middleware/base.py", line 109, in __call__
    await response(scope, receive, send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/responses.py", line 277, in __call__
    await wrap(partial(self.listen_for_disconnect, receive))
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 662, in __aexit__
    raise exceptions[0]
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/responses.py", line 273, in wrap
    await func()
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/middleware/base.py", line 134, in stream_response
    return await super().stream_response(send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/responses.py", line 262, in stream_response
    async for chunk in self.body_iterator:
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/middleware/base.py", line 98, in body_stream
    raise app_exc
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/middleware/base.py", line 70, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    raise e
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/routing.py", line 443, in handle
    await self.app(scope, receive, send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/fastapi/applications.py", line 276, in __call__
    await super().__call__(scope, receive, send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    raise e
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/starlette/routing.py", line 66, in app
    response = await func(request)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/fastapi/routing.py", line 237, in app
    raw_response = await run_endpoint_function(
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/fastapi/routing.py", line 163, in run_endpoint_function
    return await dependant.call(**values)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/fastapi_amis_admin/admin/admin.py", line 477, in route
    return await self.page_parser(request, page)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/fastapi_amis_admin/admin/admin.py", line 435, in page_parser
    result = page.amis_html(
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/fastapi_amis_admin/amis/components.py", line 131, in amis_html
    "AmisSchemaJson": self.amis_json(),
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/fastapi_amis_admin/amis/types.py", line 23, in amis_json
    return self.json(exclude_none=True, by_alias=True)
  File "pydantic/main.py", line 504, in pydantic.main.BaseModel.json
TypeError: 'default' is an invalid keyword argument for this function
```
closed
2023-06-16T11:53:00Z
2023-07-24T01:16:21Z
https://github.com/amisadmin/fastapi-amis-admin/issues/104
[]
nonotde
0
tflearn/tflearn
tensorflow
404
Hi, how could I save some statistics like training loss and accuracy to a file?
This may be very helpful for using these data to plot my own graphs.

Edit: one way to solve it is to download the TensorBoard JSON file and plot from that.
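Another workaround is to log the values yourself from the training loop or a callback. The snippet below is a framework-free sketch using only the standard library (the function name and CSV layout are my own choices; wiring it into tflearn's callback mechanism is left to the reader):

```python
import csv
import os

def append_metrics(path, step, loss, acc):
    """Append one row of training statistics to a CSV file,
    writing a header row the first time the file is created."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["step", "loss", "acc"])
        writer.writerow([step, loss, acc])

# call this after each epoch / every N training steps
append_metrics("metrics.csv", 1, 0.52, 0.81)
append_metrics("metrics.csv", 2, 0.34, 0.88)
```

The resulting CSV can then be loaded and plotted with any tool, independent of TensorBoard.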
open
2016-10-18T05:55:03Z
2016-10-18T23:40:12Z
https://github.com/tflearn/tflearn/issues/404
[]
lfwin
1
kizniche/Mycodo
automation
830
Logitech C920 Issues
Hey there! Love Mycodo so far! Working on a temp/humidity-controlled relay setup for outlets shortly. :)

However, I've run into an issue while trying to optimize the camera settings for a spare webcam I have on-site, a Logitech C920, on a Raspberry Pi 4 using opencv, and I would greatly appreciate your help. There are several issues related to camera setup.

Upon default add, the image is grayscale. I'm working with the `v4l2-ctl` CLI command to tweak the saturation and contrast to get a decent color depth and focus, but upon increasing those values to ~100-150 (out of a max of 255), I start running into the following errors shown in debug mode, followed by a corrupted video stream:

```
Corrupt JPEG data: 304 extraneous bytes before marker 0xd7
Corrupt JPEG data: 408 extraneous bytes before marker 0xd2
Corrupt JPEG data: 973 extraneous bytes before marker 0xd5
Corrupt JPEG data: found marker 0xd6 instead of RST7
Corrupt JPEG data: 208 extraneous bytes before marker 0xd6
Corrupt JPEG data: premature end of data segment
Corrupt JPEG data: 157 extraneous bytes before marker 0xd2
Corrupt JPEG data: premature end of data segment
Corrupt JPEG data: 644 extraneous bytes before marker 0xd4
Corrupt JPEG data: 540 extraneous bytes before marker 0xd0
Corrupt JPEG data: 1169 extraneous bytes before marker 0xd6
Corrupt JPEG data: 1077 extraneous bytes before marker 0xd7
Corrupt JPEG data: premature end of data segment
Corrupt JPEG data: premature end of data segment
Corrupt JPEG data: 1161 extraneous bytes before marker 0xd3
Corrupt JPEG data: premature end of data segment
Corrupt JPEG data: 193 extraneous bytes before marker 0xd2
Corrupt JPEG data: premature end of data segment
Corrupt JPEG data: 796 extraneous bytes before marker 0xd2
Corrupt JPEG data: 106 extraneous bytes before marker 0xd3
Corrupt JPEG data: premature end of data segment
Corrupt JPEG data: 99 extraneous bytes before marker 0xd7
```

This is equally weird because I've used this webcam successfully on OctoPrint (Raspberry Pi 3D printing, with the same tweaks) as a live print webcam. Unfortunately, I don't know what the delta is between their webcam implementation and Mycodo's. I also noticed that increasing the values past ~150-160 causes the frontend to crash.

Any thoughts on resolving these issues? Alternatively, are there any webcams that are recommended as 100% working?
closed
2020-08-28T22:03:28Z
2020-09-09T15:05:06Z
https://github.com/kizniche/Mycodo/issues/830
[]
jasonlaw0213
1
datadvance/DjangoChannelsGraphqlWs
graphql
36
Subscription is not triggered from background task
I am using a Celery periodic_task that triggers a Subscription.

```python
from .schema import OnUpdateOrderStatus  # subscription

@periodic_task(run_every=crontab(minute='*/1'), name="completed_appointments")
def completed_appointments():
    ...
    OnUpdateOrderStatus.update_order_status(order=order, customer_id=customer.pk)
```

- [x] When I call OnUpdateOrderStatus from a Mutation or a Query in the same file, it works.
- [x] When I call external methods in another module from this Celery periodic_task, it works too.

Has anyone tried to import a subscription class and call it from another module/file?

Thank you
closed
2019-12-10T00:43:28Z
2019-12-26T03:28:38Z
https://github.com/datadvance/DjangoChannelsGraphqlWs/issues/36
[]
carlosalvarez91
3
stanfordnlp/stanza
nlp
1,101
Pipeline is incorrect with specific lang in MultilingualPipeline if lang_config is set
**Describe the bug**
If I set `lang_configs` in MultilingualPipeline for a specific language, the Pipeline is always initialized as "en".

**To Reproduce**

```python
lang_configs = {'hi': {'processors': 'tokenize,pos'}, 'ar': {'processors': 'tokenize,pos'}}
nlp = MultilingualPipeline(lang_configs=lang_configs)
```

If I pass Hindi/Arabic texts as input, the language identification works well, but the Pipeline is initialized with "en". The problem seems to be that the `lang` param is not set in https://github.com/stanfordnlp/stanza/blob/main/stanza/pipeline/multilingual.py#L68

If I set `lang` explicitly in `lang_configs`, it works as expected, but I think this should be fixed to avoid confusion.

**Expected behavior**
The Pipeline should be initialized with the specific lang even if 'lang' is not set explicitly.

**Environment (please complete the following information):**
- OS: [e.g. Windows, Ubuntu, CentOS, MacOS]
- Python version: [e.g. Python 3.6.8 from Anaconda]
- Stanza version: [e.g., 1.0.0]

**Additional context**
Add any other context about the problem here.
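Until this is fixed, the explicit-`lang` workaround described above can be automated before constructing the pipeline. A plain-dict sketch of the idea (no stanza import needed to show it; pass the resulting dict to `MultilingualPipeline` as usual):

```python
lang_configs = {
    'hi': {'processors': 'tokenize,pos'},
    'ar': {'processors': 'tokenize,pos'},
}

# inject each dict key as that config's 'lang' so the per-language
# Pipeline is not silently initialized with the "en" default
for lang, cfg in lang_configs.items():
    cfg.setdefault('lang', lang)

assert lang_configs['hi'] == {'processors': 'tokenize,pos', 'lang': 'hi'}
```

`setdefault` keeps any `lang` the caller already set explicitly, so this is a no-op for configs that were already correct.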
closed
2022-08-18T03:51:22Z
2022-09-14T19:25:00Z
https://github.com/stanfordnlp/stanza/issues/1101
[ "bug" ]
dejianchen-x
3
OFA-Sys/Chinese-CLIP
computer-vision
302
Error when fine-tuning clip_cn_vit-l-14-336
I tried changing the model path that `resume` loads in the shell script, and then it reported a parameter shape mismatch:

```
size mismatch for module.visual.transformer.resblocks.0.mlp.c_proj.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for module.visual.transformer.resblocks.0.ln_2.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for module.visual.transformer.resblocks.0.ln_2.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for module.visual.transformer.resblocks.1.attn.in_proj_weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([2304, 768]).
size mismatch for module.visual.transformer.resblocks.1.attn.in_proj_bias: copying a param with shape torch.Size([3072]) from checkpoint, the shape in current model is torch.Size([2304]).
size mismatch for module.visual.transformer.resblocks.1.attn.out_proj.weight: copying a param with shape torch.Size([1024, 1024]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for module.visual.transformer.resblocks.1.attn.out_proj.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for module.visual.transformer.resblocks.1.ln_1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for module.visual.transformer.resblocks.1.ln_1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for module.visual.transformer.resblocks.1.mlp.c_fc.weight: copying a param with shape torch.Size([4096, 1024]) from checkpoint, the shape in current model is torch.Size([3072, 768]).
size mismatch for module.visual.transformer.resblocks.1.mlp.c_fc.bias: copying a param with shape torch.Size([4096]) from checkpoint, the shape in current model is torch.Size([3072]).
size mismatch for module.visual.transformer.resblocks.1.mlp.c_proj.weight: copying a param with shape torch.Size([1024, 4096]) from checkpoint, the shape in current model is torch.Size([768, 3072]).
```

Does anyone know what needs to be changed to fix this?
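The 1024-vs-768 shapes in the error usually mean a ViT-L/14 checkpoint is being loaded while the script's vision-model setting still selects a ViT-B config. One way to confirm which tower a checkpoint was trained with is to inspect one of the mismatched weights — a minimal sketch with a simulated state dict (with a real checkpoint you would `torch.load` it first):

```python
# Hypothetical check (not Chinese-CLIP's actual code): the width of
# ln_2.weight identifies the vision tower -- 1024 for ViT-L/14, 768 for ViT-B/16.
checkpoint = {"module.visual.transformer.resblocks.0.ln_2.weight": [0.0] * 1024}  # simulated state dict

width = len(checkpoint["module.visual.transformer.resblocks.0.ln_2.weight"])
arch = "ViT-L-14" if width == 1024 else "ViT-B-16"
print(arch)  # ViT-L-14
```

If the checkpoint really is ViT-L, the fix is to make the `--vision-model` argument in the fine-tuning script match it rather than changing the checkpoint.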
closed
2024-04-22T06:43:00Z
2024-04-26T07:00:07Z
https://github.com/OFA-Sys/Chinese-CLIP/issues/302
[]
xxllp
2
kizniche/Mycodo
automation
698
LCD not working on 7.7.7
## Mycodo Issue Report:

- Specific Mycodo Version: 7.7.7

#### Problem Description

LCD is not working since 7.7.7 update

### Steps to Reproduce the issue:

update to 7.7.7 ;)
closed
2019-09-24T07:03:33Z
2019-09-24T11:07:20Z
https://github.com/kizniche/Mycodo/issues/698
[]
buzzfuzz2k
1
noirbizarre/flask-restplus
api
354
POST adding extra header
The issue still exists in version 0.10.1 (Flask 0.12.2): https://github.com/noirbizarre/flask-restplus/issues/84

```
curl -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' 'http://localhost:5000/api/v1.0/user/peter?password=12345678&email=peter%40abcdef.com'
```

Response Body:

```json
{
  "message": "Failed to decode JSON object: Expecting value: line 1 column 1 (char 0)"
}
```

```python
@api.expect(parser)
@api.response(200, 'Success, not created')
@api.response(201, 'Created')
@api.response(400, 'Bad request, validation error')
def post(self, username, ):
    '''Create a user'''
    args = parser.parse_args(strict=True)
    password = args['password']
    email = args['email']
    user = User(username, password, email)
    if user.create():
        return True, 201  # TODO add location header
    else:
        return False, 200
```

![user api](https://user-images.githubusercontent.com/2574849/32971881-a7e7df96-cbef-11e7-9285-341d75f92598.png)
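For reference, the error message itself is just the server trying to parse an empty request body as JSON once the `Content-Type: application/json` header is present (declaring `location='args'` for each argument in the request parser is the usual way to keep `parse_args` away from the body). The same failure can be reproduced with the standard library:

```python
import json

# What the server effectively does when a client declares JSON
# but sends an empty body:
try:
    json.loads("")
except ValueError as exc:
    print(exc)  # Expecting value: line 1 column 1 (char 0)
```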
open
2017-11-17T22:34:33Z
2018-05-23T12:40:39Z
https://github.com/noirbizarre/flask-restplus/issues/354
[]
ptrdvds
3
pytest-dev/pytest-html
pytest
191
how to set pytest-cov output to a single html?
How to set the pytest-cov output to a single HTML file? I'm trying to feed the pytest-cov output into pytest-html, but I don't know how to do it. Could you give me some advice?
open
2019-01-04T05:15:56Z
2020-10-23T01:12:10Z
https://github.com/pytest-dev/pytest-html/issues/191
[ "question" ]
Jiangshan00001
5
wandb/wandb
data-science
9277
[Q]: wandb.sweep not working
### Ask your question Hello! I put my question as a question, but I sincerely do not know if it is a bug or something I do wrongly. Here is my Python code:

```python
# 2: Define the search space
sweep_configuration = {
    "method": "random",
    "metric": {"goal": "minimize", "name": "score"},
    "parameters": {
        "x": {"max": 0.1, "min": 0.01},
        "y": {"values": [1, 3, 7]},
    },
}

if __name__ == "__main__":
    import wandb
    wandb.login()
    print("logged in")
    # 3: Start the sweep
    score = 0
    sweep_id = wandb.sweep(sweep=sweep_configuration, project="<good_project>", entity="<good_entity>")
    print(sweep_id)
```

However, when I run it, I got the following error:

```cmd
  File "C:\Users\maxen\anaconda3\envs\my_RN\Lib\site-packages\wandb\util.py", line 853, in no_retry_4xx
    raise UsageError(body["errors"][0]["message"])
wandb.errors.errors.UsageError: Sweep user not valid
```

I've looked at several sources and it appears that all of them are doing the same as me... so I do not see what is wrong! Thanks for your answer!
closed
2025-01-16T11:54:40Z
2025-01-25T09:32:52Z
https://github.com/wandb/wandb/issues/9277
[ "ty:question" ]
jupiterMJM
7
waditu/tushare
pandas
1,133
[dividend()] The dividend/allotment interface returns duplicated data
`dividend()`, the dividend/allotment interface, returns duplicated rows in its data. For example:

```python
import tushare as ts
tsp = ts.pro_api()
df = tsp.dividend(ts_code='002540.SZ', fields='div_proc,stk_div,cash_div_tax,end_date,ex_date')
df = df.query('div_proc==\'实施\'')
print(df)
```

This prints the following data; the first three rows are all duplicates:

```
    end_date div_proc  stk_div  cash_div  cash_div_tax   ex_date
1   20181231       实施      0.0  0.320297      0.320297  20190612
2   20181231       实施      0.0  0.320297      0.320297  20190612
7   20181231       实施      0.0  0.320297      0.320297  20190612
9   20171231       实施      0.0  0.060000      0.060000  20180621
10  20161231       实施      0.0  0.060000      0.060000  20170524
11  20151231       实施      0.0  0.050000      0.050000  20160617
12  20141231       实施      1.5  0.000000      0.000000  20150413
13  20131231       实施      0.0  0.190000      0.200000  20140507
14  20121231       实施      1.0  0.190000      0.200000  20130506
15  20111231       实施      0.0  0.225000      0.250000  20120529
16  20101231       实施      0.3  0.180000      0.200000  20110509
```
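Until the upstream data is cleaned up, the duplicates can be dropped client-side with pandas' `drop_duplicates()` — a minimal sketch using a simulated slice of the result above:

```python
import pandas as pd

# Simulated slice of the dividend() result showing the duplicated 20181231 rows.
df = pd.DataFrame({
    "end_date": ["20181231", "20181231", "20181231", "20171231"],
    "div_proc": ["实施"] * 4,
    "stk_div": [0.0, 0.0, 0.0, 0.0],
    "cash_div_tax": [0.320297, 0.320297, 0.320297, 0.06],
    "ex_date": ["20190612", "20190612", "20190612", "20180621"],
})

# Client-side workaround: drop exact duplicate rows before further processing.
deduped = df.drop_duplicates().reset_index(drop=True)
print(len(deduped))  # 2
```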
closed
2019-09-11T08:35:24Z
2019-09-17T07:06:06Z
https://github.com/waditu/tushare/issues/1133
[]
whwalker
1
Nekmo/amazon-dash
dash
14
Not working on Raspberry pi?
Sorry if this isn't the place for this but I can't find a forum. When trying to run on a fresh install of current Raspbian, which has Python 2.7 and 3.5.3 installed, I get to the point where I run the discovery and get:

```
pi@pi0w:~ $ sudo amazon-dash discovery
sudo: unable to resolve host pi0w
Traceback (most recent call last):
  File "/usr/local/bin/amazon-dash", line 6, in <module>
    from amazon_dash.management import execute_from_command_line
  File "/usr/local/lib/python2.7/dist-packages/amazon_dash/management.py", line 9, in <module>
    from amazon_dash.exceptions import AmazonDashException
  File "/usr/local/lib/python2.7/dist-packages/amazon_dash/exceptions.py", line 12, in <module>
    class ConfigFileNotFoundError(AmazonDashException, FileNotFoundError):
NameError: name 'FileNotFoundError' is not defined
```

To try to force it to use Python 3, I changed the first line of /usr/local/bin/amazon-dash to point to python3 instead of just python, as a test, and I get a slightly different error:

```
pi@pi0w:~ $ sudo amazon-dash discovery
sudo: unable to resolve host pi0w
Traceback (most recent call last):
  File "/usr/local/bin/amazon-dash", line 6, in <module>
    from amazon_dash.management import execute_from_command_line
ImportError: No module named 'amazon_dash'
```

I know I must be missing something really stupid, but don't know where to go from here. I can't be the only one trying to do this on a RaspPi ZeroW, so hoping you may know. Thanks for sharing your work.. I look forward to using this!!

-Steve
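For anyone hitting the same thing: the first traceback is the real clue — `FileNotFoundError` is a builtin only on Python 3, so importing the package under Python 2.7 fails before anything runs (the second error just means the package was installed into the 2.7 site-packages, not 3.5's; installing with `pip3` should fix it). The crash is the classic missing-builtin problem; a generic compatibility shim (not the project's actual code) looks like:

```python
class AmazonDashException(Exception):
    """Stand-in for the project's base exception class."""
    pass

# On Python 2 the builtin FileNotFoundError does not exist; code that must
# run on both versions typically aliases it before defining subclasses:
try:
    FileNotFoundError
except NameError:  # Python 2
    FileNotFoundError = IOError

class ConfigFileNotFoundError(AmazonDashException, FileNotFoundError):
    pass

print(issubclass(ConfigFileNotFoundError, FileNotFoundError))  # True
```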
closed
2017-12-30T16:00:17Z
2017-12-30T18:51:14Z
https://github.com/Nekmo/amazon-dash/issues/14
[ "bug" ]
172pilot
2
google/seq2seq
tensorflow
355
ValueError: Can not provide both every_secs and every_steps
Hi, I am trying to train the model with TensorFlow 1.2 but I face this error, which is raised in `tensorflow/python/training/basic_session_run_hooks.py`. Is it a versioning problem or another kind of problem?

_"ValueError: Can not provide both every_secs and every_steps."_

Thanks for helping me with that.
open
2019-05-29T08:14:50Z
2019-05-29T08:14:50Z
https://github.com/google/seq2seq/issues/355
[]
nazaninfrz
0
pytorch/pytorch
numpy
149258
Auto-selective activation checkpointing is not optimal for speed (issue with min_cut_rematerialization_partition)
### 🐛 Describe the bug I try the new api described in [pytorch blog: selective activation checkpointing](https://pytorch.org/blog/activation-checkpointing-techniques/#compile-only-memory-budget-api-new). Then I find that selective activation checkpointing is not optimal for speed. A minimal reproducer:

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
os.environ["TORCH_COMPILE_DEBUG"] = "1"
import torch
from torch import nn
import torch.nn.functional as F
import torch._functorch.config
torch._functorch.config.activation_memory_budget = 0.99

class Test1(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer0 = nn.Linear(100, 100, bias=False)
        self.layer1 = nn.Linear(100, 100, bias=False)
        self.norm = nn.RMSNorm(100)

    def forward(self, x):
        x = self.norm(x)
        return self.layer0(self.layer1(x))

class Test(nn.Module):
    def __init__(self):
        super().__init__()
        self.embs = nn.Embedding(1000, 100)
        self.layers = nn.ModuleList([Test1() for _ in range(32)])

    def forward(self, x):
        x = self.embs(x)
        for layer in self.layers:
            x = layer(x) + x
        return x.sum()

x = torch.randint(0, 1000, (20,), device="cuda")
model = Test().cuda().bfloat16()
compiled_model = torch.compile(model)
y = compiled_model(x)
y.backward()
```

In the `torch_compile_debug` backward folder, the `fx_graph_readable.py` file shows an unusual series of additions.
```python class GraphModule(torch.nn.Module): def forward(self, primals_2: "i64[20]", primals_3: "bf16[100]", primals_6: "bf16[100]", primals_9: "bf16[100]", primals_12: "bf16[100]", primals_15: "bf16[100]", primals_18: "bf16[100]", primals_21: "bf16[100]", primals_24: "bf16[100]", primals_27: "bf16[100]", primals_30: "bf16[100]", primals_33: "bf16[100]", primals_36: "bf16[100]", primals_39: "bf16[100]", primals_42: "bf16[100]", primals_45: "bf16[100]", primals_48: "bf16[100]", primals_51: "bf16[100]", primals_54: "bf16[100]", primals_57: "bf16[100]", primals_60: "bf16[100]", primals_63: "bf16[100]", primals_66: "bf16[100]", primals_69: "bf16[100]", primals_72: "bf16[100]", primals_75: "bf16[100]", primals_78: "bf16[100]", primals_81: "bf16[100]", primals_84: "bf16[100]", primals_87: "bf16[100]", primals_90: "bf16[100]", primals_93: "bf16[100]", primals_96: "bf16[100]", embedding: "bf16[20, 100]", rsqrt: "bf16[20, 1]", mm: "bf16[20, 100]", mm_1: "bf16[20, 100]", rsqrt_1: "bf16[20, 1]", mm_2: "bf16[20, 100]", mm_3: "bf16[20, 100]", rsqrt_2: "bf16[20, 1]", mm_4: "bf16[20, 100]", mm_5: "bf16[20, 100]", rsqrt_3: "bf16[20, 1]", mm_6: "bf16[20, 100]", mm_7: "bf16[20, 100]", rsqrt_4: "bf16[20, 1]", mm_8: "bf16[20, 100]", mm_9: "bf16[20, 100]", rsqrt_5: "bf16[20, 1]", mm_10: "bf16[20, 100]", mm_11: "bf16[20, 100]", rsqrt_6: "bf16[20, 1]", mm_12: "bf16[20, 100]", mm_13: "bf16[20, 100]", rsqrt_7: "bf16[20, 1]", mm_14: "bf16[20, 100]", mm_15: "bf16[20, 100]", rsqrt_8: "bf16[20, 1]", mm_16: "bf16[20, 100]", mm_17: "bf16[20, 100]", rsqrt_9: "bf16[20, 1]", mm_18: "bf16[20, 100]", mm_19: "bf16[20, 100]", rsqrt_10: "bf16[20, 1]", mm_20: "bf16[20, 100]", mm_21: "bf16[20, 100]", rsqrt_11: "bf16[20, 1]", mm_22: "bf16[20, 100]", mm_23: "bf16[20, 100]", rsqrt_12: "bf16[20, 1]", mm_24: "bf16[20, 100]", mm_25: "bf16[20, 100]", rsqrt_13: "bf16[20, 1]", mm_26: "bf16[20, 100]", mm_27: "bf16[20, 100]", rsqrt_14: "bf16[20, 1]", mm_28: "bf16[20, 100]", mm_29: "bf16[20, 100]", rsqrt_15: 
"bf16[20, 1]", mm_30: "bf16[20, 100]", mm_31: "bf16[20, 100]", rsqrt_16: "bf16[20, 1]", mm_32: "bf16[20, 100]", mm_33: "bf16[20, 100]", rsqrt_17: "bf16[20, 1]", mm_34: "bf16[20, 100]", mm_35: "bf16[20, 100]", rsqrt_18: "bf16[20, 1]", mm_36: "bf16[20, 100]", mm_37: "bf16[20, 100]", rsqrt_19: "bf16[20, 1]", mm_38: "bf16[20, 100]", mm_39: "bf16[20, 100]", rsqrt_20: "bf16[20, 1]", mm_40: "bf16[20, 100]", mm_41: "bf16[20, 100]", rsqrt_21: "bf16[20, 1]", mm_42: "bf16[20, 100]", mm_43: "bf16[20, 100]", rsqrt_22: "bf16[20, 1]", mm_44: "bf16[20, 100]", mm_45: "bf16[20, 100]", rsqrt_23: "bf16[20, 1]", mm_46: "bf16[20, 100]", mm_47: "bf16[20, 100]", rsqrt_24: "bf16[20, 1]", mm_48: "bf16[20, 100]", mm_49: "bf16[20, 100]", rsqrt_25: "bf16[20, 1]", mm_50: "bf16[20, 100]", mm_51: "bf16[20, 100]", rsqrt_26: "bf16[20, 1]", mm_52: "bf16[20, 100]", mm_53: "bf16[20, 100]", rsqrt_27: "bf16[20, 1]", mm_54: "bf16[20, 100]", mm_55: "bf16[20, 100]", rsqrt_28: "bf16[20, 1]", mm_56: "bf16[20, 100]", mm_57: "bf16[20, 100]", rsqrt_29: "bf16[20, 1]", mm_58: "bf16[20, 100]", mm_59: "bf16[20, 100]", rsqrt_30: "bf16[20, 1]", mm_60: "bf16[20, 100]", mm_61: "bf16[20, 100]", rsqrt_31: "bf16[20, 1]", mm_62: "bf16[20, 100]", permute_66: "bf16[100, 100]", permute_70: "bf16[100, 100]", permute_74: "bf16[100, 100]", permute_78: "bf16[100, 100]", permute_82: "bf16[100, 100]", permute_86: "bf16[100, 100]", permute_90: "bf16[100, 100]", permute_94: "bf16[100, 100]", permute_98: "bf16[100, 100]", permute_102: "bf16[100, 100]", permute_106: "bf16[100, 100]", permute_110: "bf16[100, 100]", permute_114: "bf16[100, 100]", permute_118: "bf16[100, 100]", permute_122: "bf16[100, 100]", permute_126: "bf16[100, 100]", permute_130: "bf16[100, 100]", permute_134: "bf16[100, 100]", permute_138: "bf16[100, 100]", permute_142: "bf16[100, 100]", permute_146: "bf16[100, 100]", permute_150: "bf16[100, 100]", permute_154: "bf16[100, 100]", permute_158: "bf16[100, 100]", permute_162: "bf16[100, 100]", permute_166: "bf16[100, 
100]", permute_170: "bf16[100, 100]", permute_174: "bf16[100, 100]", permute_178: "bf16[100, 100]", permute_182: "bf16[100, 100]", permute_186: "bf16[100, 100]", permute_190: "bf16[100, 100]", permute_194: "bf16[100, 100]", permute_198: "bf16[100, 100]", permute_202: "bf16[100, 100]", permute_206: "bf16[100, 100]", permute_210: "bf16[100, 100]", permute_214: "bf16[100, 100]", permute_218: "bf16[100, 100]", permute_222: "bf16[100, 100]", permute_226: "bf16[100, 100]", permute_230: "bf16[100, 100]", permute_234: "bf16[100, 100]", permute_238: "bf16[100, 100]", permute_242: "bf16[100, 100]", permute_246: "bf16[100, 100]", permute_250: "bf16[100, 100]", permute_254: "bf16[100, 100]", permute_258: "bf16[100, 100]", permute_262: "bf16[100, 100]", permute_266: "bf16[100, 100]", permute_270: "bf16[100, 100]", permute_274: "bf16[100, 100]", permute_278: "bf16[100, 100]", permute_282: "bf16[100, 100]", permute_286: "bf16[100, 100]", permute_290: "bf16[100, 100]", permute_294: "bf16[100, 100]", permute_298: "bf16[100, 100]", permute_302: "bf16[100, 100]", permute_306: "bf16[100, 100]", permute_310: "bf16[100, 100]", permute_314: "bf16[100, 100]", permute_318: "bf16[100, 100]", tangents_1: "bf16[]"): # File: /tmp/ipykernel_1043308/3460069279.py:38 in forward, code: return x.sum() expand: "bf16[20, 100]" = torch.ops.aten.expand.default(tangents_1, [20, 100]); tangents_1 = None # File: /tmp/ipykernel_1043308/3460069279.py:24 in forward, code: return self.layer0(self.layer1(x)) permute_64: "bf16[100, 20]" = torch.ops.aten.permute.default(expand, [1, 0]) mm_64: "bf16[100, 100]" = torch.ops.aten.mm.default(permute_64, mm_62); permute_64 = mm_62 = None permute_65: "bf16[100, 100]" = torch.ops.aten.permute.default(mm_64, [1, 0]); mm_64 = None mm_65: "bf16[20, 100]" = torch.ops.aten.mm.default(expand, permute_66); permute_66 = None permute_67: "bf16[100, 100]" = torch.ops.aten.permute.default(permute_65, [1, 0]); permute_65 = None permute_68: "bf16[100, 20]" = 
torch.ops.aten.permute.default(mm_65, [1, 0]) # File: /tmp/ipykernel_1043308/3460069279.py:37 in forward, code: x = layer(x) + x add_1: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_1, embedding); mm_1 = None add_3: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_3, add_1); mm_3 = None add_5: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_5, add_3); mm_5 = None add_7: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_7, add_5); mm_7 = None add_9: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_9, add_7); mm_9 = None add_11: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_11, add_9); mm_11 = None add_13: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_13, add_11); mm_13 = None add_15: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_15, add_13); mm_15 = None add_17: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_17, add_15); mm_17 = None add_19: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_19, add_17); mm_19 = None add_21: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_21, add_19); mm_21 = None add_23: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_23, add_21); mm_23 = None add_25: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_25, add_23); mm_25 = None add_27: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_27, add_25); mm_27 = None add_29: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_29, add_27); mm_29 = None add_31: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_31, add_29); mm_31 = None add_33: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_33, add_31); mm_33 = None add_35: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_35, add_33); mm_35 = None add_37: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_37, add_35); mm_37 = None add_39: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_39, add_37); mm_39 = None add_41: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_41, add_39); mm_41 = None add_43: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_43, add_41); mm_43 = None add_45: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_45, add_43); mm_45 = None add_47: "bf16[20, 100]" 
= torch.ops.aten.add.Tensor(mm_47, add_45); mm_47 = None add_49: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_49, add_47); mm_49 = None add_51: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_51, add_49); mm_51 = None add_53: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_53, add_51); mm_53 = None add_55: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_55, add_53); mm_55 = None add_57: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_57, add_55); mm_57 = None add_59: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_59, add_57); mm_59 = None add_61: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_61, add_59); mm_61 = None
```

A simple observation reveals that the forward pass has been transformed into the following pattern: x1 = x0 + y0, x2 = x1 + y1, x3 = x2 + y2, where x0, x1, x2, and x3 are all needed for the backward computation. The optimal approach would therefore be to store x0, x1, x2, and x3. However, due to an issue in the `min cut` implementation of `torch.compile`, which supports recomputation of non-compute-intensive operations, it instead stores x0, y0, y1, and y2, while x1, x2, and x3 are recomputed. Although both choices use the same amount of memory, the latter introduces unnecessary computation.

### Error logs

_No response_

### Versions

torch 2.5.1+cu124

cc @chauhang @penguinwu @zou3519 @bdhirsh
open
2025-03-15T15:57:32Z
2025-03-18T18:17:30Z
https://github.com/pytorch/pytorch/issues/149258
[ "triaged", "oncall: pt2", "module: pt2-dispatcher" ]
efsotr
1
python-visualization/folium
data-visualization
1,601
Retrieve data added with a folium Draw plugin
Hello, this is not an issue but a question; I didn't find a better place to ask... I use a folium.plugins.Draw and I would like to get the objects interactively added on the map (rectangles, for example). I tried something like "Draw.drawnItems", but folium says "'Draw' object has no attribute 'drawnItems'", and I didn't find any code example. Could you help me?
closed
2022-06-03T10:15:44Z
2022-11-17T15:26:24Z
https://github.com/python-visualization/folium/issues/1601
[]
Cimeliere
1
PokemonGoF/PokemonGo-Bot
automation
6,312
How to make pokemon bot
Hi
closed
2022-06-28T05:12:04Z
2022-11-12T22:17:20Z
https://github.com/PokemonGoF/PokemonGo-Bot/issues/6312
[]
Subroop
5
lukas-blecher/LaTeX-OCR
pytorch
152
Can't install normally pyqt5
`pip install pix2tex[gui]` gives me

```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
spyder 5.1.5 requires pyqt5<5.13, but you have pyqt5 5.15.6 which is incompatible.
spyder 5.1.5 requires pyqtwebengine<5.13, but you have pyqtwebengine 5.15.5 which is incompatible.
Successfully installed PyQt5-5.15.6 PyQt5-sip-12.10.1
```

If I ignore it and enter `latexocr --gnome` it gives me

```
Traceback (most recent call last):
  File "/home/islambek243/anaconda3/bin/latexocr", line 5, in <module>
    from pix2tex.gui import main
  File "/home/islambek243/anaconda3/lib/python3.9/site-packages/pix2tex/gui.py", line 7, in <module>
    from PyQt5.QtWebEngineWidgets import QWebEngineView
ImportError: /usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2: undefined symbol: krb5_ser_context_init, version krb5_3_MIT
```

If I don't ignore the previous error and enter `pip install pyqt5==5.12.3` it gives me

```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
spyder 5.1.5 requires pyqtwebengine<5.13, but you have pyqtwebengine 5.15.5 which is incompatible.
pyqtwebengine 5.15.5 requires PyQt5>=5.15.4, but you have pyqt5 5.12.3 which is incompatible.
Successfully installed pyqt5-5.12.3
```

It's an endless circle...
closed
2022-05-19T15:10:42Z
2022-05-21T06:10:27Z
https://github.com/lukas-blecher/LaTeX-OCR/issues/152
[]
islambek243
2
koaning/scikit-lego
scikit-learn
297
[BUG] conda does not update automatically
@MBrouns dunno if we want to automate this, but can you push to conda?
closed
2020-02-14T18:33:44Z
2020-02-18T22:06:38Z
https://github.com/koaning/scikit-lego/issues/297
[ "bug" ]
koaning
2
onnx/onnx
scikit-learn
5,909
onnx.helper.make_attribute_ref does not set attr.ref_attr_name
# Bug Report ### Describe the bug onnx.helper.make_attribute_ref does not create a reference attribute, it creates a normal attribute with name and type. Should it not set attr.ref_attr_name to refer to the parent function's attribute?
open
2024-02-06T09:12:50Z
2024-02-08T13:12:57Z
https://github.com/onnx/onnx/issues/5909
[ "bug" ]
aernoudt
3
junyanz/pytorch-CycleGAN-and-pix2pix
deep-learning
1374
latest_net_<D/G>_<A/B> vs <epoch_count>+<save_latest_freq> Net
I am confused by the checkpoints `latest_net_D_A`, `latest_net_D_B`, `latest_net_G_A`, and `latest_net_G_B`. We save checkpoints every `--save_epoch_freq`. So I would guess that if I trained my network for 10 Epochs with, `--save_epoch_freq` set to 5, `10_net_D_B.pth` will be equal to `latest_net_D_B.pth`. However, what if I train my network for 10 Epochs with, `--save_epoch_freq` set to 5, but the training gets interrupted at epoch 8, is `latest_net_D_B.pth` then equal to `5_net_D_B.pth`, `7_net_D_B.pth`, or `8_net_D_B.pth`? If it's the former (equal to `5_net_D_B.pth`), why do we save `latest_net_D_B.pth` at all since it is a duplicate?
open
2022-02-05T22:52:17Z
2022-02-15T09:47:40Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1374
[]
Lauenburg
1
chezou/tabula-py
pandas
4
Cannot Install tabula-py
My Java version is 1.8.0_101 and pandas is installed in an Anaconda environment. I tried installing it on both Python 2.7.12 and Python 3.5 :: Anaconda 4.1.1 (64-bit). I executed "pip install tabula-py" in Anaconda as well; the output is:

_Collecting tabula-py
Could not find a version that satisfies the requirement tabula-py (from versions: )
No matching distribution found for tabula-py_

Are there any specific requirements other than Java and pandas? Thank you
closed
2016-09-20T14:05:01Z
2016-09-21T12:07:27Z
https://github.com/chezou/tabula-py/issues/4
[]
ghost
4
lanpa/tensorboardX
numpy
357
Got this problem
anaconda3/lib/python3.6/site-packages/google/protobuf/pyext/../../../../../libprotobuf.so.16: symbol _ZNSt7__cxx1119basic_ostringstreamIcSt11char_traitsIcESaIcEED1Ev, version GLIBCXX_3.4.21 not defined in file libstdc++.so.6 with link time reference
open
2019-02-17T06:34:59Z
2019-02-17T06:34:59Z
https://github.com/lanpa/tensorboardX/issues/357
[]
zhangxinyu-xyz
0
zappa/Zappa
flask
533
[Migrated] Set PYTHON_EGG_CACHE for flask apps during init
Originally from: https://github.com/Miserlou/Zappa/issues/1412 by [L226](https://github.com/L226)

## Context

I discovered that in my Flask deployment, the app deploys fine with e.g. `zappa init; zappa deploy dev`, however upon hitting the generated endpoint a failure is returned.

## Expected Behavior

You should be able to get your expected response from whatever endpoint is hit.

## Actual Behavior

You get this response:

```
"{u'message': u'An uncaught exception happened while servicing this request. You can investigate this with the `zappa tail` command.', u'traceback': ['Traceback (most recent call last):\\n', ' File \"/var/task/handler.py\", line 452, in handler\\n response = Response.from_app(self.wsgi_app, environ)\\n', ' File \"/tmp/pip-build-LktYrc/Werkzeug/werkzeug/wrappers.py\", line 903, in from_app\\n', ' File \"/tmp/pip-build-LktYrc/Werkzeug/werkzeug/wrappers.py\", line 57, in _run_wsgi_app\\n', ' File \"/tmp/pip-build-LktYrc/Werkzeug/werkzeug/test.py\", line 884, in run_wsgi_app\\n', \"TypeError: 'NoneType' object is not callable\\n\"]}"
```

`zappa tail dev` yields the following:

```
[1519342540529] Can't extract file(s) to egg cache

The following error occurred while trying to extract file(s) to the Python egg cache:

  [Errno 30] Read-only file system: '/home/sbx_user1060'

The Python egg cache directory is currently set to:

  /home/sbx_user1060/.python-eggs

Perhaps your account does not have write access to this directory? You can change the cache directory by setting the PYTHON_EGG_CACHE environment variable to point to an accessible directory.
```

## Possible Fix

Seems that PYTHON_EGG_CACHE needs to be set as an environment variable to '/tmp'. I solved it by including the following in my zappa_settings.json:

```json
"environment_variables": {
    "PYTHON_EGG_CACHE": "/tmp"
}
```

Unsure if this is Flask specific, or if I stuffed up somewhere, or if this is actually expected behaviour...

## Steps to Reproduce

1. Make a flask app
2. `zappa init`
3. `zappa deploy dev`
4. poke API endpoint

## Your Environment

* Zappa version used: 0.45.1
* Operating System and Python version: 4.13.0-32-generic #35~16.04.1-Ubuntu | Python 2.7.12
* The output of `pip freeze`:
* Link to your project (optional):
* Your `zappa_settings.py`:

```json
{
    "dev": {
        "app_function": "*****.api.API",
        "aws_region": "ap-southeast-2",
        "profile_name": "*****",
        "project_name": "api",
        "runtime": "python2.7",
        "s3_bucket": "*****",
        "environment_variables": {
            "*****": "*****",
            "PYTHON_EGG_CACHE": "/tmp"
        },
        "domain": "*****.*****",
        "cors": true,
        "certificate_arn": "arn:aws:acm:us-east-1:*******"
    }
}
```
closed
2021-02-20T09:44:36Z
2023-08-17T01:18:18Z
https://github.com/zappa/Zappa/issues/533
[ "bug", "enhancement" ]
jneves
1
nalepae/pandarallel
pandas
263
Parallel_apply gets stuck
## General

- **Operating System**: mac os
- **Python version**: Python 3.11.6
- **Pandas version**: 2.1.4
- **Pandarallel version**: 1.6.5

## Acknowledgement

- [x] My issue is **NOT** present when using `pandas` alone (without `pandarallel`)
- [x] If I am on **Windows**, I read the [Troubleshooting page](https://nalepae.github.io/pandarallel/troubleshooting/) before writing a new bug report

## Bug description

### Observed behavior

I have 2 functions that I'm running with `parallel_apply` on my dataframe. Here are the functions:

```python
class Myclass:
    # Method 1
    @staticmethod
    def remove_newlines(txt):
        txt = re.sub(r'[\n]+', '\n', txt)
        return txt

    def clean_text(self, txt):
        txt = self.remove_tags(txt)
        txt = self.remove_newlines(txt)
        return txt

    def clean_text_column(self, df_col):
        if self.parallel:
            pandarallel.initialize(progress_bar=True)
            df_col = df_col.parallel_apply(self.clean_text)
        else:
            df_col = df_col.apply(self.clean_text)
        return df_col

    # Method 2
    @staticmethod
    def get_tokenizer(model='cl100k_base'):
        return tiktoken.get_encoding(model)

    @staticmethod
    def get_tokens(text, tokenizer):
        tokens = tokenizer.encode(
            text,
            disallowed_special=()
        )
        return len(tokens)

    def get_tokens_column(self, df_col):
        tokenizer = self.get_tokenizer()
        if self.parallel:
            pandarallel.initialize(progress_bar=True)
            df_col = df_col.parallel_apply(self.get_tokens, args=(tokenizer,))
        else:
            # there is an issue with pandarallel here.
            df_col = df_col.apply(self.get_tokens, args=(tokenizer,))
        return df_col
```

The first method runs ok with `parallel_apply`, but the second method gets stuck at 0% without raising any error.

<img width="914" alt="Screenshot 2024-01-17 at 4 53 08 PM" src="https://github.com/nalepae/pandarallel/assets/156240643/43db3183-3493-4655-8b39-6390f2485da4">
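A plausible culprit for the hang is that the `tiktoken` tokenizer is being pickled and shipped to every worker via `args=(tokenizer,)`, and objects holding native resources often serialize badly. A common workaround is to build (and cache) the tokenizer inside the mapped function so each worker constructs its own — a sketch of the pattern with a stand-in tokenizer, since the idea is independent of tiktoken:

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def get_tokenizer():
    # Real code would be: return tiktoken.get_encoding("cl100k_base")
    # Stand-in so the sketch runs anywhere: whitespace tokenization.
    return str.split

def get_tokens(text):
    # Built lazily (and cached) per process instead of being pickled from
    # the parent, so parallel_apply(get_tokens) needs no extra args.
    tokenizer = get_tokenizer()
    return len(tokenizer(text))

print(get_tokens("a quick test"))  # 3
```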
open
2024-01-18T00:53:41Z
2024-04-27T11:42:41Z
https://github.com/nalepae/pandarallel/issues/263
[]
zeinabsobhani
4
ivy-llc/ivy
tensorflow
28715
Fix Frontend Failing Test: paddle - logic.paddle.equal_all
To-do List: https://github.com/unifyai/ivy/issues/27500
closed
2024-04-01T13:10:39Z
2024-04-09T04:31:29Z
https://github.com/ivy-llc/ivy/issues/28715
[ "Sub Task" ]
ZJay07
0
fbdesignpro/sweetviz
pandas
2
Documentation analyze vs compare
Hi, first of all, congrats on the project. I haven't used it extensively yet but will do so soon. Just dropping this note because I noticed some inconsistencies between the documentation and the Medium article. In the Medium article the analyze() function is mentioned but not used. Here on GitHub compare() is mentioned but not used. Just wanted to let you know...
closed
2020-06-06T12:15:02Z
2020-06-08T13:21:06Z
https://github.com/fbdesignpro/sweetviz/issues/2
[]
gpompeo
0
sinaptik-ai/pandas-ai
data-visualization
1655
AttributeError: 'LangchainLLM' object has no attribute '_llm_type'
### System Info

pandasai==3.0.0b14
system in windows10 and ubuntu22.04
python3.11

### 🐛 Describe the bug

```python
from langchain.chat_models import ChatOpenAI
import pandasai as pai
from pandasai_langchain import LangchainLLM

dataset_path = "qshop/log-data"

try:
    sql_table = pai.create(
        path=dataset_path,
        description="XXXXXXXXXXXXXX",
        source={
            "type": "mysql",
            "connection": {
                "host": "192.168.0.4",
                "port": 8096,
                "user": "qshop_rw",
                "password": "Hd43eN+DkNaR",
                "database": "qshop"
            },
            "table": "tb_log"
        },
        columns=[
            {"name": "Id", "type": "string", "description": "每条数据的唯一标识符"},
            {"name": "UserID", "type": "string", "description": "此条操作记录的用户,无就代表用户没登录"},
            {"name": "CreateTime", "type": "datetime", "description": "此条操作记录的产生的时间"},
            {"name": "PageName", "type": "string", "description": "此条操作记录访问的页面名称"},
            {"name": "GoodsName", "type": "string", "description": "此条操作记录访问的产品的名称,或者需求的名称,或者视频资讯的名称"},
            {"name": "Col1", "type": "string", "description": "辅助判断列,如果值为小模型发布则说明GoodsName对应的是产品,如果值为小模型需求则说明GoodsName对应的是需求,如果值为小模型视频说明GoodsName对应的是视频资讯"}
        ]
    )
    print(f"成功创建新数据集: {dataset_path}")
except Exception as e:
    print(f"创建数据集时出错: {e}")

llm = ChatOpenAI(base_url='https://XXXX.XXX.XX.XX:XXX/v1/', api_key='sk-proj-1234567890', model='deepseek-r1-distill-qwen', request_timeout=300)
llm1 = LangchainLLM(langchain_llm=llm)
pai.config.set({
    "llm": llm1,
    "timeout": 300,
    "enable_cache": False,
})

# 从连接器获取数据
agent = pai.load('qshop/log-data')

# 示例查询
ans = agent.chat("请根据这个表格生成一份访问分析报告,并根据报告给出后续的运营建议。")
print(ans)
```

```
Exception has occurred: AttributeError
'LangchainLLM' object has no attribute '_llm_type'
  File "E:\develop\aiagent\pandasaitest.py", line 84, in <module>
    ans = agent.chat("请根据这个表格生成一份访问分析报告,并根据报告给出后续的运营建议。")
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'LangchainLLM' object has no attribute '_llm_type'
```
closed
2025-03-04T04:09:26Z
2025-03-14T17:03:38Z
https://github.com/sinaptik-ai/pandas-ai/issues/1655
[]
ban0228
1
pallets/quart
asyncio
66
Does quart have feature parity and consistent bug fixes for the python 3.6 and python 3.7 releases?
I noticed that when I look at the available quart versions for python 3.6 and 3.7, they differ. python 3.6: 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.4.0, 0.4.1, 0.5.0, 0.6.0, 0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.6.5, 0.6.6, 0.6.7, 0.6.8, 0.6.9, 0.6.10, 0.6.11, 0.6.12, 0.6.13 python 3.7: 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.4.0, 0.4.1, 0.5.0, 0.6.0, 0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.6.5, 0.6.6, 0.6.7, 0.6.8, 0.6.9, 0.6.10, 0.6.11, 0.6.12, 0.6.13, 0.7.0, 0.7.1, 0.7.2, 0.8.0, 0.8.1, 0.9.0, 0.9.1 Do I need to use python 3.7 to get the latest quart changes? Will quart support and updates for python 3.6 at any point be deprecated? Thanks.
closed
2019-05-29T17:02:30Z
2022-07-06T00:23:54Z
https://github.com/pallets/quart/issues/66
[]
DeVonteApplewhite
1
robotframework/robotframework
automation
5,287
Add `type` attribute to `TestSuite` and `TestCase` objects
This attribute makes it easier to detect whether a certain model object is a suite, a test, a keyword or a control structure. One use case is implementing `start_keyword` in a visitor or a listener and wanting to know the type of `kw.parent`. A concrete use case is that `JsonLogger`, implemented as part of #3423, needs to know whether the parent of a teardown keyword is a suite or a test. Keywords and control structures already have a `type` attribute that's needed also in JSON serialization, but using `kw.parent.type` currently fails if the parent happens to be a test or a suite. A workaround is importing appropriate classes and using something like `isinstance(kw.parent, TestSuite)`, but in many usages using `kw.parent == 'SUITE'` is more convenient. Another workaround is implementing also `start_suite`, `start_test` and `start_body_item` and keeping track of the parent type separately. That can be needed in some cases anyway, but in simple usages accessing the `type` attribute is a lot simpler.
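The second workaround — tracking the parent type manually across listener calls — amounts to bookkeeping like this (a plain-Python sketch of the idea, not Robot Framework's actual listener API surface):

```python
class TeardownParentTracker:
    """Remembers whether the most recently entered scope is a suite or a
    test, so start_keyword can tell what kind of parent a keyword has."""

    def __init__(self):
        self.parent_type = None

    def start_suite(self, suite, result=None):
        self.parent_type = "SUITE"

    def start_test(self, test, result=None):
        self.parent_type = "TEST"

    def start_keyword(self, kw, result=None):
        # With the proposed attribute this would simply be kw.parent.type.
        return self.parent_type

tracker = TeardownParentTracker()
tracker.start_suite(None)
print(tracker.start_keyword(None))  # SUITE
```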
closed
2024-12-09T09:46:21Z
2024-12-18T13:33:15Z
https://github.com/robotframework/robotframework/issues/5287
[ "enhancement", "priority: low", "beta 1", "effort: small" ]
pekkaklarck
2
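The two parent-type checks contrasted in the issue above can be sketched with stand-in classes. This is a hypothetical illustration: `Suite` and `Test` here are placeholders for Robot Framework's real `TestSuite`/`TestCase` model classes, not the library's actual API.

```python
# Hypothetical stand-ins for Robot Framework's TestSuite/TestCase model
# classes, used only to illustrate the two styles of parent-type checks.
class Suite:
    type = 'SUITE'

class Test:
    type = 'TEST'

def parent_is_suite_isinstance(parent):
    # Workaround: import the appropriate class and use isinstance().
    return isinstance(parent, Suite)

def parent_is_suite_type(parent):
    # With a `type` attribute on suites and tests, a plain string
    # comparison works uniformly for any body item's parent.
    return parent.type == 'SUITE'
```

Both checks give the same answer; the `type` comparison just avoids the import and works for every parent kind.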
aiogram/aiogram
asyncio
524
Handle messages from a group
Hello everyone, How can I handle messages from a group? (I didn't receive messages in the `process_messages`) - created a bot - set privacy mod to DISABLED at BotFather: ``` > User /setprivacy > BotFather Choose a bot to change group messages settings. > User @name_bot > BotFather 'Enable' - your bot will only receive messages that either start with the '/' symbol or mention the bot by username. 'Disable' - your bot will receive all messages that people send to groups. Current status is: DISABLED > User Disable > BotFather Success! The new status is: DISABLED. /help ``` - create a group - added the bot to the group as an admin - ran the code: ```python from aiogram import Bot, Dispatcher, executor, types bot = Bot(token="<API token>") dp = Dispatcher(bot) @dp.message_handler(chat_type=[types.ChatType.GROUP, types.ChatType.CHANNEL]) async def process_messages(msg: types.Message): ... # TODO: to process the message executor.start_polling(dp) ``` Permissions of the bot in the channel: ![image](https://user-images.githubusercontent.com/77075714/111037742-2c358880-842e-11eb-86bf-b3ab014d33c1.png)
closed
2021-03-13T17:02:17Z
2021-03-14T16:41:38Z
https://github.com/aiogram/aiogram/issues/524
[ "question issue" ]
alex-deus
2
NullArray/AutoSploit
automation
881
Divided by zero exception
Error: Attempted to divide by zero.
closed
2019-04-19T16:01:52Z
2019-04-19T16:37:15Z
https://github.com/NullArray/AutoSploit/issues/881
[]
AutosploitReporter
0
iperov/DeepFaceLab
deep-learning
549
Gigapixel upscaling the source method not working (restoring metadata error)
So I followed all the instructions on how to upscale using Gigapixel: I used "4.2.other) data_src util faceset metadata save", then upscaled the aligned images in src using Gigapixel, then renamed the new upscaled folder to "aligned" and moved the metadata file there. But when I run the restore bat file "4.2.other) data_src util faceset metadata restore" I get this error and no images are processed. Is this a bug? ![Screenshot (1319)](https://user-images.githubusercontent.com/52265226/72046985-f5685d80-32d2-11ea-8e0e-8f65779a164c.png)
closed
2020-01-09T07:27:12Z
2020-03-28T05:42:18Z
https://github.com/iperov/DeepFaceLab/issues/549
[]
mpmo10
4
python-gino/gino
asyncio
699
Alter in create_all
* GINO version: 1.0.0 * Python version: 3.8 * asyncpg version: 0.20.1 * PostgreSQL version: 10 We are using db.gino.create_all() to create the database from the models.py file. However, when we make changes to a specific table (class) in the model file, we need to drop that table before the changes are reflected. Is there any way for table alterations to be applied as part of create_all()?
closed
2020-06-10T16:40:33Z
2020-06-21T03:40:18Z
https://github.com/python-gino/gino/issues/699
[ "question" ]
nikhilpatil02
2
unit8co/darts
data-science
2,558
[QUESTION]Training Loss Much Lower Than Validation Loss in TSMixerModel: Need Help Understanding Why
**Issue** I am training a TSMixerModel to forecast multivariate time series. The model performs well overall, but I notice that the training loss is consistently much lower than the validation loss (sometimes by orders of magnitude). I have already tried different loss functions (MAELoss, MapeLoss), and the issue persists. However, when I forecast using this model, I don’t observe signs of overfitting, and the model predictions look good. **Callback** I use the following setup for logging the losses: ``` class LossLogger(Callback): def __init__(self): self.train_loss = [] self.val_loss = [] # will automatically be called at the end of each epoch def on_train_epoch_end(self, trainer: "pl.Trainer", pl_module: "pl.LightningModule") -> None: self.train_loss.append(float(trainer.callback_metrics["train_loss"])) def on_validation_epoch_end(self, trainer: "pl.Trainer", pl_module: "pl.LightningModule") -> None: if not trainer.sanity_checking: self.val_loss.append(float(trainer.callback_metrics["val_loss"])) loss_logger = LossLogger() ``` **Model** This is how I initialize the model: ``` progress_bar = TFMProgressBar(enable_sanity_check_bar=False, enable_validation_bar=False) limit_train_batches = 50 limit_val_batches = 50 max_epochs = 30 batch_size = 64 model_tsm = TSMixerModel( input_chunk_length=49, output_chunk_length=130, use_reversible_instance_norm=True, optimizer_kwargs={"lr": 1e-4}, nr_epochs_val_period=1, pl_trainer_kwargs={"gradient_clip_val": 1, "max_epochs": max_epochs, "limit_train_batches": limit_train_batches, "limit_val_batches": limit_val_batches, "accelerator": "auto", "callbacks": [progress_bar, loss_logger]}, lr_scheduler_cls=torch.optim.lr_scheduler.ExponentialLR, lr_scheduler_kwargs={"gamma": 0.999}, likelihood=QuantileRegression(), loss_fn=None, save_checkpoints=True, force_reset=True, batch_size=64, random_state=42, add_encoders={"cyclic": {"future": ['month', 'day', 'weekday','quarter', 'dayofyear', 'week']}}, use_static_covariates=True, 
model_name="tsm") ``` **Loss curves** Here are the plotted loss curves after training: ``` loss_df = pd.DataFrame({'epoch':range(0, len(model_tsm.trainer.callbacks[1].train_loss)), 'train_loss':model_tsm.trainer.callbacks[1].train_loss, 'val_loss':model_tsm.trainer.callbacks[1].val_loss}) plt.plot(loss_df['epoch'], loss_df['train_loss'], color='blue', label='train loss: ' + str(loss_df['train_loss'][-1:].item())) plt.plot(loss_df['epoch'], loss_df['val_loss'], color='orange', label='val loss: ' + str(loss_df['val_loss'][-1:].item())) plt.gcf().set_size_inches(10, 5) plt.legend() plt.show() ``` ![image](https://github.com/user-attachments/assets/470f0537-96a9-4d0d-8bb9-351c57ccd636) **Data** I create my multivariate time series using from_group_dataframe() as follows: ``` ts_df = TimeSeries.from_group_dataframe(df, group_cols=['group1', 'group2', 'group3'], time_col='ds', value_cols='y', freq='D') ``` **Question** Why is my training loss significantly lower than the validation loss, sometimes by orders of magnitude? Could it be related to how the data is structured as a list of time series? Is this expected behavior in this scenario, or could there be an issue with scaling or loss calculation? I appreciate any help or insights! Thanks!
closed
2024-10-10T15:35:00Z
2024-11-07T08:26:50Z
https://github.com/unit8co/darts/issues/2558
[ "question" ]
erl61
5
seleniumbase/SeleniumBase
web-scraping
2,946
Detect cloudflare using its own method of bypassing
Hi, I apologize if I use my own method and not the built-in sb.uc_gui_click_captcha(). Cloudflare detects SeleniumBase when I use my own image-matching click method! What could this be related to? . Code - ----------------------------------------------- ``` import time import pyautogui import asyncio from seleniumbase import SB screenshot_path = 'click.png' ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36" async def search_and_click(): while True: try: location = pyautogui.locateOnScreen(screenshot_path, confidence=0.8) if location: center = pyautogui.center(location) print(f"Found at: x={center.x}, y={center.y}") pyautogui.moveTo(center.x, center.y, duration=1) pyautogui.click() break # Stop the loop after a successful click except pyautogui.ImageNotFoundException: pass await asyncio.sleep(1) def check_title_and_handle_captcha(sb): title = sb.driver.title print(f"Title: {title}") if "Just a moment" in title or "Один момент" in title: asyncio.run(search_and_click()) time.sleep(10) check_title_and_handle_captcha(sb) with SB(uc=True, test=True) as sb: url = "https://nopecha.com/demo/cloudflare" sb.uc_open_with_reconnect(url, reconnect_time=9) time.sleep(3) check_title_and_handle_captcha(sb) sb.post_message("SeleniumBase wasn't detected", duration=3) cookies = sb.driver.get_cookies() cf_clearance_cookie = next((cookie for cookie in cookies if cookie['name'] == 'cf_clearance'), None) if cf_clearance_cookie: print(f"Cookie: cf_clearance={cf_clearance_cookie['value']}") else: print("cf_clearance cookie not found") ``` ------------------------ here is a png - ![click](https://github.com/user-attachments/assets/81fc84c8-deae-46f9-b38c-9aaec2064481) . **Thank you very much for your hard work!**
closed
2024-07-21T11:37:07Z
2024-07-21T13:25:20Z
https://github.com/seleniumbase/SeleniumBase/issues/2946
[ "question", "UC Mode / CDP Mode" ]
leo562
1
x-tabdeveloping/topicwizard
plotly
4
Compatibility for Chinese
Hi! Thanks for this awesome package! Currently I am applying this package to a Chinese-language text corpus. The generated output shows "empty squares" - the reason behind this is that an explicit language font is needed (a .ttf file, which I have). Any idea how to incorporate this external font file into this package? Thanks!
closed
2023-03-22T08:52:15Z
2025-01-03T14:51:23Z
https://github.com/x-tabdeveloping/topicwizard/issues/4
[ "bug" ]
jsnleong
5
amdegroot/ssd.pytorch
computer-vision
495
continue train
Training was interrupted; how can I continue (resume) it? Thanks
open
2020-06-28T00:32:20Z
2020-09-23T06:40:49Z
https://github.com/amdegroot/ssd.pytorch/issues/495
[]
czy112
2
Ehco1996/django-sspanel
django
12
Questions about load balancing
I set up two nodes, both with panel id 1. If I use an IP address as the server address and fill in backend #1's IP, backend #2 can still be used normally, but the panel does not count backend #2's traffic. 01 My question is: if I use a single domain name as the server address, with DNS resolving China Unicom and China Telecom users to different backend IPs, will traffic accounting still work correctly? 02 If a technically savvy user discovers the backend IPs via the different DNS resolutions and connects with an IP directly, would that create a route whose traffic is not metered? 03 That is the case with two backends; what about more than two backends?
closed
2017-10-08T23:23:49Z
2017-10-08T23:44:26Z
https://github.com/Ehco1996/django-sspanel/issues/12
[]
cheapssr
1
LibreTranslate/LibreTranslate
api
105
Limit quantity of calls for key
[Few shot translation](https://community.libretranslate.com/t/few-shot-translation-with-multilingual-language-models/160) model backends are expensive (OpenAI API is currently ~$0.06 per token). This means using few shot translations will require limiting API keys to a number of calls or characters limit.
closed
2021-06-24T22:33:48Z
2022-07-17T15:15:04Z
https://github.com/LibreTranslate/LibreTranslate/issues/105
[ "enhancement" ]
PJ-Finlay
0
CorentinJ/Real-Time-Voice-Cloning
deep-learning
538
New pretrained synthesizer model (tensorflow)
Trained on LibriSpeech, using the current synthesizer (tensorflow). This performs similarly to the current model, with fewer random gaps appearing in the middle of synthesized utterances. It handles short input texts better too. ### Download link: https://www.dropbox.com/s/3kyjgew55c4yxtf/librispeech_270k_tf.zip?dl=0 Unzip the file and move the `logs-pretrained` folder to `synthesizer/saved_models`. I am not going to provide scripts to reproduce the training. For anyone interested, you will need to curate LibriSpeech to have more consistent prosody. This is what I did when running synthesizer_preprocess_audio.py: 1. In synthesizer/hparams.py, set `silence_min_duration_split=0.05` 2. Right before [this line](https://github.com/CorentinJ/Real-Time-Voice-Cloning/blob/8f71d678d2457dffc4d07b52e75be11433313e15/synthesizer/preprocess.py#L182), run `encoder.preprocess_wav()` on each wav, this will use voice activation detection to trim silences (see #501). Compare the lengths of the "before" and "after" wavs. If they don't match then it means a silence is detected and it is discarded. I keep the "before" wav if the lengths match. 3. Post-process `datasets_root/SV2TTS/synthesizer/train.txt` to include utterances between 225 and 600 mel frames (2.8 to 7.5 sec). This leaves 48 hours of training data. 4. Train from scratch for about 270k steps. I used a batch size of 12 because of limited GPU memory.
closed
2020-09-30T07:59:31Z
2021-12-04T06:01:56Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/538
[]
ghost
3
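The curation described in steps 2-3 of the issue above can be sketched as a small filter. This is a hypothetical illustration, not the author's actual script: the `trim_silences` callable stands in for `encoder.preprocess_wav()`, and the mel-frame bounds are the ones quoted in the issue.

```python
# Sketch of the LibriSpeech curation described above (hypothetical
# helper names; the real pipeline uses encoder.preprocess_wav and
# post-processes datasets_root/SV2TTS/synthesizer/train.txt).

MIN_FRAMES, MAX_FRAMES = 225, 600  # 2.8 to 7.5 sec of mel frames

def keep_wav(wav, trim_silences):
    """Keep a wav only if voice-activity trimming removes nothing,
    i.e. no silence was detected inside it (step 2)."""
    trimmed = trim_silences(wav)
    return len(trimmed) == len(wav)

def filter_utterances(utterances):
    """Keep utterances whose mel-frame length is within bounds (step 3)."""
    return [u for u in utterances if MIN_FRAMES <= u["mel_frames"] <= MAX_FRAMES]
```

The length comparison is the key trick: if the trimmed wav is shorter than the original, a silence was detected and the utterance is discarded.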
sinaptik-ai/pandas-ai
data-visualization
769
Azure Open AI support not working in 1.5.1
### System Info Platform: Azure databricks Python version: 3.10.12 pandas ai: 1.5.1 ### 🐛 Describe the bug OS version: Python version: pandas ai: 1.5.1 When changing from **1.4.10** to **1.5.1**, my code is no longer working. I am following the documentation and I changed **api_base** to **azure_endpoint**. I am configuring the LLM as follows: llm = AzureOpenAI(api_token="my token", azure_endpoint="my endpoint", api_version="2023-05-15", deployment_name="my deployment name", ) I tried different **api_versions** and I tried to set **is_chat** to true. I am using GPT 3.5 turbo with version number 0301. The reply I get from PandasAI, regardless of the prompt, is **_'Unfortunately, I was not able to answer your question, because of the following error:\n\nNo result returned\n'_** **Note that the same code works fine in pandas ai 1.4.10** Thanks in advance for the info and let me know if you need additional info.
closed
2023-11-21T20:21:20Z
2024-06-01T00:20:49Z
https://github.com/sinaptik-ai/pandas-ai/issues/769
[ "bug" ]
epicvhbennetts
16
donnemartin/system-design-primer
python
551
DNS image
I personally think this image explains how DNS works more effectively: ![q5s2t](https://user-images.githubusercontent.com/53744971/127287086-da2c26c5-82e1-4c28-86ea-effc9c733595.jpg)
open
2021-07-28T08:05:51Z
2022-04-23T13:17:41Z
https://github.com/donnemartin/system-design-primer/issues/551
[ "needs-review" ]
shreyshreyansh
0
gradio-app/gradio
data-science
10,180
`/gradio_api/file=` may not work with docker
### Describe the bug When I deploy the gradio app with docker and mount a volume which path is included in `allowed_paths`, I got an error `{"detail":"File not allowed:images/0.jpg}` ![image](https://github.com/user-attachments/assets/91a0c28b-355a-4d6f-8871-c6f990db821e) When started locally, the path can be accessed normally. ### Have you searched existing issues? 🔎 - [X] I have searched and found no existing issues ### Reproduction ```python import gradio as gr with gr.Blocks( title="MMKG-RAG", css_paths=[Path("src/mgrag/gui/front/index.css")] ) as demo: gr.Markdown( "# MMKG-RAG\n\n Enhancing Retrieval-Augmented Generation with Multi-Modal Knowledge Graph Integration. [GitHub](https://github.com/wenzhaoabc/mmkg-rag)") if __name__ == "__main__": demo.launch( allowed_paths=[ "/root_path/", ], ) ``` Docker start command: ```bash docker run -p 7860:7860 -v ~/project/root_path:/root_path --name project project:v2 ``` When I accessed 'http://127.0.0.1:7860', I got an error `{"detail":"File not allowed:images/0.jpg}` ### Screenshot _No response_ ### Logs _No response_ ### System Info ```shell Gradio Environment Information: ------------------------------ Operating System: Linux gradio version: 5.8.0 gradio_client version: 1.5.1 ------------------------------------------------ gradio dependencies in your environment: aiofiles: 23.2.1 anyio: 4.7.0 audioop-lts is not installed. fastapi: 0.115.6 ffmpy: 0.4.0 gradio-client==1.5.1 is not installed. httpx: 0.28.1 huggingface-hub: 0.26.5 jinja2: 3.1.4 markupsafe: 2.1.5 numpy: 2.2.0 orjson: 3.10.12 packaging: 24.2 pandas: 2.2.3 pillow: 11.0.0 pydantic: 2.10.3 pydub: 0.25.1 python-multipart: 0.0.19 pyyaml: 6.0.2 ruff: 0.8.2 safehttpx: 0.1.6 semantic-version: 2.10.0 starlette: 0.41.3 tomlkit: 0.13.2 typer: 0.15.1 typing-extensions: 4.12.2 urllib3: 2.2.3 uvicorn: 0.32.1 authlib; extra == 'oauth' is not installed. itsdangerous; extra == 'oauth' is not installed. 
gradio_client dependencies in your environment: fsspec: 2024.10.0 httpx: 0.28.1 huggingface-hub: 0.26.5 packaging: 24.2 typing-extensions: 4.12.2 websockets: 14.1 ``` ### Severity I can work around it
open
2024-12-11T14:27:08Z
2025-02-28T17:54:38Z
https://github.com/gradio-app/gradio/issues/10180
[ "bug" ]
wenzhaoabc
0
sqlalchemy/sqlalchemy
sqlalchemy
10,050
Possibility to use callable for relationship.back_populates and ForeignKey.column
### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/10049 <div type='discussions-op-text'> <sup>Originally posted by **AlexanderPodorov** July 1, 2023</sup> Given this option we could make existing mapping more robust and reliable for refactoring. Current code, example from [here](https://docs.sqlalchemy.org/en/20/orm/basic_relationships.html#declarative-vs-imperative-forms): ```python class Parent(Base): __tablename__ = "parent_table" id: Mapped[int] = mapped_column(primary_key=True) children: Mapped[List["Child"]] = relationship(back_populates="parent") class Child(Base): __tablename__ = "child_table" id: Mapped[int] = mapped_column(primary_key=True) parent_id: Mapped[int] = mapped_column(ForeignKey("parent_table.id")) parent: Mapped["Parent"] = relationship(back_populates="children") ``` Suggested example code: ```python from __future__ import annotations class Parent(Base): __tablename__ = "parent_table" id: Mapped[int] = mapped_column(primary_key=True) children: Mapped[List[Child]] = relationship(back_populates=lambda: Child.parent) class Child(Base): __tablename__ = "child_table" id: Mapped[int] = mapped_column(primary_key=True) parent_id: Mapped[int] = mapped_column(ForeignKey(lambda: Parent.id)) parent: Mapped[Parent] = relationship(back_populates=lambda: Parent.children) ``` In the above example if we rename any of attributes, the mapping will still work. No need to manually rename `back_populates` and foreign key column name. Please note that `lambda` is kind of required here to enable forward references and lazy evaluation of attributes. What do you think? Thanks.</div>
closed
2023-07-02T14:17:53Z
2024-02-01T09:51:43Z
https://github.com/sqlalchemy/sqlalchemy/issues/10050
[ "orm", "use case", "patch provided", "orm - annotated declarative" ]
zzzeek
5
onnx/onnx
tensorflow
6,329
Change the example in the documentation of Transpose
TL;DR: The ONNX documentation for operator `Transpose` does not remove an ambiguity. The current example is > For example, when perm=(1, 0, 2), given an input tensor of shape (1, 2, 3), the output shape will be (2, 1, 3). I propose to replace it with > For example, when perm=(2, 0, 1), given an input tensor of shape (1, 2, 3), the output shape will be (3, 1, 2). Long version: Applying the permutation `perm` over a tensor of shape `(d1,...,dk)` leads to a tensor of shape `(d'1,...,d'k)` such that `d'i = d(perm[i])`. Another (slightly less natural but reasonable) interpretation would be to assume instead that `d'(perm[i]) = di`. (Indeed, that's the inverse permutation of `perm`). The current example from the documentation is such that it is its own inverse, which means that it does not disambiguate the two interpretations. In the example I propose, the incorrect interpretation would lead to a tensor of shape (2,3,1). Hence, the example is more informative.
open
2024-08-30T07:04:16Z
2024-09-06T14:58:25Z
https://github.com/onnx/onnx/issues/6329
[ "bug", "topic: documentation", "topic: spec clarification", "contributions welcome" ]
agrastien
1
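The shape rule the issue above relies on, `out.shape[i] == in.shape[perm[i]]`, can be checked with NumPy, whose `transpose` follows the same semantics as ONNX `Transpose` (this sketch assumes NumPy is available):

```python
import numpy as np

x = np.zeros((1, 2, 3))          # input tensor of shape (1, 2, 3)
perm = (2, 0, 1)                 # the proposed, unambiguous example

y = np.transpose(x, perm)
# Correct reading: out.shape[i] == in.shape[perm[i]]
assert y.shape == (3, 1, 2)

# The incorrect "inverse" reading, out.shape[perm[i]] == in.shape[i],
# would instead predict (2, 3, 1); so perm=(2, 0, 1) disambiguates,
# while the current doc example perm=(1, 0, 2) is its own inverse.
inverse = tuple(np.argsort(perm))
assert np.transpose(x, inverse).shape == (2, 3, 1)
```

Since `(1, 0, 2)` equals its own inverse permutation, both readings give the same shape there, which is exactly the ambiguity the issue points out.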
coqui-ai/TTS
pytorch
4,118
[Bug] Streaming inference does not work
### Describe the bug Tried the streaming code at https://docs.coqui.ai/en/latest/models/xtts.html#streaming-manually with use_deepspeed=False on CPU. Got error: AttributeError: 'int' object has no attribute '_pad_token_tensor' ### To Reproduce import os import time import torch import torchaudio from TTS.tts.configs.xtts_config import XttsConfig from TTS.tts.models.xtts import Xtts print("Loading model...") config = XttsConfig() config.load_json("/path/to/xtts/config.json") model = Xtts.init_from_config(config) model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", use_deepspeed=False) #model.cuda() print("Computing speaker latents...") gpt_cond_latent, speaker_embedding = model.get_conditioning_latents(audio_path=["reference.wav"]) print("Inference...") t0 = time.time() chunks = model.inference_stream( "It took me quite a long time to develop a voice and now that I have it I am not going to be silent.", "en", gpt_cond_latent, speaker_embedding ) wav_chuncks = [] for i, chunk in enumerate(chunks): if i == 0: print(f"Time to first chunck: {time.time() - t0}") print(f"Received chunk {i} of audio length {chunk.shape[-1]}") wav_chuncks.append(chunk) wav = torch.cat(wav_chuncks, dim=0) torchaudio.save("xtts_streaming.wav", wav.squeeze().unsqueeze(0).cpu(), 24000) ### Expected behavior It should output generated audio ### Logs ```shell /Users/zhz/miniconda3/envs/xtts/lib/python3.10/site-packages/TTS/tts/layers/xtts/stream_generator.py:138: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation) warnings.warn( /Users/zhz/miniconda3/envs/xtts/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:818: UserWarning: `return_dict_in_generate` is NOT set to `True`, but `output_hidden_states` is. 
When `return_dict_in_generate` is not `True`, `output_hidden_states` is ignored. warnings.warn( Traceback (most recent call last): File "/Users/zhz/Desktop/paradigm/conversation_playground/xtts/xtts_streaming.py", line 31, in <module> for i, chunk in enumerate(chunks): File "/Users/zhz/miniconda3/envs/xtts/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 36, in generator_context response = gen.send(None) File "/Users/zhz/miniconda3/envs/xtts/lib/python3.10/site-packages/TTS/tts/models/xtts.py", line 652, in inference_stream gpt_generator = self.gpt.get_generator( File "/Users/zhz/miniconda3/envs/xtts/lib/python3.10/site-packages/TTS/tts/layers/xtts/gpt.py", line 603, in get_generator return self.gpt_inference.generate_stream( File "/Users/zhz/miniconda3/envs/xtts/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) File "/Users/zhz/miniconda3/envs/xtts/lib/python3.10/site-packages/TTS/tts/layers/xtts/stream_generator.py", line 186, in generate model_kwargs["attention_mask"] = self._prepare_attention_mask_for_generation( File "/Users/zhz/miniconda3/envs/xtts/lib/python3.10/site-packages/transformers/generation/utils.py", line 585, in _prepare_attention_mask_for_generation pad_token_id = generation_config._pad_token_tensor AttributeError: 'int' object has no attribute '_pad_token_tensor' ``` ### Environment ```shell '{ "CUDA": { "GPU": [], "available": false, "version": null }, "Packages": { "PyTorch_debug": false, "PyTorch_version": "2.5.1", "TTS": "0.22.0", "numpy": "1.22.0" }, "System": { "OS": "Darwin", "architecture": [ "64bit", "" ], "processor": "arm", "python": "3.10.16", "version": "Darwin Kernel Version 24.1.0: Thu Oct 10 21:06:23 PDT 2024; root:xnu-11215.41.3~3/RELEASE_ARM64_T8132" } } ``` ### Additional context _No response_
closed
2024-12-31T05:23:56Z
2025-02-22T05:07:50Z
https://github.com/coqui-ai/TTS/issues/4118
[ "bug", "wontfix" ]
1640675651
6
pytest-dev/pytest-selenium
pytest
46
Typo on documentation
from documentation in testing bot section: ``` py.test --driver TestingBot --capability browserName firefox --capability browserName 39 --capability platform WIN8 ``` correction should be ``` py.test --driver TestingBot --capability browserName firefox --capability version 39 --capability platform WIN8 ```
closed
2015-12-14T09:22:16Z
2015-12-14T10:59:27Z
https://github.com/pytest-dev/pytest-selenium/issues/46
[]
rarajabs
1
huggingface/datasets
pandas
6,490
`load_dataset(...,save_infos=True)` not working without loading script
### Describe the bug It seems that saving a dataset's infos back into the card file is not working for datasets without a loading script. After tracking the problem a bit it looks like saving the infos uses `Builder.get_imported_module_dir()` as its destination directory. Internally this is a call to `inspect.getfile()`, but since the actual builder class used is dynamically created (cf. `datasets.load.configure_builder_class`) this method actually returns the path to the parent builder class (e.g. `datasets.packaged_modules.json.JSON`). ### Steps to reproduce the bug 1. Have a local dataset without any loading script 2. Make sure there are no dataset infos in the README.md 3. Load with `save_infos=True` 4. No change in the dataset README.md 5. A new README.md file is created in the directory of the parent builder class (e.g. for json in `.../site-packages/datasets/packaged_modules/json/README.md`) ### Expected behavior The dataset README.md should be updated and no file should be created in the python environment. ### Environment info - `datasets` version: 2.15.0 - Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.19.4 - PyArrow version: 14.0.1 - Pandas version: 2.1.3 - `fsspec` version: 2023.6.0
open
2023-12-12T08:09:18Z
2023-12-12T08:36:22Z
https://github.com/huggingface/datasets/issues/6490
[]
morganveyret
1
junyanz/pytorch-CycleGAN-and-pix2pix
deep-learning
1,316
Test results (question)
Is it possible to save the full result image, and not only the half of Fake_B and Real_B?
closed
2021-09-18T22:18:48Z
2021-12-02T21:07:52Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1316
[]
avada-z
1
torchbox/wagtail-grapple
graphql
409
Performance problems with querying for a large numbers of redirects
We have an, ah, _unseemly_ number of redirects (almost 100,000), and it seems like it's starting to timeout on the GraphQL query for them at this point. It's not entirely clear why it has started to do this, because the number of redirects hasn't jumped dramatically, but it seems to have been triggered by an upgrade from 0.18 to 0.19.1. Perhaps some kind of performance regression? (Several GraphQL subdependencies came along with that update, too.) We've since upgraded to 0.24, which is the latest version that supports the version of Wagtail we're currently running (4.2.4 – yeah, I know…), but it still takes a very long time, often timing out after trying for 60 seconds. If there isn't a performance regression that can easily be diagnosed and fixed, some other potential ways to address this could be: - Enable pagination support on the redirects query - Allow fetching only redirects added/modified after a certain datetime, so the frontend can ask for only what's changed since it last updated its redirects cache
open
2024-10-01T18:38:44Z
2024-10-02T09:06:57Z
https://github.com/torchbox/wagtail-grapple/issues/409
[]
Scotchester
2
microsoft/nni
pytorch
5,224
AttributeError: 'torch._C.Node' object has no attribute 'schema'
I am using the tool to try to prune my model, following (https://github.com/microsoft/nni/blob/dab51f799f77aa72c18774faffaedf8d0ee2c977/examples/model_compress/pruning/admm_pruning_torch.py). I only changed the model (to a ResNet) and the dataloader. But now there is a problem when I use ModelSpeedup: File "<ipython-input-7-25297990bbbb>", line 1, in <module> ModelSpeedup(model, torch.randn([1, 2, 224, 224]).to(device), masks).speedup_model() File "D:\anaconda3\lib\site-packages\nni\compression\pytorch\speedup\compressor.py", line 543, in speedup_model self.infer_modules_masks() File "D:\anaconda3\lib\site-packages\nni\compression\pytorch\speedup\compressor.py", line 380, in infer_modules_masks self.update_direct_sparsity(curnode) File "D:\anaconda3\lib\site-packages\nni\compression\pytorch\speedup\compressor.py", line 228, in update_direct_sparsity func = jit_to_python_function(node, self) File "D:\anaconda3\lib\site-packages\nni\compression\pytorch\speedup\jit_translate.py", line 554, in jit_to_python_function return trans_func_dict[node.op_type](node, speedup) File "D:\anaconda3\lib\site-packages\nni\compression\pytorch\speedup\jit_translate.py", line 488, in generate_aten_to_python schema = c_node.schema() AttributeError: 'torch._C.Node' object has no attribute 'schema'
closed
2022-11-11T14:10:09Z
2022-12-05T02:38:01Z
https://github.com/microsoft/nni/issues/5224
[]
sunpeil
3
MaartenGr/BERTopic
nlp
1,604
TypeError: issubclass() arg 1 must be a class
Hello, I wanted to develop a project using with BERTopic but I got an error. The error is: ``` TypeError Traceback (most recent call last) /var/folders/q0/6dq2tpd50s7g0rgbvq9q6ys40000gn/T/ipykernel_58966/2806191270.py in <cell line: 1>() ----> 1 from bertopic import BERTopic ~/opt/anaconda3/lib/python3.9/site-packages/bertopic/__init__.py in <module> ----> 1 from bertopic._bertopic import BERTopic 2 3 __version__ = "0.15.0" 4 5 __all__ = [ ~/opt/anaconda3/lib/python3.9/site-packages/bertopic/_bertopic.py in <module> 47 from bertopic.cluster import BaseCluster 48 from bertopic.backend import BaseEmbedder ---> 49 from bertopic.representation._mmr import mmr 50 from bertopic.backend._utils import select_backend 51 from bertopic.vectorizers import ClassTfidfTransformer ~/opt/anaconda3/lib/python3.9/site-packages/bertopic/representation/__init__.py in <module> 35 # POS using Spacy 36 try: ---> 37 from bertopic.representation._pos import PartOfSpeech 38 except ModuleNotFoundError: 39 PartOfSpeech = NotInstalled("Part of Speech with Spacy", "spacy") ~/opt/anaconda3/lib/python3.9/site-packages/bertopic/representation/_pos.py in <module> 3 import pandas as pd 4 ----> 5 import spacy 6 from spacy.matcher import Matcher 7 from spacy.language import Language ~/opt/anaconda3/lib/python3.9/site-packages/spacy/__init__.py in <module> 11 from thinc.api import Config, prefer_gpu, require_cpu, require_gpu # noqa: F401 12 ---> 13 from . import pipeline # noqa: F401 14 from . import util 15 from .about import __version__ # noqa: F401 ~/opt/anaconda3/lib/python3.9/site-packages/spacy/pipeline/__init__.py in <module> ----> 1 from .attributeruler import AttributeRuler 2 from .dep_parser import DependencyParser 3 from .edit_tree_lemmatizer import EditTreeLemmatizer 4 from .entity_linker import EntityLinker 5 from .entityruler import EntityRuler ~/opt/anaconda3/lib/python3.9/site-packages/spacy/pipeline/attributeruler.py in <module> 6 from .. 
import util 7 from ..errors import Errors ----> 8 from ..language import Language 9 from ..matcher import Matcher 10 from ..scorer import Scorer ~/opt/anaconda3/lib/python3.9/site-packages/spacy/language.py in <module> 41 from .lang.tokenizer_exceptions import BASE_EXCEPTIONS, URL_MATCH 42 from .lookups import load_lookups ---> 43 from .pipe_analysis import analyze_pipes, print_pipe_analysis, validate_attrs 44 from .schemas import ( 45 ConfigSchema, ~/opt/anaconda3/lib/python3.9/site-packages/spacy/pipe_analysis.py in <module> 4 5 from .errors import Errors ----> 6 from .tokens import Doc, Span, Token 7 from .util import dot_to_dict 8 ~/opt/anaconda3/lib/python3.9/site-packages/spacy/tokens/__init__.py in <module> ----> 1 from ._serialize import DocBin 2 from .doc import Doc 3 from .morphanalysis import MorphAnalysis 4 from .span import Span 5 from .span_group import SpanGroup ~/opt/anaconda3/lib/python3.9/site-packages/spacy/tokens/_serialize.py in <module> 12 from ..errors import Errors 13 from ..util import SimpleFrozenList, ensure_path ---> 14 from ..vocab import Vocab 15 from ._dict_proxies import SpanGroups 16 from .doc import DOCBIN_ALL_ATTRS as ALL_ATTRS ~/opt/anaconda3/lib/python3.9/site-packages/spacy/vocab.pyx in init spacy.vocab() ~/opt/anaconda3/lib/python3.9/site-packages/spacy/tokens/doc.pyx in init spacy.tokens.doc() ~/opt/anaconda3/lib/python3.9/site-packages/spacy/schemas.py in <module> 285 286 --> 287 class TokenPattern(BaseModel): 288 orth: Optional[StringValue] = None 289 text: Optional[StringValue] = None ~/opt/anaconda3/lib/python3.9/site-packages/pydantic/main.cpython-39-darwin.so in pydantic.main.ModelMetaclass.__new__() ~/opt/anaconda3/lib/python3.9/site-packages/pydantic/fields.cpython-39-darwin.so in pydantic.fields.ModelField.infer() ~/opt/anaconda3/lib/python3.9/site-packages/pydantic/fields.cpython-39-darwin.so in pydantic.fields.ModelField.__init__() ~/opt/anaconda3/lib/python3.9/site-packages/pydantic/fields.cpython-39-darwin.so in 
pydantic.fields.ModelField.prepare() ~/opt/anaconda3/lib/python3.9/site-packages/pydantic/fields.cpython-39-darwin.so in pydantic.fields.ModelField._type_analysis() ~/opt/anaconda3/lib/python3.9/site-packages/pydantic/fields.cpython-39-darwin.so in pydantic.fields.ModelField._type_analysis() ~/opt/anaconda3/lib/python3.9/typing.py in __subclasscheck__(self, cls) 845 return issubclass(cls.__origin__, self.__origin__) 846 if not isinstance(cls, _GenericAlias): --> 847 return issubclass(cls, self.__origin__) 848 return super().__subclasscheck__(cls) 849 TypeError: issubclass() arg 1 must be a class ``` I checked everywhere but I could not find a solution. Can you help me? I try to fix it. If I fix this problem, I write a solution as well.
closed
2023-10-31T17:31:30Z
2023-11-01T19:04:13Z
https://github.com/MaartenGr/BERTopic/issues/1604
[]
tanersekmen
2
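The `TypeError: issubclass() arg 1 must be a class` in the traceback above is raised because pydantic ends up passing a subscripted generic (which is not a class) as the first argument of `issubclass()`. The failure mode can be reproduced with the standard library alone; this is a minimal sketch of the mechanism, not a fix for the spaCy/pydantic version mismatch:

```python
import typing
from collections.abc import Sequence

# A subscripted generic like List[int] is a typing._GenericAlias instance,
# not a class, so using it as the FIRST argument of issubclass() fails.
try:
    issubclass(typing.List[int], Sequence)
except TypeError as exc:
    print(exc)  # issubclass() arg 1 must be a class
```

In practice this class of error usually goes away once the installed spacy and pydantic versions are aligned with each other.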
nolar/kopf
asyncio
1,032
Document how to debug kopf
### Keywords debugger, debug, debugging ### Problem I want to start kopf in debug mode so that I can attach to it with my IDE and debug the code. Is this doable but not yet documented? Regards
closed
2023-06-22T11:15:46Z
2023-07-24T05:27:06Z
https://github.com/nolar/kopf/issues/1032
[ "question" ]
lkoniecz
2
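A common pattern for this kind of setup — shown here as a hedged sketch, not an officially documented kopf feature — is to launch the operator under `debugpy` and then attach the IDE to the listening port. The handler file name and port below are placeholders:

```python
import sys

def debugpy_command(handlers_file: str = "handlers.py", port: int = 5678) -> list:
    """Build a command line that runs `kopf run` under debugpy (illustrative helper).

    Roughly equivalent to:
        python -m debugpy --listen 5678 --wait-for-client -m kopf run handlers.py --verbose
    after which the IDE attaches to localhost:5678.
    """
    return [
        sys.executable, "-m", "debugpy",
        "--listen", str(port), "--wait-for-client",
        "-m", "kopf", "run", handlers_file, "--verbose",
    ]
```

`--wait-for-client` pauses startup until the debugger attaches, so breakpoints set in handlers are not missed during operator boot.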
TencentARC/GFPGAN
deep-learning
435
nvfuser_codegen.dll
Hi, I'm getting this error when I try it. Any help? `Error loading "C:\Users\Mourad\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\lib\nvfuser_codegen.dll" or one of its dependencies.`
open
2023-08-28T16:42:57Z
2023-08-28T16:42:57Z
https://github.com/TencentARC/GFPGAN/issues/435
[]
Arifi
0
viewflow/viewflow
django
229
integrate mail server
Hi, is it possible to integrate the send_email method from the django.core.mail library (i.e. to send an email after an approved process in http://docs.viewflow.io/viewflow_quickstart.html)? Thank you
closed
2018-10-17T07:38:00Z
2018-10-25T03:48:31Z
https://github.com/viewflow/viewflow/issues/229
[ "request/question" ]
gotamarepo
1
huggingface/transformers
tensorflow
36,067
Differences when inheriting from nn.Module and BertFromPreTrained
### System Info
```shell
- `transformers` version: 4.46.3
- Platform: Linux-4.15.0-112-generic-x86_64-with-glibc2.17
- Python version: 3.8.18
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.5
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 3090
```
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I customized a model, but when the model class inherits `BertPreTrainedModel` and uses the `from_pretrained` method to load the model, I found that the performance is completely different from when the model class directly inherits `nn.Module` for training (the latter is better).

**BertFromPreTrained:**
```python
model = BertSoftmax.from_pretrained(model_path, config=model_config)

class BertSoftmax(BertPreTrainedModel):
    def __init__(self, config):
        super(BertSoftmax, self).__init__(config)
        self.bert = BertModel(config)
        # .........................................
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Sequential(nn.Linear(config.hidden_size, config.num_labels))
        self.init_weights()
```
###################################################################################
**nn.Module:**
```python
from transformers import BertModel

model = BertSoftmax2(model_path, config=model_config)

class BertSoftmax2(nn.Module):
    def __init__(self, model_path, config):
        super(BertSoftmax2, self).__init__()
        self.bert = BertModel.from_pretrained(model_path, config)
        # .........................................
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Sequential(nn.Linear(config.hidden_size, config.num_labels))
```
### Expected behavior
```shell
I hope the following questions can be answered, because I am very confused
1. Is the way I use nn.Module to load the model correct?
2. Why does the latter nn.Module perform better during training? Is it because there is something wrong with the way I load the model?
3. If both methods are correct, what are the differences in their application scenarios?
Thanks!
```
### Checklist
- [x] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [x] I checked if a related official extension example runs on my machine.
closed
2025-02-06T10:30:27Z
2025-02-06T15:12:16Z
https://github.com/huggingface/transformers/issues/36067
[]
QXGeraldMo
1
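One mechanism worth checking when the two variants diverge — sketched below with plain dictionaries rather than real `transformers` classes, so all names are illustrative — is how `from_pretrained` loads a checkpoint into a subclass: keys present in the checkpoint (the `bert.*` backbone) are copied in, while keys the checkpoint lacks (a newly added classifier head) keep their freshly initialized values:

```python
def load_partial(model_state: dict, checkpoint: dict) -> list:
    """Copy matching keys from checkpoint into model_state; return the missing keys.

    This mirrors non-strict checkpoint loading: backbone weights are restored,
    while weights absent from the checkpoint stay randomly initialized.
    """
    missing = [k for k in model_state if k not in checkpoint]
    for key, value in checkpoint.items():
        if key in model_state:
            model_state[key] = value
    return missing

state = {"bert.embeddings.weight": "random", "classifier.0.weight": "random"}
ckpt = {"bert.embeddings.weight": "pretrained"}
missing = load_partial(state, ckpt)
print(missing)                          # ['classifier.0.weight']
print(state["bert.embeddings.weight"])  # pretrained
```

Comparing the `missing_keys`/`unexpected_keys` reported by `load_state_dict(..., strict=False)` in both setups is a useful first step for ruling out a loading discrepancy before blaming the training itself.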
pytest-dev/pytest-html
pytest
510
Pytest-html generates html report even if no tests were run with VSCode
I am using VSCode, and each time the test explorer runs a discovery/collects tests, a report.html file is generated even if no tests were run. I currently have my framework set up to create a folder for each run with a report.html, similar to the structure below. I am procedurally setting the htmlpath in pytest_configure for pytest-html.

```
/runs
..../2022-04-08-10-25-00
......../tests
............/name_of_test
................/logs
................/screenshots
................/captures
................/downloads
........report.html
```

Except I am getting the structure below when VSCode discovers tests and no tests are run.

```
/runs
..../2022-04-08-10-25-00
........report.html
..../2022-04-08-10-26-00
........report.html
```

Here is my pytest_configure method, which sets the html path procedurally:

```python
@pytest.hookimpl(tryfirst=True)
def pytest_configure(config):
    if "session_date" not in config.stash:
        config.stash["session_date"] = datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
    [...]
    run_path = config.rootpath.joinpath("runs", config.stash["session_date"])
    run_path.mkdir(parents=True, exist_ok=True)
    config.stash["run_path"] = run_path
    config.option.htmlpath = str(run_path / "report.html")
```

Any way to prevent html report generation if no tests are run?
closed
2022-04-08T14:31:24Z
2022-04-08T16:34:03Z
https://github.com/pytest-dev/pytest-html/issues/510
[]
davidcasarez
6
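One way to approach this — a hedged sketch, since pytest-html writes the report whenever `htmlpath` is set — is to remove the per-run folder after the session when nothing was collected, e.g. from a `pytest_sessionfinish` hook that checks `session.testscollected`. The helper below isolates the removal logic; `cleanup_run_dir` and its wiring are hypothetical names, not part of pytest-html:

```python
import shutil
from pathlib import Path

def cleanup_run_dir(run_path: Path, tests_collected: int) -> bool:
    """Remove the per-run folder when the session collected no tests.

    Intended to be called from a hook such as:
        def pytest_sessionfinish(session, exitstatus):
            cleanup_run_dir(run_path, session.testscollected)
    (hypothetical wiring; during pure discovery runs testscollected is 0).
    Returns True when the folder was removed.
    """
    if tests_collected == 0 and run_path.is_dir():
        shutil.rmtree(run_path)
        return True
    return False
```

Hook ordering matters here: the cleanup hook should run after pytest-html's own `pytest_sessionfinish` (e.g. via `@pytest.hookimpl(trylast=True)`) so the report file has already been written before the folder is deleted.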
pyg-team/pytorch_geometric
deep-learning
9,600
bunch of CI failures with latest updates
### 🐛 Describe the bug when updating from 8c849a482c3cf2326c1f493e79d04169b26dfb0b to the latest commit c0c2d5fefddbce412741db68cc7a74af225fa94a we now see the following errors (their all pretty much the same, let me know if you want the full log) ``` ______________________________ test_to_undirected ______________________________ def test_to_undirected(): row = torch.tensor([0, 1, 1]) col = torch.tensor([1, 0, 2]) edge_index = to_undirected(torch.stack([row, col], dim=0)) assert edge_index.tolist() == [[0, 1, 1, 2], [1, 0, 2, 1]] @torch.jit.script > def jit(edge_index: Tensor) -> Tensor: test/utils/test_undirected.py:37: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1428: in script ret = _script_impl( /usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1204: in _script_impl fn = torch._C._jit_script_compile( /usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1498: in _get_overloads _compile_function_with_overload(overload_fn, qual_name, obj) /usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1471: in _compile_function_with_overload fn = torch._C._jit_script_compile_overload( /usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1498: in _get_overloads _compile_function_with_overload(overload_fn, qual_name, obj) /usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1471: in _compile_function_with_overload fn = torch._C._jit_script_compile_overload( /usr/local/lib/python3.10/dist-packages/torch/jit/_recursive.py:1003: in try_compile_fn return torch.jit.script(fn, _rcb=rcb) /usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1428: in script ret = _script_impl( /usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1204: in _script_impl fn = torch._C._jit_script_compile( /usr/local/lib/python3.10/dist-packages/torch/jit/_recursive.py:1003: in try_compile_fn return torch.jit.script(fn, _rcb=rcb) 
/usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1428: in script ret = _script_impl( /usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1204: in _script_impl fn = torch._C._jit_script_compile( /usr/local/lib/python3.10/dist-packages/torch/jit/_recursive.py:1003: in try_compile_fn return torch.jit.script(fn, _rcb=rcb) /usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1428: in script ret = _script_impl( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ obj = <function is_compiling at 0xf103a8e791b0>, optimize = None, _frames_up = 1 _rcb = <function createResolutionCallbackFromEnv.<locals>.<lambda> at 0xf10712e6fc70> example_inputs = None def _script_impl( obj, optimize=None, _frames_up=0, _rcb=None, example_inputs: Union[List[Tuple], Dict[Callable, List[Tuple]], None] = None, ): global type_trace_db if optimize is not None: warnings.warn( "`optimize` is deprecated and has no effect. " "Use `with torch.jit.optimized_execution()` instead", FutureWarning, stacklevel=3, ) # No-op for modules, functions, class instances that are already scripted if isinstance(obj, RecursiveScriptClass): return obj if isinstance(obj, ScriptModule): return obj if isinstance(obj, ScriptFunction): return obj if example_inputs: # If MonkeyType is installed, enable profile directed type annotation # Check if example_inputs are defined and generate call traces # for the method by running eager mode version of the method with # the provide example inputs. This logs all the traces in type_trace_db type_trace_db = JitTypeTraceStore() if monkeytype_trace: monkeytype_config = JitTypeTraceConfig(type_trace_db) with monkeytype_trace(monkeytype_config): if isinstance(example_inputs, Dict): # If the obj is an nn.Module or a class, then each method is # executed with the arguments provided in the example inputs. 
# example inputs here will be of type Dict(class.method, (arguments)) # This is used to infer type annotations for those methods # which are not called directly under the hood of monkeytype. for module, example_input in example_inputs.items(): for example in example_input: module(*example) elif isinstance(example_inputs, List): for examples in example_inputs: obj(*examples) else: raise ValueError( "Error: Unable to infer types. Please format the inputs to type `List[Tuple]`" " or `Dict[Callable, List[Tuple]]` to be run with MonkeyType." ) else: warnings.warn( "Warning: monkeytype is not installed. Please install https://github.com/Instagram/MonkeyType " "to enable Profile-Directed Typing in TorchScript. Refer to " "https://github.com/Instagram/MonkeyType/blob/master/README.rst to install MonkeyType. " ) if isinstance(obj, torch.nn.Module): obj = call_prepare_scriptable_func(obj) return torch.jit._recursive.create_script_module( obj, torch.jit._recursive.infer_methods_to_compile ) else: obj = obj.__prepare_scriptable__() if hasattr(obj, "__prepare_scriptable__") else obj # type: ignore[operator] if isinstance(obj, dict): return create_script_dict(obj) if isinstance(obj, list): return create_script_list(obj) if inspect.isclass(obj): qualified_name = _qualified_name(obj) # If this type is a `nn.Module` subclass, they probably meant to pass # an instance instead of a Module if issubclass(obj, torch.nn.Module): raise RuntimeError( f"Type '{obj}' cannot be compiled since it inherits from nn.Module, pass an instance instead" ) # Enums are automatically usable in TorchScript, explicitly scripting # is not necessary, but not harmful either. if issubclass(obj, enum.Enum): return obj if not _is_new_style_class(obj): raise RuntimeError( "TorchScript classes must be new-style classes. " "Please inherit from 'object'." ) if len(obj.mro()) > 2: raise RuntimeError( "TorchScript classes does not support inheritance yet. " "Please directly inherit from 'object'." 
) if _rcb is None: _rcb = _jit_internal.createResolutionCallbackFromFrame(_frames_up + 1) _compile_and_register_class(obj, _rcb, qualified_name) return obj elif inspect.isfunction(obj) or inspect.ismethod(obj): qualified_name = _qualified_name(obj) # this is a decorated fn, and we need to the underlying fn and its rcb if hasattr(obj, "__script_if_tracing_wrapper"): obj = obj.__original_fn # type: ignore[union-attr] _rcb = _jit_internal.createResolutionCallbackFromClosure(obj) # some functions are explicitly marked as not supported in script mode if hasattr(obj, "__script_unsupported"): raise RuntimeError("TorchScript error: " + obj.__script_unsupported) _check_directly_compile_overloaded(obj) maybe_already_compiled_fn = _try_get_jit_cached_function(obj) if maybe_already_compiled_fn: maybe_already_compiled_fn._torchdynamo_inline = obj # type: ignore[attr-defined] return maybe_already_compiled_fn ast = get_jit_def(obj, obj.__name__) if _rcb is None: _rcb = _jit_internal.createResolutionCallbackFromClosure(obj) > fn = torch._C._jit_script_compile( qualified_name, ast, _rcb, get_default_args(obj) ) E RuntimeError: E undefined value torch: E File "/usr/local/lib/python3.10/dist-packages/typing_extensions.py", line 34 E It will depend on the context where to use what. 
E """ E return torch.compiler.is_compiling() E ~~~~~ <--- HERE E 'is_compiling' is being compiled since it was called from 'is_compiling' E File "/usr/local/lib/python3.10/dist-packages/torch_geometric/_compile.py", line 14 E """ E if torch_geometric.typing.WITH_PT21: E return torch._dynamo.is_compiling() E ~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE E return False # pragma: no cover E 'is_compiling' is being compiled since it was called from 'index_sort' E File "/usr/local/lib/python3.10/dist-packages/torch_geometric/utils/_index_sort.py", line 30 E (default: :obj:`False`) E """ E if stable or not torch_geometric.typing.WITH_INDEX_SORT or is_compiling(): E ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE E return inputs.sort(stable=stable) E return pyg_lib.ops.index_sort(inputs, max_value=max_value) E 'index_sort' is being compiled since it was called from 'coalesce' E File "/usr/local/lib/python3.10/dist-packages/torch_geometric/utils/_coalesce.py", line 147 E E if not is_sorted: E idx[1:], perm = index_sort(idx[1:], max_value=num_nodes * num_nodes) E ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE E if isinstance(edge_index, Tensor): E edge_index = edge_index[:, perm] E 'coalesce' is being compiled since it was called from 'to_undirected' E File "/usr/local/lib/python3.10/dist-packages/torch_geometric/utils/undirected.py", line 209 E edge_attr = [torch.cat([e, e], dim=0) for e in edge_attr] E E return coalesce(edge_index, edge_attr, num_nodes, reduce) E ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE E 'to_undirected' is being compiled since it was called from 'jit' E File "/opt/pyg/pytorch_geometric/test/utils/test_undirected.py", line 38 E @torch.jit.script E def jit(edge_index: Tensor) -> Tensor: E return to_undirected(edge_index) E ~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE /usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1204: RuntimeError =============================== warnings summary =============================== 
../../../usr/local/lib/python3.10/dist-packages/torch_geometric/_compile.py:14: 2 warnings test/contrib/nn/models/test_rbcd_attack.py: 36 warnings test/data/test_batch.py: 3 warnings test/data/test_data.py: 2 warnings test/data/test_datapipes.py: 1 warning test/data/test_dataset_summary.py: 5 warnings test/data/test_graph_store.py: 1 warning test/data/test_hypergraph_data.py: 1 warning test/datasets/graph_generator/test_ba_graph.py: 1 warning test/datasets/graph_generator/test_er_graph.py: 1 warning test/datasets/graph_generator/test_grid_graph.py: 1 warning test/datasets/graph_generator/test_tree_graph.py: 1 warning test/datasets/test_ba_shapes.py: 1 warning test/datasets/test_bzr.py: 1 warning test/datasets/test_enzymes.py: 2 warnings test/datasets/test_explainer_dataset.py: 3 warnings test/datasets/test_fake.py: 36 warnings test/datasets/test_imdb_binary.py: 1 warning test/datasets/test_infection_dataset.py: 2 warnings test/datasets/test_mutag.py: 1 warning test/datasets/test_planetoid.py: 1 warning test/datasets/test_snap_dataset.py: 12 warnings test/distributed/test_local_graph_store.py: 1 warning test/explain/algorithm/test_attention_explainer.py: 4 warnings test/explain/algorithm/test_captum.py: 13 warnings test/explain/algorithm/test_gnn_explainer.py: 866 warnings test/explain/algorithm/test_graphmask_explainer.py: 648 warnings test/explain/algorithm/test_pg_explainer.py: 12 warnings test/loader/test_cache.py: 4 warnings test/loader/test_imbalanced_sampler.py: 3 warnings test/loader/test_link_neighbor_loader.py: 41 warnings test/loader/test_neighbor_loader.py: 44 warnings test/loader/test_zip_loader.py: 2 warnings test/nn/aggr/test_attention.py: 2 warnings test/nn/aggr/test_basic.py: 5 warnings test/nn/aggr/test_fused.py: 7 warnings test/nn/aggr/test_multi.py: 10 warnings test/nn/aggr/test_scaler.py: 2 warnings test/nn/aggr/test_set2set.py: 1 warning test/nn/conv/cugraph/test_cugraph_gat_conv.py: 48 warnings test/nn/conv/cugraph/test_cugraph_rgcn_conv.py: 
144 warnings test/nn/conv/cugraph/test_cugraph_sage_conv.py: 128 warnings test/nn/conv/test_agnn_conv.py: 2 warnings test/nn/conv/test_antisymmetric_conv.py: 1 warning test/nn/conv/test_appnp.py: 2 warnings test/nn/conv/test_arma_conv.py: 2 warnings test/nn/conv/test_cg_conv.py: 3 warnings test/nn/conv/test_cheb_conv.py: 2 warnings test/nn/conv/test_cluster_gcn_conv.py: 1 warning test/nn/conv/test_create_gnn.py: 1 warning test/nn/conv/test_dir_gnn_conv.py: 2 warnings test/nn/conv/test_dna_conv.py: 2 warnings test/nn/conv/test_edge_conv.py: 1 warning test/nn/conv/test_eg_conv.py: 5 warnings test/nn/conv/test_fa_conv.py: 1 warning test/nn/conv/test_feast_conv.py: 1 warning test/nn/conv/test_film_conv.py: 1 warning test/nn/conv/test_fused_gat_conv.py: 1 warning test/nn/conv/test_gat_conv.py: 5 warnings test/nn/conv/test_gated_graph_conv.py: 1 warning test/nn/conv/test_gatv2_conv.py: 3 warnings test/nn/conv/test_gcn2_conv.py: 1 warning test/nn/conv/test_gcn_conv.py: 9 warnings test/nn/conv/test_gen_conv.py: 3 warnings test/nn/conv/test_general_conv.py: 8 warnings test/nn/conv/test_gin_conv.py: 5 warnings test/nn/conv/test_gmm_conv.py: 4 warnings test/nn/conv/test_gps_conv.py: 6 warnings test/nn/conv/test_graph_conv.py: 2 warnings test/nn/conv/test_han_conv.py: 3 warnings test/nn/conv/test_heat_conv.py: 2 warnings test/nn/conv/test_hetero_conv.py: 11 warnings test/nn/conv/test_hgt_conv.py: 7 warnings test/nn/conv/test_hypergraph_conv.py: 2 warnings test/nn/conv/test_le_conv.py: 1 warning test/nn/conv/test_lg_conv.py: 1 warning test/nn/conv/test_message_passing.py: 36 warnings test/nn/conv/test_mf_conv.py: 1 warning test/nn/conv/test_mixhop_conv.py: 1 warning test/nn/conv/test_nn_conv.py: 2 warnings test/nn/conv/test_pdn_conv.py: 2 warnings test/nn/conv/test_pna_conv.py: 3 warnings test/nn/conv/test_point_conv.py: 1 warning test/nn/conv/test_point_gnn_conv.py: 1 warning test/nn/conv/test_point_transformer_conv.py: 1 warning test/nn/conv/test_ppf_conv.py: 1 warning 
test/nn/conv/test_res_gated_graph_conv.py: 2 warnings test/nn/conv/test_rgat_conv.py: 65 warnings test/nn/conv/test_rgcn_conv.py: 18 warnings test/nn/conv/test_sage_conv.py: 22 warnings test/nn/conv/test_sg_conv.py: 1 warning test/nn/conv/test_signed_conv.py: 1 warning test/nn/conv/test_simple_conv.py: 4 warnings test/nn/conv/test_ssg_conv.py: 1 warning test/nn/conv/test_static_graph.py: 1 warning test/nn/conv/test_supergat_conv.py: 2 warnings test/nn/conv/test_tag_conv.py: 2 warnings test/nn/conv/test_transformer_conv.py: 4 warnings test/nn/conv/test_wl_conv.py: 1 warning test/nn/conv/test_wl_conv_continuous.py: 1 warning test/nn/dense/test_dense_gat_conv.py: 4 warnings test/nn/dense/test_dense_gcn_conv.py: 1 warning test/nn/dense/test_dense_gin_conv.py: 1 warning test/nn/dense/test_dense_graph_conv.py: 6 warnings test/nn/dense/test_dense_sage_conv.py: 1 warning test/nn/dense/test_linear.py: 14 warnings test/nn/models/test_attentive_fp.py: 1 warning test/nn/models/test_basic_gnn.py: 1821 warnings test/nn/models/test_correct_and_smooth.py: 1 warning test/nn/models/test_deep_graph_infomax.py: 2 warnings test/nn/models/test_deepgcn.py: 8 warnings test/nn/models/test_graph_unet.py: 1 warning test/nn/models/test_label_prop.py: 1 warning test/nn/models/test_lightgcn.py: 36 warnings test/nn/models/test_linkx.py: 2 warnings test/nn/models/test_metapath2vec.py: 3 warnings test/nn/models/test_neural_fingerprint.py: 2 warnings test/nn/models/test_node2vec.py: 2 warnings test/nn/models/test_pmlp.py: 1 warning test/nn/models/test_rect.py: 1 warning test/nn/models/test_rev_gnn.py: 20 warnings test/nn/models/test_signed_gcn.py: 2 warnings test/nn/models/test_tgn.py: 2 warnings test/nn/pool/select/test_select_topk.py: 1 warning test/nn/pool/test_asap.py: 1 warning test/nn/pool/test_avg_pool.py: 1 warning test/nn/pool/test_edge_pool.py: 2 warnings test/nn/pool/test_glob.py: 2 warnings test/nn/pool/test_max_pool.py: 3 warnings test/nn/pool/test_sag_pool.py: 1 warning 
test/nn/pool/test_topk_pool.py: 1 warning test/nn/test_compile_basic.py: 2 warnings test/nn/test_compile_conv.py: 4 warnings test/nn/test_model_summary.py: 5 warnings test/nn/test_sequential.py: 4 warnings test/nn/test_to_hetero_module.py: 3 warnings test/nn/test_to_hetero_transformer.py: 10 warnings test/nn/test_to_hetero_with_bases_transformer.py: 5 warnings test/profile/test_profile.py: 7 warnings test/profile/test_profiler.py: 2 warnings test/sampler/test_sampler_base.py: 2 warnings test/test_edge_index.py: 208 warnings test/test_warnings.py: 1 warning test/transforms/test_add_metapaths.py: 4 warnings test/transforms/test_face_to_edge.py: 1 warning test/transforms/test_feature_propagation.py: 1 warning test/transforms/test_gdc.py: 2 warnings test/transforms/test_line_graph.py: 1 warning test/transforms/test_local_cartesian.py: 1 warning test/transforms/test_local_degree_profile.py: 1 warning test/transforms/test_node_property_split.py: 3 warnings test/transforms/test_pad.py: 34 warnings test/transforms/test_random_link_split.py: 3 warnings test/transforms/test_remove_duplicated_edges.py: 1 warning test/transforms/test_rooted_subgraph.py: 2 warnings test/transforms/test_sign.py: 1 warning test/transforms/test_to_sparse_tensor.py: 8 warnings test/transforms/test_to_undirected.py: 3 warnings test/transforms/test_two_hop.py: 1 warning test/utils/test_assortativity.py: 1 warning test/utils/test_augmentation.py: 1 warning test/utils/test_coalesce.py: 2 warnings test/utils/test_convert.py: 18 warnings test/utils/test_embedding.py: 1 warning test/utils/test_grid.py: 1 warning test/utils/test_loop.py: 3 warnings test/utils/test_mesh_laplacian.py: 2 warnings test/utils/test_negative_sampling.py: 3 warnings test/utils/test_num_nodes.py: 1 warning test/utils/test_ppr.py: 2 warnings test/utils/test_random.py: 3 warnings test/utils/test_scatter.py: 6 warnings test/utils/test_softmax.py: 3 warnings test/utils/test_sort_edge_index.py: 1 warning test/utils/test_sparse.py: 22 
warnings test/utils/test_spmm.py: 2 warnings test/utils/test_train_test_split_edges.py: 1 warning test/utils/test_tree_decomposition.py: 2 warnings test/utils/test_trim_to_layer.py: 1 warning test/utils/test_undirected.py: 2 warnings test/visualization/test_influence.py: 1 warning /usr/local/lib/python3.10/dist-packages/torch_geometric/_compile.py:14: FutureWarning: `torch._dynamo.external_utils.is_compiling` is deprecated. Use `torch.compiler.is_compiling` instead. return torch._dynamo.is_compiling() ../../../usr/local/lib/python3.10/dist-packages/torch_geometric/graphgym/imports.py:14 /usr/local/lib/python3.10/dist-packages/torch_geometric/graphgym/imports.py:14: UserWarning: Please install 'pytorch_lightning' via 'pip install pytorch_lightning' in order to use GraphGym warnings.warn("Please install 'pytorch_lightning' via " test/data/test_batch.py::test_pickling /opt/pyg/pytorch_geometric/test/data/test_batch.py:333: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature. 
batch = torch.load(path) test/data/test_dataset.py: 4 warnings test/datasets/test_bzr.py: 2 warnings test/datasets/test_elliptic.py: 1 warning test/datasets/test_enzymes.py: 3 warnings test/datasets/test_imdb_binary.py: 1 warning test/datasets/test_mutag.py: 2 warnings test/datasets/test_planetoid.py: 3 warnings test/datasets/test_snap_dataset.py: 3 warnings test/datasets/test_suite_sparse.py: 2 warnings test/io/test_fs.py: 2 warnings test/nn/models/test_re_net.py: 1 warning test/transforms/test_random_link_split.py: 1 warning /usr/local/lib/python3.10/dist-packages/torch_geometric/io/fs.py:215: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature. return torch.load(f, map_location) test/loader/test_prefetch.py: 10 warnings /usr/local/lib/python3.10/dist-packages/torch_geometric/loader/prefetch.py:76: DeprecationWarning: The argument 'device' of Tensor.pin_memory() is deprecated. Please do not pass this argument. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/native/Memory.cpp:46.) 
batch = batch.pin_memory(self.device_helper.device) test/loader/test_prefetch.py: 10 warnings /usr/local/lib/python3.10/dist-packages/torch_geometric/loader/prefetch.py:76: DeprecationWarning: The argument 'device' of Tensor.is_pinned() is deprecated. Please do not pass this argument. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/native/Memory.cpp:31.) batch = batch.pin_memory(self.device_helper.device) test/nn/conv/cugraph/test_cugraph_gat_conv.py: 24 warnings test/nn/conv/cugraph/test_cugraph_rgcn_conv.py: 72 warnings test/nn/conv/cugraph/test_cugraph_sage_conv.py: 64 warnings /usr/local/lib/python3.10/dist-packages/pylibcugraphops/pytorch/graph.py:71: UserWarning: dst_max_in_degree currently has no effect warnings.warn("dst_max_in_degree currently has no effect") test/nn/conv/test_message_passing.py::test_my_conv_save /opt/pyg/pytorch_geometric/test/nn/conv/test_message_passing.py:142: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature. 
conv = torch.load(path) test/nn/conv/test_message_passing.py::test_pickle /opt/pyg/pytorch_geometric/test/nn/conv/test_message_passing.py:741: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature. model = torch.load(path) test/nn/conv/test_rgcn_conv.py: 12 warnings /usr/local/lib/python3.10/dist-packages/torch/jit/_check.py:178: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in `__init__`. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in `torch.jit.Attribute`. warnings.warn( test/nn/models/test_basic_gnn.py::test_packaging /opt/pyg/pytorch_geometric/test/nn/models/test_basic_gnn.py:238: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. 
This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature. model = torch.load(path) test/nn/nlp/test_sentence_transformer.py: 12 warnings /usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884 warnings.warn( test/nn/nlp/test_sentence_transformer.py: 12 warnings /usr/local/lib/python3.10/dist-packages/transformers/modeling_attn_mask_utils.py:445: FutureWarning: `torch._dynamo.external_utils.is_compiling` is deprecated. Use `torch.compiler.is_compiling` instead. or (hasattr(torch, "_dynamo") and torch._dynamo.is_compiling()) test/nn/test_model_hub.py::test_from_pretrained /usr/local/lib/python3.10/dist-packages/torch_geometric/nn/model_hub.py:178: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. 
Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature. state_dict = torch.load(model_file, map_location=map_location) test/profile/test_profiler.py::test_profiler[cpu] test/profile/test_profiler.py::test_profiler[cuda:0] /usr/local/lib/python3.10/dist-packages/torch_geometric/profile/profiler.py:342: FutureWarning: `self_cuda_memory_usage` is deprecated. Use `self_device_memory_usage` instead. hasattr(e, "self_cuda_memory_usage") for e in events) test/profile/test_profiler.py::test_profiler[cpu] test/profile/test_profiler.py::test_profiler[cuda:0] /usr/local/lib/python3.10/dist-packages/torch_geometric/profile/profiler.py:345: FutureWarning: `self_cuda_memory_usage` is deprecated. Use `self_device_memory_usage` instead. [getattr(e, "self_cuda_memory_usage", 0) or 0 for e in events]) test/profile/test_profiler.py::test_profiler[cpu] test/profile/test_profiler.py::test_profiler[cuda:0] /usr/local/lib/python3.10/dist-packages/torch_geometric/profile/profiler.py:355: FutureWarning: `self_cuda_time_total` is deprecated. Use `self_device_time_total` instead. hasattr(e, "self_cuda_time_total") for e in events) test/profile/test_profiler.py::test_profiler[cpu] test/profile/test_profiler.py::test_profiler[cuda:0] /usr/local/lib/python3.10/dist-packages/torch_geometric/profile/profiler.py:358: FutureWarning: `self_cuda_time_total` is deprecated. Use `self_device_time_total` instead. 
[getattr(e, "self_cuda_time_total", 0) or 0 for e in events]) test/profile/test_profiler.py::test_profiler[cpu] test/profile/test_profiler.py::test_profiler[cuda:0] /usr/local/lib/python3.10/dist-packages/torch_geometric/profile/profiler.py:364: FutureWarning: `cuda_time_total` is deprecated. Use `device_time_total` instead. cuda_total=sum([e.cuda_time_total or 0 for e in events]), test/test_edge_index.py::test_save_and_load[int64-cpu] test/test_edge_index.py::test_save_and_load[int64-cuda:0] test/test_edge_index.py::test_save_and_load[int32-cpu] test/test_edge_index.py::test_save_and_load[int32-cuda:0] /opt/pyg/pytorch_geometric/test/test_edge_index.py:1259: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature. out = torch.load(path) test/test_index.py::test_save_and_load[int64-cpu] test/test_index.py::test_save_and_load[int64-cuda:0] test/test_index.py::test_save_and_load[int32-cpu] test/test_index.py::test_save_and_load[int32-cuda:0] /opt/pyg/pytorch_geometric/test/test_index.py:532: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. 
It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature. out = torch.load(path) test/utils/test_convert.py: 16 warnings /usr/local/lib/python3.10/dist-packages/cugraph/structure/symmetrize.py:92: FutureWarning: Multi is deprecated and the removal of multi edges will no longer be supported from 'symmetrize'. Multi edges will be removed upon creation of graph instance. warnings.warn( test/utils/test_scatter.py::test_scatter_backward[min-cuda:0] /usr/local/lib/python3.10/dist-packages/torch_geometric/warnings.py:11: UserWarning: The usage of `scatter(reduce='min')` can be accelerated via the 'torch-scatter' package, but it was not found warnings.warn(message) test/utils/test_scatter.py::test_scatter_backward[max-cuda:0] /usr/local/lib/python3.10/dist-packages/torch_geometric/warnings.py:11: UserWarning: The usage of `scatter(reduce='max')` can be accelerated via the 'torch-scatter' package, but it was not found warnings.warn(message) test/utils/test_sparse.py::test_to_torch_coo_tensor_save_load /opt/pyg/pytorch_geometric/test/utils/test_sparse.py:227: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. 
It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature. adj = torch.load(path) -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html ---------- coverage: platform linux, python 3.10.12-final-0 ---------- Coverage XML written to file coverage.xml =========================== short test summary info ============================ FAILED test/nn/aggr/test_fused.py::test_fused_aggregation[aggrs0] - RuntimeError: FAILED test/nn/aggr/test_fused.py::test_fused_aggregation[aggrs1] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/aggr/test_fused.py::test_fused_aggregation[aggrs2] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/aggr/test_fused.py::test_fused_aggregation[aggrs3] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/aggr/test_fused.py::test_fused_aggregation[aggrs4] - RuntimeError: FAILED test/nn/aggr/test_fused.py::test_fused_aggregation[aggrs5] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... 
FAILED test/nn/aggr/test_fused.py::test_fused_aggregation[aggrs6] - RuntimeError: FAILED test/nn/aggr/test_gmt.py::test_graph_multiset_transformer - RuntimeError: FAILED test/nn/aggr/test_multi.py::test_multi_aggr[multi_aggr_tuple0] - RuntimeError: FAILED test/nn/aggr/test_multi.py::test_multi_aggr[multi_aggr_tuple1] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/aggr/test_multi.py::test_multi_aggr[multi_aggr_tuple2] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/aggr/test_multi.py::test_multi_aggr[multi_aggr_tuple3] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/aggr/test_multi.py::test_multi_aggr[multi_aggr_tuple4] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/aggr/test_multi.py::test_multi_aggr[multi_aggr_tuple5] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/aggr/test_multi.py::test_multi_aggr[multi_aggr_tuple6] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/aggr/test_multi.py::test_multi_aggr[multi_aggr_tuple7] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/aggr/test_multi.py::test_multi_aggr[multi_aggr_tuple8] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/aggr/test_multi.py::test_multi_aggr[multi_aggr_tuple9] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/aggr/test_scaler.py::test_degree_scaler_aggregation[True] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/aggr/test_scaler.py::test_degree_scaler_aggregation[False] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... 
FAILED test/nn/aggr/test_set_transformer.py::test_set_transformer_aggregation - RuntimeError: FAILED test/nn/conv/test_agnn_conv.py::test_agnn_conv[True] - RuntimeError: FAILED test/nn/conv/test_agnn_conv.py::test_agnn_conv[False] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_appnp.py::test_appnp - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_arma_conv.py::test_arma_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_arma_conv.py::test_lazy_arma_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_cg_conv.py::test_cg_conv[False] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_cg_conv.py::test_cg_conv[True] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_cg_conv.py::test_cg_conv_with_edge_features - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_cheb_conv.py::test_cheb_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_cluster_gcn_conv.py::test_cluster_gcn_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_dna_conv.py::test_dna_conv[3-32] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_dna_conv.py::test_dna_conv_sparse_tensor[3-32] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_edge_conv.py::test_edge_conv_conv - RuntimeError: FAILED test/nn/conv/test_eg_conv.py::test_eg_conv[True-aggregators0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... 
FAILED test/nn/conv/test_eg_conv.py::test_eg_conv[True-aggregators1] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_eg_conv.py::test_eg_conv[False-aggregators0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_eg_conv.py::test_eg_conv[False-aggregators1] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_fa_conv.py::test_fa_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_feast_conv.py::test_feast_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_film_conv.py::test_film_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_gat_conv.py::test_gat_conv[False] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_gat_conv.py::test_gat_conv[True] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_gated_graph_conv.py::test_gated_graph_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_gatv2_conv.py::test_gatv2_conv[False] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_gatv2_conv.py::test_gatv2_conv[True] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_gcn2_conv.py::test_gcn2_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_gcn_conv.py::test_gcn_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_gcn_conv.py::test_gcn_conv_with_decomposed_layers - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... 
FAILED test/nn/conv/test_gen_conv.py::test_gen_conv[softmax] - RuntimeError: FAILED test/nn/conv/test_gen_conv.py::test_gen_conv[powermean] - RuntimeError: FAILED test/nn/conv/test_gen_conv.py::test_gen_conv[aggr2] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_gin_conv.py::test_gin_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_gin_conv.py::test_gine_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_gmm_conv.py::test_gmm_conv[True] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_gmm_conv.py::test_gmm_conv[False] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_graph_conv.py::test_graph_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_heat_conv.py::test_heat_conv[True] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_heat_conv.py::test_heat_conv[False] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_le_conv.py::test_le_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_lg_conv.py::test_lg_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_message_passing.py::test_my_commented_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_message_passing.py::test_my_kwargs_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_message_passing.py::test_my_conv_jit - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... 
FAILED test/nn/conv/test_message_passing.py::test_my_conv_jit_save - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_message_passing.py::test_my_multiple_aggr_conv_jit - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_message_passing.py::test_my_edge_conv_jit - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_message_passing.py::test_my_default_arg_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_message_passing.py::test_tuple_output_jit - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_message_passing.py::test_explain_message - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_message_passing.py::test_traceable_my_conv_with_self_loops[4] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_message_passing.py::test_traceable_my_conv_with_self_loops[8] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_message_passing.py::test_traceable_my_conv_with_self_loops[2] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_message_passing.py::test_traceable_my_conv_with_self_loops[0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_message_passing.py::test_pickle - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_mf_conv.py::test_mf_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_mixhop_conv.py::test_mixhop_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... 
FAILED test/nn/conv/test_nn_conv.py::test_nn_conv[cpu] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_nn_conv.py::test_nn_conv[cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_pdn_conv.py::test_pdn_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_pdn_conv.py::test_pdn_conv_with_sparse_node_input_feature - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_pna_conv.py::test_pna_conv[True] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_pna_conv.py::test_pna_conv[False] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_point_conv.py::test_point_net_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_point_gnn_conv.py::test_point_gnn_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_point_transformer_conv.py::test_point_transformer_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_ppf_conv.py::test_ppf_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_res_gated_graph_conv.py::test_res_gated_graph_conv[None] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_res_gated_graph_conv.py::test_res_gated_graph_conv[4] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_rgat_conv.py::test_rgat_conv_jit - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... 
FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf0-RGCNConv-cpu] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf0-RGCNConv-cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf0-FastRGCNConv-cpu] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf0-FastRGCNConv-cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf1-RGCNConv-cpu] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf1-RGCNConv-cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf1-FastRGCNConv-cpu] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf1-FastRGCNConv-cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf2-RGCNConv-cpu] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf2-RGCNConv-cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf2-FastRGCNConv-cpu] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf2-FastRGCNConv-cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... 
FAILED test/nn/conv/test_sage_conv.py::test_sage_conv[mean-False] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_sage_conv.py::test_sage_conv[mean-True] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_sage_conv.py::test_sage_conv[sum-False] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_sage_conv.py::test_sage_conv[sum-True] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_sg_conv.py::test_sg_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_signed_conv.py::test_signed_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_simple_conv.py::test_simple_conv[mean-None] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_simple_conv.py::test_simple_conv[sum-sum] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_simple_conv.py::test_simple_conv[aggr2-cat] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_simple_conv.py::test_simple_conv[mean-self_loop] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_ssg_conv.py::test_ssg_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_tag_conv.py::test_tag_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/conv/test_wl_conv_continuous.py::test_wl_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... 
FAILED test/nn/dense/test_linear.py::test_hetero_linear_basic[cpu] - RuntimeError: FAILED test/nn/dense/test_linear.py::test_hetero_linear_basic[cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/dense/test_linear.py::test_hetero_dict_linear_jit - RuntimeError: FAILED test/nn/models/test_attentive_fp.py::test_attentive_fp - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/models/test_basic_gnn.py::test_jit - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/models/test_linkx.py::test_linkx[1] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/models/test_linkx.py::test_linkx[2] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/models/test_meta.py::test_meta_layer_example - RuntimeError: FAILED test/nn/models/test_rect.py::test_rect - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/norm/test_graph_norm.py::test_graph_norm - RuntimeError: FAILED test/nn/norm/test_instance_norm.py::test_instance_norm[True] - RuntimeError: FAILED test/nn/norm/test_instance_norm.py::test_instance_norm[False] - RuntimeError: FAILED test/nn/norm/test_layer_norm.py::test_layer_norm[graph-True-cpu] - RuntimeError: FAILED test/nn/norm/test_layer_norm.py::test_layer_norm[graph-True-cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/norm/test_layer_norm.py::test_layer_norm[graph-False-cpu] - RuntimeError: FAILED test/nn/norm/test_layer_norm.py::test_layer_norm[graph-False-cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/norm/test_layer_norm.py::test_layer_norm[node-True-cpu] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... 
FAILED test/nn/norm/test_layer_norm.py::test_layer_norm[node-True-cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/norm/test_layer_norm.py::test_layer_norm[node-False-cpu] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/norm/test_layer_norm.py::test_layer_norm[node-False-cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/norm/test_mean_subtraction_norm.py::test_mean_subtraction_norm - RuntimeError: FAILED test/nn/norm/test_pair_norm.py::test_pair_norm[False] - RuntimeError: FAILED test/nn/norm/test_pair_norm.py::test_pair_norm[True] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/pool/select/test_select_topk.py::test_topk_ratio - RuntimeError: FAILED test/nn/pool/test_asap.py::test_asap - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/pool/test_asap.py::test_asap_jit_save - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/pool/test_avg_pool.py::test_avg_pool_x - RuntimeError: FAILED test/nn/pool/test_edge_pool.py::test_compute_edge_score_softmax - RuntimeError: FAILED test/nn/pool/test_edge_pool.py::test_edge_pooling - RuntimeError: FAILED test/nn/pool/test_max_pool.py::test_max_pool_x - RuntimeError: FAILED test/nn/pool/test_sag_pool.py::test_sag_pooling - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/nn/pool/test_topk_pool.py::test_topk_pooling - RuntimeError: FAILED test/nn/test_sequential.py::test_sequential_jit - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom... FAILED test/test_edge_index.py::test_torch_script - AssertionError: Regex pattern did not match. 
FAILED test/utils/test_coalesce.py::test_coalesce_jit - RuntimeError: FAILED test/utils/test_grid.py::test_grid - RuntimeError: FAILED test/utils/test_isolated.py::test_contains_isolated_nodes - RuntimeError: FAILED test/utils/test_laplacian.py::test_get_laplacian - RuntimeError: FAILED test/utils/test_softmax.py::test_softmax - RuntimeError: FAILED test/utils/test_sort_edge_index.py::test_sort_edge_index_jit - RuntimeError: FAILED test/utils/test_sparse.py::test_to_torch_coo_tensor - RuntimeError: FAILED test/utils/test_spmm.py::test_spmm_jit[sum] - RuntimeError: FAILED test/utils/test_spmm.py::test_spmm_jit[mean] - RuntimeError: FAILED test/utils/test_to_dense_adj.py::test_to_dense_adj - RuntimeError: FAILED test/utils/test_to_dense_batch.py::test_to_dense_batch_jit - RuntimeError: FAILED test/utils/test_undirected.py::test_is_undirected - RuntimeError: FAILED test/utils/test_undirected.py::test_to_undirected - RuntimeError: ``` ### Versions latest
closed
2024-08-16T20:30:13Z
2024-08-27T19:58:21Z
https://github.com/pyg-team/pytorch_geometric/issues/9600
[ "bug" ]
puririshi98
2
autogluon/autogluon
computer-vision
4,895
[BUG] Tabular models fail to function properly in certain Docker container environments.
**Bug Report Checklist** <!-- Please ensure at least one of the following to help the developers troubleshoot the problem: --> - [ ] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install --> - [√] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred --> - [√] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked --> **Describe the bug** <!-- A clear and concise description of what the bug is. --> When running tabular models such as DirectTabular or RecursiveTabular in a Docker container where the number of logical CPUs is less than the number of physical CPUs, these models fail to run correctly. It appears that every tabular model class has overridden a method named `_get_default_resources()`. For instance, at line 518 of tabular/models/lgb/lgb_model.py, there is a comment stating that `logical=False` is faster for training. However, in practice, setting `logical=False` can lead to bugs in certain Docker environments. Given this issue, why not change the setting to `logical=True`, or simply remove this overridden method altogether, since the parent class already implements it? This adjustment could help avoid the bug and ensure more consistent behavior across different environments. **Expected behavior** <!-- A clear and concise description of what you expected to happen. --> **To Reproduce** <!-- A minimal script to reproduce the issue. Links to Colab notebooks or similar tools are encouraged. If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com. In short, we are going to copy-paste your code to run it and we expect to get the same result as you. --> **Screenshots / Logs** <!-- If applicable, add screenshots or logs to help explain your problem.
--> **Installed Versions** <!-- Please run the following code snippet: --> <details> ```python # Replace this code with the output of the following: from autogluon.core.utils import show_versions show_versions() ``` </details>
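A minimal sketch of a defensive fallback for the failure mode described above — this is a hypothetical helper, not AutoGluon's actual code, and it assumes `psutil` may be missing or may return `None` for the physical-core count inside a container:

```python
import os

def safe_cpu_count():
    # Prefer the physical-core count when psutil is available and reports a
    # usable value; otherwise fall back to the logical count, which is what
    # psutil.cpu_count(logical=False) can fail to provide in some Docker
    # containers where logical CPUs < physical CPUs.
    try:
        import psutil
        physical = psutil.cpu_count(logical=False)
        if physical:
            return physical
    except ImportError:
        pass
    return os.cpu_count() or 1

print(safe_cpu_count())
```

Unlike a bare `psutil.cpu_count(logical=False)`, this never returns `None` or zero, so downstream resource math stays well-defined.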
open
2025-02-15T06:17:25Z
2025-02-19T00:12:22Z
https://github.com/autogluon/autogluon/issues/4895
[ "module: tabular", "bug: unconfirmed", "Needs Triage", "module: timeseries" ]
Anyon123
1
ultralytics/ultralytics
pytorch
18,794
How to Improve YOLOv8 Fine-Tuning for Better Generalization Across Different Scenes
### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions. ### Question I'm fine-tuning yolov8n to use it as a person detector for CCTV cameras. However, when I fine-tune the model using the commands below on data from multiple cameras in different scenes (6,000 images), the model seems biased toward the training data. The performance of the fine-tuned model on unseen scenes is worse than the pre-trained yolov8n.pt. I expected that fine-tuning the model would improve its performance, but instead, it has become biased toward the training scenes. What steps can I take to achieve a model with better generalization across different scenes? ### Additional results = model.train(data="/content/drive/MyDrive/SuperVision/FineTuneYolo/data.yaml", epochs=1, device=0, imgsz=1280, patience=20)
open
2025-01-21T08:57:37Z
2025-02-01T11:23:34Z
https://github.com/ultralytics/ultralytics/issues/18794
[ "question", "detect" ]
faezehprb
4
scikit-learn/scikit-learn
python
30,332
NuSVC argument `class_weight` is not used
### Describe the bug Like `SVC`, the class `NuSVC` takes argument `class_weight`. However, it looks like this argument is not used. After a quick look at the libsvm C code within sklearn as well as [libsvm's original documentation](https://www.csie.ntu.edu.tw/~cjlin/libsvm/), this seems to be expected: "`wi` set the parameter C of class i to weight*C, for C-SVC". I suggest that this argument should be removed from `NuSVC`'s constructor and from the documentation. ### Steps/Code to Reproduce ```python from sklearn.svm import SVC, NuSVC X = [[1., 2, 3], [0, 5, 2]] y = [-1, 1] NuSVC(verbose=True).fit(X, y).dual_coef_ optimization finished, #iter = 0 C = 2.587063 obj = 1.293532, rho = 0.000000 nSV = 2, nBSV = 0 Total nSV = 2 Out: [LibSVM]array([[-1.29353162, 1.29353162]]) SVC(C=2.587063, verbose=True).fit(X, y).dual_coef_ optimization finished, #iter = 1 obj = -1.293532, rho = 0.000000 nSV = 2, nBSV = 0 Total nSV = 2 Out: [LibSVM]array([[-1.29353162, 1.29353162]]) NuSVC(class_weight={-1:1.5, 1:.2}, verbose=True).fit(X, y).dual_coef_ optimization finished, #iter = 0 C = 2.587063 obj = 1.293532, rho = 0.000000 nSV = 2, nBSV = 0 Total nSV = 2 Out: [LibSVM]array([[-1.29353162, 1.29353162]]) SVC(C=2.587063, class_weight={-1:1.5, 1:.2}, verbose=True).fit(X, y).dual_coef_ optimization finished, #iter = 1 obj = -0.827860, rho = -0.600000 nSV = 2, nBSV = 1 Total nSV = 2 Out: [LibSVM]array([[-0.5174126, 0.5174126]]) NuSVC(class_weight={-1:0, 1:0}).fit(X, y).dual_coef_ Out: array([[-1.29353162, 1.29353162]]) SVC(class_weight={-1:0, 1:0}).fit(X, y).dual_coef_ Out: array([], shape=(1, 0), dtype=float64) ``` ### Expected Results As in the case of no `class_weight`, `NuSVC` should give the same `dual_coef_` as an `SVC` with the same `C`. Also `class_weight={-1:0, 1:0}` should give the "empty" result. ### Actual Results In all cases above `NuSVC` with class weight behaves exactly as when no weights are given.
### Versions ```shell System: python: 3.9.16 | packaged by conda-forge | (main, Feb 1 2023, 21:39:03) [GCC 11.3.0] executable: .../bin/python3.9 machine: Linux-6.8.0-48-generic-x86_64-with-glibc2.39 Python dependencies: sklearn: 1.5.2 pip: 23.0.1 setuptools: 67.6.0 numpy: 2.0.2 scipy: 1.13.1 Cython: None pandas: None matplotlib: 3.9.2 joblib: 1.4.2 threadpoolctl: 3.5.0 Built with OpenMP: True threadpoolctl info: user_api: blas internal_api: blis num_threads: 1 prefix: libblis filepath: .../lib/libblis.so.4.0.0 version: 0.9.0 threading_layer: pthreads architecture: skx user_api: openmp internal_api: openmp num_threads: 8 prefix: libgomp filepath: .../lib/libgomp.so.1.0.0 version: None ```
open
2024-11-22T13:37:27Z
2024-12-12T14:10:53Z
https://github.com/scikit-learn/scikit-learn/issues/30332
[ "Bug", "Needs Investigation" ]
lciti
15
albumentations-team/albumentations
machine-learning
1,781
Huggingface demo link in docs does not allow user uploaded images
## Describe the bug Huggingface demo link in docs does not allow user uploaded images. ### To Reproduce Test here: https://huggingface.co/spaces/qubvel-hf/albumentations-demo?transform=CLAHE
closed
2024-06-08T19:35:46Z
2024-09-19T04:31:04Z
https://github.com/albumentations-team/albumentations/issues/1781
[ "feature request" ]
ogencoglu
3
horovod/horovod
deep-learning
3,254
Support NCCL for Elastic Horovod
The documentation says that Elastic Horovod only works with Gloo. But NCCL is the state-of-the-art collective communication library for GPUs and has been widely adopted in distributed data-parallel DNN training. Therefore, I hope Elastic Horovod will support NCCL.

BTW, I wonder why you chose Gloo rather than NCCL.

> Horovod >= 0.20.0 with Gloo support (install Horovod using HOROVOD_WITH_GLOO=1 to ensure it is installed)
closed
2021-11-02T04:05:36Z
2021-11-03T06:54:58Z
https://github.com/horovod/horovod/issues/3254
[ "enhancement" ]
jasperzhong
2
mkhorasani/Streamlit-Authenticator
streamlit
28
How do I set SameSite=Lax?
Does anyone know how to set the SameSite cookie attribute? I'd appreciate it if someone could implement it for us.
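Streamlit-Authenticator sets its cookie through a browser-side component, so there may be no official knob for this. As background, here is how a `SameSite=Lax` attribute is expressed on a cookie at the HTTP level, using only the Python standard library (a sketch, not this library's API; the cookie name and value are hypothetical):

```python
from http.cookies import SimpleCookie

# Build a Set-Cookie header carrying SameSite=Lax with only the standard
# library (the "samesite" morsel attribute exists since Python 3.8).
cookie = SimpleCookie()
cookie["session_token"] = "abc123"          # hypothetical cookie name/value
cookie["session_token"]["samesite"] = "Lax"
cookie["session_token"]["httponly"] = True
cookie["session_token"]["path"] = "/"

header = cookie["session_token"].OutputString()
print(header)  # contains "; SameSite=Lax"
```

Whatever component ultimately writes the cookie would need to emit an attribute like this in its `Set-Cookie` header (or in `document.cookie` on the JS side).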
closed
2022-08-09T05:25:32Z
2022-08-20T07:28:26Z
https://github.com/mkhorasani/Streamlit-Authenticator/issues/28
[]
DharmaDoll
2
X-PLUG/MobileAgent
automation
92
What does the argument '--use_som' stand for in PC-Agent?
open
2025-02-10T03:00:49Z
2025-02-12T07:00:57Z
https://github.com/X-PLUG/MobileAgent/issues/92
[]
wenwend1122
1
aeon-toolkit/aeon
scikit-learn
2,261
[ajb/exponent] is STALE
@TonyBagnall, ajb/exponent has had no activity for 143 days. This branch will be automatically deleted in 32 days.
closed
2024-10-28T01:28:10Z
2024-10-28T16:10:22Z
https://github.com/aeon-toolkit/aeon/issues/2261
[ "stale branch" ]
aeon-actions-bot[bot]
1
sqlalchemy/sqlalchemy
sqlalchemy
9,819
python 3.12 slices are hashable, affects one area of Row for 1.4 only
due to https://github.com/python/cpython/pull/101264

```py
from sqlalchemy import create_engine

e = create_engine("sqlite://")

with e.connect() as conn:
    result = conn.exec_driver_sql("select 1, 2, 3, 4, 5")

    row = result.one()

    onetwo = row[0:2]
    threefive = row[3:5]

    print(f"{onetwo} {threefive}")
```

output on 1.4:

```py
Traceback (most recent call last):
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/engine/row.py", line 131, in _get_by_key_impl
    rec = self._keymap[key]
          ~~~~~~~~~~~~^^^^^
KeyError: slice(0, 2, None)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/classic/dev/sqlalchemy/test3.py", line 10, in <module>
    onetwo = row[0:2]
             ~~~^^^^^
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/engine/row.py", line 135, in _get_by_key_impl
    rec = self._parent._key_fallback(key, ke)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/engine/cursor.py", line 801, in _key_fallback
    util.raise_(
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/util/compat.py", line 211, in raise_
    raise exception
sqlalchemy.exc.NoSuchColumnError: Could not locate column in row for column 'slice(0, 2, None)'
```

this does not seem to affect 2.0 since we've refactored. a fix would be:

```diff
diff --git a/lib/sqlalchemy/engine/row.py b/lib/sqlalchemy/engine/row.py
index f7c00bab37..eb12e29dd6 100644
--- a/lib/sqlalchemy/engine/row.py
+++ b/lib/sqlalchemy/engine/row.py
@@ -130,6 +130,8 @@ except ImportError:
             try:
                 rec = self._keymap[key]
             except KeyError as ke:
+                if isinstance(key, slice):
+                    return tuple(self._data[key])
                 rec = self._parent._key_fallback(key, ke)
             except TypeError:
                 if isinstance(key, slice):
```

now it would be nice to get 3.12 on CI, but then we have to make a tox build that does not need greenlet, or somehow get a 3.12 version of greenlet running.
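The behavioral difference is easy to see outside SQLAlchemy. A minimal sketch of the lookup-with-fallback pattern (a hypothetical class, not the real `Row`), handling both the pre-3.12 `TypeError` path and the 3.12+ `KeyError` path that the diff above adds:

```python
class MiniRow:
    """Toy stand-in for a keyed row: string keys resolve through a keymap,
    slices fall back to slicing the underlying tuple."""

    def __init__(self, keymap, data):
        self._keymap = keymap
        self._data = data

    def __getitem__(self, key):
        try:
            return self._data[self._keymap[key]]
        except KeyError:
            # Python 3.12+: slices are hashable, so the dict lookup
            # raises KeyError instead of TypeError.
            if isinstance(key, slice):
                return tuple(self._data[key])
            raise
        except TypeError:
            # Pre-3.12: hash(slice(...)) raises TypeError.
            if isinstance(key, slice):
                return tuple(self._data[key])
            raise


row = MiniRow({"a": 0, "b": 1, "c": 2}, (1, 2, 3))
print(row["a"])   # 1
print(row[0:2])   # (1, 2) on any Python version
```

Handling both exception types keeps the fallback working across the 3.12 boundary, which is exactly what the proposed patch does for 1.4.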
closed
2023-05-22T16:37:21Z
2023-07-02T14:43:44Z
https://github.com/sqlalchemy/sqlalchemy/issues/9819
[ "bug", "engine" ]
zzzeek
9
kylebebak/Requester
graphql
17
Connection Error: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:548)
How can I use self-signed SSL certificates? I think a setting may need to be added so that `Session.merge_environment_settings` is used?
closed
2018-11-09T10:51:00Z
2018-12-03T14:03:47Z
https://github.com/kylebebak/Requester/issues/17
[]
jlab13
2
ymcui/Chinese-LLaMA-Alpaca
nlp
160
7B quantizes successfully, but 13B fails at the conversion/merge stage
Thank you for using the issue template. Please follow the steps below to provide the relevant information. Issues with relatively complete information will be handled first; thanks for your cooperation.

*Hint: put an x inside the [ ] to tick a box.*

### Pre-submission checklist

- [ ] Since the related dependencies are updated frequently, please make sure you followed the steps in [README.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca)
- [ ] I have searched the existing issues and found no similar problem or solution
- [ ] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca#faq) of the README and found no similar problem or solution
- [ ] Third-party plugin problems: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui); it is also recommended to look for solutions in the corresponding project

### Issue type

Base model:

- [ ] LLaMA
- [x] Alpaca

Issue type:

- [ ] Download problem
- [x] Model conversion and merging problem
- [ ] Model inference problem (🤗 transformers)
- [ ] Model quantization and deployment problem (llama.cpp, text-generation-webui)
- [ ] Output quality problem
- [ ] Other

### Detailed description

The 7B model quantized successfully, but the 13B model hits an error at the conversion/merge stage.

### Screenshot or log

```
root@autodl-container-58c811ac3c-98f22c2b:~/autodl-tmp# python ./Chinese-LLaMA-Alpaca/scripts/merge_llama_with_chinese_lora.py --base_model 'decapoda-research/llama-13b-hf' --lora_model 'ziqingyang/chinese-alpaca-lora-13b'
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
Loading checkpoint shards: 100%|██████████████████████████████████████████| 41/41 [00:26<00:00, 1.58it/s]
Extended vocabulary size: 49954
Loading LoRA for 13B model
Peft version: 0.1.0
Merging model
Traceback (most recent call last):
  File "./Chinese-LLaMA-Alpaca/scripts/merge_llama_with_chinese_lora.py", line 119, in <module>
    assert not torch.allclose(first_weight_old, first_weight)
AssertionError
root@autodl-container-58c811ac3c-98f22c2b:~/autodl-tmp#
```
closed
2023-04-14T21:12:20Z
2023-04-14T21:41:11Z
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/160
[]
cgisky1980
2
plotly/dash
data-science
3,094
Allow_duplicate=True Fails with More Than Two Duplicate Callbacks
## Bug Report: `allow_duplicate=True` Fails with More Than Two Duplicate Callbacks

**Description:**

The `allow_duplicate=True` parameter does not function correctly when there are more than two duplicate callbacks.

**Reproducible Example:**

The following examples demonstrate the issue.

**Working Examples (Two Duplicate Callbacks):**

```python
# Example 1: Works
Output("layout_ctx-train", "children")
Input('button1', 'n_clicks'),
...
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button2', 'n_clicks'),
...
```

```python
# Example 2: Works
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button1', 'n_clicks'),
...
Output("layout_ctx-train", "children")
Input('button2', 'n_clicks'),
...
```

```python
# Example 3: Works
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button1', 'n_clicks'),
...
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button2', 'n_clicks'),
...
```

**Failing Examples (More Than Two Duplicate Callbacks):**

```python
# Example 4: Fails
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button1', 'n_clicks'),
...
Output("layout_ctx-train", "children")
Input('button2', 'n_clicks'),
...
Output("layout_ctx-train", "children")
Input('button3', 'n_clicks'),
...
Output("layout_ctx-train", "children")
Input('button4', 'n_clicks'),
...
```

```python
# Example 5: Fails
Output("layout_ctx-train", "children")
Input('button1', 'n_clicks'),
...
Output("layout_ctx-train", "children")
Input('button2', 'n_clicks'),
...
Output("layout_ctx-train", "children")
Input('button3', 'n_clicks'),
...
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button4', 'n_clicks'),
...
```

```python
# Example 6: Fails
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button1', 'n_clicks'),
...
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button2', 'n_clicks'),
...
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button3', 'n_clicks'),
...
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button4', 'n_clicks'),
...
```

**Expected Behavior:**

Duplicate callbacks should function correctly when at least one of the components has `allow_duplicate=True` set.

**Additional Comments:**

This functionality worked correctly in Dash version 2.9.1 for more than two duplicate callbacks as long as `allow_duplicate=True` was present on all relevant components. The issue was encountered in Dash versions 2.17.1+.
closed
2024-11-26T12:01:25Z
2024-11-27T15:35:24Z
https://github.com/plotly/dash/issues/3094
[ "bug", "P2" ]
Kissabi
1
BeastByteAI/scikit-llm
scikit-learn
16
can it output probabilities
Hi, I must say this is excellent work. While using it, is it possible to output probabilities?
closed
2023-05-28T03:39:08Z
2023-07-15T21:11:57Z
https://github.com/BeastByteAI/scikit-llm/issues/16
[]
ghost
1
waditu/tushare
pandas
1,436
pro_bar daily data: start_date is ignored when the limit parameter is used
When calling the `pro_bar` API for daily bars, using `start_date` together with `limit` makes the `start_date` parameter ineffective. For example:

```
df = ts.pro_bar(ts_code='601607.SH', start_date='20200803', limit=7, adj='qfq')
```

In theory this should return the seven trading days starting from 2020-08-03, but it actually returns the most recent seven days counting back from today. If the `limit` parameter is removed, the data from 2020-08-03 until today is returned correctly. If `start_date` is replaced with `end_date='20200803'`, the seven days leading up to 2020-08-03 are returned correctly.

Community id 237660
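The semantics the reporter expects can be sketched with a hypothetical helper in plain Python (not tushare's implementation): filter to dates on or after `start_date` first, then take the first `limit` rows, rather than taking the most recent `limit` rows overall.

```python
from datetime import date, timedelta

def window_forward(trade_dates, start_date, limit):
    """Hypothetical sketch of the expected windowing: keep dates
    on/after start_date, then take the first `limit` of them."""
    eligible = sorted(d for d in trade_dates if d >= start_date)
    return eligible[:limit]

# Toy calendar: ten consecutive days starting 2020-08-01.
days = [date(2020, 8, 1) + timedelta(days=i) for i in range(10)]

window = window_forward(days, date(2020, 8, 3), 7)
print(window[0])    # 2020-08-03
print(len(window))  # 7
```

The observed behavior corresponds to applying `limit` before the `start_date` filter (or ignoring the filter entirely), which is why the result window starts at today instead.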
open
2020-09-22T11:13:56Z
2020-09-22T11:13:56Z
https://github.com/waditu/tushare/issues/1436
[]
E10000
0
matplotlib/matplotlib
matplotlib
29,359
[Doc]: stable docs link to dev
### Documentation Link https://matplotlib.org/stable/index.html ### Problem On MacOS Firefox, the link to the documentation (which is for the stable version) takes me to 3.10 (dev), so I always get the "This is documentation for an unstable development version" message. ### Suggested improvement I figure this has something to do with the latest release, 3.9.4 (which has no link), while [3.9.3](https://matplotlib.org/3.9.3/index.html) is working.
closed
2024-12-20T18:08:49Z
2024-12-20T20:56:25Z
https://github.com/matplotlib/matplotlib/issues/29359
[ "Documentation" ]
shallow-beach
1
marcomusy/vedo
numpy
869
Plotter screenshot scale has no effect
Hi Marco, I was installing newest version (`'2023.4.4'`) of vedo through pip and noticed that my rendered images degraded in quality. I found that the `scale` parameter in `screenshot` function of plotter has no effect if `asarray=True`, ie if i get the following output ``` >> plt.screenshot(asarray=True, scale=2).shape (1920, 1920, 3) >> plt.screenshot(asarray=True, scale=5).shape (1920, 1920, 3) ``` When `asarray=False` in the function call and i specify a filename the scale parameter works just fine 👍 I downgraded vedo to version `'2023.4.3'` and there everything related to this bug works fine.
closed
2023-05-24T12:18:01Z
2023-10-18T13:10:58Z
https://github.com/marcomusy/vedo/issues/869
[]
paul0noah
1
healthchecks/healthchecks
django
196
Problem with cron expressions
Hi, Very odd. The cron expression engine does not accept `0 2 * * *`... It accepts `0 1 * * *` and `0 3 * * *`. ![image](https://user-images.githubusercontent.com/1027111/47313789-1f871e80-d640-11e8-9b4c-4cb36e524494.png) ![image](https://user-images.githubusercontent.com/1027111/47313802-2746c300-d640-11e8-9dda-ca75e8cfcb7c.png) ![image](https://user-images.githubusercontent.com/1027111/47313827-3fb6dd80-d640-11e8-9500-03647f997f70.png)
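For what it's worth, `0 2 * * *` is a perfectly well-formed five-field cron expression. A minimal validator sketch (not the parser healthchecks actually uses) accepts it, so the rejection looks like a bug in the form's expression engine rather than in the expression:

```python
import re

# Minimal five-field cron validator (minute, hour, day-of-month, month,
# day-of-week), enough to show "0 2 * * *" is syntactically valid.
RANGES = [(0, 59), (0, 23), (1, 31), (1, 12), (0, 7)]
TOKEN = re.compile(r"^(\*|\d+)(?:-(\d+))?(?:/(\d+))?$")

def is_valid(expr):
    fields = expr.split()
    if len(fields) != 5:
        return False
    for field, (lo, hi) in zip(fields, RANGES):
        for part in field.split(","):
            m = TOKEN.match(part)
            if not m:
                return False
            start, end, _step = m.groups()
            for v in (start, end):
                if v is not None and v != "*" and not lo <= int(v) <= hi:
                    return False
    return True

print(is_valid("0 2 * * *"))   # True
print(is_valid("0 24 * * *"))  # False (hour out of range)
```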
closed
2018-10-22T19:20:15Z
2018-12-14T10:28:35Z
https://github.com/healthchecks/healthchecks/issues/196
[]
LordMike
6
piccolo-orm/piccolo
fastapi
1,039
Improve `LazyTableReference`
`LazyTableReference` is needed when a foreign key has to reference a table lower down in the file:

```python
class Band(Table):
    manager = ForeignKey(
        LazyTableReference("Manager", module_path=__name__)
    )
    name = Varchar()


class Manager(Table):
    name = Varchar()
```

The challenge is making sure that `LazyTableReference` is converted to a real reference at the correct time - too soon, and you get circular import errors. Too late, and the foreign key won't work properly.

In the `Table` metaclass we're calling the `copy` method on `ForeignKey`, which seems to be causing some issues (triggering the conversion to a real reference too soon). We need to improve this.
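The timing idea can be sketched independently of piccolo: store only a name up front, and resolve it on first access, after the module's class definitions have run. This is a toy registry-based sketch, not piccolo's actual `LazyTableReference`:

```python
class LazyRef:
    """Toy sketch: defer name resolution until first access."""
    registry = {}

    def __init__(self, name):
        self.name = name
        self._resolved = None

    def resolve(self):
        if self._resolved is None:
            # Look the class up only now, once definitions have run.
            self._resolved = LazyRef.registry[self.name]
        return self._resolved


def register(cls):
    LazyRef.registry[cls.__name__] = cls
    return cls


# Forward reference created before Manager exists:
ref = LazyRef("Manager")


@register
class Manager:
    name = "manager"


print(ref.resolve() is Manager)  # True
```

Copying a `LazyRef` is safe as long as only the name is copied; the bug described above corresponds to calling `resolve()` (or its equivalent) during the copy, while the target class may not exist yet.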
closed
2024-06-27T10:28:17Z
2024-06-27T11:58:52Z
https://github.com/piccolo-orm/piccolo/issues/1039
[ "enhancement" ]
dantownsend
0
mwaskom/seaborn
data-visualization
3,021
Can we get error bar statistics?
Hi - Did some searching but no luck. Is there any way to retrieve the statistics of the plots, such as the mean and the error bar's lower and upper bounds, from `seaborn.relplot` on aggregated data? For example, [this](https://seaborn.pydata.org/tutorial/relational.html#aggregation-and-representing-uncertainty) plot:

```
fmri = sns.load_dataset("fmri")
sns.relplot(data=fmri, x="timepoint", y="signal", kind="line")
```
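Seaborn's default line aggregation shows the mean with a bootstrap confidence interval, and the same statistics can be recomputed directly from the data per x value. A rough stdlib sketch (the resampling details here are an assumption, not seaborn's exact algorithm):

```python
import random
import statistics

def bootstrap_ci(values, n_boot=1000, level=0.95, seed=0):
    """Percentile bootstrap CI of the mean, roughly what a default
    seaborn errorbar computes for each x position."""
    rng = random.Random(seed)
    means = sorted(
        statistics.mean(rng.choices(values, k=len(values)))
        for _ in range(n_boot)
    )
    lo_idx = int((1 - level) / 2 * n_boot)
    return means[lo_idx], means[n_boot - 1 - lo_idx]

# Toy stand-in for the signal values at one timepoint:
signal = [1.0, 1.2, 0.9, 1.1, 1.3, 0.8]
low, high = bootstrap_ci(signal)
center = statistics.mean(signal)
print(center)                  # close to 1.05
print(low < center < high)     # True
```

Grouping the real dataframe by `timepoint` and applying this per group would reproduce the plotted mean line and error band numerically.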
closed
2022-09-13T01:47:45Z
2022-09-13T02:27:53Z
https://github.com/mwaskom/seaborn/issues/3021
[]
galaxy139
1
errbotio/errbot
automation
1,646
Create a new release
Thanks for this project! It works almost out of the box. When I installed the bot, I ran into #1624, which was fixed a long time ago. But the release is still missing. Any chance to get one? In case the problem is the missing GitHub workflow: just ping me and I will submit a PR to automate it. This way a release can be created for every PR.
closed
2023-06-18T06:53:13Z
2024-01-02T07:27:02Z
https://github.com/errbotio/errbot/issues/1646
[]
kayman-mk
11
allure-framework/allure-python
pytest
749
How to use allure-python-commons? Any documentation?
Hi all,

Does anyone know how to use allure-python-commons? Is there any documentation?

My company has an in-house test framework that generates test results in a JSON file. I want to convert the data in the JSON file to Allure results that can be used by Allure to generate an Allure report. I tried to find a way to do it and was eventually redirected to the package allure-python-commons, but I could not find any documentation about how to use it. Any help will be appreciated. Thank you.

Regards,
Albert
closed
2023-06-07T23:30:40Z
2023-06-12T23:25:17Z
https://github.com/allure-framework/allure-python/issues/749
[]
albertwangnz
2
tflearn/tflearn
data-science
299
How to configure the LSTM sequence generator ?
Hello there! I have a few questions and issues when running the seq generator scripts. When I try the shakespeare script, it runs for 24h with about 1,300,000 iterations without outputting anything.

- Is that too big for my small CPU?
- Is there any way to configure the stopping iteration when fitting the data?
- Is it possible to display intermediary outputs during generation, so that we can observe how far the RNN understands the data?

And small issues:

- When I run the city names generator, I get a "list index out of range" error preventing me from trying the script.
- Neither example (city nor shakespeare) works if I enter the activation, loss and optimizer arguments in the last layer like in the script. It only works when I leave the default parameters by commenting out the arguments.

Thanks a lot,
Theo
open
2016-08-23T10:14:47Z
2017-04-05T17:41:59Z
https://github.com/tflearn/tflearn/issues/299
[]
TheoLvs
2
pyeve/eve
flask
1,248
eve crashes on malformed sort parameters
When sending a malformed sorting parameter to eve, it should return `400` for a bad request. Instead, eve crashes and returns a `500`. This can be reproduced with the demo app.

This works:

```sh
curl "http://eve-demo.herokuapp.com/people?sort=firstname"
```

This crashes with `500`, instead of returning `400`:

```sh
curl "http://eve-demo.herokuapp.com/people?sort='firstname'"
```

Is there any way to fix this?
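One way to get a 400 is to validate the sort expression before handing it to the query layer. A hypothetical sketch (not eve's actual parsing code; the accepted field grammar is an assumption):

```python
import re

# A sort field: optional leading "-" for descending, then an identifier
# that may contain dots for nested fields.
FIELD = re.compile(r"^-?[A-Za-z_][A-Za-z0-9_.]*$")

def parse_sort(raw):
    """Return [(field, direction)] pairs, or raise ValueError for bad
    input so the caller can answer 400 instead of crashing with 500."""
    pairs = []
    for token in raw.split(","):
        token = token.strip()
        if not FIELD.match(token):
            raise ValueError(f"malformed sort field: {token!r}")
        direction = -1 if token.startswith("-") else 1
        pairs.append((token.lstrip("-"), direction))
    return pairs

print(parse_sort("firstname,-lastname"))  # [('firstname', 1), ('lastname', -1)]

try:
    parse_sort("'firstname'")
except ValueError as exc:
    print("would return 400:", exc)
```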
closed
2019-03-30T21:40:35Z
2019-04-01T10:24:51Z
https://github.com/pyeve/eve/issues/1248
[]
NotSpecial
1
Esri/arcgis-python-api
jupyter
1,404
FeatureSet.from_geojson destroys MultiPolygons
**Describe the bug**

Loading GeoJSON into a `FeatureSet` loses information about `MultiPolygon`s. Converting back to GeoJSON exports a `Polygon` type instead of a `MultiPolygon`.

**To Reproduce**

Steps to reproduce the behavior:

```python
import json
from arcgis.features import FeatureSet

geojson_in = {
    'type': 'FeatureCollection',
    'features': [{
        'type': 'Feature',
        'geometry': {
            'type': 'MultiPolygon',
            'coordinates': [
                [[[180.0, 40.0], [180.0, 50.0], [170.0, 50.0], [170.0, 40.0], [180.0, 40.0]]],
                [[[-170.0, 40.0], [-170.0, 50.0], [-180.0, 50.0], [-180.0, 40.0], [-170.0, 40.0]]],
            ],
        },
    }],
}

fs = FeatureSet.from_geojson(geojson_in)
geojson_out = json.loads(fs.to_geojson)

# geojson_out object is printed below
{'type': 'FeatureCollection', 'features': [{'type': 'Feature', 'geometry': {'type': 'Polygon', 'coordinates': [[[180.0, 40.0], [170.0, 40.0], [170.0, 50.0], [180.0, 50.0], [180.0, 40.0]], [[-170.0, 40.0], [-180.0, 40.0], [-180.0, 50.0], [-170.0, 50.0], [-170.0, 40.0]]]}, 'properties': {'OBJECTID': 1}}]}
```

**Expected behavior**

I expect a GeoJSON object that is correct. Notice how it outputs a `Polygon` object, but with the polygon's rings collapsed. This has a completely different meaning.

**Platform (please complete the following information):**

- OS: MacOS
- Browser: n/a
- Python API Version: 2.0.0
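A quick way to see that the collapsed output is wrong is to check ring winding: in valid GeoJSON, a Polygon's exterior ring is counter-clockwise and its holes are clockwise, so two rings with the same winding cannot form one Polygon-plus-hole. A stdlib sketch over the two rings from the report (the winding convention is the GeoJSON one; treating same-winding rings as separate exteriors is this sketch's assumption):

```python
def signed_area(ring):
    # Shoelace formula: positive for counter-clockwise rings in x/y order.
    area = 0.0
    for (x1, y1), (x2, y2) in zip(ring, ring[1:]):
        area += x1 * y2 - x2 * y1
    return area / 2.0

# The two rings of the (wrong) Polygon that to_geojson produced:
rings = [
    [(180.0, 40.0), (170.0, 40.0), (170.0, 50.0), (180.0, 50.0), (180.0, 40.0)],
    [(-170.0, 40.0), (-180.0, 40.0), (-180.0, 50.0), (-170.0, 50.0), (-170.0, 40.0)],
]

winding = [signed_area(r) > 0 for r in rings]
print(winding)  # [False, False]: both clockwise, so not an exterior + hole pair
```

Both rings have the same winding, which is consistent with two independent exterior rings, i.e. the geometry really was a `MultiPolygon` before the round trip.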
closed
2022-12-27T05:41:14Z
2024-05-18T22:29:00Z
https://github.com/Esri/arcgis-python-api/issues/1404
[ "bug" ]
gabeschine
7
piccolo-orm/piccolo
fastapi
948
Adding a self referencing foreign key to an existing table which has a custom primary key
This is a real edge case that I just came across. If you have this table:

```python
class MyTable(Table):
    id = UUID(primary_key=True)
```

And modify it to this:

```python
class MyTable(Table):
    id = UUID(primary_key=True)
    fk = ForeignKey("self")
```

The auto migration can fail, because it doesn't know to create the new foreign key column as a `UUID` type.
closed
2024-03-12T19:43:39Z
2024-03-12T20:21:56Z
https://github.com/piccolo-orm/piccolo/issues/948
[ "bug" ]
dantownsend
0
amdegroot/ssd.pytorch
computer-vision
129
how can i run test.py
When I run test.py, an error occurs, as follows:

```
usage: _jb_nosetest_runner.py [-h] [--trained_model TRAINED_MODEL]
                              [--save_folder SAVE_FOLDER]
                              [--visual_threshold VISUAL_THRESHOLD]
                              [--cuda CUDA]
_jb_nosetest_runner.py: error: unrecognized arguments: /home/zjh/ssd.pytorch-master-nan/test.py

Process finished with exit code 2
Empty test suite.
```

Can someone help me? Thanks.
open
2018-03-25T12:04:50Z
2019-07-02T07:57:10Z
https://github.com/amdegroot/ssd.pytorch/issues/129
[]
atrimage
3
ultralytics/ultralytics
deep-learning
19,509
How to generate an ONNX file with NMS?
### Search before asking

- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.

### Question

```python
from ultralytics import YOLO

# Load a model
model = YOLO("maskbest-ptbr.pt")  # load an official model

# Export the model
model.export(format="onnx", nms=True)
```

### Additional

I need an nms-model.onnx for boxes and so on. How can I create this file?
open
2025-03-04T02:33:57Z
2025-03-04T15:39:46Z
https://github.com/ultralytics/ultralytics/issues/19509
[ "question", "exports" ]
xmaxmex
4
google-research/bert
nlp
1,200
stacked_embedding is not found in version 2.0; has this class been removed?
open
2021-02-23T11:42:30Z
2021-02-23T11:43:37Z
https://github.com/google-research/bert/issues/1200
[]
capella12
0