Dataset schema:
  id: string (lengths 4 to 10)
  text: string (lengths 4 to 2.14M)
  source: string (2 classes)
  created: timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
  added: string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
  metadata: dict
2290938592
ependytes, chinanta This pull request was generated by the 'mq' tool [test] flake rate: 0.1 logical conflict every: 1000 sleep for: 2100s close stale after: 24 hours [pullrequest] requests per hour: 100 deps=[c,e] /trunk merge
gharchive/pull-request
2024-05-11T14:30:33
2025-04-01T04:36:08.595133
{ "authors": [ "EliSchleifer" ], "repo": "trunk-io/mergequeue", "url": "https://github.com/trunk-io/mergequeue/pull/49915", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2291164926
desilverization, evitable This pull request was generated by the 'mq' tool [test] flake rate: 0.1 logical conflict every: 1000 sleep for: 2100s close stale after: 24 hours [pullrequest] requests per hour: 100 deps=[d,e] /trunk merge
gharchive/pull-request
2024-05-12T04:46:04
2025-04-01T04:36:08.596732
{ "authors": [ "EliSchleifer" ], "repo": "trunk-io/mergequeue", "url": "https://github.com/trunk-io/mergequeue/pull/50121", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2293994004
catteries, forepaling This pull request was generated by the 'mq' tool [test] flake rate: 0.1 logical conflict every: 1000 sleep for: 2100s close stale after: 24 hours [pullrequest] requests per hour: 100 deps=[c,f] /trunk merge
gharchive/pull-request
2024-05-13T23:08:15
2025-04-01T04:36:08.598363
{ "authors": [ "EliSchleifer" ], "repo": "trunk-io/mergequeue", "url": "https://github.com/trunk-io/mergequeue/pull/51528", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2312242856
brawling, dismortgage This pull request was generated by the 'mq' tool [test] flake rate: 0.1 logical conflict every: 1000 sleep for: 2100s close stale after: 24 hours [pullrequest] requests per hour: 100 deps=[b,d] /trunk merge
gharchive/pull-request
2024-05-23T08:05:18
2025-04-01T04:36:08.600153
{ "authors": [ "EliSchleifer" ], "repo": "trunk-io/mergequeue", "url": "https://github.com/trunk-io/mergequeue/pull/57049", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2313887709
diogenes, cirripede This pull request was generated by the 'mq' tool [test] flake rate: 0.1 logical conflict every: 1000 sleep for: 2100s close stale after: 24 hours [pullrequest] requests per hour: 100 deps=[c,d] /trunk merge
gharchive/pull-request
2024-05-23T21:33:03
2025-04-01T04:36:08.601823
{ "authors": [ "EliSchleifer" ], "repo": "trunk-io/mergequeue", "url": "https://github.com/trunk-io/mergequeue/pull/57449", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2316671100
fastigiately, autopneumatic This pull request was generated by the 'mq' tool [test] flake rate: 0.1 logical conflict every: 1000 sleep for: 2100s close stale after: 24 hours [pullrequest] requests per hour: 100 deps=[a,f] /trunk merge
gharchive/pull-request
2024-05-25T04:01:50
2025-04-01T04:36:08.603379
{ "authors": [ "EliSchleifer" ], "repo": "trunk-io/mergequeue", "url": "https://github.com/trunk-io/mergequeue/pull/58539", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2317165371
aphidicolous, equisonant This pull request was generated by the 'mq' tool [test] flake rate: 0.1 logical conflict every: 1000 sleep for: 2100s close stale after: 24 hours [pullrequest] requests per hour: 100 deps=[a,e] /trunk merge
gharchive/pull-request
2024-05-25T18:20:09
2025-04-01T04:36:08.604976
{ "authors": [ "EliSchleifer" ], "repo": "trunk-io/mergequeue", "url": "https://github.com/trunk-io/mergequeue/pull/58973", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2317256969
calculus This pull request was generated by the 'mq' tool [test] flake rate: 0.1 logical conflict every: 1000 sleep for: 2100s close stale after: 24 hours [pullrequest] requests per hour: 100 deps=[c] /trunk merge
gharchive/pull-request
2024-05-25T21:30:53
2025-04-01T04:36:08.606566
{ "authors": [ "EliSchleifer" ], "repo": "trunk-io/mergequeue", "url": "https://github.com/trunk-io/mergequeue/pull/59038", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2317924002
euryalean, dag This pull request was generated by the 'mq' tool [test] flake rate: 0.1 logical conflict every: 1000 sleep for: 2100s close stale after: 24 hours [pullrequest] requests per hour: 100 deps=[d,e] /trunk merge
gharchive/pull-request
2024-05-26T20:38:34
2025-04-01T04:36:08.608198
{ "authors": [ "EliSchleifer" ], "repo": "trunk-io/mergequeue", "url": "https://github.com/trunk-io/mergequeue/pull/59540", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2324464011
flaxdrop, ganisters This pull request was generated by the 'mq' tool [test] flake rate: 0.1 logical conflict every: 1000 sleep for: 2100s close stale after: 24 hours [pullrequest] requests per hour: 100 deps=[f,g] /trunk merge
gharchive/pull-request
2024-05-30T01:03:01
2025-04-01T04:36:08.609996
{ "authors": [ "EliSchleifer" ], "repo": "trunk-io/mergequeue", "url": "https://github.com/trunk-io/mergequeue/pull/61448", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2327530685
berengarian, dysteleology This pull request was generated by the 'mq' tool [test] flake rate: 0.1 logical conflict every: 1000 sleep for: 2100s close stale after: 24 hours [pullrequest] requests per hour: 100 deps=[b,d] /trunk merge
gharchive/pull-request
2024-05-31T10:45:35
2025-04-01T04:36:08.611610
{ "authors": [ "EliSchleifer" ], "repo": "trunk-io/mergequeue", "url": "https://github.com/trunk-io/mergequeue/pull/62489", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2333125408
giftlike, fumes This pull request was generated by the 'mq' tool [test] flake rate: 0.1 logical conflict every: 1000 sleep for: 2100s close stale after: 24 hours [pullrequest] requests per hour: 20 deps=[f,g] /trunk merge
gharchive/pull-request
2024-06-04T10:12:32
2025-04-01T04:36:08.613205
{ "authors": [ "EliSchleifer" ], "repo": "trunk-io/mergequeue", "url": "https://github.com/trunk-io/mergequeue/pull/65635", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2358010232
benzole, fencing This pull request was generated by the 'mq' tool [test] flake rate: 0.1 logical conflict every: 1000 sleep for: 600s close stale after: 24 hours [pullrequest] requests per hour: 20 deps=[b,f] /trunk merge
gharchive/pull-request
2024-06-17T18:44:27
2025-04-01T04:36:08.614794
{ "authors": [ "EliSchleifer" ], "repo": "trunk-io/mergequeue", "url": "https://github.com/trunk-io/mergequeue/pull/78283", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2393374883
dimethylamine, flashcube This pull request was generated by the 'mq' tool [test] flake rate: 0.1 logical conflict every: 1000 sleep for: 600s close stale after: 24 hours [pullrequest] requests per hour: 20 deps=[d,f] /trunk merge
gharchive/pull-request
2024-07-06T04:06:14
2025-04-01T04:36:08.616405
{ "authors": [ "EliSchleifer" ], "repo": "trunk-io/mergequeue", "url": "https://github.com/trunk-io/mergequeue/pull/95751", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2400726564
copt, emirs This pull request was generated by the 'mq' tool [test] flake rate: 0.1 logical conflict every: 1000 sleep for: 600s close stale after: 24 hours [pullrequest] requests per hour: 20 deps=[c,e] /trunk merge
gharchive/pull-request
2024-07-10T13:11:36
2025-04-01T04:36:08.617983
{ "authors": [ "EliSchleifer" ], "repo": "trunk-io/mergequeue", "url": "https://github.com/trunk-io/mergequeue/pull/99018", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2402290783
batfowler, flirtling This pull request was generated by the 'mq' tool [test] flake rate: 0.1 logical conflict every: 1000 sleep for: 600s close stale after: 24 hours [pullrequest] requests per hour: 20 deps=[b,f] /trunk merge
gharchive/pull-request
2024-07-11T05:36:49
2025-04-01T04:36:08.619756
{ "authors": [ "EliSchleifer" ], "repo": "trunk-io/mergequeue", "url": "https://github.com/trunk-io/mergequeue/pull/99681", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1687550608
feat: Add validationStatus prop to Dropdown Summary Related Issues or PRs Resolves #2364 How To Test Unit tests verify that this works. The Storybook control for validation status can also be used Screenshots (optional) I'll need to come back and fix what I borked in Storybook #2398 introduces another use of the (now, in this PR) ValidationStatus type that should be updated in this PR once #2398 is merged. It's merged
gharchive/pull-request
2023-04-27T21:05:49
2025-04-01T04:36:08.621811
{ "authors": [ "brandonlenz", "werdnanoslen" ], "repo": "trussworks/react-uswds", "url": "https://github.com/trussworks/react-uswds/pull/2365", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2294785197
Add Advisory metadata exposed through the API We need to define which additional metadata from an Advisory we need to expose. The image below is the metadata of an Advisory we expose in Trustification. I acknowledge the fact that in Trustification we used to expose exclusively CSAF files as Advisories. On the other hand, Trustify defines Advisories as CSAF + other files. So the metadata defined should be metadata common to all those files/formats/specs and not just CSAF. This point is just a thought, feel free to disagree. Suggested metadata to add So I will start suggesting metadata to add to an Advisory and then we can add/remove more metadata as we discuss this topic. Category: We could have a field that indicates the specification/format to which the Advisory belongs. Publisher: Is it possible to extract metadata about the publisher of the Advisory? Versioning: can we expose metadata that helps to understand the version of the Advisory? For instance, let's imagine we are dealing with a CSAF file "CVE-2023-44487"; then in one month that Advisory is updated, so I suppose the CSAF file itself should have a way of defining "v1", "v2", "v3" of the same file. We currently already have "date modified", which helps the user understand when the file was last changed. Please suggest any other metadata we should expose, and feel free to discard the ones I suggested too. wrt Versioning, the spec actually has something for that: https://docs.oasis-open.org/csaf/csaf/v2.0/os/csaf-v2.0-os.html#32112-document-property---tracking … we should leverage such information. https://github.com/trustification/trustify/pull/300 is exposing now: "Advisory issuer" data: "issuer": { "id": 1, "name": "Red Hat Product Security", "cpe_key": null, "website": null }, This is the current set of metadata we have for an Advisory, as of today: Closing this ticket as there is nothing specific that I could ask to be exposed. If there are new specific fields engineering, UX, or PM might come up with, then we could always open new issues
gharchive/issue
2024-05-14T08:26:49
2025-04-01T04:36:08.629824
{ "authors": [ "carlosthe19916", "ctron" ], "repo": "trustification/trustify", "url": "https://github.com/trustification/trustify/issues/279", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
437434616
Implement NEW_ORDER, CANCEL_ORDER transaction type https://explorer.binance.org/api/v1/txs?address=bnb16ya67j7kvw8682kka09qujlw5u7lf4geqef0ku&page=1&rows=100 https://developer.trustwallet.com/blockatlas/transaction-format#any-action { "type": "any_action", "metadata": { "title": "Place Order", "key": "place_order", } } Part of https://github.com/trustwallet/blockatlas/issues/123
gharchive/issue
2019-04-25T23:13:16
2025-04-01T04:36:08.652519
{ "authors": [ "kolya182" ], "repo": "trustwallet/blockatlas", "url": "https://github.com/trustwallet/blockatlas/issues/85", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
606664538
[API] Add GetTokens handler Add new handler Now we can get a list of tokens by map: coin -> [addresses] POST path - /v3/tokens body - {"coin": ["address"]} We need a new architecture for the whole atlas platform if we need batch requests. There is no sense in using batch with the current design. My opinion is that we need to remove all batches and use another service that will aggregate requests if we need it. Each platform has its own problems. Sometimes it is not working, sometimes it is broken because the api changed. We cannot guarantee that with a batch request there will not be a common panic I think this problem can be solved in another PR And since we have a similar implementation for other services, and it works now, I do not see a problem in this https://github.com/trustwallet/blockatlas/blob/master/api/registry.go#L80 Firstly, we do not need to make anything worse than it is. Secondly, this issue needs to be solved on the backend side Created a separate issue to improve fetching tokens at any point in the future: https://github.com/trustwallet/blockatlas/issues/1057
gharchive/pull-request
2020-04-25T01:54:28
2025-04-01T04:36:08.657692
{ "authors": [ "EnoRage", "prazd", "vikmeup" ], "repo": "trustwallet/blockatlas", "url": "https://github.com/trustwallet/blockatlas/pull/1056", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
659048318
Fix Xcode 12 warnings What changed: fix quoted include warnings; explicitly include <cassert>; remove hardcoded iOS 13.3. Include cleanup is a welcome change!
gharchive/pull-request
2020-07-17T09:06:06
2025-04-01T04:36:08.661274
{ "authors": [ "catenocrypt", "hewigovens" ], "repo": "trustwallet/wallet-core", "url": "https://github.com/trustwallet/wallet-core/pull/1040", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
517546572
Harmony: Adding staking support Description This PR adds staking feature support for Harmony. Testing instructions Unit tests and iOS/Android integration tests added. Types of changes New feature (non-breaking change which adds functionality) Checklist [ ] Prefix PR title with [WIP] if necessary. [ ] Add tests to cover changes as needed. [ ] Update documentation as needed. Can you add your Harmony to AnySigner? We use AnySigner to sign JSON transactions from web applications. @gupadhyaya after you move the code from any signer to your signer, can you add a test to AnySigner with the transaction in JSON format? We have tests there for other implementations. There is also a Codacy warning
gharchive/pull-request
2019-11-05T04:50:07
2025-04-01T04:36:08.664657
{ "authors": [ "gupadhyaya", "hewigovens", "leoneparise" ], "repo": "trustwallet/wallet-core", "url": "https://github.com/trustwallet/wallet-core/pull/711", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2100731453
Concept: Tools Define Abstractions Since we have the AgentTool class now, shouldn't this be closed?
gharchive/issue
2024-01-25T16:18:26
2025-04-01T04:36:08.665504
{ "authors": [ "HavenDV", "TesAnti" ], "repo": "tryAGI/LangChain", "url": "https://github.com/tryAGI/LangChain/issues/114", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2540495267
Make LIKE and NOT LIKE expressions non-nullable Fix column being incorrectly inferred as nullable: // 💥 Query has incorrect type annotation. // Expected: { is_guest: boolean; } // Actual: { is_guest: boolean | null; }[] await sql<{ is_guest: boolean }[]>` SELECT lecturers.id, users.email NOT LIKE '%@upleveled.io' AS is_guest FROM lecturers INNER JOIN users ON lecturers.user_id = users.id `; @Newbie012 this PR also good for merging? Or does it still need work? Thanks for the review and merge!
gharchive/pull-request
2024-09-21T20:16:50
2025-04-01T04:36:08.666838
{ "authors": [ "karlhorky" ], "repo": "ts-safeql/safeql", "url": "https://github.com/ts-safeql/safeql/pull/271", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1078576592
[BUG] DuplicateCharset: ARIB-STD-B24 Bug description: tsp ends with terminate called after throwing an instance of 'ts::Charset::DuplicateCharset' what(): DuplicateCharset: ARIB-STD-B24 Environment: OS: Linux, Raspbian, current Bullseye running at Pi4 Built from current git source. This is typically the result of some inconsistency during the build. Maybe an interrupted make, followed by a make command with different parameters. In practice, this means that the object module tsARIBCharset.o containing this character set is activated twice in the same process, resulting in registering the same character set twice. This can also happen if static and shared (default) build are mixed. The object tsARIBCharset.o is present in libtsduck.so and erroneously statically linked in the main executable or another .so. Just rebuild from a clean state: make clean; make
gharchive/issue
2021-12-13T14:09:02
2025-04-01T04:36:08.678861
{ "authors": [ "lelegard", "omikron88" ], "repo": "tsduck/tsduck", "url": "https://github.com/tsduck/tsduck/issues/909", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
2201837920
The conformer encoder model cannot run inference after slimming; before slimming it works fine. Screenshot of the slim optimization: Screenshot of the onnxruntime inference results: Hi, this is a bug in the latest release, you can pass --skip_fusion_patterns EliminationSlice to avoid this, and I will fix it soon.
gharchive/issue
2024-03-22T07:05:10
2025-04-01T04:36:08.695186
{ "authors": [ "inisis", "sean-wade" ], "repo": "tsingmicro-toolchain/OnnxSlim", "url": "https://github.com/tsingmicro-toolchain/OnnxSlim/issues/13", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1581313569
GL issues on macOS 11.7.2 Caveats I don't write many graphical programs. Perhaps GL is always a pain and you don't want to support it. Given that build.sh contains a uname check for "Darwin" though, I'm giving it a shot. Problem I installed the following versions of the dependencies (latest in Homebrew): sdl2 2.26.3 freetype 2.13.0 glew 2.2.0_1 The build went fine without warnings or errors. Running the first build: ❯ ./ded GL version 3.3 ARB_draw_instanced is not supported; game may not work properly!! I added a YOLO environment variable to skip returning 1 for all those "game may not work properly" checks, to press on and see where I got. ❯ YOLO=1 ./ded GL version 3.3 ARB_draw_instanced is not supported; game may not work properly!! WARNING! GLEW_ARB_debug_output is not availableERROR: could not compile GL_FRAGMENT_SHADER ERROR: 0:6: Use of undeclared identifier 'gl_FragColor' ERROR: failed to compile `./shaders/simple_color.frag` shader file Is the program written for a different GL version than what macOS 11.7.2 provides perhaps? I edited each of the shaders/simple_*.frag programs to stop using gl_FragColor and instead declare out vec4 fragColor; and use that. Solved? It looks like everything is working for now, with two changes: make the "game may not work properly" warnings ignorable stop using gl_FragColor for shader output Happy to open a PR unless you prefer some other solution (or none). ty brooo i was looking how to make it works on macOS <3 maybe u know what are shortcuts to save file and go to the file manager ? I installed the following versions of the dependencies (latest in Homebrew): * sdl2 2.26.3 * freetype 2.13.0 * glew 2.2.0_1 The build went fine without warnings or errors. Running the first build: ❯ ./ded GL version 3.3 ARB_draw_instanced is not supported; game may not work properly!! Same thing here on macOS 13.1 maybe u know what are shortcuts to save file and go to the file manager ? "The source is the documentation." (or something like that 😉 ) -- tsoding @mattthhh Seems to be F2 to save and F3 to open the file browser. https://github.com/tsoding/ded/blob/759c47633d142ba37c0fc3620d472ffb40851cd7/src/main.c#L181-L350 dlangui also solved the gl_FragColor by declaring a normal out variable https://github.com/buggins/dlangui/commit/6cfe98a4f1f665887fc4c0cccd526a8ce9f3c19c Ah, the YOLO flag is not longer needed after the instancing check was removed from main.c ty brooo i was looking how to make it works on macOS <3 maybe u know what are shortcuts to save file and go to the file manager ? So it works for you? How exactly did you do it???? I can't seem to decipher what "I edited each of the shader files to stop using gl_color and instead use out vec4 color" means and how to implement the solution. Please help me because this looks like a really cool editor and I would like to use it. @ShazamHax see e.g. https://github.com/tsoding/ded/compare/master...fabjan:ded:big-sur-workarounds (a bit old now, not sure if it's up to date, but you should see a pattern for what is needed to be done about gl_FragColor in that diff).
gharchive/issue
2023-02-12T16:06:20
2025-04-01T04:36:08.733296
{ "authors": [ "ShazamHax", "Swonkie", "fabjan", "mattthhh" ], "repo": "tsoding/ded", "url": "https://github.com/tsoding/ded/issues/65", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
537924778
(#914) document level editor key bindings Solve #914 @kolumb looks good to me! :+1: Thanks for the contribution!
gharchive/pull-request
2019-12-14T15:04:52
2025-04-01T04:36:08.734481
{ "authors": [ "kolumb", "rexim" ], "repo": "tsoding/nothing", "url": "https://github.com/tsoding/nothing/pull/1199", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
367598627
implement GUI no new functionality would be introduced per se, but a GUI would provide a simpler, harder-to-mess-up interface to allow additional users to utilize the functionality afforded by sc2simulator. See #13
gharchive/issue
2018-10-07T22:06:47
2025-04-01T04:36:08.766387
{ "authors": [ "ttinies" ], "repo": "ttinies/sc2simulator", "url": "https://github.com/ttinies/sc2simulator/issues/20", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2433990405
Is the group @-mention feature no longer working in 3.9.10.19-v1? Or am I mistaken?? 3.9.10.19-v1
gharchive/issue
2024-07-28T15:23:29
2025-04-01T04:36:08.775516
{ "authors": [ "qianqian1530" ], "repo": "ttttupup/wxhelper", "url": "https://github.com/ttttupup/wxhelper/issues/447", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
892567350
[bug] Count shows on limited user channels with cameras active Need I say more? ^ Thanks, pointy Thanks for reporting, I'll look into it @pointydev 8e006c0fef158c16d4cd1fa6621b4d0dbe6a2466 It should be fixed now. Could you give it a test? Thanks
gharchive/issue
2021-05-16T00:41:32
2025-04-01T04:36:08.795046
{ "authors": [ "pointydev", "tuanbinhtran" ], "repo": "tuanbinhtran/voice-user-count", "url": "https://github.com/tuanbinhtran/voice-user-count/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2197252195
Fixed error in save_to_wav function execution Signed-off-by: Tsogtgerel Amar [email protected] Hi everyone, I encountered an error in the save_to_wav function execution while running the MongolianTTS.ipynb notebook. Upon investigation, I found that the issue was related to the data type error caused by using NumPy version 1.25. After testing, I confirmed that switching to NumPy version 1.22 resolved the error. Therefore, I've updated the requirements.txt file to specify NumPy version 1.22 to ensure compatibility with the project. Thank you for your attention, and happy coding! Best regards, Specified NumPy version to ensure compatibility with the project. Additionally, all occurrences of np.long type have been replaced with np.longlong. Best regards,
gharchive/pull-request
2024-03-20T11:01:51
2025-04-01T04:36:08.799970
{ "authors": [ "tsogoo" ], "repo": "tugstugi/pytorch-dc-tts", "url": "https://github.com/tugstugi/pytorch-dc-tts/pull/26", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1497754519
kafka.Producer.connect() always tries 5 times, even when retries are set to lower numbers Node version: 16.13.2 KafkaJs version: 2.2.3 code snippet:

```js
const kafka = new Kafka({
  brokers: ['some.random.site:9092'],
});
const producer = kafka.producer({
  allowAutoTopicCreation: false,
  retry: {
    retries: 0,
  },
});
await producer.connect();
```

When running this, if the broker is not reachable, it always shows this output with 5 retries:

{"level":"ERROR","timestamp":"2022-12-15T04:04:20.879Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Connection timeout","retryCount":1,"retryTime":642}
{"level":"ERROR","timestamp":"2022-12-15T04:04:22.525Z","logger":"kafkajs","message":"[Connection] Connection timeout","broker":"xxxx.com:9092","clientId":"kafkajs"}
{"level":"ERROR","timestamp":"2022-12-15T04:04:22.526Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Connection timeout","retryCount":2,"retryTime":1264}
{"level":"ERROR","timestamp":"2022-12-15T04:04:24.793Z","logger":"kafkajs","message":"[Connection] Connection timeout","broker":"xxxx.com:9092","clientId":"kafkajs"}
{"level":"ERROR","timestamp":"2022-12-15T04:04:24.795Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Connection timeout","retryCount":3,"retryTime":2740}
{"level":"ERROR","timestamp":"2022-12-15T04:04:28.540Z","logger":"kafkajs","message":"[Connection] Connection timeout","broker":"xxxx.com:9092","clientId":"kafkajs"}
{"level":"ERROR","timestamp":"2022-12-15T04:04:28.541Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Connection timeout","retryCount":4,"retryTime":5398}
{"level":"ERROR","timestamp":"2022-12-15T04:04:34.947Z","logger":"kafkajs","message":"[Connection] Connection timeout","broker":"xxxx.com:9092","clientId":"kafkajs"}
{"level":"ERROR","timestamp":"2022-12-15T04:04:34.949Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Connection timeout","retryCount":5,"retryTime":12884}

This happens even if retries are set to 0 or 1 in the kafka.Producer() call. From the kafka docs: The retry option can be used to set the configuration of the retry mechanism, which is used to retry connections and API calls to Kafka (when using producers or consumers). Am I misunderstanding retry? I think because this is a connection error rather than a producer error, you need to set this on the Kafka object:

```js
const kafka = new Kafka({
  brokers: ['some.random.site:9092'],
  retry: {
    retries: 1
  }
});
```

As an aside though, I can't seem to set retries on a producer at all (it always uses the default values). It seems like on this line of retry/index.js: const configs = Object.assign({}, RETRY_DEFAULT, opts) opts is never populated with the values passed through.
gharchive/issue
2022-12-15T04:12:19
2025-04-01T04:36:08.833036
{ "authors": [ "anishsekh", "danielnitsche" ], "repo": "tulios/kafkajs", "url": "https://github.com/tulios/kafkajs/issues/1506", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1348380164
🛑 tgdk.se is down In 504f482, tgdk.se (https://tgdk.se) was down: HTTP code: 0 Response time: 0 ms Resolved: tgdk.se is back up in c4e55b0.
gharchive/issue
2022-08-23T18:29:50
2025-04-01T04:36:08.844715
{ "authors": [ "vilhelmprytz" ], "repo": "tullingedk/service-status", "url": "https://github.com/tullingedk/service-status/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
578652444
Exception: device not connected after renaming controller_params.py into params.py I'm sorry to bother you again but it's still not quite working out. After copying and renaming controller_params.py into params.py I still run into the following error: Is the params.py still causing this? The ev3 is properly connected with a xubuntu 18.04 LTS PC via USB and goes online. I've already checked the connection to the sensors. Thanks in advance Hi Jmller, This happens when the connections to the two motors are missing (cable not fully plugged) or in the wrong ports (https://github.com/ev3dev/ev3dev-lang-python/issues/257). Make sure that your physical connections are the same of the ones in the code: motorLeft = ev3.LargeMotor('outC') motorRight = ev3.LargeMotor('outB')
gharchive/issue
2020-03-10T15:10:57
2025-04-01T04:36:08.848466
{ "authors": [ "jmller", "szoppi" ], "repo": "tum-lkn/NCSbench", "url": "https://github.com/tum-lkn/NCSbench/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1606360980
Some improvements to the OData service Added ability to query for the count of objects matching a filter. Added ability to query for nested fields. Added progress information (using rich library) for long duration operations .. like the processing metadata. Added function to show all [simple] values of an object. Reworked described method to use rich panels and tables instead of using custom built ones. Added poetry environment support (supersedes setup.py but still present for now) I tried to change the email address i used in the commits and in the process I changed 2 of your commits as well :( But it looks as though there's very little activity in this repo, not sure it'll ever be looked at 😄 From what I can tell this project is no longer maintained. I've forked and and started publishing it as python-odata on PyPi, you can reach me for any changes you'd like to make. I'm going to close this pull request now.
gharchive/pull-request
2023-03-02T08:50:33
2025-04-01T04:36:08.882014
{ "authors": [ "eblis" ], "repo": "tuomur/python-odata", "url": "https://github.com/tuomur/python-odata/pull/54", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
553060173
Page loads twice along with scripts when visiting cached page I've noticed that pages that take a little extra time to load and are cached load twice. Is there any workaround for this? Hi @masudhossain, it sounds like you're seeing Turbolinks previews. When revisiting a page, Turbolinks displays a cached version of the page, providing immediate feedback and the impression of instantaneous loads. Once the fresh version has loaded, the preview is replaced. You can read more about this in the Understanding Caching section of the readme. If the preview behaviour is undesirable, you can disable previews using the "no-preview" cache directive: <head> ... <meta name="turbolinks-cache-control" content="no-preview"> </head> Hope that helps.
gharchive/issue
2020-01-21T18:45:44
2025-04-01T04:36:08.884014
{ "authors": [ "domchristie", "masudhossain" ], "repo": "turbolinks/turbolinks", "url": "https://github.com/turbolinks/turbolinks/issues/515", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1558731195
Test plan for plugin after major credentials, regions, caching and SDK v5 updates We've updated a lot of key features in the core sections of the AWS plugin, so we need to be sure it's working well before release. This issue will outline a set of tests that should be completed.
Credentials:
[ ] SSO credentials - 1 account
[ ] SSO credentials - 3 accounts with aggregator
[ ] EC2 IAM role - 1 account
[ ] EC2 IAM role - 3 accounts with aggregator
[ ] Access & Secret key pair in config - 1 account
[ ] Access & Secret key & Session token in config - 1 account
[ ] Other methods?
[ ] China account
[ ] localstack
Regions:
[ ] regions not set - should give results for default region only
[ ] regions = [ "eu-west-2" ] - should give results for eu-west-2 only
[ ] regions = [ "*" ] - should give results for all regions in partition
[ ] regions = [] - should give results for ???
[ ] regions = [], AWS_REGION=ap-south-1 - should give results for ap-south-1 only
[ ] regions = [], AWS_REGION=eu-west-1 - should give results for eu-west-1 only
[ ] regions = [ "us-*" ], AWS_REGION=eu-west-1 - should give results for eu-west-1 only
[ ] govcloud account, regions = [ "*" ] - should give results for all regions in partition
[ ] govcloud account, regions = [ "us-*" ]
Service regions:
[ ] govcloud - select * from aws_vpc should give results for all regions
[ ] govcloud - select * from <service_not_in_partition> should give zero results (no error)
Mods (run multi-account, multi-region, compare with results from old plugin):
[ ] thrifty mod
[ ] compliance mod
[ ] tagging mod
[ ] insights dashboards
I would add...
ap-southeast-4 (Melbourne - the newest region that requires opt-in)
ap-southeast-4 enabled in one account in the aggregate, but opted out in another account
Disable the STS interface for some regions in an account. STS Endpoints are enabled via the Account Page. Enable some, disable some. See what chaos ensues.
When you enable the opt-in regions there is a setting that has to occur to expand STS tokens. This impacts the ability of Account A in us-east-1 to assume role into Account B in ap-southeast-4.
I've added support for ap-southeast-4 to the branch. The default_region will be the main decision around STS tokens. The STS token is obtained using the default_region and then used in all query regions. I assume this will require the STS endpoint to be enabled for your default_region, and the setting to expand tokens may also be important.
Testing completed for https://github.com/turbot/steampipe-plugin-aws/commit/50f05e9458fc4d19c5de707d357e325af53063b1
gharchive/issue
2023-01-26T20:24:17
2025-04-01T04:36:08.894355
{ "authors": [ "cbruno10", "e-gineer", "jchrisfarris" ], "repo": "turbot/steampipe-plugin-aws", "url": "https://github.com/turbot/steampipe-plugin-aws/issues/1560", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1848636521
Add/Update support for Network Load Balancer security group association [Aug 10, 2023 announcement] Is your feature request related to a problem? Please describe. A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] Earlier, NLB did not support security group association; that support was removed recently by https://github.com/turbot/steampipe-plugin-aws/pull/1869 But on Aug 10, 2023, AWS announced support for security groups on Network Load Balancers. We may revisit this in the future to add the sec-grp back to the NLB table; however, we need to check Go SDK support for it. Describe the solution you'd like A clear and concise description of what you want to happen. Describe alternatives you've considered A clear and concise description of any alternative solutions or features you've considered. Additional context Add any other context or screenshots about the feature request here. @ParthaI After adding secgrp to the NLB, the current table still results in null:

], "name": "rk-delete-nlb", "partition": "aws", "region": "us-east-1", "scheme": "internet-facing", "security_groups": null, "state_code": "active", "state_reason": null, "tags": {}, "tags_src": [], "title": "rk-delete-nlb", "type": "network", "vpc_id": "vpc-02dsfdsfdsfdseba09d9" } ]

@rajlearner17, I've reviewed the matter, but regrettably, I wasn't able to reproduce the reported issue. On my end, I've successfully retrieved the security group information for the NLB.

> select name, arn, security_groups from aws_ec2_network_load_balancer
+--------------+----------------------------------------------------------------------------------------------------+--------------------------+
| name         | arn                                                                                                | security_groups          |
+--------------+----------------------------------------------------------------------------------------------------+--------------------------+
| test-network | arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxxx:loadbalancer/net/test-network/4ea4edb8c6c95df6 | ["sg-0ad2b2a8d29715554"] |
+--------------+----------------------------------------------------------------------------------------------------+--------------------------+
Time: 1.8s. Rows fetched: 1. Hydrate calls: 0.

> select name, arn, security_groups from aws_ec2_network_load_balancer where arn = 'arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxxx:loadbalancer/net/test-network/4ea4edb8c6c95df6'
+--------------+----------------------------------------------------------------------------------------------------+--------------------------+
| name         | arn                                                                                                | security_groups          |
+--------------+----------------------------------------------------------------------------------------------------+--------------------------+
| test-network | arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxxx:loadbalancer/net/test-network/4ea4edb8c6c95df6 | ["sg-0ad2b2a8d29715554"] |
+--------------+----------------------------------------------------------------------------------------------------+--------------------------+
Time: 1.8s. Rows fetched: 1. Hydrate calls: 0.

Is there any particular scenario in which we are getting the security_groups column value as null? It's working
gharchive/issue
2023-08-13T15:26:50
2025-04-01T04:36:08.900971
{ "authors": [ "ParthaI", "rajlearner17" ], "repo": "turbot/steampipe-plugin-aws", "url": "https://github.com/turbot/steampipe-plugin-aws/issues/1875", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
178758476
The groupby operation uses up whole system disk space I ran a groupby on a large SFrame table (i.e. foo.shape=(1183747, 3110)). After a while, my system disk / is full and the following errors pop up:

Traceback (most recent call last):
  File "test.py", line 134, in <module>
    'bs':gl.aggregate.CONCAT('b'),
  File "/home/.../anaconda2/lib/python2.7/site-packages/graphlab/data_structures/sframe.py", line 4651, in groupby
    group_ops))
  File "/home/.../anaconda2/lib/python2.7/site-packages/graphlab/cython/context.py", line 49, in __exit__
    raise exc_type(exc_value)
IOError: Fail to write. Disk may be full.: unspecified iostream_category error: unspecified iostream_category error

After the program dies, the disk space usage goes back to normal. Can I change the SFrame's cache location to somewhere else? (I have larger disks besides the system disk.) +1, I am facing this right now.
gharchive/issue
2016-09-23T01:00:20
2025-04-01T04:36:08.903184
{ "authors": [ "jonbakerfish", "korbonits" ], "repo": "turi-code/SFrame", "url": "https://github.com/turi-code/SFrame/issues/378", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
717111000
Include installation via CDN in readme With #37 the user can install the library via CDN. This should be mentioned in the installation section of the readme. Example CDN link: https://unpkg.com/@turingpointde/cvss.js@latest/dist/production.min.js Hi! 👋 I'd like to help, can you assign to me this issue? Sure @MrDoomy! Hello again @Fubinator 👋 I just create at PR right now. Let me know if it doesn't fit, I'll change that quickly 😉
gharchive/issue
2020-10-08T07:50:52
2025-04-01T04:36:08.905404
{ "authors": [ "Fubinator", "MrDoomy" ], "repo": "turingpointde/cvss.js", "url": "https://github.com/turingpointde/cvss.js/issues/39", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2760956379
Fix sqlite_version() out of bound panics #560 Changes to translate_expr function: core/translate/expr.rs: Changed the amount parameter in the Insn::Copy instruction from 1 to 0. Enhancements to the testing framework: testing/scalar-functions.test: Added a new test do_execsql_test_regex to validate that the sqlite_version function returns a valid output. testing/tester.tcl: Introduced a new procedure do_execsql_test_regex to support regex-based validation of SQL outputs. Thanks @diegoreis42!
gharchive/pull-request
2024-12-27T14:34:59
2025-04-01T04:36:08.973046
{ "authors": [ "diegoreis42", "penberg" ], "repo": "tursodatabase/limbo", "url": "https://github.com/tursodatabase/limbo/pull/561", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
315095305
Detect and report out of order pongs Related: https://github.com/turt2live/matrix-monitor-bot/issues/10 We no longer have pongs
gharchive/issue
2018-04-17T14:47:23
2025-04-01T04:36:08.976077
{ "authors": [ "turt2live" ], "repo": "turt2live/matrix-monitor-bot", "url": "https://github.com/turt2live/matrix-monitor-bot/issues/12", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1141055547
How can child nodes be reloaded and refreshed when a node is re-expanded? This function solves the problem: the first time the expand icon is clicked, the tree calls load to fetch child nodes from the server; the second time the expand icon is clicked, the tree should call load to fetch the child nodes from the server again (expected). Right now, the second click does not reload the child nodes (currently not achievable). Expected API: treeOption.shallowLoaded = false or rawNode.canReload = true For now you can remove the children after collapsing the node. I haven't thought of a particularly good way to achieve this effect yet; maybe I'll add a load-on-expand, let me try. Verified: it works by removing the children (delete them or set them to undefined) and then reloading, provided the parent node is first collapsed via expandKeys (set to the non-expanded state), otherwise an error is thrown. Collapsing is required, otherwise the tree wouldn't know what to display. Yes. related to https://github.com/tusen-ai/naive-ui/issues/2848
gharchive/issue
2022-02-17T09:06:07
2025-04-01T04:36:08.983734
{ "authors": [ "07akioni", "equt", "poerlang" ], "repo": "tusen-ai/naive-ui", "url": "https://github.com/tusen-ai/naive-ui/issues/2436", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1864484166
[Upload] Support drag-and-drop sorting when uploading multiple images with the Upload component This function solves the problem: after uploading multiple images, users want to adjust the order of the uploaded images. Expected API: add a draggable property to control whether items can be dragged, taking effect when multiple images are uploaded. Indeed, this property would make things a lot more convenient. I have the same need, and I ended up building one myself by combining another component with the Upload component.
gharchive/issue
2023-08-24T06:22:09
2025-04-01T04:36:08.985132
{ "authors": [ "DogeLasVegas", "Lgowen" ], "repo": "tusen-ai/naive-ui", "url": "https://github.com/tusen-ai/naive-ui/issues/5174", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1088824006
Support for OEM-built apps Hello, I'm the owner of a Visortech garage door opener which has to be connected via a different application. The app name is "ELESION" and it looks almost the same as the app "Tuya Smart". Even the QR code button (for registering the devices from the app to a Tuya IoT Platform project) looks the same. Tuya App ELESION When I try to scan the QR code, it says that this is not the designated app. I wonder if it is possible to get it working with the ELESION app or if the only option is to use the other app with the devices (I'm not sure if the Visortech device works with the other app). Hi @dominikjas, thanks for the feedback! We are planning to support Tuya OEM app login in the Tuya HA integration (OEM Tuya HA Integration Service), which can probably satisfy your requirements. Could you provide the following details and email them to [email protected] for further discussion? A brief introduction of your company (website, products, etc) The product list you would like to support in the OEM Tuya HA integration; you can check the Supported Device Category for reference. The scenario of using the Tuya HA integration The customization requirements for the Tuya HA integration (login interface, integration features, etc) Looking forward to your feedback!
gharchive/issue
2021-12-26T18:40:50
2025-04-01T04:36:09.041216
{ "authors": [ "dominikjas", "zlinoliver" ], "repo": "tuya/tuya-home-assistant", "url": "https://github.com/tuya/tuya-home-assistant/issues/751", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2173606932
Device Pairing (iOS) Following on from the previous issue, where I said that we successfully initialized the Tuya SDK (in a clean project created via Xcode), we then went further to Device Pairing. Now we are stuck on Device Pairing. We tried EZ and AP modes, and in both cases we get a timeout from the Tuya SDK, but the lamp changes its behaviour, so for example for .AP mode, the behaviour and code look like this:

1) When the lamp starts blinking
2) the iPhone sits on the router wifi (SSID: flat_463)
3) We run getToken(), providing the previously created homeId (from the addHome function)
4) the token is successfully gathered, and once it is gathered, we connect to the access point created by the lamp, then we do the following:
4.1) stay on the router wifi -> get timeout
4.2) stay on the lamp wifi -> get timeout
4.3) stay on the lamp wifi, wait 50 secs, then switch to the router wifi (to get an internet connection) -> get timeout

We use this code (where I mark ✅ - this code is running):

```swift
func addHome() {
    let latitude: CLLocationDegrees = 0
    let longitude: CLLocationDegrees = 0
    let homeManager = ThingSmartHomeManager()
    homeManager.addHome(withName: "you_home_name", geoName: "city_name", rooms: ["room_name"], latitude: latitude, longitude: longitude, success: { (homeId) in
        print("add home success, homeId: \(homeId)")
        self.homeId = homeId
    }) { (error) in
        if let error {
            print("add home failure: \(error)")
        }
    }
}

func getToken(homeId: Int64) {
    let ssid = "flat_463" // provide router ssid
    let password = "love_infinity" // provide router password
    apActivator.getTokenWithHomeId(homeId, success: { token in
        print("getToken success: \(String(describing: token))") // ✅
        self.pairingToken = token
        if let token {
            self.startConfigWiFi(withSsid: ssid, password: password, token: token)
        }
    }, failure: { error in
        print("getToken failure: \(String(describing: error?.localizedDescription))")
    })
}

func startConfigWiFi(withSsid ssid: String, password: String, token: String) {
    apActivator.delegate = self
    apActivator.startConfigWiFi(.AP, ssid: ssid, password: password, token: token, timeout: 100)
}

// also, as the docs say, we use ThingSmartActivatorDelegate and override the activator functions
extension SmartLightManager: ThingSmartActivatorDelegate {
    func activator(_ activator: ThingSmartActivator!, didReceiveDevice deviceModel: ThingSmartDeviceModel!, error: Error!) {
        print("\(TAG) activator()")
        if deviceModel != nil && error == nil {
            print("\(TAG) devModel.")
            initDevice(devId: deviceModel.devId)
            print("\(TAG) The device is paired.")
        }
        if let e = error { // ✅
            print("\(TAG) Failed to pair the device.")
            print("\(TAG) \(e)")
        }
    }

    func activator(_ activator: ThingSmartActivator!, didPassWIFIToSecurityLevelDeviceWithUUID uuid: String!) {
        print("\(TAG) didPassWIFIToSecurityLevelDeviceWithUUID: \(String(describing: uuid))")
        self.apActivator.continueConfigSecurityLevelDevice()
    }
}
```

When we use the Tuya SDK and connect to the lamp's wifi, the lamp stops blinking; when we do not use the Tuya SDK, the lamp keeps blinking for more than 3 minutes. This behaviour, in my opinion, suggests that something is happening (so the lamp gets some connection with the Tuya SDK), but the Tuya SDK always gives a timeout anyway :( We tried it by following these docs: https://developer.tuya.com/en/docs/app-development/activator?id=Ka5cgmlzpfig4 @b0r1ngx Timeout means the device got the wifi ssid and password but cannot connect to the internet:
1. Please check if your Wi-Fi account and password are correct
2. Confirm that your Wi-Fi is 2.4G
3. You can use the Tuya Smart app for network configuration to confirm that Wi-Fi is unobstructed
If there are any issues, please let us know. I will close this issue.
If there are any issues, you can continue to provide feedback
gharchive/issue
2024-03-07T11:09:02
2025-04-01T04:36:09.047365
{ "authors": [ "b0r1ngx", "taojingGino" ], "repo": "tuya/tuya-home-ios-sdk-sample-swift", "url": "https://github.com/tuya/tuya-home-ios-sdk-sample-swift/issues/22", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
981774783
feat: add singleTimePicker component Owner [ ] Nickname(Tuya): Sophia Description tuya-panel-lamp-sdk add SingleTimePicker component Package [X] tuya-panel-lamp-sdk Type of change [X] New feature (non-breaking change which adds functionality) CheckList Don't edit this section [X] ESLint lint Codecov Report Merging #84 (48482de) into main (44b665d) will increase coverage by 0.07%. The diff coverage is 71.73%.

@@            Coverage Diff             @@
##             main      #84      +/-  ##
==========================================
+ Coverage   61.13%   61.21%   +0.07%
==========================================
  Files         164      165       +1
  Lines        6286     6332      +46
  Branches     1424     1436      +12
==========================================
+ Hits         3843     3876      +33
- Misses       2439     2452      +13
  Partials        4        4

Impacted Files: ...k/src/components/time/single-time-picker/index.tsx - Coverage 71.73% <71.73%> (ø)
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 44b665d...48482de.
gharchive/pull-request
2021-08-28T08:30:07
2025-04-01T04:36:09.057238
{ "authors": [ "codecov-commenter", "sunny-ali" ], "repo": "tuya/tuya-panel-sdk", "url": "https://github.com/tuya/tuya-panel-sdk/pull/84", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1746775873
🛑 UrbanAssets-BackOffice is down In d58dd4b, UrbanAssets-BackOffice (https://urbanassets.company) was down: HTTP code: 0 Response time: 0 ms Resolved: UrbanAssets-BackOffice is back up in dc11e7c.
gharchive/issue
2023-06-07T22:32:40
2025-04-01T04:36:09.060184
{ "authors": [ "tuyencaovn" ], "repo": "tuyencaovn/uauptime", "url": "https://github.com/tuyencaovn/uauptime/issues/17", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
118433922
[Reboot] Support for transparent button backgrounds in firefox Hi, SuitCSS's base module contains the following snippet to support focus styles in firefox when the background is transparent:

```css
/**
 * Work around a Firefox/IE bug where the transparent `button` background
 * results in a loss of the default `button` focus styles.
 */
button:focus {
  outline: 1px dotted;
  outline: 5px auto -webkit-focus-ring-color;
}
```

Would this be appropriate for bootstrap to include? Cheers, Ole SuitCSS Base also includes this:

```css
/**
 * Suppress the focus outline on elements that cannot be accessed via keyboard.
 * This prevents an unwanted focus outline from appearing around elements that
 * might still respond to pointer events.
 */
[tabindex="-1"]:focus {
  outline: none !important;
}
```

@oleersoy Open a separate issue please. Done - I deleted the original - here's the link to the new one: https://github.com/twbs/bootstrap/issues/18330 Relevant git blame: https://github.com/suitcss/base/commit/88853f550c7cb421852f9ad0aa70e896e7e92f82 Is this still applicable to the versions of IE and FF we support? I see no details on what that note applies to after a quick look around SUIT. Short answer yes. Here's how I tested it using firefox version 42 via the SUIT base test case:

git clone https://github.com/suitcss/base
cd base
npm run setup
npm run build
npm run build-test
firefox test/index.html

Look at test 6.2 has focus styles (in Firefox and IE). Open the firefox developer tools (ctrl-shift-I). Click on the style tab. Delete the following css (lines 568 - 579):

```css
button:focus {
  outline: 1px dotted;
  outline: 5px auto -webkit-focus-ring-color;
}
```

Now attempt to focus the test button. Confirmed in Firefox 44.0a2 (2015-12-06). Plunk: http://plnkr.co/edit/lCcQce46VJE1NWVMHJV6?p=preview Hi @cvrebert! You appear to have posted a live example (http://run.plnkr.co/plunks/lCcQce46VJE1NWVMHJV6/), which is always a good first step. However, according to Bootlint, your example has some Bootstrap usage errors, which might potentially be causing your issue:

W001: <head> is missing UTF-8 charset <meta> tag
W002: <head> is missing X-UA-Compatible <meta> tag that disables old IE compatibility modes
W003: <head> is missing viewport <meta> tag that enables responsiveness
line 138, column 5: W007: Found one or more <button>s missing a type attribute.

You'll need to fix these errors and post a revised example before we can proceed further. Thanks! (Please note that this is a fully automated comment.) Fixed in c9eb483.
gharchive/issue
2015-11-23T17:44:40
2025-04-01T04:36:09.082457
{ "authors": [ "cvrebert", "mdo", "oleersoy", "twbs-lmvtfy" ], "repo": "twbs/bootstrap", "url": "https://github.com/twbs/bootstrap/issues/18320", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
228142779
docs: have a "jump to top" icon I've been using the docs a lot lately and for most part think they work well except for all the manual scrolling back to the top. I'd like to request a "jump to top" icon to help save time in this activity. @Johann-S I was thinking of taking on this issue, but I noticed the docs have a "Back to top" button in the sidebar when scrolling down. Does this solve the issue or should I improve it by changing it to an icon or have it floating? I don't see any "back to top" link on our v4 documentation, see for example : http://v4-alpha.getbootstrap.com/components/alerts/ Closing out, no plans for this. I have some design updates in mind for the docs to help.
gharchive/issue
2017-05-11T22:48:50
2025-04-01T04:36:09.084996
{ "authors": [ "Johann-S", "ScotsScripts", "edanbarak", "mdo" ], "repo": "twbs/bootstrap", "url": "https://github.com/twbs/bootstrap/issues/22604", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
815532391
Bootstrap Reboot 4.x CSS contains duplicate text-decoration style for abbr[title] The generated reboot css file contains a duplicate property: https://github.com/twbs/bootstrap/blob/056216a3bd2fd1f28ba9ec6f2797aa2aaec5c6f0/scss/_reboot.scss#L145-L146 CSS allows duplicate property names but only the last instance of a duplicated name determines the actual value that will be used for it. Therefore, changing values of other occurrences of a duplicated name will have no effect and may cause misunderstandings and bugs. Operating system and version: All/Unrelated Browser and version: All/Unrelated Suggested Fix: Remove the duplicate that doesn't align with the vision of the product Hi there, thanks for reporting! It was done on purpose, since text-decoration as a shorthand had very poor support back in the days: the first declaration was a fallback value for non-supporting browsers (which is a very common pattern for progressively enhancing some properties, allowing modern values). However, after checking support on MDN, it appears that our current targets all support the shorthand value. I think it's now safe to drop the first one in v5 then. Feel free to suggest a patch, or we'll take care of it when possible.
gharchive/issue
2021-02-24T14:30:21
2025-04-01T04:36:09.088532
{ "authors": [ "ffoodd", "matosconsulting" ], "repo": "twbs/bootstrap", "url": "https://github.com/twbs/bootstrap/issues/33197", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1011130136
Wrong parameter constant in append() function breaking certain SCSS compilers (easy fix) Describe the issue The mixin _box-shadow.scss uses quotes for the 3rd argument "comma" inside the append() function. This is breaking certain SCSS compilers such as ScssPhp, resulting in an invalid CSS statement. Solution: Removing the double-quotes resolves the issue (the specs always define comma, space and auto without double-quotes for this function). What version of Bootstrap are you using? Bootstrap v5.1.1 libsass v3.5.5 (works) scssphp v1.7.0 (resulting in an invalid CSS statement) You don't need to "make changes to support other compilers", as in the current code-base you're using the append() function already WITHOUT double-quotes around the string, for example here: https://github.com/twbs/bootstrap/blob/b991a6b8510216ba25cff02e0cc6c338f9f76113/scss/_functions.scss#L71 So the fix above is only to make the function calls consistent, using the 3rd parameter without double-quotes everywhere. Please re-open!
gharchive/issue
2021-09-29T15:48:24
2025-04-01T04:36:09.092527
{ "authors": [ "Moongazer" ], "repo": "twbs/bootstrap", "url": "https://github.com/twbs/bootstrap/issues/35080", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
438755262
Add 19.03 AMIs This PR adds 19.03 AMIs, since they have been released. It also uses the opportunity to fix broken example. good stuff, thanks!
gharchive/pull-request
2019-04-30T12:35:39
2025-04-01T04:36:09.104751
{ "authors": [ "knl", "zimbatm" ], "repo": "tweag/terraform-nixos", "url": "https://github.com/tweag/terraform-nixos/pull/5", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
908648946
twig.js contains a reference to the file "path". When upgrading the package @symfony/webpack-encore from 0.28.2 to 1.0.0 I get the following error: "./node_modules/twig/twig.js" contains a reference to the file "path". Apparently some shared library is bundled by webpack-encore 0.28.2, but webpack-encore 1.0.0 no longer includes it, breaking twig.js. Hi, Just in case, you can import the minified version. import Twig from 'twig/twig.min'; Regards I do not work at the company anymore. But I'll ask someone to post more details here. It may take a while though. Ok, understood. I'll close for now; if it's still an issue for anybody, it can be re-opened. Hi all, I'm encountering this issue with Encore as well: Module build failed: Module not found: "./node_modules/twig/twig.js" contains a reference to the file "path". This file can not be found, please check it for typos or update it if the file got moved. Looks like line 435 of the ./node_modules/twig/twig.js file: module.exports = require("path"); Sure, I'm using @symfony/webpack-encore version 1.8.2 with Symfony, and yarn as my package manager. Let me know if there's anything more specific you'd need to help recreate it. I can provide my yarn.lock file if that helps. Here is the section of my webpack.config.js I added to load twig js:
// Adds TwigJs loader.
.addLoader({
  test: /\.twig$/,
  exclude: /templates/,
  use: [
    {
      loader: "twig-loader",
      options: {},
    },
  ],
})
Here's the yarn.lock file yarn.lock.txt From your descriptions this seems like an issue external to twig.js. The library is built to work primarily with node, which has the path builtin. If you are compiling using something like webpack, you need to make sure that things like path are shimmed. Overall though, I'm not sure you are using things correctly, since twig.js and twig.min.js are already built using webpack and not meant to be imported and compiled again. I would double-check the docs for the loader; my guess is that a different file is supposed to be imported. Thanks, I'll check those things out. I noticed that TwigJs is actually working despite the error, so you're probably right about that. Hi, In my case, the environment is React Native, where path is not built-in. Using the minified version is OK, but probably not a stable way to handle it. Regards @Dallas62 interesting, as far as I know this library has never been tested with React Native, so compatibility with it would be unknown.
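For anyone hitting this under webpack 5 (which Encore 1.x is based on), a sketch of the shimming the maintainer mentions. The resolve.fallback option is webpack 5's mechanism for Node builtins, but the path-browserify package and whether your Encore version exposes this hook directly are assumptions to verify:

```js
// webpack.config.js (or applied through Encore's config hooks)
module.exports = {
  resolve: {
    fallback: {
      // Map the Node `path` builtin to a browser implementation
      // (requires `npm install path-browserify`)...
      path: require.resolve('path-browserify'),
      // ...or stub it out entirely if twig.js never calls it at runtime:
      // path: false,
    },
  },
};
```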
gharchive/issue
2021-06-01T20:08:18
2025-04-01T04:36:09.117520
{ "authors": [ "Dallas62", "hoffmanmc", "leonardola", "willrowe" ], "repo": "twigjs/twig.js", "url": "https://github.com/twigjs/twig.js/issues/787", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
728607782
model: add Option for CurrentUser::verified The response from a current_user request (with the identify scope) is as follows: { "avatar": "a_05dcf66dd5961e3147bfdbef64282d1a", "discriminator": "4191", "flags": 768, "id": "168827261682843648", "locale": "en-US", "mfa_enabled": true, "premium_type": 2, "public_flags": 768, "username": "DusterTheFirst" } (also see the discord docs for more info) But the CurrentUser struct in the model requires a verified member, which is only available with the email scope. I have chosen to make it an Option<bool> versus adding #[serde(default)] like with the bot bool, since verified: false has a very different meaning than verified: None. What is the difference between false and None? What is the difference between false and None? Basically what Vivian said in the discord, but None in this case means that the http client doesn't have permission to access that data, versus false meaning it does have access and the user is not verified. If it was only a bool, any request without the email scope would show the user as not verified, even if they are.
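A minimal sketch of the shape being discussed, showing only the serde field handling rather than the full twilight model:

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
pub struct CurrentUser {
    pub id: String,
    pub username: String,
    pub discriminator: String,
    // Absent and `false` mean the same thing for `bot`, so a default is fine.
    #[serde(default)]
    pub bot: bool,
    // Only present with the `email` scope. `None` means "unknown" (scope
    // not granted), which is distinct from `Some(false)` ("known to be
    // unverified"); hence Option<bool> rather than a defaulted bool.
    pub verified: Option<bool>,
}
```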
gharchive/pull-request
2020-10-24T00:06:32
2025-04-01T04:36:09.125257
{ "authors": [ "7596ff", "DusterTheFirst" ], "repo": "twilight-rs/twilight", "url": "https://github.com/twilight-rs/twilight/pull/564", "license": "ISC", "license_type": "permissive", "license_source": "github-api" }
651227010
feat(pixels): add my new pixel (49, 49) Checklist [x] I ran npm test locally and it passed without errors. [x] I only edited the _data/pixels.json file. [x] I entered the username in the pixels.json that I'm also using to create this pull request. [x] I acknowledge that all my contributions will be made under the project's license. Hi @AdamWGrise it seems like you already contributed a pixel so I'm closing this PR. Thanks!
gharchive/pull-request
2020-07-06T03:50:11
2025-04-01T04:36:09.128140
{ "authors": [ "AdamWGrise", "dkundel" ], "repo": "twilio-labs/open-pixel-art", "url": "https://github.com/twilio-labs/open-pixel-art/pull/2309", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2518803918
Fix 'Num Predict Fim' for LMStudio and Oobabooga API OpenAI-like APIs expect the 'max_tokens' parameter instead of 'n_predict'. Current behavior is: twinny sends: Request body: { "prompt": "<fim_prefix>\nexample:<fim_suffix><fim_middle>", "stream": true, "temperature": 0.2, "n_predict": 128 } Oobabooga starts inference with 'max_new_tokens': 16. Expected behavior is: Oobabooga starts inference with 'max_new_tokens': 128. I'm not sure if the error applies to LMStudio, but I couldn't find any mentions of 'n_predict' in its APIs. P.S. I haven't tested this MR 😉 LGTM, thanks!
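A hedged sketch of the fix in TypeScript; the names here are illustrative rather than twinny's actual internals, and the provider check is simplified:

```ts
interface FimRequestBody {
  prompt: string;
  stream: boolean;
  temperature: number;
  n_predict?: number;  // llama.cpp-style servers
  max_tokens?: number; // OpenAI-like servers (LM Studio, Oobabooga)
}

function buildFimBody(
  prompt: string,
  numPredictFim: number,
  openAiLike: boolean
): FimRequestBody {
  const body: FimRequestBody = { prompt, stream: true, temperature: 0.2 };
  if (openAiLike) {
    body.max_tokens = numPredictFim; // honored by OpenAI-style APIs
  } else {
    body.n_predict = numPredictFim; // honored by llama.cpp
  }
  return body;
}
```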
gharchive/pull-request
2024-09-11T07:26:12
2025-04-01T04:36:09.164698
{ "authors": [ "AndrewRocky", "rjmacarthy" ], "repo": "twinnydotdev/twinny", "url": "https://github.com/twinnydotdev/twinny/pull/311", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
260388676
Click previous or next month's date in current view messes up calendar In the Month view, if you click on a grey previous-month day or next-month day from the current calendar month view, it takes you to that month, but the selected day starts drifting. Then keep trying it with a previous month's day, and it seems to get further out of sync. @abouttimeruss I can't reproduce this issue. Could you provide more details? Have you made any customization of the calendar? Do you meet the same issue on the Demo page? http://twinssbc.github.io/Ionic-Calendar/demo/#/tab/home Works. I needed range-changed="reloadSource(startTime, endTime)" declared in my calendar tag.
gharchive/issue
2017-09-25T19:28:31
2025-04-01T04:36:09.166547
{ "authors": [ "abouttimeruss", "twinssbc" ], "repo": "twinssbc/Ionic-Calendar", "url": "https://github.com/twinssbc/Ionic-Calendar/issues/160", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1072027476
Search for tweets in a language?
Command Ran
c = twint.Config()
c.TwitterSearch = True
c.Limit = 1000
c.Lang = 'ar'
c.Since = '2016-01-01'
c.Search = ''
Description of Issue
I need to search for tweets in a given language without any keyword: any tweet in that language, not tweets matching particular keywords. What should I put in c.Search?
Hey, I have a workaround: seed the search with characters of the language, like this:
c.Search = 'أ OR إ OR ه'
But if there is another solution it would be great to know. Thanks.
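Putting the workaround together as a complete script; the character list seeds the query so Twint has something to search for, while c.Lang filters the results:

```python
import twint

c = twint.Config()
c.TwitterSearch = True
c.Lang = 'ar'
c.Since = '2016-01-01'
c.Limit = 1000
# Common characters of the target language act as a catch-all query.
c.Search = 'أ OR إ OR ه'
c.Store_csv = True
c.Output = 'tweets_ar.csv'
twint.run.Search(c)
```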
gharchive/issue
2021-12-06T11:14:25
2025-04-01T04:36:09.171563
{ "authors": [ "Abdelrahmanrezk" ], "repo": "twintproject/twint", "url": "https://github.com/twintproject/twint/issues/1308", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
545406373
problem in fetching tweets
Command Ran
c = twint.Config()
path = '/user/test/'
c.Output = path + 'tweets.csv'
c.Limit = 1000
c.Lang = 'en'
c.Store_csv = True
c.Search = '#hashtag'
twint.run.Search(c)
Description of Issue
After running the above code to get 1000 tweets with a particular hashtag, I am getting only a few tweets (60 or fewer). The error I am getting is:
CRITICAL:root:twint.run:Twint:Feed:Tweets_known_error:Expecting value: line 1 column 1 (char 0)
Expecting value: line 1 column 1 (char 0) [x] run.Feed [!] if get this error but you know for sure that more tweets exist, please open an issue and we will investigate it!
Environment Details
Using Windows 8.1 Pro, 64-bit OS, running this in Anaconda, Spyder.
#604 — this already-reported issue is even pinned
gharchive/issue
2020-01-05T11:41:58
2025-04-01T04:36:09.174806
{ "authors": [ "deepalidhaka", "pielco11" ], "repo": "twintproject/twint", "url": "https://github.com/twintproject/twint/issues/632", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
619734143
Is it possible to capture and log the amount of data transfer through the proxy? I'm setting up a proxy to enable routing client requests to a particular bucket/path in AWS S3. Is it possible to log the amount of data transferred through the proxy? If so, are there any examples of how this might be done? Hi @bnssoftware, you can use WithAfterReceive on the HttpProxyOptions to inspect the response as it comes through. I'm not sure how you would want to design it, since reading from the response stream requires that you repopulate it (the stream is gone once you read it). However, that is the best place to inspect the response or its size. An example of WithAfterReceive is in the README.
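A hedged sketch of the maintainer's suggestion, logging the response size from WithAfterReceive. Buffering the body as shown is illustrative only: it drops the original content headers unless you copy them over, and it is costly for large S3 objects, so prefer the Content-Length header when present:

```csharp
var options = HttpProxyOptionsBuilder.Instance
    .WithAfterReceive(async (context, response) =>
    {
        long? length = response.Content?.Headers?.ContentLength;
        if (length == null && response.Content != null)
        {
            // Read the body, then repopulate it so it can still be forwarded.
            var bytes = await response.Content.ReadAsByteArrayAsync();
            length = bytes.Length;
            response.Content = new ByteArrayContent(bytes);
        }
        Console.WriteLine($"Proxied {length ?? 0} bytes for {context.Request.Path}");
    })
    .Build();
```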
gharchive/issue
2020-05-17T15:36:12
2025-04-01T04:36:09.202999
{ "authors": [ "bnssoftware", "twitchax" ], "repo": "twitchax/AspNetCore.Proxy", "url": "https://github.com/twitchax/AspNetCore.Proxy/issues/50", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
615933867
Support for custom HTTP transcoding Would Twirp consider supporting custom HTTP transcoding via Google's HttpRule in protobuf? Reason: We use Twirp internally for microservices, but some of these services need to expose custom, strictly REST, interfaces for external customers. For example, REST endpoints like: POST /users GET /users/:id GET /users to map to Proto: service Users { rpc Create(..) returns (..); rpc Get(..) returns (..); rpc List(..) returns (..); } As discussed in #239, Twirp will likely not support mechanisms for this, but folks at Twitch have written code to translate REST-style HTTP requests to Twirp requests, and Twirp responses back to REST-style HTTP responses. The complexity of presenting a Twirp service for consumption over REST-style HTTP seems best implemented as a standalone package or tool.
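A hedged sketch of the standalone-shim idea in Go: a plain net/http mux that maps REST routes onto a generated Twirp client. UsersClient, GetRequest, and NewUsersProtobufClient stand in for Twirp's generated code here and are illustrative, not real:

```go
package main

import (
	"encoding/json"
	"net/http"
	"strings"
)

// restShim adapts REST-style routes onto the Twirp Users client.
func restShim(users UsersClient) http.Handler {
	mux := http.NewServeMux()
	mux.HandleFunc("/users/", func(w http.ResponseWriter, r *http.Request) {
		id := strings.TrimPrefix(r.URL.Path, "/users/")
		if r.Method != http.MethodGet || id == "" {
			http.NotFound(w, r)
			return
		}
		// GET /users/:id maps onto the Users.Get RPC.
		resp, err := users.Get(r.Context(), &GetRequest{Id: id})
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		_ = json.NewEncoder(w).Encode(resp)
	})
	return mux
}
```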
gharchive/issue
2020-05-11T14:34:34
2025-04-01T04:36:09.207532
{ "authors": [ "amitmahbubani", "dpolansky" ], "repo": "twitchtv/twirp", "url": "https://github.com/twitchtv/twirp/issues/238", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1084468672
Add an enum for Keyboard Keycodes like there is for MediaKey and SystemControlKey Hi, I think it would be a more straightforward API to have an enum for the different keyboard scancodes. I am happy to contribute such an enum - I would just populate it with the contents of https://usb.org/sites/default/files/hut1_22.pdf#page=83 right? This is a somewhat complicated problem because what the labels are depends on the locale of the computer (yes, you can just say that US is the default and the rest of the locales have to do their own lookups), but it's rather confusing. The way I handle this is a bit different, where I have this library https://github.com/hid-io/layouts-rs that pulls in from a database of json files and constructs the lookups on demand. My use case is mainly for a compiler though and isn't directly used with usbd-hid. I don't really see a good way of handling this directly with enums beyond a bunch of helper functions that can convert the enums to the appropriate locales. Should be resolved now at least for US keyboards (which is the default per https://usb.org/sites/default/files/hut1_3_0.pdf).
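For reference, a minimal sketch of what such an enum looks like, seeded with a few entries from the HID Usage Tables (Keyboard/Keypad page 0x07); the full table runs to hundreds of entries, and the labels below use the US-default names per the locale caveat above:

```rust
#[repr(u8)]
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum KeyboardUsage {
    KeyboardAa = 0x04,           // Keyboard a and A
    KeyboardBb = 0x05,           // Keyboard b and B
    Keyboard1Exclamation = 0x1E, // Keyboard 1 and !
    KeyboardEnter = 0x28,        // Keyboard Return (ENTER)
    KeyboardEscape = 0x29,       // Keyboard ESCAPE
}
```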
gharchive/issue
2021-12-20T07:53:30
2025-04-01T04:36:09.210613
{ "authors": [ "TheButlah", "haata" ], "repo": "twitchyliquid64/usbd-hid", "url": "https://github.com/twitchyliquid64/usbd-hid/issues/35", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
3002978
[2.0-wip] Quote attribute values in CSS Some attribute values are quoted whereas others are not - type=checkbox vs type="search". Apparently it's safest to quote by default to avoid any potential for suffering. Safest in what way? This post by Mathias has some good details on the specifics - http://mathiasbynens.be/notes/unquoted-attribute-values Done deal, I'm sold—just updated all *=* selectors to be *="*". I've added this to the wiki on contributing to Bootstrap as well: https://github.com/twitter/bootstrap/wiki/Contributing-to-Bootstrap Thanks, @necolas!
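A small illustration of what quoting guards against (the selectors are examples, not Bootstrap source):

```css
/* Unquoted values parse today... */
input[type=checkbox] { margin: 4px; }

/* ...but fail as soon as the value contains characters that are invalid
   unquoted, such as the slashes here. Quoting is always safe: */
a[href^="http://"] { color: #0069d6; }
input[type="checkbox"] { margin: 4px; }
```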
gharchive/issue
2012-01-28T02:34:39
2025-04-01T04:36:09.213575
{ "authors": [ "markdotto", "necolas" ], "repo": "twitter/bootstrap", "url": "https://github.com/twitter/bootstrap/issues/1325", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
3066350
Package Docs in Download For Bootstrap v1.4 you were very kind and packaged all the documentation along with the framework in one easy zip file. I thought you might do the same for v2.0, so I downloaded the zip file again, but this time it contained no documentation. Could you please package it again? I can't be the only one who works offline often enough to need to download the docs with the rest of the resources. If you download the ZIP from GitHub directly, you'll get the docs. If you click the download button on our docs or use the Customize page, you'll get just the assets. I'll talk to @fat about handling this better in 2.1. Fantastic, thanks for the swift reply! Even just adding a "download with documentation" link to the homepage, pointing at the GitHub download URL, would be good for other people.
gharchive/issue
2012-02-02T11:19:02
2025-04-01T04:36:09.215530
{ "authors": [ "lenary", "markdotto" ], "repo": "twitter/bootstrap", "url": "https://github.com/twitter/bootstrap/issues/1591", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
165212855
Heron native mesos scheduler Add a heron native mesos scheduler and unit tests. This scheduler aims to help people get started with heron on mesos; it is not for production. It has a lot of caveats; for instance, it is not fully fault tolerant, not scalable, etc. In this implementation, heron starts the MesosFramework as a background process on the host doing the submission, communicates with the mesos master directly, and does the actual scheduling. So: It assumes the host doing the submission is in the same network as the mesos cluster. It assumes the heron mesos scheduler is long running. The heron mesos scheduler will re-schedule any failed executors. But if the heron mesos scheduler fails for any unexpected reason, it will not be recovered and the whole topology will halt. @maosongfu - why the claim that it is not scalable? I understand the fault tolerance aspect of it, but I am not sure about the scalability part. @kramasamy Scalability can be considered from different points of view. For instance, this scheduler does not cache offers, which means it may take a long time to schedule a big topology, since the tasks need to be scheduled one by one. @maosongfu - however, when the job is scheduled, it knows upfront the number of containers and the resources it requires. As Mesos keeps giving those offers, we could grab those resources and schedule containers. @kramasamy If Mesos gives out offers slowly, it will take a long time to receive enough offers to fully schedule a big topology. Also, that is just one case. There are still a lot of potential improvements for scalability. Sounds good to me. Let us wait until @ashvina takes a look at it. LGTM 👍 @maosongfu - can we move this scheduler to the contrib directory since it is experimental and also does not cover fault tolerance? @kramasamy It is not experimental; it is a scheduler aimed at getting started, similar to LocalScheduler. Should we also move all schedulers to the contrib/ directory? Or we could create a new repo called heron-schedulers, similar to storm or presto, since they are all on top of heron. One of the models that we discussed was the possibility of using a contrib directory - code is allowed to be checked into that directory, especially experimental / getting-started code. Once it attains production quality, we move that code from contrib to the main directories. This will allow for faster iteration in contrib, rather than the PR waiting to be merged for a long time. This is a model that is followed in other open source projects such as Hadoop. Looks like a good idea IMO. @kramasamy On the one hand, Hadoop may not be a good example to follow; people always complain about its huge mono-repo. More recent GitHub projects, for instance presto, prefer to split into small sub-repos, making each one clear. On the other hand, looking at some popular open source projects, I could not see a "contrib" folder: jQuery, MongoDB, Redis, Puppet, Ruby on Rails, Jenkins. Django has a contrib folder, and the role of the folder is already explained here: https://docs.djangoproject.com/en/dev/ref/contrib/ Django aims to follow Python's "batteries included" philosophy. It ships with a variety of extra, optional tools that solve common Web-development problems. This code lives in django/contrib in the Django distribution. This document gives a rundown of the packages in contrib, along with any dependencies those packages have. So, I am not convinced that the contrib folder is for experimental / getting-started code.
I think it is more for tools built on top of the core part. To summarize, it could be a good idea to move all scheduler implementations and similar things to an individual repo (whether to an individual folder is debatable). But whether to move them depends on the nature of those projects, rather than their quality. @kramasamy @maosongfu I agree with @maosongfu now. I had a couple of conversations with some OSS developers here. A contrib directory is primarily used for "add-on" features, e.g. scripts and tools for debugging. There are a few projects where the contrib directory is used for unimportant and unsupported code. However, that may result in hard-to-manage "trash" and may be discouraging for new contributors. A scheduler implementation is core to Heron. So I think the Mesos scheduler should be added to core. However, I am not sure how to manage "not-for-production" contributions. An experimental feature may starve for users if left in its own branch. So I would prefer merging them into master. But how will a user know if they are using an experimental feature? Should we print a warning on the console? With respect to this patch, I think a few minor issues are pending, in addition to the fault tolerance cases. Also, I may not be able to deploy and test this on a mesos cluster. The scheduler code seems fine now. There are a few TODO items. Are you planning to create issues to help track them? @kramasamy @maosongfu I have one orthogonal question related to the Mesos scheduler. Apache REEF seems to support Mesos also. The YARN scheduler is based on REEF. I think a lot of code could be reused if we developed a Mesos scheduler based on REEF. What do you think about it? I can give it a shot if you'd like. @ashvina Issue created: #1077 I have also heard that Apache REEF supports Mesos, but I could not find enough documentation or status on that. It would be great if we could make heron run on Mesos based on REEF at production quality. I just don't know how hard it is or how long it will take. @ashvina @kramasamy @nlu90 Any more concerns I need to handle before merging this pull request? @maosongfu @ashvina - I am fine with getting Mesos into the core rather than contrib. As a policy we can adopt the following (LMK if this is ok): the contrib directory will contain features/debugging tools/other software that are add-ons to heron core; core software will continue to be developed in master and can reside on its own branches/forks until it attains quality, before merging into master. Any other items not covered by these two? @ashvina - Regarding Mesos on Apache REEF, we are good if the project supports it and it is to some extent proven in production. The key aspect with Mesos is fault tolerance - Mesos has one way of doing this.
gharchive/pull-request
2016-07-13T00:41:25
2025-04-01T04:36:09.228276
{ "authors": [ "ashvina", "kramasamy", "maosongfu", "nlu90" ], "repo": "twitter/heron", "url": "https://github.com/twitter/heron/pull/1067", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
75251130
Remove '.js' suffix from AMD module ID. Fixes #1211. :+1: Can confirm that this fixes #1211. Would like to see this merged!
gharchive/pull-request
2015-05-11T14:59:13
2025-04-01T04:36:09.274477
{ "authors": [ "sirianni", "venyii" ], "repo": "twitter/typeahead.js", "url": "https://github.com/twitter/typeahead.js/pull/1227", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2053918862
Non-fatal bug, just letting you know Hi @twocolors This does not break anything, it just sends a message to the debug sidebar. When I deploy a flow, I get an error message in the debug window, for example: Entity (2974081917) not found on device However, this does not stop the ESPHome node from working, and my automations still work correctly despite this message. Thank you for your continued work on this. Show a screenshot from the node's output. And do you use full deploy, or modified deploy? Hi, sorry I am slow to respond, there's a lot going on. This is the output in debug:
2023/12/29, 07:20:57 node: Front High msg: string[39] "Entity (2974081917) not found on device"
2023/12/29, 07:20:57 node: Rear High msg: string[39] "Entity (2680563962) not found on device"
2023/12/29, 07:20:57 node: Front Low msg: string[38] "Entity (1088152393) not found on device"
2023/12/29, 07:20:57 node: Rear Low msg: string[38] "Entity (883961372) not found on device"
So it is one ESPHome device that controls 4 sets of floodlights: Front High, Rear High, Front Low, and Rear Low. They are circuits that use different amounts of power. This node works as expected. But the error messages appear when I press Deploy. Also, I upgraded now to 0.25, and I had to delete and re-add my ESPHome devices. It is working again 100%, but, just telling you. Thank you for your work on this project. Do you use full deploy, or modified deploy? This is happening to me too. It happens on Full Deploy or when Node-RED starts, but not on Modified Deploy. https://github.com/twocolors/node-red-contrib-esphome/issues/15#issuecomment-1493763057 read this answer https://github.com/twocolors/node-red-contrib-esphome/issues/15#issuecomment-1493763057 Hi @twocolors, Again, sorry I am slow to respond. I did the test you asked. It happens only on full deploy, not on modified nodes. Again, it has no effect on actual running code - everything works 100%. It's just some debug output. However, I saw something interesting. I now have many ESPHome devices using your node. This error only happens on ONE ESPHome device. The other ones do not produce this output on startup. If I figure out why, I will tell you. If it matters, it is the first device I added. Also, please remember I am happy to help with English documentation. read this answer https://github.com/twocolors/node-red-contrib-esphome/issues/15#issuecomment-1493763057 What about it? That says that you corrected the issue 8 months ago. I'm still having this issue in 0.2.6 today. The correction is that you now see warnings while there is no connection to the device... previously everything would break and the node would crash. Interesting, the same for me. I have 9 out nodes; only one of them is having this problem. It also just so happens to be the last device I have listed in ESPHome.
@tethlah @DeeBeeKay this error occurs when, after restarting node-red, you immediately send data to esphome (set a delay, because node-red does not yet have time to connect to esphome and receive its capabilities) It's not just after restarting, it's also when redeploying. And based on timestamps on other flows, there are other nodes that are instantiating prior to this node throwing the error. It's literally just one node throwing the error. There are 2 nodes being triggered at the same time; one throws the error, the other doesn't. Full deploy / restart flow / restart node-red: all these methods reconnect to esphome. Show a screenshot of where you see this error. I'll have to wait; I can't force it to pop up during any of those, but it'll randomly pop up at some point during the day in the debug. When it pops up again I'll look and see if something happened before it pops up.
gharchive/issue
2023-12-22T13:23:05
2025-04-01T04:36:09.289162
{ "authors": [ "DeeBeeKay", "tethlah", "twocolors" ], "repo": "twocolors/node-red-contrib-esphome", "url": "https://github.com/twocolors/node-red-contrib-esphome/issues/31", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2605819495
incorrect importing of nan in squeeze_pro Which version are you running? The latest version is on Github. Pip is for major releases. import pandas_ta as ta print(ta.version) import pandas_ta Traceback (most recent call last): File "", line 1, in File "/usr/local/lib/python3.11/dist-packages/pandas_ta/init.py", line 116, in from pandas_ta.core import * File "/usr/local/lib/python3.11/dist-packages/pandas_ta/core.py", line 18, in from pandas_ta.momentum import * File "/usr/local/lib/python3.11/dist-packages/pandas_ta/momentum/init.py", line 34, in from .squeeze_pro import squeeze_pro File "/usr/local/lib/python3.11/dist-packages/pandas_ta/momentum/squeeze_pro.py", line 2, in from numpy import NaN as npNaN ImportError: cannot import name 'NaN' from 'numpy' (/usr/local/lib/python3.11/dist-packages/numpy/init.py) Do you have TA Lib also installed in your environment? $ pip list Have you tried the development version? Did it resolve the issue? $ pip install -U git+https://github.com/twopirllc/pandas-ta.git@development Describe the bug There is no NaN in the numpy package, and for all the indicators except squeeze_pro, the variable nan is correctly imported. In squeeze_pro the variable NaN is imported. The statement from numpy import NaN as npNaN should have been from numpy import nan as npNaN After fixing this error, the import of pandas_ta was successful: import pandas_ta as ta print(ta.version) 0.3.14b0 To Reproduce import pandas_ta as ta causes this error Expected behavior The import of pandas_ta shouldn't cause an error Thanks for using Pandas TA! Hello @skanduru, Again... Have you tried the development version? Did it resolve the issue? I had the same problem. But I managed to fix it by replacing this line: from numpy import NaN as npNaN with these: import numpy as np npNaN = np.nan in squeeze_pro.py This issue seems to be fixed in the development version. However, with the development version I am getting the following error: An error occurred while loading the file: No module named 'numpy._core.numeric' Updating to numpy==2.0.0 seemed to resolve this issue. I also had to install setuptools to make this package work I had the same problem. But I managed to fix it by replacing this line: from numpy import NaN as npNaN with these: import numpy as np npNaN = np.nan in squeeze_pro.py wait what? You changed "from numpy import NaN as npNaN" to import numpy as np npNaN = np.nan? What am I missing here?
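For anyone patching locally, the one-line fix in pandas_ta/momentum/squeeze_pro.py; NumPy 2.0 removed the NaN alias, while nan has always been the canonical name:

```python
# Before (fails on NumPy >= 2.0):
# from numpy import NaN as npNaN

# After (works on both old and new NumPy):
from numpy import nan as npNaN
```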
gharchive/issue
2024-10-22T15:38:14
2025-04-01T04:36:09.304463
{ "authors": [ "Ascensao", "dmike23", "ewan777", "skanduru", "twopirllc" ], "repo": "twopirllc/pandas-ta", "url": "https://github.com/twopirllc/pandas-ta/issues/840", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1990902571
Add ability to draw an Image into another Image gdImageCopy previously wasn't exposed to consumers of the SwiftGD library. This PR adds the method drawImage(_ image: Image, at: Point) on Image to draw another image into it. Additional parameters of gdImageCopy have been omitted because it's simpler to just use the existing cropped function. Perfect – thank you!
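A hedged usage sketch of the new method; initializer spellings vary between SwiftGD versions, so treat everything except drawImage(_:at:) as illustrative:

```swift
import Foundation
import SwiftGD

if let canvas = Image(width: 200, height: 200),
   let badge = Image(url: URL(fileURLWithPath: "badge.png")) {
    // Composite the badge into the canvas with its top-left at (10, 10).
    canvas.drawImage(badge, at: Point(x: 10, y: 10))
}
```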
gharchive/pull-request
2023-11-13T15:31:45
2025-04-01T04:36:09.315231
{ "authors": [ "andreasley", "twostraws" ], "repo": "twostraws/SwiftGD", "url": "https://github.com/twostraws/SwiftGD/pull/41", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1171820060
Support existingSecrets It would be nice to allow chart users to specify an existingSecret for htpasswd, as well as for a handful of other Secrets that are currently either auto-generated or require a value to be supplied directly. (Admittedly, htpasswd is hashed, but it is still not ideal to keep the hashed value in Git.) See some great examples in the various bitnami helm charts. I'm looking for this too. I want the password to be randomly generated on deployment using a Job, but with the actual file contents being required at chart compile time, this is not possible. Me too, it would be nice to use real secrets instead of hard-coding values for the chart. It would be nice to provide a Secret resource with the kubernetes.io/basic-auth type.
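A hedged sketch of the requested shape, modeled on the bitnami convention mentioned above; the key names are a proposal, not the chart's current schema:

```yaml
# values.yaml
secrets:
  htpasswd: ""                         # today: hashed value inlined (and committed)
  existingSecret: docker-registry-auth # proposed: reference a pre-created Secret
---
# The referenced Secret, managed outside the chart (e.g. by a generator
# Job or sealed-secrets, as suggested in the thread):
apiVersion: v1
kind: Secret
metadata:
  name: docker-registry-auth
type: Opaque
stringData:
  htpasswd: "user:$2y$05$..."  # bcrypt htpasswd entry, illustrative
```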
gharchive/issue
2022-03-17T01:47:07
2025-04-01T04:36:09.351740
{ "authors": [ "WoodyWoodsta", "brsolomon-deloitte", "leemeichin" ], "repo": "twuni/docker-registry.helm", "url": "https://github.com/twuni/docker-registry.helm/issues/58", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2678939333
Regression with Option[Record] results in 1.0.0-RC6 I experienced a regression in the latest 1.0.0-RC6. I don't yet have a fully tested, self-contained example, but I think it boils down to this code:
case class Foo(a: Int, b: String)
case class Bar(c: Int, d: Option[String])

sql"SELECT a, b, c, d FROM foo LEFT JOIN bar ON a = c".query[(Foo, Option[Bar])]
With these records in the DB:

| a | b     |
|---|-------|
| 1 | 'abc' |

| c | d    |
|---|------|
| 1 | NULL |

I used to get (Foo(1, "abc"), Some(Bar(1, None))) as the result of that query, and now I get (Foo(1, "abc"), None). Thanks for the report. I believe this is a long-running issue in doobie and should be fixed by #2136. See the changes I made for the test scenario "Read should read correct columns for instances with Option (None)" in ReadSuite.scala I'll add more tests to verify that left-joining a table with not-null and null columns is handled correctly. I believe in RC5 the behaviour works for your case but not in the general case. Up until 1.0.0-RC5 I didn't notice issues with it. But I also haven't used that feature prolifically.
gharchive/issue
2024-11-21T10:37:53
2025-04-01T04:36:09.427147
{ "authors": [ "Jasper-M", "jatcwang" ], "repo": "typelevel/doobie", "url": "https://github.com/typelevel/doobie/issues/2144", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2131809115
Trace SDK: implement logging span exporter Reference Link Java implementation LoggingSpanExporter.java The logging exporter can be used for debugging purposes. 1) Implement LoggingSpanExporter The draft implementation is available here https://github.com/typelevel/otel4s/blob/615e4b91f2381f743b43d4abdcfbd59e9ee3f87b/sdk/trace/src/main/scala/org/typelevel/otel4s/sdk/trace/exporter/LoggingSpanExporter.scala and perhaps can be helpful. 2) Add LoggingSpanExporter to the SpanExportersAutoConfigure That way, the exporter can be autoconfigured. Caveats cats.effect.std.Console is not designed to be a logging interface. However, we don't depend on https://github.com/typelevel/log4cats (at least yet). Perhaps we can print the following statement when allocating the exporter: 'You are using the logging exporter. It may drastically affect the performance of the application.'. Hi, if no one else is currently tackling this, could I have a shot at it? Hi @scott-thomson239! No one is working on it, as far as I know. I can assign it to you. We can use cats.effect.std.Console as a 'logging interface' for now. Eventually, we will switch to log4cats. Hi, I'm having a bit of difficulty writing tests for this, since logs are just written to stdout with cats.effect.std.Console, so it's hard to extract the written logs. I think I could extract them by replacing stdout, similar to what is done in the cats-effect Console tests, although this seems like overkill and I'm not sure if it will interfere with other running tests. Does this seem okay? Since we pass Console implicitly, we can provide a custom implementation that keeps records in memory:
import java.nio.charset.Charset

import cats.Show
import cats.effect.{Async, Sync}
import cats.effect.std.{Console, Queue}
import cats.syntax.functor._

class InMemoryConsole[F[_]: Sync](queue: Queue[F, InMemoryConsole.Entry]) extends Console[F] {
  import InMemoryConsole.Entry
  import InMemoryConsole.Op

  def readLineWithCharset(charset: Charset): F[String] =
    Sync[F].delay(sys.error("not implemented"))

  def entries: F[List[Entry]] =
    queue.tryTakeN(None)

  def print[A](a: A)(implicit S: Show[A]): F[Unit] =
    queue.offer(Entry(Op.Print, S.show(a)))

  def println[A](a: A)(implicit S: Show[A]): F[Unit] =
    queue.offer(Entry(Op.Println, S.show(a)))

  def error[A](a: A)(implicit S: Show[A]): F[Unit] =
    queue.offer(Entry(Op.Error, S.show(a)))

  def errorln[A](a: A)(implicit S: Show[A]): F[Unit] =
    queue.offer(Entry(Op.Errorln, S.show(a)))
}

object InMemoryConsole {

  sealed trait Op
  object Op {
    case object Print extends Op
    case object Println extends Op
    case object Error extends Op
    case object Errorln extends Op
  }

  final case class Entry(operation: Op, value: String)

  def create[F[_]: Async]: F[InMemoryConsole[F]] =
    Queue.unbounded[F, Entry].map { queue =>
      new InMemoryConsole[F](queue)
    }
}
gharchive/issue
2024-02-13T08:53:59
2025-04-01T04:36:09.433910
{ "authors": [ "iRevive", "scott-thomson239" ], "repo": "typelevel/otel4s", "url": "https://github.com/typelevel/otel4s/issues/496", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1619344927
Add example for fs2-data-csv
Anybody else know something more exciting to do than printing out the parsed lines of a CSV?

> Anybody else know something more exciting to do than printing out the parsed lines of a CSV?

Adding/removing a field and printing that out? Calculating and printing the max/min/mean for a specific column?

> Anybody else know something more exciting to do than printing out the parsed lines of a CSV?
>
> Adding/removing a field and printing that out? Calculating and printing the max/min/mean for a specific column?

Well this is awkward, I don't know how to do aggregation on columns, I want to print out the mean of the ages column using fs2. Do I use a fold? :P

> Anybody else know something more exciting to do than printing out the parsed lines of a CSV?
>
> Adding/removing a field and printing that out? Calculating and printing the max/min/mean for a specific column?
>
> Well this is awkward, I don't know how to do aggregation on columns, I want to print out the mean of the ages column using fs2. Do I use a fold? :P

I'll say something like

val (a, b) = listOfNumbers.foldLeft((0, 0)) { case ((s, n), x) => (s + x, n + 1) }
a / b
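A hedged sketch wiring that fold into an fs2 pipeline with fs2-data-csv's header-based decoding (deriveCsvRowDecoder comes from the separate generic module); the data and column names are illustrative, and the API is worth double-checking against the version the toolkit pins:

```scala
import cats.effect.{IO, IOApp}
import fs2.Stream
import fs2.data.csv._
import fs2.data.csv.generic.semiauto._

object MeanAge extends IOApp.Simple {
  case class Person(name: String, age: Int)
  implicit val personDecoder: CsvRowDecoder[Person, String] =
    deriveCsvRowDecoder

  val input = "name,age\nalice,30\nbob,40\ncarol,50\n"

  def run: IO[Unit] =
    Stream
      .emit(input)
      .covary[IO]
      .through(decodeUsingHeaders[Person]())
      .map(_.age)
      .compile
      .fold((0, 0)) { case ((s, n), x) => (s + x, n + 1) }
      .map { case (sum, count) => if (count == 0) 0.0 else sum.toDouble / count }
      .flatMap(mean => IO.println(s"mean age: $mean"))
}
```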
gharchive/pull-request
2023-03-10T17:32:52
2025-04-01T04:36:09.439406
{ "authors": [ "TonioGela", "zetashift" ], "repo": "typelevel/toolkit", "url": "https://github.com/typelevel/toolkit/pull/16", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2176520578
ifsc-sj-articled:0.1.0 I am submitting [x] a new package [ ] an update for a package A Typst template for articles of the Federal Institute of Santa Catarina. I have read and followed the submission guidelines and, in particular, I [x] selected a name that isn't the most obvious or canonical name for what the package does [x] added a typst.toml file with all required keys [x] added a README.md with documentation for my package [x] have chosen a license and added a LICENSE file or linked one in my README.md [x] tested my package locally on my system and it worked [x] excluded PDFs or README images, if any, but not the LICENSE [x] ensured that my package is licensed such that users can use and distribute the contents of its template directory without restriction, after modifying them through normal use. Hey and thanks for the submission! Before we can get this merged, you'll have to rename the package. ifsc-sj is descriptive for whom this template is for (which is good, keep it as part of the name!) but articled is also descriptive. Template names must contain at least one non-descriptive / creative name component so discoverability is a level playing field. We recommend the naming schema <adjective>-ifsc-sj, see the README for more info. Hi! Thanks for the heads up. Updated the new name! Thank you!
gharchive/pull-request
2024-03-08T18:03:31
2025-04-01T04:36:09.526507
{ "authors": [ "gabrielluizep", "reknih" ], "repo": "typst/packages", "url": "https://github.com/typst/packages/pull/409", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1659067363
For loop with indexing does not work anymore Steps to reproduce: Compile the following #let elements = lorem(5).split() #for i, el in elements [ #i #el \ ] Expected output 0 Lorem 1 ipsum 2 dolor 3 sit 4 amet. Logs error: expected keyword `in`. did you mean to use a destructuring pattern? ┌─ test.typ:2:6 │ 2 │ #for i, el in elements [ │ ^ error: expected keyword `in` ┌─ test.typ:2:6 │ 2 │ #for i, el in elements [ │ ^ Version typst 0.1.0 (94e052b8) compiled on a Mac M1 ARM64 (Could that be why? It'd be nice if someone could reproduce on another machine) It works with the online version: Deployed on 2023-04-06T11:25:05.540Z Typst compiler version: 4f4af02acea0022a5c1966d9b7b4150b35749edd Destructuring was merged to main (see https://github.com/typst/typst/pull/532), so you will need to either #for (i, el) in elements.enumerate() [ ... ], or use the latest released (non-main) version. Thanks! It works now!
gharchive/issue
2023-04-07T18:22:06
2025-04-01T04:36:09.529682
{ "authors": [ "PgBiel", "npielawski" ], "repo": "typst/typst", "url": "https://github.com/typst/typst/issues/654", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1326942131
display error messages from TabNine fix #58 superseded by 29402a2
gharchive/pull-request
2022-08-03T09:34:07
2025-04-01T04:36:09.532712
{ "authors": [ "tzachar", "zhyu" ], "repo": "tzachar/cmp-tabnine", "url": "https://github.com/tzachar/cmp-tabnine/pull/59", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2253047582
add script to link images to products by name adds an anon apex script that can be executed in the developer console the script steps are: look for products that don't have images linked to them and collect their names look for CMS content (images) that match the product names create ProductMedia records to link the CMS content records to the products for both detail and list media Merging PR from Daniel. Hi Tom, my current work email is @.*** On Mon, Jul 22, 2024, Tom Zarr wrote: approved this pull request. Thanks for the contribution Daniel! I couldn't seem to locate your current email address and the Partner Community had you listed under Studio Science. Is that still current? Please email me so I can stay in touch.
gharchive/pull-request
2024-04-19T13:38:54
2025-04-01T04:36:09.545724
{ "authors": [ "dangt85", "tzarrsf" ], "repo": "tzarrsf/b2b-commerce-gtk-admin", "url": "https://github.com/tzarrsf/b2b-commerce-gtk-admin/pull/1", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1650738168
🛑 InfluxDB is down In 1ba7cbe, InfluxDB (https://influx.uvvu.pw) was down: HTTP code: 0 Response time: 0 ms Resolved: InfluxDB is back up in 5460c7b.
gharchive/issue
2023-04-01T23:52:23
2025-04-01T04:36:09.556370
{ "authors": [ "ArtieFuzzz" ], "repo": "u-v-v-u/status", "url": "https://github.com/u-v-v-u/status/issues/23", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
420049317
HTTP routing not working after using .ws('/*') I came across a weird bug: If I do this
const app = uWS.App();
app.ws('/*', {
  /* Options */
  maxPayloadLength: 16 * 1024 * 1024,
  idleTimeout: 120,
  /* Handlers */
  open: (ws, req) => {
    ws.send('ok');
  },
  message: (ws, message) => {
    const ok = ws.send(message);
  },
  drain: (ws) => {
    global.console.log('WebSocket backpressure: ' + ws.getBufferedAmount());
  },
  close: (ws, code, message) => {
    global.console.log(`WebSocket closed.`);
  }
}).get('/ok', (res) => {
  res.writeHeader('Content-Type', 'application/json');
  res.end('ok');
}).listen(1111, (socket) => {
  if (socket) {
    app.socket = socket;
    global.console.log('Listening to port ' + 1111);
  } else {
    global.console.log('Failed to listen to port ' + 1111);
  }
});
a GET request on /ok fails, but if I change /* in ws to something else like /websocket, the GET request works. Also, if I do get('/*'), GET requests work. I haven't tested it with other types of requests though, only GET ones. The latest binaries should fix this Fixed 👍 Thanks!
gharchive/issue
2019-03-12T15:26:02
2025-04-01T04:36:09.664155
{ "authors": [ "aadityataparia", "alexhultman" ], "repo": "uNetworking/uWebSockets.js", "url": "https://github.com/uNetworking/uWebSockets.js/issues/98", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
413267621
can we have a golang library? I hope a golang library could be exported. Any plan to make a Go wrapper for this library that is ready to deploy, with no need for CGO? Hello - I think that's not the way to go(lang). There are high performance alternatives for Go already, see the graph. For Node.js and Python and Lua and whatnot, where you have a lacking language, it makes sense to offload things to C++, but for Golang you already have a decent language, as can be shown with gobwas/ws and valyala/fasthttp. thank you! I was comparing to the chart alone and wondering whether uWebSockets could also be wrapped for Go, because there are still distinct differences between the two implementations. It's not going to fit. Golang is very peculiar and does not have a standardized C base you can link to.
gharchive/issue
2019-02-22T06:51:45
2025-04-01T04:36:09.665935
{ "authors": [ "alexhultman", "jjhesk" ], "repo": "uNetworking/uWebSockets", "url": "https://github.com/uNetworking/uWebSockets/issues/841", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
808545230
Some personalization tokens not being caught Describe the bug There are some flows in uPortal where non-json requests are not honoring the personalization tokens. To Reproduce Steps to reproduce the behavior: Add a portlet with personalization tokens in the title via the Customize drawer. Note in the Customize drawer, the personalization tokens are honored. Note the title of the newly added portlet does not honor the personalization tokens. Expected behavior All non-admin, user-facing areas of uPortal should honor the personalization tokens Screenshots Platform: uPortal Version: uPortal tip of develop (5.9.1-SNAPSHOT) OS: Ubuntu Browser Agnostic This fix is in 5.11.0.
gharchive/issue
2021-02-15T13:34:35
2025-04-01T04:36:09.669917
{ "authors": [ "cbeach47" ], "repo": "uPortal-Project/uPortal", "url": "https://github.com/uPortal-Project/uPortal/issues/2283", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
271580065
Analytics MUMUP-3023: "As a MyUW stakeholder, I would like to know how many times a given notification was dismissed/undismissed, so that I can better understand how effective the notification is and how users engage with it." MUMUP-3024: "As a stakeholder, I would like to know how many times a given notification has been rendered, so that I better understand how effective MyUW notifications are." In this PR: Track notifications rendered by clicking the notification bell Track dismissal from bell menu Track priority notification renders Track dismiss from priority notification Track dismiss/restore from notifications page Track clicks on mobile side nav notification bell (with and without priority indicator) Remove unused directive mode (notifications-bell mode="mobile-menu">) Fix a bunch of JSDoc warnings that I'm tired of seeing from eslint every time I commit something. I realize that this should have been a separate PR, but I got carried away...so I'll note the files that have changes pertinent to analytics: components/portal/main/controllers.js components/portal/main/partials/main-menu.html components/portal/messages/controllers.js components/portal/messages/partials/notifications-bell.html components/portal/messages/partials/view_notifications.html Contributor License Agreement adherence: [x] This Contribution is under the terms of Individual Contributor License Agreements (and also Corporate Contributor License Agreements to the extent applicable) appearing in the Apereo CLA roster. AppVeyor is on the fritz :checkered_flag: :building_construction: :no_entry: npm ERR! path C:\projects\uportal-app-framework\node_modules\requirejs\bin\r.js.1908498016 npm ERR! code EPERM npm ERR! errno -4048 npm ERR! syscall rename npm ERR! Error: EPERM: operation not permitted, rename 'C:\projects\uportal-app-framework\node_modules\requirejs\bin\r.js.1908498016' -> 'C:\projects\uportal-app-framework\node_modules\requirejs\bin\r.js' npm ERR! { Error: EPERM: operation not permitted, rename 'C:\projects\uportal-app-framework\node_modules\requirejs\bin\r.js.1908498016' -> 'C:\projects\uportal-app-framework\node_modules\requirejs\bin\r.js' npm ERR! cause: npm ERR! { Error: EPERM: operation not permitted, rename 'C:\projects\uportal-app-framework\node_modules\requirejs\bin\r.js.1908498016' -> 'C:\projects\uportal-app-framework\node_modules\requirejs\bin\r.js' npm ERR! errno: -4048, npm ERR! code: 'EPERM', npm ERR! syscall: 'rename', npm ERR! path: 'C:\\projects\\uportal-app-framework\\node_modules\\requirejs\\bin\\r.js.1908498016', npm ERR! dest: 'C:\\projects\\uportal-app-framework\\node_modules\\requirejs\\bin\\r.js' }, npm ERR! stack: 'Error: EPERM: operation not permitted, rename \'C:\\projects\\uportal-app-framework\\node_modules\\requirejs\\bin\\r.js.1908498016\' -> \'C:\\projects\\uportal-app-framework\\node_modules\\requirejs\\bin\\r.js\'', npm ERR! errno: -4048, npm ERR! code: 'EPERM', npm ERR! syscall: 'rename', npm ERR! path: 'C:\\projects\\uportal-app-framework\\node_modules\\requirejs\\bin\\r.js.1908498016', npm ERR! dest: 'C:\\projects\\uportal-app-framework\\node_modules\\requirejs\\bin\\r.js', npm ERR! parent: '@uportal/app-framework' } npm ERR! npm ERR! Please try running this command again as root/Administrator. :recycle: This appears to be unrelated to changes, closing and re-opening PR to retrigger AppVeyor CI.
gharchive/pull-request
2017-11-06T18:47:26
2025-04-01T04:36:09.678045
{ "authors": [ "ChristianMurphy", "thevoiceofzeke" ], "repo": "uPortal-Project/uportal-app-framework", "url": "https://github.com/uPortal-Project/uportal-app-framework/pull/598", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
125435533
explain to users what's happening when google presents a captcha while proxying I'm having trouble reproducing right now but the general issue that has come up through user testing is that some websites, Google included, will throw a captcha at the user when they notice their IP has jumped due to using uProxy. We might want to explain to users what's happening here. @jab, perhaps one for you...any thoughts? Do you only see this with cloud? I've only seen it with cloud. Currently, I'm assuming it's because I don't routinely test with real uProxy users in other countries -- seems unlikely Google, for example, would be singling out the IPs of cloud providers. My expectation is the reverse: it seems likely to me that Google might flag client access from a cloud IP range as suspicious. it seems likely to me that Google might flag client access from a cloud IP range as suspicious. I've seen that before too, but I think it can also be triggered by per-IP rate limiting; i.e. even if it's a home IP address, if it's proxying for enough users all hitting Google (or YouTube, etc.) at the same time, it triggers the CAPTCHA for future requests from that IP until the rate limit clears. Not sure if that's what likely happened when @trevj saw this in the past though. (btw CloudFlare does the same thing for uproxy.org now too as a DDoS protection; configurable, of course) Re UX, I think it'd be great if uProxy could detect this and explain what's happening to the user. Even better would be if the upstream CAPTCHA pages themselves had clearer explanations, if they don't currently. ("We're seeing a lot of traffic from your address and need to make sure you're not an evil robot. This can happen when too many people are all trying to access Google at the same time from the same place (or through the same proxy).") Even doubly better would be if rate limiting servers and distributed proxies wishing to respect rate limiting could coordinate to intelligently avoid this when possible. Does uProxy currently do anything with 429 responses, assuming that could help? (@trevj do you happen to know such details of the responses you saw? If they were 429s, would be interesting to know if there were Retry-After headers too.)
gharchive/issue
2016-01-07T16:51:56
2025-04-01T04:36:09.683283
{ "authors": [ "bemasc", "jab", "trevj" ], "repo": "uProxy/uproxy", "url": "https://github.com/uProxy/uproxy/issues/2164", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
136191560
Improve integration tests: add type checking, cleanup instance ids, refactor some tests Fixes https://github.com/uProxy/uproxy/issues/2245 (type checking) by calling methods on a CoreConnector object, instead of directly using on and emit. Fixes https://github.com/uProxy/uproxy/issues/2246 (local instanceId now returned by login). Splits the "log back in and check permissions" test case into multiple smaller test cases that test the same functionality. Few questions, mostly on race conditions. Reviewed 5 of 5 files at r1. Review status: all files reviewed at latest revision, 7 unresolved discussions, some commit checks failed. src/generic_core/uproxy_core.ts, line 153 [r1] (raw file): I'm a bit worried about the convention here; this feels like it should probably be returned from within the saving process, or the method should have a very explicit comment around it that we will not have finished saving the connected objects when the promise resolves and that race conditions could come out of this. src/integration/core.spec.ts, line 58 [r1] (raw file): Just to double-check: is this unit test being used to set up shared state for the rest of the unit tests? If so, can you add a TODO for switching to actual unit tests here? Thanks. src/integration/core.spec.ts, line 63 [r1] (raw file): Instead of having the typing in the variable, any chance we could have it be more descriptive of what the condition represents (e.g. aliceInitialized)? src/integration/core.spec.ts, line 166 [r1] (raw file): Unused variable. src/integration/core.spec.ts, line 176 [r1] (raw file): Why not just call bob.modifyConsent here? src/integration/core.spec.ts, line 265 [r1] (raw file): What is having bob log back in accomplishing here? src/integration/core.spec.ts, line 289 [r1] (raw file): Should we be worrying about not waiting for this to finish? Review status: 4 of 5 files reviewed at latest revision, 7 unresolved discussions. src/generic_core/uproxy_core.ts, line 153 [r1] (raw file): Not 100% sure what you mean here. Are you thinking that I should wait until the this.connectedNetworks_.set(networks); call resolves before returning? login now returns a Promise<uproxy_core_api.LoginResult>, so I'm fine delaying this Promise until you think everything is finished saving. src/integration/core.spec.ts, line 58 [r1] (raw file): We've already had https://github.com/uProxy/uproxy/issues/2250 for this; I'll add it to a comment. src/integration/core.spec.ts, line 176 [r1] (raw file): We call fulfill so that the bob.modifyConsent call at line 173 is only ever executed once. If we just call bob.modifyConsent directly here, every time there is a ..USER_FRIEND update for Alice with the specified consent, bob will call modifyConsent again. I'm not 100% sure if doing that will break this test case, but there were other test cases where this type of behavior caused a problem. I think a better thing to do at some point would be to add an off(..) method to the CoreConnector so we could stop listening once we got the update we wanted (the Freedom module actually has this method and it was used before in the tests) - but adding that method is a bit of a pain, and it would only be used in these integration tests, not in actual uProxy. src/integration/core.spec.ts, line 265 [r1] (raw file): Bob was logged out in the previous test cases.
This is the same as the old logic, so I haven't really changed any test behavior here, but https://github.com/uProxy/uproxy/issues/2250 is the issue to clean it up. src/integration/core.spec.ts, line 289 [r1] (raw file): Currently modifyConsent doesn't return any Promise in the CoreConnector, so we don't easily know when it's finished. I'd rather leave this as-is for now, but maybe at some point we should go in and make all the CoreConnector methods return Promises (generally the old ones, which were written before PromiseCommand, still don't return promises). Reviewed 1 of 1 files at r2. Review status: all files reviewed at latest revision, 5 unresolved discussions, some commit checks failed. src/generic_core/uproxy_core.ts, line 153 [r1] (raw file): Yes, I think that would be reasonable (either do that or have a comment explaining why that is not happening). src/integration/core.spec.ts, line 176 [r1] (raw file): Instead of using promises for this (really not what they were intended for), would you mind switching to using _.once? Otherwise, can you add a comment explaining why we are doing it this way? Thanks. src/integration/core.spec.ts, line 265 [r1] (raw file): Ah, forgot about that. Thanks. src/integration/core.spec.ts, line 289 [r1] (raw file): Ah, okay, I thought this was from the social API. That's fine. All done, PTAL Review status: 3 of 5 files reviewed at latest revision, 3 unresolved discussions. src/generic_core/uproxy_core.ts, line 153 [r1] (raw file): Done. src/integration/core.spec.ts, line 166 [r1] (raw file): Done. src/integration/core.spec.ts, line 176 [r1] (raw file): Done. :+1: :thumbsup: Reviewed 2 of 2 files at r3. Review status: all files reviewed at latest revision, all discussions resolved.
gharchive/pull-request
2016-02-24T21:28:50
2025-04-01T04:36:09.711237
{ "authors": [ "dborkan", "jpevarnek" ], "repo": "uProxy/uproxy", "url": "https://github.com/uProxy/uproxy/pull/2275", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
285251483
False positive with variable to field

In this case, a @Nullable field is stored to a local variable. When the field is used in the null condition (while the variable is returned) an error occurs, but if the variable is used in the condition instead then it does not. It seems the data flow analysis does not realize they are the same.

```
error: [NullAway] returning @Nullable expression from method with @NonNull return type
    return (writer == null) ? CacheWriter.disabledWriter() : castedWriter;
    ^
```

```java
<K1 extends K, V1 extends V> CacheWriter<K1, V1> getCacheWriter() {
  @SuppressWarnings("unchecked")
  CacheWriter<K1, V1> castedWriter = (CacheWriter<K1, V1>) writer;
  return (writer == null) ? CacheWriter.disabledWriter() : castedWriter;
}
```

Yeah, NullAway does not track local variable aliases, so it misses this case. Like #35, I think we could support this eventually by doing a more precise analysis in cases where we are going to report an error. Going to mark this as low priority for now, but it would be a good project to take on eventually.

BTW thanks for all the reports @ben-manes! Please keep them coming 😄

Unfortunately I think that I have to give up on the target project since it is non-idiomatic for performance reasons. It uses a lot of nullable fields that are enabled in certain configurations, as indicated by code-generated subclasses. Since those usage paths can only occur when non-null, and NullAway cannot track deeply enough, many false positives are hard to easily resolve without turning the whole thing off. But it was a nice experiment to try this out!

Ok no problem! One thing is, if you control the code generator, you can tweak it to generate downcasts or other warning suppressions so you can still use NullAway on other, more idiomatic parts of the code base. (The downcasts may not be tolerable if the code is super performance critical.) We'll keep working on improving NullAway in the meantime.

Yes, though the codegen is only for minimizing fields and the logic is in the base class. I would expect inlining to remove the downcasts, so it would be free. But the project has exceeded its complexity budget and this would add obscurity to something already challenging. But this is a good trick to know if I decide to introduce NullAway to a work project. Thanks!

Making progress using your trick, so we'll see how it comes out :)

Another case where this issue occurs is the pattern:

```java
boolean create = (foo == null);
if (!create) {
  foo.bar();
}
```

This scenario is about readability of the code, naming the condition when it is used multiple times in a method. Using the explicit check each time would be okay, but it conveys less when maintaining and requires re-parsing what null means. But because this aliasing is not tracked, the invocation on foo is a false positive.

Awesome! :) Regarding your create example, yeah, this would be good to handle too. If you have any rough categorization of how often this case happens vs. the first cast example, that could be helpful for prioritization. The solutions are different: the cast example requires tracking equalities between variables / fields, while the create example requires tracking conditional (non-)nullness, facts like "if create is false then foo is not null."

In this case it was only once, but that is because I usually extract those out to query methods. Since method tracking isn't supported, that has been the largest source of false positives. My present goal is to have it pass in order to review the benefits. There are a lot more suppressions than I'd like, since inference is too locally scoped.

But I also think it may have found one or two possible errors in a JSR adapter, so it will be worthwhile regardless. Since this requires more extensive annotations, just like the Checker Framework does, I'm curious to run their analysis afterwards to see how it compares.

Everything works nicely! I had forgotten why the Checker Framework is painful, so this is a nice compromise. :)

Great, so glad you got things working! And thanks again for all the reports. We'll dig through them more in the new year. Regarding the query methods, I think we can actually support those more easily with a new annotation, rather than requiring a library model. I'll re-open another issue specifically around that one.

Joy, integrated into my project. There are 50 targeted suppressions and neutral impact to code readability. Thanks for all of the hard work :)

Awesome! Thanks for all your reports and feedback!

@msridhar, what are the chances of this being picked up? We at Canva are rolling out NullAway in our code base, and there's been pushback from some engineers finding it cumbersome to work around NullAway's limitation. Here's an example that's similar to the create example above:

```java
String potentiallyNull = // ...
boolean definitelyNotNull = potentiallyNull != null;

if (definitelyNotNull) {
  int length = potentiallyNull.length(); // <-- NullAway warning
}

if (potentiallyNull != null) {
  int length = potentiallyNull.length(); // <-- No warning
}
```

We'd also be happy to contribute if that's the quickest way forward; just let me know how to help.

@wbadam exciting to hear you are working on rolling out NullAway at Canva! Given the number of times this has come up, I would be open to trying to address cases like your example. Unfortunately it may not be a super-quick thing to implement. An over-simplified view is that right now, NullAway keeps track of a set of variables that are @Nullable and a set of variables that are @NonNull before and after each line of code in a method. We would need to change this representation so that we could track that a variable is @NonNull if some other condition holds, like some boolean variable being true. I was always hesitant to add such support since the dataflow analysis that computes this information is performance critical for NullAway, and we've always tried to keep the compile-time overhead of NullAway as low as possible, so that like other Error Prone checks it can be run on every build. I think it would take some careful design and measurement to add this extra reasoning without compromising performance too much (it's ok to compromise a bit). Maybe we could use a flag to control whether the extra reasoning is enabled, but that might make the code excessively ugly, so hopefully we wouldn't need that. Do you or one of your teammates feel willing / able to dig into this a bit? Having more help would definitely make the exploration go faster. I am slammed for at least the next couple of days and won't have time to look more closely. But I can try to give pointers to the relevant code.

@msridhar certainly happy to look into it; a direction to the relevant code would be much appreciated!

I have renamed this issue to focus on the false positives stemming from cases like those in https://github.com/uber/NullAway/issues/98#issuecomment-354630644.

Thanks, @wbadam, I appreciate it. I still need to do some thinking and exploration as to the best way to support this. Right now, the state tracked by NullAway during local dataflow analysis is a NullnessStore, which maps access paths to their nullness state. To support this use case we need to instead track something like "conditional nullness", i.e., facts of the form "if this variable is true/false, then this access path is nullable / non-null." This would involve updating the store data structure to be able to hold such conditional nullness facts (and to properly compute least upper bounds on the stores), updating the transfer functions in AccessPathNullnessPropagation to generate the facts, and also updating relevant APIs that query the dataflow analysis for nullness information at different program points. And, I would like to do this in a way that does not hurt NullAway performance too much in cases where conditional nullness facts do not need to be tracked. I have some other deadlines and an upcoming holiday, so I won't be able to dig into this more until after mid-April. If anyone has cycles and wants to attempt some prototyping based on the above, feel free. Otherwise, I will take a look when I have some time available.

Our C++ colleagues at Google have done some work to handle variables like the definitelyNotNull example above. My understanding is that they're currently using satisfiability checking but that they are considering alternative approaches that might perform better, including essentially "inlining" potentiallyNull != null in place of usages of definitelyNotNull (potentially inlining further recursively). (I'm sure there would be caveats around cases in which potentiallyNull might change value, for example.) I'm told that one approach (similar but not identical to "inlining") is being demoed in https://github.com/llvm/llvm-project/pull/82950.

Thanks a lot @cpovirk! That's very interesting. The inlining approach is appealing in terms of trying to make the results of dataflow not depend on whether you store certain types of conditions in an (effectively final?) local variable or check them directly. I'm still unsure of the best way to proceed for NullAway. I hope to be able to put in more time on this soon.

Hitting this a reasonable amount as well for Chrome code. Same examples as above. E.g.:

```java
boolean hostInformationProvided =
    providedHostPackageName != null
        && providedHostPackageLabel != null
        && providedHostVersionCode != null
        && providedPackageName != null
        && providedPackageVersionName != null;
if (hostInformationProvided) {
  ...
}
```

I've been meaning to look more at this for a while, just haven't had the cycles. I'll see if I can prioritize it. Thank you @agrieve for the additional example.
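As a point of comparison for the `definitelyNotNull` example above: TypeScript's checker already performs this kind of aliased-condition narrowing (added in TypeScript 4.4 as control-flow analysis of aliased conditions), which is essentially the conditional-nullness fact being requested for NullAway. A minimal sketch:

```typescript
function lengthOf(s: string | null): number {
  const definitelyNotNull = s !== null; // const alias of the null check

  if (definitelyNotNull) {
    // TypeScript narrows `s` to string here because `definitelyNotNull` is a
    // const alias of the check; this is the reasoning NullAway currently lacks.
    return s.length;
  }
  return 0;
}

console.log(lengthOf("hello"), lengthOf(null)); // 5 0
```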
gharchive/issue
2017-12-31T10:47:47
2025-04-01T04:36:09.769031
{ "authors": [ "agrieve", "ben-manes", "cpovirk", "msridhar", "wbadam" ], "repo": "uber/NullAway", "url": "https://github.com/uber/NullAway/issues/98", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
474759036
Array support 3

array archiving and backfill support

Codecov Report

Merging #260 into master will decrease coverage by 2.33%. The diff coverage is 30.25%.

```
@@            Coverage Diff             @@
##           master     #260      +/-   ##
==========================================
- Coverage   68.81%   66.47%    -2.34%
==========================================
  Files         163      166        +3
  Lines       22567    22770      +203
==========================================
- Hits        15529    15137      -392
- Misses       5820     6455      +635
+ Partials     1218     1178       -40
```

```
Impacted Files                               Coverage Δ
query/aql_processor.go                       79.88% <ø> (ø) :arrow_up:
memstore/list/vector_party.go                62.5% <0%> (-4.81%) :arrow_down:
memstore/backfill.go                         0% <0%> (-74.51%) :arrow_down:
memstore/common/test_factory_base.go         0% <0%> (ø)
memstore/common/vector_party.go              0% <0%> (ø)
memstore/common/pinnable.go                  0% <0%> (ø)
memstore/common/data_value.go                77.4% <0%> (-2.64%) :arrow_down:
memstore/common/batch.go                     0% <0%> (ø)
memstore/common/vector_party_serializer.go   0% <0%> (ø)
memstore/snapshot.go                         91.8% <100%> (+0.13%) :arrow_up:
... and 30 more
```

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update 51d1f0a...9696552.
gharchive/pull-request
2019-07-30T18:57:53
2025-04-01T04:36:09.782942
{ "authors": [ "codecov-io", "voyager-dw" ], "repo": "uber/aresdb", "url": "https://github.com/uber/aresdb/pull/260", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
688251205
Enable processing queue split policy by domainID

What changed?
Changed the feature flags for enabling different kinds of split policy to ones filtered by domainID.

Why?
So that we can enable split policy only for certain domains, in case most of them get split.

How did you test it?
Will test on staging2.

Potential risks
N/A, disabled by default.

Coverage decreased (-0.4%) to 66.95% when pulling e92284f73d42f88fd61d1e102c6fb00720f62187 on yycptt:enable-split-by-domain into 63e8bacf5e9811801dc8d94098b341553c242ad5 on uber:master.
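A domain-filtered flag of the kind described can be sketched as follows; this is an illustration of the concept only, not Cadence's actual Go dynamic-config API, and the domain names are made up.

```typescript
// Illustrative domain-filtered feature flag; not Cadence's actual API.
type DomainFilteredFlag = (domainID: string) => boolean;

function flagForDomains(enabled: Set<string>): DomainFilteredFlag {
  return (domainID) => enabled.has(domainID);
}

// Split policies stay disabled by default and are turned on per domain,
// so a domain that has already been heavily split can be excluded.
const enableSplitPolicy = flagForDomains(new Set(["domain-a", "domain-b"]));

console.log(enableSplitPolicy("domain-a")); // true
console.log(enableSplitPolicy("domain-z")); // false
```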
gharchive/pull-request
2020-08-28T17:51:38
2025-04-01T04:36:09.786645
{ "authors": [ "coveralls", "yycptt" ], "repo": "uber/cadence", "url": "https://github.com/uber/cadence/pull/3486", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
369394107
Hoodie Delta Streamer Features: Transformation and Hoodie Incremental Source with Hive integration

New features in DeltaStreamer:
(1) Apply a transformation when using delta-streamer to ingest data.
(2) Add a Hudi Incremental Source for Delta Streamer.
(3) Hive integration.
(4) Allow delta-streamer config properties to be passed on the command line.

@n3nash @vinothchandar: Ready for review

@n3nash: This PR allows a user-defined transformation function to be applied to delta-streamer sources. I also had to refactor the "Source" hierarchy to avoid an extra data-format transformation. A naive implementation would have resulted in the following format changes before writing: Row (Source) -> Avro -> Row -> Row (transformed) -> Avro (for writing), when both a Row source and a transformer are configured. With the changes, the chain becomes Row (Source) -> Row (transformed) -> Avro (writing).

@bvaradar Left some comments.

@vinothchandar: Addressed all the comments, including schemaProvider handling for RowSource. Please take a look when you get a chance.

@bvaradar fyi, I made a code change to comment out the datestr handling you put in. It causes significant performance overhead.

@n3nash @vinothchandar: Ready for review. Can you make one pass at it when you get a chance?
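The format-chain point above can be made concrete with a toy sketch that applies the transformer in the source's native representation and converts to the write format only once; the types and the encoder here are hypothetical, not Hudi's API.

```typescript
// Toy model of the refactored chain: Row (source) -> Row (transformed) -> Avro (write).
type Row = Record<string, unknown>;
type AvroRecord = { payload: string }; // stand-in for a real Avro encoding
type Transformer = (rows: Row[]) => Row[];

const encodeAvro = (row: Row): AvroRecord => ({ payload: JSON.stringify(row) });

function ingest(rows: Row[], transform?: Transformer): AvroRecord[] {
  const transformed = transform ? transform(rows) : rows; // transform in Row space
  return transformed.map(encodeAvro); // single format conversion, at write time
}

// Example: a transformer that uppercases a field during ingestion.
const out = ingest([{ name: "a" }], (rs) =>
  rs.map((r) => ({ ...r, name: String(r.name).toUpperCase() }))
);
console.log(out); // [ { payload: '{"name":"A"}' } ]
```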
gharchive/pull-request
2018-10-12T04:23:23
2025-04-01T04:36:09.789789
{ "authors": [ "bvaradar", "n3nash", "vinothchandar" ], "repo": "uber/hudi", "url": "https://github.com/uber/hudi/pull/485", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
193850867
Add client info to connection

https://github.com/uber/vertica-python/issues/100

Tested in Python 2 and Python 3.

Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.

Updates on this?
gharchive/pull-request
2016-12-06T18:38:58
2025-04-01T04:36:09.792081
{ "authors": [ "CLAassistant", "sevagh" ], "repo": "uber/vertica-python", "url": "https://github.com/uber/vertica-python/pull/135", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
234524735
Country Restriction

Hi, I would like to use "components=country:nl" as a restriction. When I place it in the code like this:

```
<script src="https://maps.googleapis.com/maps/api/js?libraries=places&components=country:nl&language=nl&key=API_KEY"></script>
```

it doesn't work. What am I doing wrong, or is it not possible yet?

BR, Nick

Solved:

```
$("#addr").geocomplete({
  details: "form",
  detailsAttribute: "from",
  country: ["nl"],
  types: ["geocode", "establishment"],
});
```
gharchive/issue
2017-06-08T13:19:31
2025-04-01T04:36:09.810754
{ "authors": [ "nduijvelshoff" ], "repo": "ubilabs/geocomplete", "url": "https://github.com/ubilabs/geocomplete/issues/331", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1114877154
Wrong Requirements dependency declaration?!

The documentation notes that "You need to have React 16.8.0 or later installed to use the Hooks API." The package.json dependency declaration is:

```
"peerDependencies": {
  "react": ">=16.18.0 <18.0.0"
},
```

Is 16.18.0 a typo? This package is not working with React 16.14.0.

Thank you @mucic for pointing this out! It should be >=16.8.0. A fix for this will be in the next version.
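To see why React 16.14.0 fails the published range while satisfying the documented requirement, the two ranges can be checked with the semver package; both range strings are taken verbatim from the report above.

```typescript
import semver from "semver";

const installed = "16.14.0";

// Range as published (likely a typo) vs. the documented requirement.
console.log(semver.satisfies(installed, ">=16.18.0 <18.0.0")); // false: blocked
console.log(semver.satisfies(installed, ">=16.8.0 <18.0.0"));  // true: allowed
```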
gharchive/issue
2022-01-26T10:46:28
2025-04-01T04:36:09.812744
{ "authors": [ "mucic", "plumdumpling" ], "repo": "ubilabs/google-maps-react-hooks", "url": "https://github.com/ubilabs/google-maps-react-hooks/issues/38", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1400611022
Import @Uniswap/smart-order-router to accurately predict uCR cash out amounts

I'm reading this now. The return value looks familiar. https://github.com/Uniswap/smart-order-router

Probably a bad idea, but the first thing that comes to mind is to browserify the Node project (this transpiler exists). We could make a separate Git fork that handles this and import the submodule into the component that needs it. That would probably have to be a new bounty because it seems like its own project. Any other ideas?

Originally posted by @pavlovcik in https://github.com/ubiquity/ubiquity-dollar/issues/279#issuecomment-1271037058

I just realized that Next has server-side (Node) rendering/processing capabilities. We might be able to directly include the package in our UI without having to "browserify".

I really hope that Tenderly is not a mandatory dependency, because we only get 50 free simulations per month unless we pay a lot of money for a "pro" account.

Hey @pavlovcik, did you try to build the smart-order-router repo? I got some errors; I hope you can try it and help me get the project running.

Try generating the types by running typechain.

Yeah, when I execute "yarn run build" they are generated automatically, but the typechain path has some issue? Or maybe I'm just not familiar with it yet. I also had to remove some path settings, but I'm not sure that was right.

Just copy their CI. It builds there.

Cool, thanks.

Their tests work, so the code builds and functions without errors. Learn how to use their code by seeing how the tests use it.

Oh, it was not; I got the same error when running on Ubuntu 😢

@ubiquity/development any ideas for @sunny0714?

@0xcodercrane Thanks for trying. Yeah, I had already found that at the time. I used yarn, so when installing packages, package-lock.json was not reflected. I have almost finished this feature; I'm going to finish today or tomorrow.

Hey @pavlovcik @0xcodercrane The uCR cash out amount is being shown accurately on this commit. Fortunately, it's not using Tenderly for the integration. Please review it when you are available.

The Uniswap widget was reflected on this commit.

npm run build should generate types for each package. Anyway, I tried locally on macOS and all 3 commands (npm, npm run build, npm run test) were successful.

We should always be using yarn; why are you talking about npm, @0xcodercrane? Otherwise we have issues with the lockfile, as @sunny0714 mentioned.

I discussed with @sunny0714 and he makes a claim that both bounties are done. I discussed with the @ubiquity/bounty-masters and there was pushback on releasing both bounties because we are unable to test the cash out UI flow (many related systems seem to have broken; @0xcodercrane hopefully you have updates on the workaround). But this router seems to work, and I think it's fine to consider this bounty complete and to release funds. @sunny0714 if you can post your wallet address. @sergfeldman if you can release funds; I'll be away from my computer for the evening. Thanks!

Okay @pavlovcik, thanks. Here is my wallet address: 0xc6fa133f3290e14Ad91C7449f8D8101A6f894E25

We have been struggling to test the cashout UI, but unfortunately no way so far; it's really difficult without fixing our broken system. The codebase looks good, though, and the router seems to work properly. We appreciate the good job, @sunny0714. Right now we are unable to test because of the blocker, but we want to trust it will absolutely work once we resolve it. In the worst case (e.g. it not working), if we ping you with the problem, you will definitely help us, right?

Thanks for saying so, @0xcodercrane. I will definitely fix any issues if there are problems.

@sunny0714 Thank you for the completed bounty: https://etherscan.io/tx/0xdd2ef921a5e7272bb379adc12b259a924ffc7f92570686642208bb8a272cd22e

In the future, @sergfeldman, please be sure to:
- Include any applicable UBQ bonuses according to the calculations listed here.
- Close the issue.

Update: looks like it took two weeks to finish this bounty, so there are no applicable UBQ bonuses!

Note for the future: this issue has an unusual resolution. The bounty is considered completed, but the issue is not tested. A dedicated issue for testing has been created: https://github.com/ubiquity/ubiquity-dollar/issues/297. This issue is closed.

Technically, the Uniswap router does apparently work (it displays the estimated swap values already in the UI); the other issue is what needs testing, as there is functionality that depends on the numbers returned by the router.
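For readers following along, routing a quote with @uniswap/smart-order-router looks roughly like the sketch below, based on the package's published README usage; the RPC URL and the uCR token address are placeholders, and the options ubiquity's UI actually uses may differ.

```typescript
import { ethers } from "ethers";
import { AlphaRouter } from "@uniswap/smart-order-router";
import { CurrencyAmount, Token, TradeType } from "@uniswap/sdk-core";

const provider = new ethers.providers.JsonRpcProvider("https://rpc.example.com"); // placeholder RPC
const router = new AlphaRouter({ chainId: 1, provider });

// Placeholder uCR address; the USDC address is the real mainnet one.
const UCR = new Token(1, "0x0000000000000000000000000000000000000001", 18, "uCR");
const USDC = new Token(1, "0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48", 6, "USDC");

// Quote how much USDC an EXACT_INPUT uCR cash-out would return.
async function quoteCashOut(amountWei: string): Promise<string | undefined> {
  const route = await router.route(
    CurrencyAmount.fromRawAmount(UCR, amountWei),
    USDC,
    TradeType.EXACT_INPUT
  );
  return route?.quote.toFixed(2);
}

quoteCashOut("1000000000000000000").then((q) => console.log("estimated USDC:", q));
```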
gharchive/issue
2022-10-07T04:42:38
2025-04-01T04:36:09.862759
{ "authors": [ "0xcodercrane", "pavlovcik", "sergfeldman", "sunny0714" ], "repo": "ubiquity/ubiquity-dollar", "url": "https://github.com/ubiquity/ubiquity-dollar/issues/282", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
161069520
Column name as Html::el() ...

If I pass a \Nette\Utils\Html instance into the column name ... it would be good to be able to disable the translator.

Column::setHeaderEscaping()
gharchive/issue
2016-06-19T11:59:16
2025-04-01T04:36:09.867551
{ "authors": [ "daihousl", "paveljanda" ], "repo": "ublaboo/datagrid", "url": "https://github.com/ublaboo/datagrid/issues/273", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2669854051
Geolocation doesn't work by default

Describe the bug

When I visit Google Maps I am unable to get a precise location. Running the following and reloading the tab fixes the issue:

```
sudo systemctl enable geoclue
```

What did you expect to happen?

I expected location services to work by default, or to at least have been clearly asked about it during setup.

Output of rpm-ostree status

```
State: idle
Deployments:
● ostree-image-signed:docker://ghcr.io/ublue-os/bazzite:stable
    Digest: sha256:568eaa6f01398dbe91c5740fbd8804bee5297ce2fa74720aed87a1a8acd0329b
    Version: 41.20241112.1 (2024-11-12T23:57:36Z)
    LayeredPackages: kdepim-addons kmail
  ostree-image-signed:docker://ghcr.io/ublue-os/bazzite:stable
    Digest: sha256:568eaa6f01398dbe91c5740fbd8804bee5297ce2fa74720aed87a1a8acd0329b
    Version: 41.20241112.1 (2024-11-12T23:57:36Z)
    LayeredPackages: kdepim-addons
```

Hardware

Framework 13 AMD

Extra information or context

No response

It could be because the Mozilla geolocation server has shut down, and geoclue uses it to determine location. It seems that geolocation hasn't worked since then, and Fedora doesn't want to switch to Google's servers. You can see more here: https://bugzilla.redhat.com/show_bug.cgi?id=2284621
gharchive/issue
2024-11-18T20:37:18
2025-04-01T04:36:09.870897
{ "authors": [ "MattMcDonald", "fiftydinar" ], "repo": "ublue-os/bazzite", "url": "https://github.com/ublue-os/bazzite/issues/1897", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
516782318
Upgrade to support Rust 2018

This PR resolves https://github.com/ubnt-intrepid/dot/issues/18.

- Changes were mostly created by running cargo fix --edition
- Use the dirs crate to resolve a warning about using env to determine the home directory
- Upgrade the error-chain dependency due to compile errors with the previous version

Thanks for your contribution!
gharchive/pull-request
2019-11-03T05:10:08
2025-04-01T04:36:09.872850
{ "authors": [ "ingorichter", "ubnt-intrepid" ], "repo": "ubnt-intrepid/dot", "url": "https://github.com/ubnt-intrepid/dot/pull/19", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
406222156
question - memory footprint

Hello! I have an idea and some questions; maybe you can help me.

There is a project called YunoHost, aimed at helping home users self-host services like WordPress or Nextcloud on an ARM computer at home. It is nice, but it is based on bash, let's say, and as I love the k8s API, I'm wondering about the k8s memory footprint on such small devices.

In the context of a one-node cluster, what is the memory footprint of:
- etcd ~ [22Mb I guess](https://coreos.com/etcd/docs/latest/benchmarks/etcd-storage-memory-benchmark.html)
- kube-api
- controller
- scheduler
- kubelet

I think our main constraint here is memory, not CPU.

Another question: do we also need a network plugin? Or, as there is only one node, is it enough without one?

And then, what about optimization? I think there is a lot of room for improvement in this context:
- remove a lot of unnecessary code at compilation (like the AWS integration and so on)
- remove the scheduler, or replace it with a dummy one
- tune the controller for home-user usage instead of thousands of nodes and millions of pods
- tune etcd
- reimplement some functions, like cronjobs?
- socket activation for all the services

I'm just thinking out loud here; if you have any ideas, please share them here, and do not hesitate to close the issue once you've answered!

Thanks again and have a nice day :)

Relates to https://github.com/alexellis/k8s-on-raspbian/issues/10

This is also relevant to the discussion: https://github.com/solo-io/unik/issues/182

Hi @pierreozoux Thank you for sharing your thoughts. You bring up a very interesting topic. Having MicroK8s extend into IoT is in our interest. MicroK8s being a snap, arm support, and the move to containerd all work towards this direction. Any PRs that would improve the memory footprint without compromising functionality will be gladly accepted. Would you be willing to offer any cycles? Thank you.
gharchive/issue
2019-02-04T08:22:15
2025-04-01T04:36:09.931340
{ "authors": [ "ktsakalozos", "pierreozoux" ], "repo": "ubuntu/microk8s", "url": "https://github.com/ubuntu/microk8s/issues/303", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
432723183
Use EC2 Spot Instance for GPU Testing

Now that we have #669 in place, we need a way to test it. We can use an EC2 spot instance to do so.

To do:
- [ ] Prepare an AMI that has the GPU driver, nvidia-docker, and minikube with GPU support.
  - Deliverable: prepare_ami.sh, which should be run on a clean instance and result in exactly the same AMI.
  - Deliverable: ami-{id}, made public.
- [ ] Inject AWS credentials into Jenkins; test that we are able to start & stop a spot instance and pipe output.
- [ ] Connect GPU tests in ./bin/shipyard.sh

Replacing with #677 and closing.
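Starting a spot instance from CI, as the second checklist item requires, could look roughly like this with the AWS SDK for JavaScript v3; the region, AMI id, and instance type are placeholders.

```typescript
import { EC2Client, RequestSpotInstancesCommand } from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "us-west-2" }); // placeholder region

// Request a single one-time GPU spot instance from the prepared AMI.
async function requestGpuSpot(): Promise<void> {
  const result = await ec2.send(
    new RequestSpotInstancesCommand({
      InstanceCount: 1,
      Type: "one-time",
      LaunchSpecification: {
        ImageId: "ami-0123456789abcdef0", // placeholder: the AMI from prepare_ami.sh
        InstanceType: "p2.xlarge",        // placeholder GPU instance type
      },
    })
  );
  console.log(result.SpotInstanceRequests?.[0]?.SpotInstanceRequestId);
}

requestGpuSpot().catch(console.error);
```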
gharchive/issue
2019-04-12T20:02:12
2025-04-01T04:36:09.950304
{ "authors": [ "RehanSD", "simon-mo" ], "repo": "ucbrise/clipper", "url": "https://github.com/ucbrise/clipper/issues/670", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
229853283
[CLIPPER-155] Restart containers, persistent & remote redis

Adds support for automatic restarting of containers in the case of failure. Additionally, modifies the Clipper manager to support specification of a remote redis server or a path for redis data persistence (if redis was started in a docker container by the manager).

Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/Clipper-PRB/289/

jenkins test this please

Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/Clipper-PRB/292/

jenkins test this please

@dcrankshaw Addressed your comments. Still unable to reproduce the unkillable container issue.

Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/Clipper-PRB/297/

Test PASSed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/Clipper-PRB/311/
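Automatic restarts of the kind this PR adds are commonly expressed through Docker's restart policy; a rough dockerode sketch of the idea follows. This is not Clipper's actual manager code (which is Python), and the image name, Redis host, and persistence path are placeholders.

```typescript
import Docker from "dockerode";

const docker = new Docker(); // defaults to the local Docker socket

// Create a container that Docker itself restarts on failure, pointed at a
// remote Redis; image name, host, and persistence path are placeholders.
async function startWithRestartPolicy(): Promise<void> {
  const container = await docker.createContainer({
    Image: "clipper-model:latest",
    Env: ["REDIS_IP=redis.internal.example.com", "REDIS_PORT=6379"],
    HostConfig: {
      RestartPolicy: { Name: "on-failure", MaximumRetryCount: 5 },
      Binds: ["/var/lib/clipper-redis:/data"], // host path for data persistence
    },
  });
  await container.start();
}

startWithRestartPolicy().catch(console.error);
```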
gharchive/pull-request
2017-05-19T02:23:28
2025-04-01T04:36:09.955348
{ "authors": [ "AmplabJenkins", "Corey-Zumar", "dcrankshaw" ], "repo": "ucbrise/clipper", "url": "https://github.com/ucbrise/clipper/pull/161", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }